categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence)
---|---|---|---|---|---|---|---|---|---|---|
cs.LG cs.AI cs.SY stat.ML | null | 1203.1007 | null | null | http://arxiv.org/pdf/1203.1007v2 | 2012-07-03T13:48:40Z | 2012-03-05T18:58:49Z | Agnostic System Identification for Model-Based Reinforcement Learning | A fundamental problem in control is to learn a model of a system from
observations that is useful for controller synthesis. To provide good
performance guarantees, existing methods must assume that the real system is in
the class of models considered during learning. We present an iterative method
with strong guarantees even in the agnostic case where the system is not in the
class. In particular, we show that any no-regret online learning algorithm can
be used to obtain a near-optimal policy, provided that some model achieves low
training error and we have access to a good exploration distribution. Our approach
applies to both discrete and continuous domains. We demonstrate its efficacy
and scalability on a challenging helicopter domain from the literature.
| [
"Stephane Ross, J. Andrew Bagnell",
"['Stephane Ross' 'J. Andrew Bagnell']"
] |
cs.CV cs.LG | null | 1203.1483 | null | null | http://arxiv.org/pdf/1203.1483v1 | 2012-03-07T14:33:26Z | 2012-03-07T14:33:26Z | Learning Random Kernel Approximations for Object Recognition | Approximations based on random Fourier features have recently emerged as an
efficient and formally consistent methodology to design large-scale kernel
machines. By expressing the kernel as a Fourier expansion, features are
generated based on a finite set of random basis projections, sampled from the
Fourier transform of the kernel, with inner products that are Monte Carlo
approximations of the original kernel. Based on the observation that different
kernel-induced Fourier sampling distributions correspond to different kernel
parameters, we show that an optimization process in the Fourier domain can be
used to identify the different frequency bands that are useful for prediction
on training data. Moreover, the application of group Lasso to random feature
vectors corresponding to a linear combination of multiple kernels, leads to
efficient and scalable reformulations of the standard multiple kernel learning
model \cite{Varma09}. In this paper we develop the linear Fourier approximation
methodology for both single and multiple gradient-based kernel learning and
show that it produces fast and accurate predictors on a complex dataset such as
the Visual Object Challenge 2011 (VOC2011).
| [
"Eduard Gabriel B\\u{a}z\\u{a}van, Fuxin Li and Cristian Sminchisescu",
"['Eduard Gabriel Băzăvan' 'Fuxin Li' 'Cristian Sminchisescu']"
] |
stat.ML cs.LG | null | 1203.1596 | null | null | http://arxiv.org/pdf/1203.1596v2 | 2012-06-14T18:44:49Z | 2012-03-07T20:31:17Z | Multiple Operator-valued Kernel Learning | Positive definite operator-valued kernels generalize the well-known notion of
reproducing kernels, and are naturally adapted to multi-output learning
situations. This paper addresses the problem of learning a finite linear
combination of infinite-dimensional operator-valued kernels which are suitable
for extending functional data analysis methods to nonlinear contexts. We study
this problem in the case of kernel ridge regression for functional responses
with an lr-norm constraint on the combination coefficients. The resulting
optimization problem is more involved than those of multiple scalar-valued
kernel learning since operator-valued kernels pose more technical and
theoretical issues. We propose a multiple operator-valued kernel learning
algorithm based on solving a system of linear operator equations by using a
block coordinate descent procedure. We experimentally validate our approach on a
functional regression task in the context of finger movement prediction in
brain-computer interfaces.
| [
"Hachem Kadri (INRIA Lille - Nord Europe), Alain Rakotomamonjy (LITIS),\n Francis Bach (INRIA Paris - Rocquencourt, LIENS), Philippe Preux (INRIA Lille\n - Nord Europe)",
"['Hachem Kadri' 'Alain Rakotomamonjy' 'Francis Bach' 'Philippe Preux']"
] |
cs.LG cs.DB | null | 1203.2002 | null | null | http://arxiv.org/pdf/1203.2002v1 | 2012-03-09T07:08:10Z | 2012-03-09T07:08:10Z | Graph partitioning advance clustering technique | Clustering is a common technique for statistical data analysis. Clustering is
the process of grouping the data into classes or clusters so that objects
within a cluster have high similarity in comparison to one another, but are
very dissimilar to objects in other clusters. Dissimilarities are assessed
based on the attribute values describing the objects. Often, distance measures
are used. Clustering is an unsupervised learning technique, where interesting
patterns and structures can be found directly from very large data sets with
little or no background knowledge. This paper also considers the
partitioning of m-dimensional lattice graphs using Fiedler's approach, which
requires the determination of the eigenvector belonging to the second smallest
eigenvalue of the Laplacian, together with the K-means partitioning algorithm.
| [
"T Soni Madhulatha",
"['T Soni Madhulatha']"
] |
cs.LG stat.ML | null | 1203.2177 | null | null | http://arxiv.org/pdf/1203.2177v1 | 2012-03-09T20:51:37Z | 2012-03-09T20:51:37Z | Regret Bounds for Deterministic Gaussian Process Bandits | This paper analyses the problem of Gaussian process (GP) bandits with
deterministic observations. The analysis uses a branch and bound algorithm that
is related to the UCB algorithm of (Srinivas et al., 2010). For GPs with
Gaussian observation noise, with variance strictly greater than zero, (Srinivas
et al., 2010) proved that the regret vanishes at the approximate rate of
$O(\frac{1}{\sqrt{t}})$, where t is the number of observations. To complement
their result, we attack the deterministic case and attain a much faster
exponential convergence rate. Under some regularity assumptions, we show that
the regret decreases asymptotically according to $O(e^{-\frac{\tau t}{(\ln
t)^{d/4}}})$ with high probability. Here, d is the dimension of the search
space and $\tau$ is a constant that depends on the behaviour of the objective
function near its global maximum.
| [
"Nando de Freitas, Alex Smola, Masrour Zoghi",
"['Nando de Freitas' 'Alex Smola' 'Masrour Zoghi']"
] |
cs.SI cs.AI cs.LG stat.ML | null | 1203.2200 | null | null | http://arxiv.org/pdf/1203.2200v1 | 2012-03-09T22:45:34Z | 2012-03-09T22:45:34Z | Role-Dynamics: Fast Mining of Large Dynamic Networks | To understand the structural dynamics of a large-scale social, biological or
technological network, it may be useful to discover behavioral roles
representing the main connectivity patterns present over time. In this paper,
we propose a scalable non-parametric approach to automatically learn the
structural dynamics of the network and individual nodes. Roles may represent
structural or behavioral patterns such as the center of a star, peripheral
nodes, or bridge nodes that connect different communities. Our novel approach
learns the appropriate structural role dynamics for any arbitrary network and
tracks the changes over time. In particular, we uncover the specific global
network dynamics and the local node dynamics of a technological, communication,
and social network. We identify interesting node and network patterns such as
stationary and non-stationary roles, spikes/steps in role-memberships (perhaps
indicating anomalies), increasing/decreasing role trends, among many others.
Our results indicate that the nodes in each of these networks have distinct
connectivity patterns that are non-stationary and evolve considerably over
time. Overall, the experiments demonstrate the effectiveness of our approach
for fast mining and tracking of the dynamics in large networks. Furthermore,
the dynamic structural representation provides a basis for building more
sophisticated models and tools that are fast for exploring large dynamic
networks.
| [
"['Ryan Rossi' 'Brian Gallagher' 'Jennifer Neville' 'Keith Henderson']",
"Ryan Rossi, Brian Gallagher, Jennifer Neville, Keith Henderson"
] |
stat.ML cs.LG stat.CO | null | 1203.2394 | null | null | http://arxiv.org/pdf/1203.2394v1 | 2012-03-12T02:09:32Z | 2012-03-12T02:09:32Z | Decentralized, Adaptive, Look-Ahead Particle Filtering | The decentralized particle filter (DPF) was proposed recently to increase the
level of parallelism of particle filtering. Given a decomposition of the state
space into two nested sets of variables, the DPF uses a particle filter to
sample the first set and then conditions on this sample to generate a set of
samples for the second set of variables. The DPF can be understood as a variant
of the popular Rao-Blackwellized particle filter (RBPF), where the second step
is carried out using Monte Carlo approximations instead of analytical
inference. As a result, the range of applications of the DPF is broader than
the one for the RBPF. In this paper, we improve the DPF in two ways. First, we
derive a Monte Carlo approximation of the optimal proposal distribution and,
consequently, design and implement a more efficient look-ahead DPF. Although
the decentralized filters were initially designed to capitalize on parallel
implementation, we show that the look-ahead DPF can outperform the standard
particle filter even on a single machine. Second, we propose the use of bandit
algorithms to automatically configure the state space decomposition of the DPF.
| [
"['Mohamed Osama Ahmed' 'Pouyan T. Bibalan' 'Nando de Freitas'\n 'Simon Fauvel']",
"Mohamed Osama Ahmed, Pouyan T. Bibalan, Nando de Freitas and Simon\n Fauvel"
] |
math.ST cs.LG stat.ML stat.TH | 10.1214/12-AOS1025 | 1203.2507 | null | null | http://arxiv.org/abs/1203.2507v2 | 2012-12-12T10:11:08Z | 2012-03-12T14:50:55Z | Deviation optimal learning using greedy Q-aggregation | Given a finite family of functions, the goal of model selection aggregation
is to construct a procedure that mimics the function from this family that is
the closest to an unknown regression function. More precisely, we consider a
general regression model with fixed design and measure the distance between
functions by the mean squared error at the design points. While procedures
based on exponential weights are known to solve the problem of model selection
aggregation in expectation, they are, surprisingly, sub-optimal in deviation.
We propose a new formulation called Q-aggregation that addresses this
limitation; namely, its solution leads to sharp oracle inequalities that are
optimal in a minimax sense. Moreover, based on the new formulation, we design
greedy Q-aggregation procedures that produce sparse aggregation models
achieving the optimal rate. The convergence and performance of these greedy
procedures are illustrated and compared with other standard methods on
simulated examples.
| [
"Dong Dai, Philippe Rigollet, Tong Zhang",
"['Dong Dai' 'Philippe Rigollet' 'Tong Zhang']"
] |
cs.LG cs.CE cs.NI cs.SY stat.AP | 10.5121/ijasuc.2012.3105 | 1203.2511 | null | null | http://arxiv.org/abs/1203.2511v1 | 2012-03-09T18:08:34Z | 2012-03-09T18:08:34Z | A Simple Flood Forecasting Scheme Using Wireless Sensor Networks | This paper presents a forecasting model designed using WSNs (Wireless Sensor
Networks) to predict flood in rivers using simple and fast calculations to
provide real-time results and save the lives of people who may be affected by
the flood. Our prediction model uses multiple-variable robust linear regression,
which is easy to understand, simple and cost-effective to implement, and fast,
with low resource utilization, yet it provides real-time
predictions with reliable accuracy, features which are desirable in
any real-world algorithm. Our prediction model is independent of the number of
parameters, i.e. any number of parameters may be added or removed based on the
on-site requirements. When the water level rises, we represent it using a
polynomial whose nature is used to determine if the water level may exceed the
flood line in the near future. We compare our work with a contemporary
algorithm to demonstrate our improvements over it. Then we present our
simulation results for the predicted water level compared to the actual water
level.
| [
"['Victor Seal' 'Arnab Raha' 'Shovan Maity' 'Souvik Kr Mitra'\n 'Amitava Mukherjee' 'Mrinal Kanti Naskar']",
"Victor Seal, Arnab Raha, Shovan Maity, Souvik Kr Mitra, Amitava\n Mukherjee and Mrinal Kanti Naskar"
] |
cs.LG | null | 1203.2557 | null | null | http://arxiv.org/pdf/1203.2557v3 | 2012-06-08T23:50:17Z | 2012-03-12T17:17:34Z | On the Necessity of Irrelevant Variables | This work explores the effects of relevant and irrelevant boolean variables
on the accuracy of classifiers. The analysis uses the assumption that the
variables are conditionally independent given the class, and focuses on a
natural family of learning algorithms for such sources when the relevant
variables have a small advantage over random guessing. The main result is that
algorithms relying predominately on irrelevant variables have error
probabilities that quickly go to 0 in situations where algorithms that limit
the use of irrelevant variables have errors bounded below by a positive
constant. We also show that accurate learning is possible even when there are
so few examples that one cannot determine with high confidence whether or not
any individual variable is relevant.
| [
"['David P. Helmbold' 'Philip M. Long']",
"David P. Helmbold and Philip M. Long"
] |
stat.ML cs.LG | null | 1203.2570 | null | null | http://arxiv.org/pdf/1203.2570v1 | 2012-03-12T18:00:49Z | 2012-03-12T18:00:49Z | Differential Privacy for Functions and Functional Data | Differential privacy is a framework for privately releasing summaries of a
database. Previous work has focused mainly on methods for which the output is a
finite dimensional vector, or an element of some discrete set. We develop
methods for releasing functions while preserving differential privacy.
Specifically, we show that adding an appropriate Gaussian process to the
function of interest yields differential privacy. When the functions lie in the
same RKHS as the Gaussian process, then the correct noise level is established
by measuring the "sensitivity" of the function in the RKHS norm. As examples we
consider kernel density estimation, kernel support vector machines, and
functions in reproducing kernel Hilbert spaces.
| [
"Rob Hall, Alessandro Rinaldo, Larry Wasserman",
"['Rob Hall' 'Alessandro Rinaldo' 'Larry Wasserman']"
] |
stat.ME cs.LG cs.SI physics.soc-ph | null | 1203.2821 | null | null | http://arxiv.org/pdf/1203.2821v1 | 2012-03-13T14:18:56Z | 2012-03-13T14:18:56Z | Graphlet decomposition of a weighted network | We introduce the graphlet decomposition of a weighted network, which encodes
a notion of social information based on social structure. We develop a scalable
inference algorithm, which combines EM with Bron-Kerbosch in a novel fashion,
for estimating the parameters of the model underlying graphlets using one
network sample. We explore some theoretical properties of the graphlet
decomposition, including computational complexity, redundancy and expected
accuracy. We demonstrate graphlets on synthetic and real data. We analyze
messaging patterns on Facebook and criminal associations in the 19th century.
| [
"Hossein Azari Soufiani, Edoardo M Airoldi",
"['Hossein Azari Soufiani' 'Edoardo M Airoldi']"
] |
cs.LG cs.DB | null | 1203.2987 | null | null | http://arxiv.org/pdf/1203.2987v1 | 2012-03-14T02:23:22Z | 2012-03-14T02:23:22Z | Mining Education Data to Predict Student's Retention: A comparative
Study | The main objective of higher education is to provide quality education to
students. One way to achieve the highest level of quality in a higher education
system is to discover knowledge that helps predict the enrolment of
students in a course. This paper presents a data mining project to generate
predictive models for student retention management. Given new records of
incoming students, these predictive models can produce short accurate
prediction lists identifying students who are most likely to need support from
the student retention program. This paper examines the quality of the
predictive models generated by the machine learning algorithms. The results
show that some of the machine learning algorithms are able to establish
effective predictive models from the existing student retention data.
| [
"Surjeet Kumar Yadav, Brijesh Bharadwaj and Saurabh Pal",
"['Surjeet Kumar Yadav' 'Brijesh Bharadwaj' 'Saurabh Pal']"
] |
cs.LG cs.AI | null | 1203.2990 | null | null | http://arxiv.org/pdf/1203.2990v2 | 2012-11-29T20:02:48Z | 2012-03-14T02:38:35Z | Evolving Culture vs Local Minima | We propose a theory that relates difficulty of learning in deep architectures
to culture and language. It is articulated around the following hypotheses: (1)
learning in an individual human brain is hampered by the presence of effective
local minima; (2) this optimization difficulty is particularly important when
it comes to learning higher-level abstractions, i.e., concepts that cover a
vast and highly-nonlinear span of sensory configurations; (3) such high-level
abstractions are best represented in brains by the composition of many levels
of representation, i.e., by deep architectures; (4) a human brain can learn
such high-level abstractions if guided by the signals produced by other humans,
which act as hints or indirect supervision for these high-level abstractions;
and (5), language and the recombination and optimization of mental concepts
provide an efficient evolutionary recombination operator, and this gives rise
to rapid search in the space of communicable ideas that help humans build up
better high-level internal representations of their world. These hypotheses put
together imply that human culture and the evolution of ideas have been crucial
to counter an optimization difficulty: this optimization difficulty would
otherwise make it very difficult for human brains to capture high-level
knowledge of the world. The theory is grounded in experimental observations of
the difficulties of training deep artificial neural networks. Plausible
consequences of this theory for the efficiency of cultural evolutions are
sketched.
| [
"['Yoshua Bengio']",
"Yoshua Bengio"
] |
cs.AI cs.LG nlin.AO | 10.1007/978-3-642-30870-3_18 | 1203.3376 | null | null | http://arxiv.org/abs/1203.3376v1 | 2012-03-15T14:47:26Z | 2012-03-15T14:47:26Z | Learning, Social Intelligence and the Turing Test - why an
"out-of-the-box" Turing Machine will not pass the Turing Test | The Turing Test (TT) checks for human intelligence, rather than any putative
general intelligence. It involves repeated interaction requiring learning in
the form of adaption to the human conversation partner. It is a macro-level
post-hoc test in contrast to the definition of a Turing Machine (TM), which is
a prior micro-level definition. This raises the question of whether learning is
just another computational process, i.e. can be implemented as a TM. Here we
argue that learning or adaption is fundamentally different from computation,
though it does involve processes that can be seen as computations. To
illustrate this difference we compare (a) designing a TM and (b) learning a TM,
defining them for the purpose of the argument. We show that there is a
well-defined sequence of problems which are not effectively designable but are
learnable, in the form of the bounded halting problem. Some characteristics of
human intelligence are reviewed, including its interactive nature, learning
abilities, imitative tendencies, linguistic ability and context-dependency. A
story that explains some of these is the Social Intelligence Hypothesis. If
this is broadly correct, this points to the necessity of a considerable period
of acculturation (social learning in context) if an artificial intelligence is
to pass the TT. Whilst it is always possible to 'compile' the results of
learning into a TM, this would not be a designed TM and would not be able to
continually adapt (pass future TTs). We conclude three things, namely that: a
purely "designed" TM will never pass the TT; that there is no such thing as a
general intelligence, since it necessarily involves learning; and that
learning/adaption and computation should be clearly distinguished.
| [
"Bruce Edmonds and Carlos Gershenson",
"['Bruce Edmonds' 'Carlos Gershenson']"
] |
cs.LG stat.ML | null | 1203.3461 | null | null | http://arxiv.org/pdf/1203.3461v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Robust Metric Learning by Smooth Optimization | Most existing distance metric learning methods assume perfect side
information that is usually given in pairwise or triplet constraints. Instead,
in many real-world applications, the constraints are derived from side
information, such as users' implicit feedback and citations among articles. As
a result, these constraints are usually noisy and contain many mistakes. In
this work, we aim to learn a distance metric from noisy constraints by robust
optimization in a worst-case scenario, which we refer to as robust metric
learning. We formulate the learning task initially as a combinatorial
optimization problem, and show that it can be elegantly transformed to a convex
programming problem. We present an efficient learning algorithm based on smooth
optimization [7]. It has a worst-case convergence rate of
$O(1/\sqrt{\varepsilon})$ for smooth optimization problems, where $\varepsilon$
is the desired error of the approximate solution. Finally, our empirical study
with UCI data sets demonstrates the effectiveness of the proposed method in
comparison to state-of-the-art methods.
| [
"Kaizhu Huang, Rong Jin, Zenglin Xu, Cheng-Lin Liu",
"['Kaizhu Huang' 'Rong Jin' 'Zenglin Xu' 'Cheng-Lin Liu']"
] |
cs.LG stat.ML | null | 1203.3462 | null | null | http://arxiv.org/pdf/1203.3462v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Gaussian Process Topic Models | We introduce Gaussian Process Topic Models (GPTMs), a new family of topic
models which can leverage a kernel among documents while extracting correlated
topics. GPTMs can be considered a systematic generalization of the Correlated
Topic Models (CTMs) using ideas from Gaussian Process (GP) based embedding.
Since GPTMs work with both a topic covariance matrix and a document kernel
matrix, learning GPTMs involves a novel component: solving a suitable Sylvester
equation capturing both topic and document dependencies. The efficacy of GPTMs
is demonstrated with experiments evaluating the quality of both topic modeling
and embedding.
| [
"['Amrudin Agovic' 'Arindam Banerjee']",
"Amrudin Agovic, Arindam Banerjee"
] |
cs.IR cs.LG stat.ML | null | 1203.3463 | null | null | http://arxiv.org/pdf/1203.3463v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Timeline: A Dynamic Hierarchical Dirichlet Process Model for Recovering
Birth/Death and Evolution of Topics in Text Stream | Topic models have proven to be a useful tool for discovering latent
structures in document collections. However, most document collections often
come as temporal streams and thus several aspects of the latent structure such
as the number of topics, the topics' distribution and popularity are
time-evolving. Several models exist that model the evolution of some but not
all of the above aspects. In this paper we introduce infinite dynamic topic
models, iDTM, that can accommodate the evolution of all the aforementioned
aspects. Our model assumes that documents are organized into epochs, where the
documents within each epoch are exchangeable but the order between the
documents is maintained across epochs. iDTM allows for an unbounded number of
topics: topics can die or be born at any epoch, and the representation of each
topic can evolve according to a Markovian dynamics. We use iDTM to analyze the
birth and evolution of topics in the NIPS community and evaluate the efficacy
of our model on both simulated and real datasets with favorable outcomes.
| [
"Amr Ahmed, Eric P. Xing",
"['Amr Ahmed' 'Eric P. Xing']"
] |
cs.LG stat.ML | null | 1203.3468 | null | null | http://arxiv.org/pdf/1203.3468v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Bayesian Rose Trees | Hierarchical structure is ubiquitous in data across many domains. There are
many hierarchical clustering methods, frequently used by domain experts, which
strive to discover this structure. However, most of these methods limit
discoverable hierarchies to those with binary branching structure. This
limitation, while computationally convenient, is often undesirable. In this
paper we explore a Bayesian hierarchical clustering algorithm that can produce
trees with arbitrary branching structure at each node, known as rose trees. We
interpret these trees as mixtures over partitions of a data set, and use a
computationally efficient, greedy agglomerative algorithm to find the rose
trees which have high marginal likelihood given the data. Lastly, we perform
experiments which demonstrate that rose trees are better models of data than
the typical binary trees returned by other hierarchical clustering algorithms.
| [
"Charles Blundell, Yee Whye Teh, Katherine A. Heller",
"['Charles Blundell' 'Yee Whye Teh' 'Katherine A. Heller']"
] |
cs.LG cs.AI stat.ML | null | 1203.3471 | null | null | http://arxiv.org/pdf/1203.3471v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | An Online Learning-based Framework for Tracking | We study the tracking problem, namely, estimating the hidden state of an
object over time, from unreliable and noisy measurements. The standard
framework for the tracking problem is the generative framework, which is the
basis of solutions such as the Bayesian algorithm and its approximation, the
particle filters. However, these solutions can be very sensitive to model
mismatches. In this paper, motivated by online learning, we introduce a new
framework for tracking. We provide an efficient tracking algorithm for this
framework. We provide experimental results comparing our algorithm to the
Bayesian algorithm on simulated data. Our experiments show that when there are
slight model mismatches, our algorithm outperforms the Bayesian algorithm.
| [
"Kamalika Chaudhuri, Yoav Freund, Daniel Hsu",
"['Kamalika Chaudhuri' 'Yoav Freund' 'Daniel Hsu']"
] |
cs.LG stat.ML | null | 1203.3472 | null | null | http://arxiv.org/pdf/1203.3472v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Super-Samples from Kernel Herding | We extend the herding algorithm to continuous spaces by using the kernel
trick. The resulting "kernel herding" algorithm is an infinite memory
deterministic process that learns to approximate a PDF with a collection of
samples. We show that kernel herding decreases the error of expectations of
functions in the Hilbert space at a rate $O(1/T)$ which is much faster than the
usual $O(1/\sqrt{T})$ for iid random samples. We illustrate kernel herding by
approximating Bayesian predictive distributions.
| [
"['Yutian Chen' 'Max Welling' 'Alex Smola']",
"Yutian Chen, Max Welling, Alex Smola"
] |
cs.LG stat.ML | null | 1203.3475 | null | null | http://arxiv.org/pdf/1203.3475v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Inferring deterministic causal relations | We consider two variables that are related to each other by an invertible
function. While it has previously been shown that the dependence structure of
the noise can provide hints to determine which of the two variables is the
cause, we presently show that even in the deterministic (noise-free) case,
there are asymmetries that can be exploited for causal inference. Our method is
based on the idea that if the function and the probability density of the cause
are chosen independently, then the distribution of the effect will, in a
certain sense, depend on the function. We provide a theoretical analysis of
this method, showing that it also works in the low noise regime, and link it to
information geometry. We report strong empirical results on various real-world
data sets from different domains.
| [
"['Povilas Daniusis' 'Dominik Janzing' 'Joris Mooij' 'Jakob Zscheischler'\n 'Bastian Steudel' 'Kun Zhang' 'Bernhard Schoelkopf']",
"Povilas Daniusis, Dominik Janzing, Joris Mooij, Jakob Zscheischler,\n Bastian Steudel, Kun Zhang, Bernhard Schoelkopf"
] |
cs.LG stat.ML | null | 1203.3476 | null | null | http://arxiv.org/pdf/1203.3476v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Inference-less Density Estimation using Copula Bayesian Networks | We consider learning continuous probabilistic graphical models in the face of
missing data. For non-Gaussian models, learning the parameters and structure of
such models depends on our ability to perform efficient inference, and can be
prohibitive even for relatively modest domains. Recently, we introduced the
Copula Bayesian Network (CBN) density model - a flexible framework that
captures complex high-dimensional dependency structures while offering direct
control over the univariate marginals, leading to improved generalization. In
this work we show that the CBN model also offers significant computational
advantages when training data is partially observed. Concretely, we leverage
the specialized form of the model to derive a computationally amenable learning
objective that is a lower bound on the log-likelihood function. Importantly,
our energy-like bound circumvents the need for costly inference of an auxiliary
distribution, thus facilitating practical learning of high-dimensional
densities. We demonstrate the effectiveness of our approach for learning the
structure and parameters of a CBN model for two real-life continuous domains.
| [
"['Gal Elidan']",
"Gal Elidan"
] |
cs.LG cs.AI stat.ML | null | 1203.3481 | null | null | http://arxiv.org/pdf/1203.3481v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Real-Time Scheduling via Reinforcement Learning | Cyber-physical systems, such as mobile robots, must respond adaptively to
dynamic operating conditions. Effective operation of these systems requires
that sensing and actuation tasks are performed in a timely manner.
Additionally, execution of mission specific tasks such as imaging a room must
be balanced against the need to perform more general tasks such as obstacle
avoidance. This problem has been addressed by maintaining relative utilization
of shared resources among tasks near a user-specified target level. Producing
optimal scheduling strategies requires complete prior knowledge of task
behavior, which is unlikely to be available in practice. Instead, suitable
scheduling strategies must be learned online through interaction with the
system. We consider the sample complexity of reinforcement learning in this
domain, and demonstrate that while the problem state space is countably
infinite, we may leverage the problem's structure to guarantee efficient
learning.
| [
"['Robert Glaubius' 'Terry Tidwell' 'Christopher Gill' 'William D. Smart']",
"Robert Glaubius, Terry Tidwell, Christopher Gill, William D. Smart"
] |
cs.LG stat.ML | null | 1203.3483 | null | null | http://arxiv.org/pdf/1203.3483v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Regularized Maximum Likelihood for Intrinsic Dimension Estimation | We propose a new method for estimating the intrinsic dimension of a dataset
by applying the principle of regularized maximum likelihood to the distances
between close neighbors. We propose a regularization scheme which is motivated
by divergence minimization principles. We derive the estimator by a Poisson
process approximation, argue about its convergence properties and apply it to a
number of simulated and real datasets. We also show it has the best overall
performance compared with two other intrinsic dimension estimators.
| [
"Mithun Das Gupta, Thomas S. Huang",
"['Mithun Das Gupta' 'Thomas S. Huang']"
] |
cs.LG stat.ML | null | 1203.3485 | null | null | http://arxiv.org/pdf/1203.3485v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | The Hierarchical Dirichlet Process Hidden Semi-Markov Model | There is much interest in the Hierarchical Dirichlet Process Hidden Markov
Model (HDP-HMM) as a natural Bayesian nonparametric extension of the
traditional HMM. However, in many settings the HDP-HMM's strict Markovian
constraints are undesirable, particularly if we wish to learn or encode
non-geometric state durations. We can extend the HDP-HMM to capture such
structure by drawing upon explicit-duration semi-Markovianity, which has been
developed in the parametric setting to allow construction of highly
interpretable models that admit natural prior information on state durations.
In this paper we introduce the explicit-duration HDP-HSMM and develop posterior
sampling algorithms for efficient inference in both the direct-assignment and
weak-limit approximation settings. We demonstrate the utility of the model and
our inference methods on synthetic data as well as experiments on a speaker
diarization problem and an example of learning the patterns in Morse code.
| [
"['Matthew J. Johnson' 'Alan Willsky']",
"Matthew J. Johnson, Alan Willsky"
] |
cs.LG stat.ML | null | 1203.3486 | null | null | http://arxiv.org/pdf/1203.3486v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Combining Spatial and Telemetric Features for Learning Animal Movement
Models | We introduce a new graphical model for tracking radio-tagged animals and
learning their movement patterns. The model provides a principled way to
combine radio telemetry data with an arbitrary set of user-defined spatial
features. We describe an efficient stochastic gradient algorithm for fitting
model parameters to data and demonstrate its effectiveness via asymptotic
analysis and synthetic experiments. We also apply our model to real datasets,
and show that it outperforms the most popular radio telemetry software package
used in ecology. We conclude that integration of different data sources under a
single statistical framework, coupled with appropriate parameter and state
estimation procedures, produces both accurate location estimates and an
interpretable statistical model of animal movement.
| [
"Berk Kapicioglu, Robert E. Schapire, Martin Wikelski, Tamara Broderick",
"['Berk Kapicioglu' 'Robert E. Schapire' 'Martin Wikelski'\n 'Tamara Broderick']"
] |
cs.LG cs.AI stat.ML | null | 1203.3488 | null | null | http://arxiv.org/pdf/1203.3488v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Causal Conclusions that Flip Repeatedly and Their Justification | Over the past two decades, several consistent procedures have been designed
to infer causal conclusions from observational data. We prove that if the true
causal network might be an arbitrary, linear Gaussian network or a discrete
Bayes network, then every unambiguous causal conclusion produced by a
consistent method from non-experimental data is subject to reversal as the
sample size increases any finite number of times. That result, called the
causal flipping theorem, extends prior results to the effect that causal
discovery cannot be reliable on a given sample size. We argue that since
repeated flipping of causal conclusions is unavoidable in principle for
consistent methods, the best possible discovery methods are consistent methods
that retract their earlier conclusions no more than necessary. A series of
simulations of various methods across a wide range of sample sizes illustrates
concretely both the theorem and the principle of comparing methods in terms of
retractions.
| [
"['Kevin T. Kelly' 'Conor Mayo-Wilson']",
"Kevin T. Kelly, Conor Mayo-Wilson"
] |
cs.LG stat.ML | null | 1203.3489 | null | null | http://arxiv.org/pdf/1203.3489v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Bayesian exponential family projections for coupled data sources | Exponential family extensions of principal component analysis (EPCA) have
received a considerable amount of attention in recent years, demonstrating the
growing need for basic modeling tools that do not assume the squared loss or
Gaussian distribution. We extend the EPCA model toolbox by presenting the first
exponential family multi-view learning methods of the partial least squares and
canonical correlation analysis, based on a unified representation of EPCA as
matrix factorization of the natural parameters of the exponential family. The
models are based on a new family of priors that are generally usable for all
such factorizations. We also introduce new inference strategies, and
demonstrate how the methods outperform earlier ones when the Gaussianity
assumption does not hold.
| [
"Arto Klami, Seppo Virtanen, Samuel Kaski",
"['Arto Klami' 'Seppo Virtanen' 'Samuel Kaski']"
] |
cs.LG stat.ML | null | 1203.3491 | null | null | http://arxiv.org/pdf/1203.3491v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Robust LogitBoost and Adaptive Base Class (ABC) LogitBoost | Logitboost is an influential boosting algorithm for classification. In this
paper, we develop robust logitboost to provide an explicit formulation of
tree-split criterion for building weak learners (regression trees) for
logitboost. This formulation leads to a numerically stable implementation of
logitboost. We then propose abc-logitboost for multi-class classification, by
combining robust logitboost with the prior work of abc-boost. Previously,
abc-boost was implemented as abc-mart using the mart algorithm. Our extensive
experiments on multi-class classification compare four algorithms: mart,
abc-mart, (robust) logitboost, and abc-logitboost, and demonstrate the
superiority of abc-logitboost. Comparisons with other learning methods
including SVM and deep learning are also available through prior publications.
| [
"Ping Li",
"['Ping Li']"
] |
cs.LG stat.ML | null | 1203.3492 | null | null | http://arxiv.org/pdf/1203.3492v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Approximating Higher-Order Distances Using Random Projections | We provide a simple method and relevant theoretical analysis for efficiently
estimating higher-order $l_p$ distances. While the analysis mainly focuses on $l_4$,
our methodology extends naturally to $p = 6, 8, 10, \ldots$ (i.e., when $p$ is even).
Distance-based methods are popular in machine learning. In large-scale
applications, storing, computing, and retrieving the distances can be both
space and time prohibitive. Efficient algorithms exist for estimating $l_p$
distances if $0 < p \leq 2$. The task for $p > 2$ is known to be difficult. Our work
partially fills this gap.
| [
"['Ping Li' 'Michael W. Mahoney' 'Yiyuan She']",
"Ping Li, Michael W. Mahoney, Yiyuan She"
] |
cs.LG stat.ML | null | 1203.3494 | null | null | http://arxiv.org/pdf/1203.3494v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Negative Tree Reweighted Belief Propagation | We introduce a new class of lower bounds on the log partition function of a
Markov random field which makes use of a reversed Jensen's inequality. In
particular, our method approximates the intractable distribution using a linear
combination of spanning trees with negative weights. This technique is a
lower-bound counterpart to the tree-reweighted belief propagation algorithm,
which uses a convex combination of spanning trees with positive weights to
provide corresponding upper bounds. We develop algorithms to optimize and
tighten the lower bounds over the non-convex set of valid parameter values. Our
algorithm generalizes mean field approaches (including naive and structured
mean field approximations), which it includes as a limiting case.
| [
"Qiang Liu, Alexander T. Ihler",
"['Qiang Liu' 'Alexander T. Ihler']"
] |
cs.LG stat.ML | null | 1203.3495 | null | null | http://arxiv.org/pdf/1203.3495v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Parameter-Free Spectral Kernel Learning | Due to the growing ubiquity of unlabeled data, learning with unlabeled data
is attracting increasing attention in machine learning. In this paper, we
propose a novel semi-supervised kernel learning method which can seamlessly
combine manifold structure of unlabeled data and Regularized Least-Squares
(RLS) to learn a new kernel. Interestingly, the new kernel matrix can be
obtained analytically using the spectral decomposition of the graph Laplacian
matrix. Hence, the proposed algorithm does not require any numerical
optimization solvers. Moreover, by maximizing kernel target alignment on
labeled data, we can also learn model parameters automatically with a
closed-form solution. For a given graph Laplacian matrix, our proposed method
does not need to tune any model parameter including the tradeoff parameter in
RLS and the balance parameter for unlabeled data. Extensive experiments on ten
benchmark datasets show that our proposed two-stage parameter-free spectral
kernel learning algorithm can obtain comparable performance with fine-tuned
manifold regularization methods in transductive setting, and outperform
multiple kernel learning in supervised setting.
| [
"['Qi Mao' 'Ivor W. Tsang']",
"Qi Mao, Ivor W. Tsang"
] |
cs.LG stat.ML | null | 1203.3496 | null | null | http://arxiv.org/pdf/1203.3496v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Dirichlet Process Mixtures of Generalized Mallows Models | We present a Dirichlet process mixture model over discrete incomplete
rankings and study two Gibbs sampling inference techniques for estimating
posterior clusterings. The first approach uses a slice sampling subcomponent
for estimating cluster parameters. The second approach marginalizes out several
cluster parameters by taking advantage of approximations to the conditional
posteriors. We empirically demonstrate (1) the effectiveness of this
approximation for improving convergence, (2) the benefits of the Dirichlet
process model over alternative clustering techniques for ranked data, and (3)
the applicability of the approach to exploring large real-world ranking
datasets.
| [
"['Marina Meila' 'Harr Chen']",
"Marina Meila, Harr Chen"
] |
cs.LG stat.ML | null | 1203.3497 | null | null | http://arxiv.org/pdf/1203.3497v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Parametric Return Density Estimation for Reinforcement Learning | Most conventional Reinforcement Learning (RL) algorithms aim to optimize
decision-making rules in terms of the expected returns. However, especially for
risk management purposes, other risk-sensitive criteria such as the
value-at-risk or the expected shortfall are sometimes preferred in real
applications. Here, we describe a parametric method for estimating the density
of the returns, which allows us to handle various criteria in a unified manner. We
first extend the Bellman equation for the conditional expected return to cover
a conditional probability density of the returns. Then we derive an extension
of the TD-learning algorithm for estimating the return densities in an unknown
environment. As test instances, several parametric density estimation
algorithms are presented for the Gaussian, Laplace, and skewed Laplace
distributions. We show that these algorithms lead to risk-sensitive as well as
robust RL paradigms through numerical experiments.
| [
"Tetsuro Morimura, Masashi Sugiyama, Hisashi Kashima, Hirotaka Hachiya,\n Toshiyuki Tanaka",
"['Tetsuro Morimura' 'Masashi Sugiyama' 'Hisashi Kashima'\n 'Hirotaka Hachiya' 'Toshiyuki Tanaka']"
] |
cs.LG cs.DS stat.ML | null | 1203.3501 | null | null | http://arxiv.org/pdf/1203.3501v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Algorithms and Complexity Results for Exact Bayesian Structure Learning | Bayesian structure learning is the NP-hard problem of discovering a Bayesian
network that optimally represents a given set of training data. In this paper
we study the computational worst-case complexity of exact Bayesian structure
learning under graph theoretic restrictions on the super-structure. The
super-structure (a concept introduced by Perrier, Imoto, and Miyano, JMLR 2008)
is an undirected graph that contains as subgraphs the skeletons of solution
networks. Our results apply to several variants of score-based Bayesian
structure learning where the score of a network decomposes into local scores of
its nodes. Results: We show that exact Bayesian structure learning can be
carried out in non-uniform polynomial time if the super-structure has bounded
treewidth and in linear time if in addition the super-structure has bounded
maximum degree. We complement this with a number of hardness results. We show
that both restrictions (treewidth and degree) are essential and cannot be
dropped without losing uniform polynomial time tractability (subject to a
complexity-theoretic assumption). Furthermore, we show that the restrictions
remain essential if we do not search for a globally optimal network but we aim
to improve a given network by means of at most k arc additions, arc deletions,
or arc reversals (k-neighborhood local search).
| [
"['Sebastian Ordyniak' 'Stefan Szeider']",
"Sebastian Ordyniak, Stefan Szeider"
] |
cs.LG stat.ML | null | 1203.3506 | null | null | http://arxiv.org/pdf/1203.3506v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | A Family of Computationally Efficient and Simple Estimators for
Unnormalized Statistical Models | We introduce a new family of estimators for unnormalized statistical models.
Our family of estimators is parameterized by two nonlinear functions and uses a
single sample from an auxiliary distribution, generalizing Maximum Likelihood
Monte Carlo estimation of Geyer and Thompson (1992). The family is such that we
can estimate the partition function like any other parameter in the model. The
estimation is done by optimizing an algebraically simple, well-defined
objective function, which allows for the use of dedicated optimization methods.
We establish consistency of the estimator family and give an expression for the
asymptotic covariance matrix, which enables us to further analyze the influence
of the nonlinearities and the auxiliary density on estimation performance. Some
estimators in our family are particularly stable for a wide range of auxiliary
densities. Interestingly, a specific choice of the nonlinearity establishes a
connection between density estimation and classification by nonlinear logistic
regression. Finally, the optimal amount of auxiliary samples relative to the
given amount of the data is considered from the perspective of computational
efficiency.
| [
"['Miika Pihlaja' 'Michael Gutmann' 'Aapo Hyvarinen']",
"Miika Pihlaja, Michael Gutmann, Aapo Hyvarinen"
] |
cs.LG stat.ML | null | 1203.3507 | null | null | http://arxiv.org/pdf/1203.3507v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Sparse-posterior Gaussian Processes for general likelihoods | Gaussian processes (GPs) provide a probabilistic nonparametric representation
of functions in regression, classification, and other problems. Unfortunately,
exact learning with GPs is intractable for large datasets. A variety of
approximate GP methods have been proposed that essentially map the large
dataset into a small set of basis points. Among them, two state-of-the-art
methods are sparse pseudo-input Gaussian process (SPGP) (Snelson and
Ghahramani, 2006) and variable-sigma GP (VSGP) (Walder et al., 2008), which
generalizes SPGP and allows each basis point to have its own length scale.
However, VSGP was only derived for regression. In this paper, we propose a new
sparse GP framework that uses expectation propagation to directly approximate
general GP likelihoods using a sparse and smooth basis. It includes both SPGP
and VSGP for regression as special cases. Plus as an EP algorithm, it inherits
the ability to process data online. As a particular choice of approximating
family, we blur each basis point with a Gaussian distribution that has a full
covariance matrix representing the data distribution around that basis point;
as a result, we can summarize local data manifold information with a small set
of basis points. Our experiments demonstrate that this framework outperforms
previous GP classification methods on benchmark datasets in terms of minimizing
divergence to the non-sparse GP solution as well as lower misclassification
rate.
| [
"Yuan (Alan) Qi, Ahmed H. Abdel-Gawad, Thomas P. Minka",
"['Yuan' 'Qi' 'Ahmed H. Abdel-Gawad' 'Thomas P. Minka']"
] |
cs.AI cs.LG stat.ML | null | 1203.3510 | null | null | http://arxiv.org/pdf/1203.3510v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Irregular-Time Bayesian Networks | In many fields observations are performed irregularly along time, due to
either measurement limitations or lack of a constant immanent rate. While
discrete-time Markov models (as Dynamic Bayesian Networks) introduce either
inefficient computation or an information loss to reasoning about such
processes, continuous-time Markov models assume either a discrete state space
(as Continuous-Time Bayesian Networks), or a flat continuous state space (as
stochastic differential equations). To address these problems, we present a new
modeling class called Irregular-Time Bayesian Networks (ITBNs), generalizing
Dynamic Bayesian Networks, allowing substantially more compact representations,
and increasing the expressivity of the temporal dynamics. In addition, a
globally optimal solution is guaranteed when learning temporal systems,
provided that they are fully observed at the same irregularly spaced
time-points, and a semiparametric subclass of ITBNs is introduced to allow
further adaptation to the irregular nature of the available data.
| [
"Michael Ramati, Yuval Shahar",
"['Michael Ramati' 'Yuval Shahar']"
] |
cs.LG cs.CL stat.ML | null | 1203.3511 | null | null | http://arxiv.org/pdf/1203.3511v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Inference by Minimizing Size, Divergence, or their Sum | We speed up marginal inference by ignoring factors that do not significantly
contribute to overall accuracy. In order to pick a suitable subset of factors
to ignore, we propose three schemes: minimizing the number of model factors
under a bound on the KL divergence between pruned and full models; minimizing
the KL divergence under a bound on factor count; and minimizing the weighted
sum of KL divergence and factor count. All three problems are solved using an
approximation of the KL divergence that can be calculated in terms of marginals
computed on a simple seed graph. Applied to synthetic image denoising and to
three different types of NLP parsing models, this technique performs marginal
inference up to 11 times faster than loopy BP, with graph sizes reduced by up to
98%, at comparable error in marginals and parsing accuracy. We also show that
minimizing the weighted sum of divergence and size is substantially faster than
minimizing either of the other objectives based on the approximation to
divergence presented here.
| [
"Sebastian Riedel, David A. Smith, Andrew McCallum",
"['Sebastian Riedel' 'David A. Smith' 'Andrew McCallum']"
] |
cs.LG cs.AI stat.ML | null | 1203.3516 | null | null | http://arxiv.org/pdf/1203.3516v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Modeling Events with Cascades of Poisson Processes | We present a probabilistic model of events in continuous time in which each
event triggers a Poisson process of successor events. The ensemble of observed
events is thereby modeled as a superposition of Poisson processes. Efficient
inference is feasible under this model with an EM algorithm. Moreover, the EM
algorithm can be implemented as a distributed algorithm, permitting the model
to be applied to very large datasets. We apply these techniques to the modeling
of Twitter messages and the revision history of Wikipedia.
| [
"['Aleksandr Simma' 'Michael I. Jordan']",
"Aleksandr Simma, Michael I. Jordan"
] |
cs.LG stat.ML | null | 1203.3517 | null | null | http://arxiv.org/pdf/1203.3517v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | A Bayesian Matrix Factorization Model for Relational Data | Relational learning can be used to augment one data source with other
correlated sources of information, to improve predictive accuracy. We frame a
large class of relational learning problems as matrix factorization problems,
and propose a hierarchical Bayesian model. Training our Bayesian model using
random-walk Metropolis-Hastings is impractically slow, and so we develop a
block Metropolis-Hastings sampler which uses the gradient and Hessian of the
likelihood to dynamically tune the proposal. We demonstrate that a predictive
model of brain response to stimuli can be improved by augmenting it with side
information about the stimuli.
| [
"['Ajit P. Singh' 'Geoffrey Gordon']",
"Ajit P. Singh, Geoffrey Gordon"
] |
cs.LG cs.AI stat.ML | null | 1203.3518 | null | null | http://arxiv.org/pdf/1203.3518v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Variance-Based Rewards for Approximate Bayesian Reinforcement Learning | The explore-exploit dilemma is one of the central challenges in Reinforcement
Learning (RL). Bayesian RL solves the dilemma by providing the agent with
information in the form of a prior distribution over environments; however,
full Bayesian planning is intractable. Planning with the mean MDP is a common
myopic approximation of Bayesian planning. We derive a novel reward bonus that
is a function of the posterior distribution over environments, which, when
added to the reward in planning with the mean MDP, results in an agent which
explores efficiently and effectively. Although our method is similar to
existing methods when given an uninformative or unstructured prior, unlike
existing methods, our method can exploit structured priors. We prove that our
method results in a polynomial sample complexity and empirically demonstrate
its advantages in a structured exploration task.
| [
"Jonathan Sorg, Satinder Singh, Richard L. Lewis",
"['Jonathan Sorg' 'Satinder Singh' 'Richard L. Lewis']"
] |
cs.LG cs.AI stat.ML | null | 1203.3519 | null | null | http://arxiv.org/pdf/1203.3519v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Bayesian Inference in Monte-Carlo Tree Search | Monte-Carlo Tree Search (MCTS) methods are drawing great interest after
yielding breakthrough results in computer Go. This paper proposes a Bayesian
approach to MCTS that is inspired by distribution-free approaches such as UCT
[13], yet significantly differs in important respects. The Bayesian framework
allows potentially much more accurate (Bayes-optimal) estimation of node values
and node uncertainties from a limited number of simulation trials. We further
propose propagating inference in the tree via fast analytic Gaussian
approximation methods: this can make the overhead of Bayesian inference
manageable in domains such as Go, while preserving high accuracy of
expected-value estimates. We find substantial empirical outperformance of UCT
in an idealized bandit-tree test environment, where we can obtain valuable
insights by comparing with known ground truth. Additionally we rigorously prove
on-policy and off-policy convergence of the proposed methods.
| [
"Gerald Tesauro, V T Rajan, Richard Segal",
"['Gerald Tesauro' 'V T Rajan' 'Richard Segal']"
] |
cs.LG cs.AI stat.ML | null | 1203.3520 | null | null | http://arxiv.org/pdf/1203.3520v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Bayesian Model Averaging Using the k-best Bayesian Network Structures | We study the problem of learning Bayesian network structures from data. We
develop an algorithm for finding the k-best Bayesian network structures. We
propose to compute the posterior probabilities of hypotheses of interest by
Bayesian model averaging over the k-best Bayesian networks. We present
empirical results on structural discovery over several real and synthetic data
sets and show that the method outperforms the model selection method and the
state-of-the-art MCMC methods.
| [
"Jin Tian, Ru He, Lavanya Ram",
"['Jin Tian' 'Ru He' 'Lavanya Ram']"
] |
cs.LG stat.ML | null | 1203.3521 | null | null | http://arxiv.org/pdf/1203.3521v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Learning networks determined by the ratio of prior and data | Recent reports have described that the equivalent sample size (ESS) in a
Dirichlet prior plays an important role in learning Bayesian networks. This
paper provides an asymptotic analysis of the marginal likelihood score for a
Bayesian network. Results show that the ratio of the ESS and sample size
determines the penalty of adding arcs in learning Bayesian networks. The number
of arcs increases monotonically as the ESS increases; the number of arcs
monotonically decreases as the ESS decreases. Furthermore, the marginal
likelihood score provides a unified expression of various score metrics by
changing prior knowledge.
| [
"['Maomi Ueno']",
"Maomi Ueno"
] |
cs.LG stat.ML | null | 1203.3522 | null | null | http://arxiv.org/pdf/1203.3522v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Online Semi-Supervised Learning on Quantized Graphs | In this paper, we tackle the problem of online semi-supervised learning
(SSL). When data arrive in a stream, the dual problems of computation and data
storage arise for any SSL method. We propose a fast approximate online SSL
algorithm that solves for the harmonic solution on an approximate graph. We
show, both empirically and theoretically, that good behavior can be achieved by
collapsing nearby points into a set of local "representative points" that
minimize distortion. Moreover, we regularize the harmonic solution to achieve
better stability properties. We apply our algorithm to face recognition and
optical character recognition applications to show that we can take advantage
of the manifold structure to outperform the previous methods. Unlike previous
heuristic approaches, we show that our method yields provable performance
bounds.
| [
"Michal Valko, Branislav Kveton, Ling Huang, Daniel Ting",
"['Michal Valko' 'Branislav Kveton' 'Ling Huang' 'Daniel Ting']"
] |
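The abstract above builds on the harmonic solution of graph-based semi-supervised learning. The following minimal sketch computes that solution on a small toy graph; the ridge term `reg` is only a stand-in for the paper's stability regularization, and the adjacency matrix and labels are illustrative assumptions.

```python
import numpy as np

def harmonic_solution(W, labeled_idx, y_labeled, reg=1e-3):
    """Harmonic (label-propagation) solution on a weighted graph.

    W           : (n, n) symmetric nonnegative adjacency matrix
    labeled_idx : indices of labeled nodes
    y_labeled   : labels (e.g., 0/1 scores) of those nodes
    reg         : small ridge term stabilizing the linear solve
    """
    n = W.shape[0]
    D = np.diag(W.sum(axis=1))
    L = D - W                                    # graph Laplacian
    u = np.setdiff1d(np.arange(n), labeled_idx)  # unlabeled nodes
    L_uu = L[np.ix_(u, u)] + reg * np.eye(len(u))
    W_ul = W[np.ix_(u, labeled_idx)]
    f = np.zeros(n)
    f[labeled_idx] = y_labeled
    # Harmonic condition at unlabeled nodes: (D_uu - W_uu) f_u = W_ul f_l
    f[u] = np.linalg.solve(L_uu, W_ul @ np.asarray(y_labeled, dtype=float))
    return f

# Tiny chain graph: node 0 labeled 1, node 3 labeled 0, nodes 1-2 unlabeled.
W = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
print(harmonic_solution(W, [0, 3], [1.0, 0.0]))
```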
stat.ML cs.LG | null | 1203.3524 | null | null | http://arxiv.org/pdf/1203.3524v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Speeding up the binary Gaussian process classification | Gaussian processes (GP) are attractive building blocks for many probabilistic
models. Their drawbacks, however, are the rapidly increasing inference time and
memory requirement alongside increasing data. The problem can be alleviated
with compactly supported (CS) covariance functions, which produce sparse
covariance matrices that are fast in computations and cheap to store. CS
functions have previously been used in GP regression, but here the focus is on a
classification problem. This brings new challenges since the posterior
inference has to be done approximately. We utilize the expectation propagation
algorithm and show how its standard implementation has to be modified to obtain
computational benefits from the sparse covariance matrices. We study four CS
covariance functions and show that they may lead to a substantial speed-up in the
inference time compared to globally supported functions.
| [
"['Jarno Vanhatalo' 'Aki Vehtari']",
"Jarno Vanhatalo, Aki Vehtari"
] |
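To make the idea of compactly supported covariance functions concrete, here is a small sketch using a Wendland-type kernel, which is exactly zero beyond a cutoff distance and therefore yields a sparse covariance matrix. It is not necessarily one of the four CS functions studied in the paper, and the inputs are synthetic.

```python
import numpy as np
from scipy import sparse
from scipy.spatial.distance import cdist

def wendland_cov(X1, X2, sigma2=1.0, length_scale=1.0):
    """Compactly supported Wendland covariance (positive definite for d <= 3):
    k(r) = sigma2 * (1 - r)_+^4 * (4 r + 1), exactly zero for r > 1."""
    r = cdist(X1, X2) / length_scale
    k = sigma2 * np.maximum(1.0 - r, 0.0) ** 4 * (4.0 * r + 1.0)
    return sparse.csr_matrix(k)   # store only the nonzero entries

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
K = wendland_cov(X, X, length_scale=1.0)
print(f"nonzero fraction of the covariance matrix: {K.nnz / (200 * 200):.3f}")
```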
cs.LG cs.AI stat.ML | null | 1203.3526 | null | null | http://arxiv.org/pdf/1203.3526v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Primal View on Belief Propagation | It is known that fixed points of loopy belief propagation (BP) correspond to
stationary points of the Bethe variational problem, where we minimize the Bethe
free energy subject to normalization and marginalization constraints.
Unfortunately, this does not entirely explain BP because BP is a dual rather
than primal algorithm to solve the Bethe variational problem -- beliefs are
infeasible before convergence. Thus, we have no better understanding of BP than
as an algorithm that seeks a common zero of a system of non-linear functions
that are not explicitly related to each other. In this theoretical paper, we show that
these functions are in fact explicitly related -- they are the partial
derivatives of a single function of reparameterizations. That means that BP seeks
a stationary point of a single function, without any constraints. This
function has a very natural form: it is a linear combination of local
log-partition functions, exactly as the Bethe entropy is the same linear
combination of local entropies.
| [
"Tomas Werner",
"['Tomas Werner']"
] |
cs.LG cs.AI stat.ML | null | 1203.3529 | null | null | http://arxiv.org/pdf/1203.3529v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Modeling Multiple Annotator Expertise in the Semi-Supervised Learning
Scenario | Learning algorithms normally assume that there is at most one annotation or
label per data point. However, in some scenarios, such as medical diagnosis and
on-line collaboration, multiple annotations may be available. In either case,
obtaining labels for data points can be expensive and time-consuming (in some
circumstances ground-truth may not exist). Semi-supervised learning approaches
have shown that utilizing the unlabeled data is often beneficial in these
cases. This paper presents a probabilistic semi-supervised model and algorithm
that allows for learning from both unlabeled and labeled data in the presence
of multiple annotators. We assume that it is known what annotator labeled which
data points. The proposed approach produces annotator models that allow us to
provide (1) estimates of the true label and (2) annotator variable expertise
for both labeled and unlabeled data. We provide numerical comparisons under
various scenarios and with respect to standard semi-supervised learning.
Experiments showed that the presented approach provides clear advantages over
multi-annotator methods that do not use the unlabeled data and over methods
that do not use multi-labeler information.
| [
"['Yan Yan' 'Romer Rosales' 'Glenn Fung' 'Jennifer Dy']",
"Yan Yan, Romer Rosales, Glenn Fung, Jennifer Dy"
] |
cs.LG cs.CV stat.ML | null | 1203.3530 | null | null | http://arxiv.org/pdf/1203.3530v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Hybrid Generative/Discriminative Learning for Automatic Image Annotation | Automatic image annotation (AIA) raises tremendous challenges to machine
learning as it requires modeling of data that are both ambiguous in input and
output, e.g., images containing multiple objects and labeled with multiple
semantic tags. Even more challenging is that the number of candidate tags is
usually huge (as large as the vocabulary size) yet each image is only related
to a few of them. This paper presents a hybrid generative-discriminative
classifier to simultaneously address the extreme data-ambiguity and
overfitting-vulnerability issues in tasks such as AIA. Particularly: (1) an
Exponential-Multinomial Mixture (EMM) model is established to capture both the
input and output ambiguity and at the same time to encourage prediction
sparsity; and (2) the prediction ability of the EMM model is explicitly
maximized through discriminative learning that integrates variational inference
of graphical models and the pairwise formulation of ordinal regression.
Experiments show that our approach achieves both superior annotation
performance and better tag scalability.
| [
"['Shuang Hong Yang' 'Jiang Bian' 'Hongyuan Zha']",
"Shuang Hong Yang, Jiang Bian, Hongyuan Zha"
] |
cs.LG stat.ML | null | 1203.3532 | null | null | http://arxiv.org/pdf/1203.3532v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Learning Structural Changes of Gaussian Graphical Models in Controlled
Experiments | Graphical models are widely used in scientific and engineering research to
represent conditional independence structures between random variables. In many
controlled experiments, environmental changes or external stimuli can often
alter the conditional dependence between the random variables, and potentially
produce significant structural changes in the corresponding graphical models.
Therefore, it is of great importance to be able to detect such structural
changes from data, so as to gain novel insights into where and how the
structural changes take place and help the system adapt to the new environment.
Here we report an effective learning strategy to extract structural changes in
Gaussian graphical models using l1-regularization-based convex optimization. We
discuss the properties of the problem formulation and introduce an efficient
implementation by the block coordinate descent algorithm. We demonstrate the
principle of the approach on a numerical simulation experiment, and we then
apply the algorithm to the modeling of gene regulatory networks under different
conditions and obtain promising yet biologically plausible results.
| [
"Bai Zhang, Yue Wang",
"['Bai Zhang' 'Yue Wang']"
] |
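The abstract above formulates structural-change detection as a single l1-regularized convex program solved by block coordinate descent. The sketch below is only a simple baseline in the same spirit: it fits an l1-penalized Gaussian graphical model to each condition separately with scikit-learn's `GraphicalLasso` and thresholds the difference of the precision matrices; the synthetic data, `alpha`, and `tol` are illustrative assumptions, not the paper's joint formulation.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def edge_changes(X_a, X_b, alpha=0.1, tol=0.05):
    """Rough structural-change detection between two conditions: fit an
    l1-penalized GGM to each condition separately and threshold the
    absolute difference of the estimated precision matrices."""
    prec_a = GraphicalLasso(alpha=alpha, max_iter=200).fit(X_a).precision_
    prec_b = GraphicalLasso(alpha=alpha, max_iter=200).fit(X_b).precision_
    diff = np.abs(prec_a - prec_b)
    np.fill_diagonal(diff, 0.0)
    return np.argwhere(diff > tol)   # pairs of variables whose edge changed

rng = np.random.default_rng(1)
X_a = rng.normal(size=(500, 5))
X_b = X_a.copy()
X_b[:, 1] = 0.8 * X_b[:, 0] + 0.2 * rng.normal(size=500)  # new 0-1 dependence
print(edge_changes(X_a, X_b))
```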
cs.LG stat.ML | null | 1203.3533 | null | null | http://arxiv.org/pdf/1203.3533v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Source Separation and Higher-Order Causal Analysis of MEG and EEG | Separation of the sources and analysis of their connectivity have been an
important topic in EEG/MEG analysis. To solve this problem in an automatic
manner, we propose a two-layer model, in which the sources are conditionally
uncorrelated from each other, but not independent; the dependence is caused by
the causality in their time-varying variances (envelopes). The model is
identified in two steps. We first propose a new source separation technique
which takes into account the autocorrelations (which may be time-varying) and
time-varying variances of the sources. The causality in the envelopes is then
discovered by exploiting a special kind of multivariate GARCH (generalized
autoregressive conditional heteroscedasticity) model. The resulting causal
diagram gives the effective connectivity between the separated sources; in our
experimental results on MEG data, sources with similar functions are grouped
together, with negative influences between groups, and the groups are connected
via some interesting sources.
| [
"['Kun Zhang' 'Aapo Hyvarinen']",
"Kun Zhang, Aapo Hyvarinen"
] |
cs.LG stat.ML | null | 1203.3534 | null | null | http://arxiv.org/pdf/1203.3534v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Invariant Gaussian Process Latent Variable Models and Application in
Causal Discovery | In nonlinear latent variable models or dynamic models, if we consider the
latent variables as confounders (common causes), the noise dependencies imply
further relations between the observed variables. Such models are then closely
related to causal discovery in the presence of nonlinear confounders, which is
a challenging problem. However, generally in such models the observation noise
is assumed to be independent across data dimensions, and consequently the noise
dependencies are ignored. In this paper we focus on the Gaussian process latent
variable model (GPLVM), from which we develop an extended model called
invariant GPLVM (IGPLVM), which can adapt to arbitrary noise covariances. With
the Gaussian process prior put on a particular transformation of the latent
nonlinear functions, instead of the original ones, the algorithm for IGPLVM
involves almost the same computational loads as that for the original GPLVM.
Besides its potential application in causal discovery, IGPLVM has the advantage
that its estimated latent nonlinear manifold is invariant to any nonsingular
linear transformation of the data. Experimental results on both synthetic and
real-world data show its encouraging performance in nonlinear manifold learning
and causal discovery.
| [
"['Kun Zhang' 'Bernhard Schoelkopf' 'Dominik Janzing']",
"Kun Zhang, Bernhard Schoelkopf, Dominik Janzing"
] |
cs.LG cs.AI stat.ML | null | 1203.3536 | null | null | http://arxiv.org/pdf/1203.3536v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | A Convex Formulation for Learning Task Relationships in Multi-Task
Learning | Multi-task learning is a learning paradigm which seeks to improve the
generalization performance of a learning task with the help of some other
related tasks. In this paper, we propose a regularization formulation for
learning the relationships between tasks in multi-task learning. This
formulation can be viewed as a novel generalization of the regularization
framework for single-task learning. Besides modeling positive task correlation,
our method, called multi-task relationship learning (MTRL), can also describe
negative task correlation and identify outlier tasks based on the same
underlying principle. Under this regularization framework, the objective
function of MTRL is convex. For efficiency, we use an alternating method to
learn the optimal model parameters for each task as well as the relationships
between tasks. We study MTRL in the symmetric multi-task learning setting and
then generalize it to the asymmetric setting as well. We also study the
relationships between MTRL and some existing multi-task learning methods.
Experiments conducted on a toy problem as well as several benchmark data sets
demonstrate the effectiveness of MTRL.
| [
"Yu Zhang, Dit-Yan Yeung",
"['Yu Zhang' 'Dit-Yan Yeung']"
] |
cs.LG cs.CV stat.ML | null | 1203.3537 | null | null | http://arxiv.org/pdf/1203.3537v1 | 2012-03-15T11:17:56Z | 2012-03-15T11:17:56Z | Automatic Tuning of Interactive Perception Applications | Interactive applications incorporating high-data rate sensing and computer
vision are becoming possible due to novel runtime systems and the use of
parallel computation resources. To allow interactive use, such applications
require careful tuning of multiple application parameters to meet required
fidelity and latency bounds. This is a nontrivial task, often requiring expert
knowledge, which becomes intractable as resources and application load
characteristics change. This paper describes a method for automatic performance
tuning that learns application characteristics and effects of tunable
parameters online, and constructs models that are used to maximize fidelity for
a given latency constraint. The paper shows that accurate latency models can be
learned online, knowledge of application structure can be used to reduce the
complexity of the learning task, and operating points can be found that achieve
90% of the optimal fidelity by exploring the parameter space only 3% of the
time.
| [
"['Qian Zhu' 'Branislav Kveton' 'Lily Mummert' 'Padmanabhan Pillai']",
"Qian Zhu, Branislav Kveton, Lily Mummert, Padmanabhan Pillai"
] |
stat.ML cs.AI cs.LG | 10.1007/978-3-642-35289-8_33 | 1203.3783 | null | null | http://arxiv.org/abs/1203.3783v1 | 2012-03-16T19:01:10Z | 2012-03-16T19:01:10Z | Learning Feature Hierarchies with Centered Deep Boltzmann Machines | Deep Boltzmann machines are in principle powerful models for extracting the
hierarchical structure of data. Unfortunately, attempts to train layers jointly
(without greedy layer-wise pretraining) have been largely unsuccessful. We
propose a modification of the learning algorithm that initially recenters the
output of the activation functions to zero. This modification leads to a better
conditioned Hessian and thus makes learning easier. We test the algorithm on
real data and demonstrate that our suggestion, the centered deep Boltzmann
machine, learns a hierarchy of increasingly abstract representations and a
better generative model of data.
| [
"Gr\\'egoire Montavon and Klaus-Robert M\\\"uller",
"['Grégoire Montavon' 'Klaus-Robert Müller']"
] |
cs.LG | null | 1203.3832 | null | null | http://arxiv.org/pdf/1203.3832v1 | 2012-03-17T02:06:41Z | 2012-03-17T02:06:41Z | Data Mining: A Prediction for Performance Improvement of Engineering
Students using Classification | Nowadays the amount of data stored in educational databases is increasing
rapidly. These databases contain hidden information that can be used to improve
students' performance. Educational data mining is used to study the data
available in the educational field and bring out the hidden knowledge from it.
Classification methods such as decision trees and Bayesian networks can be
applied to educational data to predict students' performance in examinations.
This prediction helps to identify weak students so that they can be helped to
score better marks. The C4.5, ID3 and CART decision tree algorithms are applied
to engineering students' data to predict their performance in the final exam.
The outcome of the decision trees predicted the number of students who are
likely to pass, fail or be promoted to the next year. The results provide steps
to improve the performance of the students who were predicted to fail or be
promoted. After the declaration of the final examination results, the marks
obtained by the students were fed into the system and the results were analyzed
for the next session. The comparative analysis of the results shows that the
prediction helped the weaker students to improve and brought about a betterment
in the results.
| [
"Surjeet Kumar Yadav and Saurabh Pal",
"['Surjeet Kumar Yadav' 'Saurabh Pal']"
] |
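As a toy illustration of the classification setup described above, the sketch below trains a CART-style decision tree (with entropy splits, loosely mimicking ID3/C4.5 splitting) to predict pass/fail; the feature names, labeling rule, and data are entirely hypothetical.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical student features: internal marks, attendance (%), previous grade.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(0, 100, 300),   # internal assessment marks
    rng.integers(40, 100, 300),  # attendance percentage
    rng.integers(0, 10, 300),    # previous semester grade point
])
# Hypothetical rule generating pass (1) / fail (0) labels for the demo.
y = ((X[:, 0] > 40) & (X[:, 1] > 60)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(criterion="entropy", max_depth=4)  # ID3/C4.5-like splits
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```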
stat.ML cs.AI cs.LG math.ST stat.TH | 10.1214/12-AOS1070 | 1203.3887 | null | null | http://arxiv.org/abs/1203.3887v4 | 2013-04-22T13:43:39Z | 2012-03-17T19:09:41Z | Learning loopy graphical models with latent variables: Efficient methods
and guarantees | The problem of structure estimation in graphical models with latent variables
is considered. We characterize conditions for tractable graph estimation and
develop efficient methods with provable guarantees. We consider models where
the underlying Markov graph is locally tree-like, and the model is in the
regime of correlation decay. For the special case of the Ising model, the
number of samples $n$ required for structural consistency of our method scales
as $n=\Omega(\theta_{\min}^{-\delta\eta(\eta+1)-2}\log p)$, where p is the
number of variables, $\theta_{\min}$ is the minimum edge potential, $\delta$ is
the depth (i.e., distance from a hidden node to the nearest observed nodes),
and $\eta$ is a parameter which depends on the bounds on node and edge
potentials in the Ising model. Necessary conditions for structural consistency
under any algorithm are derived and our method nearly matches the lower bound
on sample requirements. Further, the proposed method is practical to implement
and provides flexibility to control the number of latent variables and the
cycle lengths in the output graph.
| [
"['Animashree Anandkumar' 'Ragupathyraj Valluvan']",
"Animashree Anandkumar, Ragupathyraj Valluvan"
] |
cs.LG cs.GT | null | 1203.3935 | null | null | http://arxiv.org/pdf/1203.3935v1 | 2012-03-18T10:30:54Z | 2012-03-18T10:30:54Z | Distributed Cooperative Q-learning for Power Allocation in Cognitive
Femtocell Networks | In this paper, we propose a distributed reinforcement learning (RL) technique
called distributed power control using Q-learning (DPC-Q) to manage the
interference caused by the femtocells on macro-users in the downlink. The DPC-Q
leverages Q-Learning to identify the sub-optimal pattern of power allocation,
which strives to maximize femtocell capacity, while guaranteeing macrocell
capacity level in an underlay cognitive setting. We propose two different
approaches for the DPC-Q algorithm: namely, independent, and cooperative. In
the former, femtocells learn independently from each other while in the latter,
femtocells share some information during learning in order to enhance their
performance. Simulation results show that the independent approach is capable
of mitigating the interference generated by the femtocells on macro-users.
Moreover, the results show that cooperation enhances the performance of the
femtocells in terms of speed of convergence, fairness and aggregate femtocell
capacity.
| [
"['Hussein Saad' 'Amr Mohamed' 'Tamer ElBatt']",
"Hussein Saad, Amr Mohamed and Tamer ElBatt"
] |
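A minimal sketch of the tabular Q-learning building block behind DPC-Q, with a crude "cooperative" variant in which femtocell agents average their Q-tables; the state/action encoding, reward, and sharing scheme are assumptions and are simpler than the paper's formulation.

```python
import numpy as np

class QAgent:
    """Tabular Q-learning agent (e.g., one femtocell choosing a power level)."""
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.Q = np.zeros((n_states, n_actions))
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, s, rng):
        # epsilon-greedy action selection
        if rng.random() < self.eps:
            return int(rng.integers(self.Q.shape[1]))
        return int(np.argmax(self.Q[s]))

    def update(self, s, a, r, s_next):
        # standard one-step Q-learning update
        target = r + self.gamma * self.Q[s_next].max()
        self.Q[s, a] += self.alpha * (target - self.Q[s, a])

def share_q_tables(agents):
    """'Cooperative' variant: agents periodically exchange information by
    averaging their Q-tables (one simple form of sharing; the paper's exact
    sharing scheme may differ)."""
    mean_Q = np.mean([ag.Q for ag in agents], axis=0)
    for ag in agents:
        ag.Q = mean_Q.copy()
```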
cs.NE cs.AI cs.LG | null | 1203.4416 | null | null | http://arxiv.org/pdf/1203.4416v1 | 2012-03-20T12:59:15Z | 2012-03-20T12:59:15Z | On Training Deep Boltzmann Machines | The deep Boltzmann machine (DBM) has been an important development in the
quest for powerful "deep" probabilistic models. To date, simultaneous or joint
training of all layers of the DBM has been largely unsuccessful with existing
training methods. We introduce a simple regularization scheme that encourages
the weight vectors associated with each hidden unit to have similar norms. We
demonstrate that this regularization can be easily combined with standard
stochastic maximum likelihood to yield an effective training strategy for the
simultaneous training of all layers of the deep Boltzmann machine.
| [
"Guillaume Desjardins and Aaron Courville and Yoshua Bengio",
"['Guillaume Desjardins' 'Aaron Courville' 'Yoshua Bengio']"
] |
stat.ML cs.LG | null | 1203.4422 | null | null | http://arxiv.org/pdf/1203.4422v1 | 2012-03-20T13:11:32Z | 2012-03-20T13:11:32Z | Semi-Supervised Single- and Multi-Domain Regression with Multi-Domain
Training | We address the problems of multi-domain and single-domain regression based on
distinct and unpaired labeled training sets for each of the domains and a large
unlabeled training set from all domains. We formulate these problems as a
Bayesian estimation with partial knowledge of statistical relations. We propose
a worst-case design strategy and study the resulting estimators. Our analysis
explicitly accounts for the cardinality of the labeled sets and includes the
special cases in which one of the labeled sets is very large or, in the other
extreme, completely missing. We demonstrate our estimators in the context of
removing expressions from facial images and in the context of audio-visual word
recognition, and provide comparisons to several recently proposed multi-modal
learning algorithms.
| [
"Tomer Michaeli, Yonina C. Eldar, Guillermo Sapiro",
"['Tomer Michaeli' 'Yonina C. Eldar' 'Guillermo Sapiro']"
] |
cs.LG math.OC stat.ML | null | 1203.4523 | null | null | http://arxiv.org/pdf/1203.4523v2 | 2012-09-11T08:35:39Z | 2012-03-20T17:49:56Z | On the Equivalence between Herding and Conditional Gradient Algorithms | We show that the herding procedure of Welling (2009) takes exactly the form
of a standard convex optimization algorithm--namely a conditional gradient
algorithm minimizing a quadratic moment discrepancy. This link enables us to
invoke convergence results from convex optimization and to consider faster
alternatives for the task of approximating integrals in a reproducing kernel
Hilbert space. We study the behavior of the different variants through
numerical simulations. The experiments indicate that while we can improve over
herding on the task of approximating integrals, the original herding algorithm
tends to approach more often the maximum entropy distribution, shedding more
light on the learning bias behind herding.
| [
"Francis Bach (INRIA Paris - Rocquencourt, LIENS), Simon Lacoste-Julien\n (INRIA Paris - Rocquencourt, LIENS), Guillaume Obozinski (INRIA Paris -\n Rocquencourt, LIENS)",
"['Francis Bach' 'Simon Lacoste-Julien' 'Guillaume Obozinski']"
] |
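The herding procedure referenced above can be written in a few lines. The sketch below runs herding over a finite candidate set of feature vectors: each step solves a linear maximization (the conditional-gradient step) and updates the weight vector, so the running average of selected features approaches the target moments. The candidate set and moments are synthetic.

```python
import numpy as np

def herding(phi, mu, n_iter=200):
    """Welling-style herding on a finite candidate set.

    phi : (m, d) matrix whose rows are feature vectors of the m candidate points
    mu  : (d,)  target moment vector to be matched
    Returns indices of selected points; their running feature average
    approaches mu, mirroring a conditional-gradient (Frank-Wolfe) step.
    """
    w = mu.copy()
    picks = []
    for _ in range(n_iter):
        t = int(np.argmax(phi @ w))   # linear maximization step
        picks.append(t)
        w += mu - phi[t]              # herding weight update
    return picks

rng = np.random.default_rng(0)
phi = rng.normal(size=(50, 3))
mu = phi.mean(axis=0)                 # moments of the uniform distribution
picks = herding(phi, mu)
print(np.abs(phi[picks].mean(axis=0) - mu))  # should be small
```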
cs.LG stat.ML | null | 1203.4597 | null | null | http://arxiv.org/pdf/1203.4597v1 | 2012-03-20T21:31:48Z | 2012-03-20T21:31:48Z | A Novel Training Algorithm for HMMs with Partial and Noisy Access to the
States | This paper proposes a new estimation algorithm for the parameters of an HMM
as to best account for the observed data. In this model, in addition to the
observation sequence, we have \emph{partial} and \emph{noisy} access to the
hidden state sequence as side information. This access can be seen as "partial
labeling" of the hidden states. Furthermore, we model possible mislabeling in
the side information in a joint framework and derive the corresponding EM
updates accordingly. In our simulations, we observe that using this side
information, we considerably improve the state recognition performance, up to
70%, with respect to the "achievable margin" defined by the baseline
algorithms. Moreover, our algorithm is shown to be robust to the training
conditions.
| [
"Huseyin Ozkan, Arda Akman, Suleyman S. Kozat",
"['Huseyin Ozkan' 'Arda Akman' 'Suleyman S. Kozat']"
] |
cs.LG | 10.1016/j.dsp.2012.09.006 | 1203.4598 | null | null | http://arxiv.org/abs/1203.4598v1 | 2012-03-20T21:32:33Z | 2012-03-20T21:32:33Z | Adaptive Mixture Methods Based on Bregman Divergences | We investigate adaptive mixture methods that linearly combine outputs of $m$
constituent filters running in parallel to model a desired signal. We use
"Bregman divergences" and obtain certain multiplicative updates to train the
linear combination weights under an affine constraint or without any
constraints. We use unnormalized relative entropy and relative entropy to
define two different Bregman divergences that produce an unnormalized
exponentiated gradient update and a normalized exponentiated gradient update on
the mixture weights, respectively. We then carry out the mean and the
mean-square transient analysis of these adaptive algorithms when they are used
to combine outputs of $m$ constituent filters. We illustrate the accuracy of
our results and demonstrate the effectiveness of these updates for sparse
mixture systems.
| [
"Mehmet A. Donmez, Huseyin A. Inan, Suleyman S. Kozat",
"['Mehmet A. Donmez' 'Huseyin A. Inan' 'Suleyman S. Kozat']"
] |
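One of the updates discussed above, the normalized exponentiated-gradient update arising from the relative-entropy Bregman divergence, can be sketched as follows for a squared-error loss; the constituent filter outputs, learning rate, and target signal are hypothetical.

```python
import numpy as np

def eg_mixture(filter_outputs, desired, eta=0.05):
    """Normalized exponentiated-gradient (EG) update of the weights that
    linearly combine m constituent filter outputs under a squared error."""
    n, m = filter_outputs.shape
    w = np.full(m, 1.0 / m)                       # weights on the simplex
    for t in range(n):
        y_hat = w @ filter_outputs[t]
        err = desired[t] - y_hat
        # multiplicative update, then renormalize so the weights sum to one
        w *= np.exp(eta * err * filter_outputs[t])
        w /= w.sum()
    return w

rng = np.random.default_rng(0)
outs = rng.normal(size=(1000, 3))
target = 0.7 * outs[:, 0] + 0.3 * outs[:, 2]      # true mixture to be tracked
print(eg_mixture(outs, target))
```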
cs.LG | null | 1203.4788 | null | null | http://arxiv.org/pdf/1203.4788v1 | 2012-03-21T17:29:17Z | 2012-03-21T17:29:17Z | Very Short Literature Survey From Supervised Learning To Surrogate
Modeling | The past century was the era of linear systems. Either systems (especially
industrial ones) were simple or (quasi)linear, or linear approximations were
accurate enough. Moreover, only in the closing decades of the century did
computing devices become abundant; before then, a lack of computational
resources made it difficult to evaluate the available studies of nonlinear
systems. Both of these conditions have now changed: systems are highly complex,
and pervasive computational power is cheap and easy to obtain. For this recent
era, a new branch of supervised learning known as surrogate modeling (also
called meta-modeling or surface modeling) has been devised to answer the new
needs of the modeling realm. This short literature survey introduces surrogate
modeling to readers who are familiar with the concepts of supervised learning.
The necessity, challenges and visions of the topic are considered.
| [
"['Altay Brusan']",
"Altay Brusan"
] |
cs.LG stat.AP | null | 1203.5124 | null | null | http://arxiv.org/pdf/1203.5124v1 | 2012-03-22T20:54:53Z | 2012-03-22T20:54:53Z | Parallel Matrix Factorization for Binary Response | Predicting user affinity to items is an important problem in applications
like content optimization, computational advertising, and many more. While
bilinear random effect models (matrix factorization) provide state-of-the-art
performance when minimizing RMSE through a Gaussian response model on explicit
ratings data, applying it to imbalanced binary response data presents
additional challenges that we carefully study in this paper. Data in many
applications usually consist of users' implicit responses, which are often binary
-- clicking an item or not; the goal is to predict click rates, which is often
combined with other measures to calculate utilities to rank items at runtime of
the recommender systems. Because of the implicit nature, such data are usually
much larger than explicit rating data and often have an imbalanced distribution
with a small fraction of click events, making accurate click rate prediction
difficult. In this paper, we address two problems. First, we show previous
techniques to estimate bilinear random effect models with binary data are less
accurate compared to our new approach based on adaptive rejection sampling,
especially for imbalanced response. Second, we develop a parallel bilinear
random effect model fitting framework using Map-Reduce paradigm that scales to
massive datasets. Our parallel algorithm is based on a "divide and conquer"
strategy coupled with an ensemble approach. Through experiments on the
benchmark MovieLens data, a small Yahoo! Front Page data set, and a large
Yahoo! Front Page data set that contains 8M users and 1B binary observations,
we show that careful handling of binary response as well as identifiability
issues are needed to achieve good performance for click rate prediction, and
that the proposed adaptive rejection sampler and the partitioning as well as
ensemble techniques significantly improve model performance.
| [
"['Rajiv Khanna' 'Liang Zhang' 'Deepak Agarwal' 'Beechung Chen']",
"Rajiv Khanna, Liang Zhang, Deepak Agarwal, Beechung Chen"
] |
cs.LG stat.ML | 10.1109/ICASSP.2012.6288022 | 1203.5181 | null | null | http://arxiv.org/abs/1203.5181v1 | 2012-03-23T06:11:24Z | 2012-03-23T06:11:24Z | $k$-MLE: A fast algorithm for learning statistical mixture models | We describe $k$-MLE, a fast and efficient local search algorithm for learning
finite statistical mixtures of exponential families such as Gaussian mixture
models. Mixture models are traditionally learned using the
expectation-maximization (EM) soft clustering technique that monotonically
increases the incomplete (expected complete) likelihood. Given prescribed
mixture weights, the hard clustering $k$-MLE algorithm iteratively assigns data
to the most likely weighted component and updates the component models using
Maximum Likelihood Estimators (MLEs). Using the duality between exponential
families and Bregman divergences, we prove that the local convergence of the
complete likelihood of $k$-MLE follows directly from the convergence of a dual
additively weighted Bregman hard clustering. The inner loop of $k$-MLE can be
implemented using any $k$-means heuristic like the celebrated Lloyd's batched
or Hartigan's greedy swap updates. We then show how to update the mixture
weights by minimizing a cross-entropy criterion, which amounts to updating the
weights using the relative proportions of cluster points, and we reiterate the mixture
parameter update and mixture weight update processes until convergence. Hard EM
is interpreted as a special case of $k$-MLE when both the component update and
the weight update are performed successively in the inner loop. To initialize
$k$-MLE, we propose $k$-MLE++, a careful initialization of $k$-MLE guaranteeing
probabilistically a global bound on the best possible complete likelihood.
| [
"['Frank Nielsen']",
"Frank Nielsen"
] |
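A stripped-down, one-dimensional sketch of the k-MLE loop described above: hard-assign each point to its most likely weighted Gaussian, refit each component by maximum likelihood, and reset the weights to the cluster proportions. It omits the Bregman-divergence machinery and the k-MLE++ initialization, and the data are synthetic.

```python
import numpy as np
from scipy.stats import norm

def k_mle_1d(x, k=2, n_iter=50, seed=0):
    """Hard-assignment k-MLE sketch for a 1-D Gaussian mixture."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k, replace=False)
    sigma = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # hard assignment to the most likely weighted component
        ll = np.log(w + 1e-12) + norm.logpdf(x[:, None], mu, sigma)
        z = ll.argmax(axis=1)
        for j in range(k):              # per-component maximum likelihood refit
            pts = x[z == j]
            if len(pts) > 1:
                mu[j], sigma[j] = pts.mean(), max(pts.std(), 1e-3)
        w = np.bincount(z, minlength=k) / len(x)  # weights = cluster proportions
    return w, mu, sigma

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 1.0, 700)])
print(k_mle_1d(x))
```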
stat.ME cs.LG math.ST stat.TH | null | 1203.5422 | null | null | http://arxiv.org/pdf/1203.5422v1 | 2012-03-24T15:04:02Z | 2012-03-24T15:04:02Z | Distribution Free Prediction Bands | We study distribution free, nonparametric prediction bands with a special
focus on their finite sample behavior. First we investigate and develop
different notions of finite sample coverage guarantees. Then we give a new
prediction band estimator by combining the idea of "conformal prediction" (Vovk
et al. 2009) with nonparametric conditional density estimation. The proposed
estimator, called COPS (Conformal Optimized Prediction Set), always has a finite
sample guarantee in a stronger sense than the original conformal prediction
estimator. Under regularity conditions the estimator converges to an oracle
band at a minimax optimal rate. A fast approximation algorithm and a
data-driven method for selecting the bandwidth are developed. The method is
illustrated first on simulated data. Then, an application shows that the
proposed method gives desirable prediction intervals in an automatic way, as
compared to the classical linear regression modeling.
| [
"['Jing Lei' 'Larry Wasserman']",
"Jing Lei and Larry Wasserman"
] |
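For context, the sketch below implements the basic split-conformal prediction interval that COPS builds on and refines; it uses a plain linear model and synthetic data, and is not the density-based COPS band itself.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def split_conformal_interval(X, y, X_new, alpha=0.1, seed=0):
    """Split-conformal prediction interval: fit on one half, calibrate the
    residual quantile on the other, giving finite-sample marginal coverage."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    fit_idx, cal_idx = idx[: len(y) // 2], idx[len(y) // 2:]
    model = LinearRegression().fit(X[fit_idx], y[fit_idx])
    residuals = np.abs(y[cal_idx] - model.predict(X[cal_idx]))
    # finite-sample-valid quantile of the calibration residuals
    n_cal = len(cal_idx)
    q = np.quantile(residuals, np.ceil((1 - alpha) * (n_cal + 1)) / n_cal)
    pred = model.predict(X_new)
    return pred - q, pred + q

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(400, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=400)
lo, hi = split_conformal_interval(X, y, np.array([[0.0], [2.0]]))
print(np.column_stack([lo, hi]))
```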
cs.LG stat.ML | null | 1203.5438 | null | null | http://arxiv.org/pdf/1203.5438v1 | 2012-03-24T18:59:55Z | 2012-03-24T18:59:55Z | A Regularization Approach for Prediction of Edges and Node Features in
Dynamic Graphs | We consider the two problems of predicting links in a dynamic graph sequence
and predicting functions defined at each node of the graph. In many
applications, the solution of one problem is useful for solving the other.
Indeed, if these functions reflect node features, then they are related through
the graph structure. In this paper, we formulate a hybrid approach that
simultaneously learns the structure of the graph and predicts the values of the
node-related functions. Our approach is based on the optimization of a joint
regularization objective. We empirically test the benefits of the proposed
method with both synthetic and real data. The results indicate that joint
regularization improves prediction performance over the graph evolution and the
node features.
| [
"['Emile Richard' 'Andreas Argyriou' 'Theodoros Evgeniou' 'Nicolas Vayatis']",
"Emile Richard, Andreas Argyriou, Theodoros Evgeniou and Nicolas\n Vayatis"
] |
cs.NE cs.AI cs.LG | null | 1203.5443 | null | null | http://arxiv.org/pdf/1203.5443v2 | 2012-06-21T12:47:30Z | 2012-03-24T20:11:21Z | Transfer Learning, Soft Distance-Based Bias, and the Hierarchical BOA | An automated technique has recently been proposed to transfer learning in the
hierarchical Bayesian optimization algorithm (hBOA) based on distance-based
statistics. The technique enables practitioners to improve hBOA efficiency by
collecting statistics from probabilistic models obtained in previous hBOA runs
and using the obtained statistics to bias future hBOA runs on similar problems.
The purpose of this paper is threefold: (1) test the technique on several
classes of NP-complete problems, including MAXSAT, spin glasses and minimum
vertex cover; (2) demonstrate that the technique is effective even when
previous runs were done on problems of different size; (3) provide empirical
evidence that combining transfer learning with other efficiency enhancement
techniques can often yield nearly multiplicative speedups.
| [
"Martin Pelikan, Mark W. Hauschild, and Pier Luca Lanzi",
"['Martin Pelikan' 'Mark W. Hauschild' 'Pier Luca Lanzi']"
] |
stat.AP cs.LG | null | 1203.5446 | null | null | http://arxiv.org/pdf/1203.5446v1 | 2012-03-24T20:58:48Z | 2012-03-24T20:58:48Z | A Bayesian Model Committee Approach to Forecasting Global Solar
Radiation | This paper proposes to use a rather new modelling approach in the realm of
solar radiation forecasting. In this work, two forecasting models:
Autoregressive Moving Average (ARMA) and Neural Network (NN) models are
combined to form a model committee. Bayesian inference is used to assign a
probability to each model in the committee. Hence, each model's predictions are
weighted by their respective probability. The models are fitted to one year of
hourly Global Horizontal Irradiance (GHI) measurements. Another year (the test
set) is used for making genuine one hour ahead (h+1) out-of-sample forecast
comparisons. The proposed approach is benchmarked against the persistence
model. Initial results show an improvement brought by this approach.
| [
"['Philippe Lauret' 'Auline Rodler' 'Marc Muselli' 'Mathieu David'\n 'Hadja Diagne' 'Cyril Voyant']",
"Philippe Lauret (PIMENT), Auline Rodler (SPE), Marc Muselli (SPE),\n Mathieu David (PIMENT), Hadja Diagne (PIMENT), Cyril Voyant (SPE, CHD\n Castellucio)"
] |
cs.LG | null | 1203.5716 | null | null | http://arxiv.org/pdf/1203.5716v2 | 2012-03-27T10:27:30Z | 2012-03-26T16:25:35Z | Credal Classification based on AODE and compression coefficients | Bayesian model averaging (BMA) is an approach to average over alternative
models; yet, it usually gets excessively concentrated around the single most
probable model, therefore achieving only sub-optimal classification
performance. The compression-based approach (Boulle, 2007) overcomes this
problem, averaging over the different models by applying a logarithmic
smoothing over the models' posterior probabilities. This approach has shown
excellent performances when applied to ensembles of naive Bayes classifiers.
AODE is another ensemble of models with high performance (Webb, 2005), based on
a collection of non-naive classifiers (called SPODE) whose probabilistic
predictions are aggregated by simple arithmetic mean. Aggregating the SPODEs
via BMA rather than by arithmetic mean deteriorates the performance; instead,
we aggregate the SPODEs via the compression coefficients and we show that the
resulting classifier obtains a slight but consistent improvement over AODE.
However, an important issue in any Bayesian ensemble of models is the
arbitrariness in the choice of the prior over the models. We address this
problem by the paradigm of credal classification, namely by substituting the
unique prior with a set of priors. Credal classifiers automatically recognize
the prior-dependent instances, namely the instances whose most probable class
varies when different priors are considered; in these cases, credal
classifiers remain reliable by returning a set of classes rather than a single
class. We thus develop the credal version of both the BMA-based and the
compression-based ensemble of SPODEs, substituting the single prior over the
models by a set of priors. Experiments show that both credal classifiers
provide higher classification reliability than their determinate counterparts;
moreover the compression-based credal classifier compares favorably to previous
credal classifiers.
| [
"Giorgio Corani and Alessandro Antonucci",
"['Giorgio Corani' 'Alessandro Antonucci']"
] |
stat.ML cs.LG | null | 1203.6130 | null | null | http://arxiv.org/pdf/1203.6130v1 | 2012-03-28T01:56:32Z | 2012-03-28T01:56:32Z | Spectral dimensionality reduction for HMMs | Hidden Markov Models (HMMs) can be accurately approximated using
co-occurrence frequencies of pairs and triples of observations by using a fast
spectral method in contrast to the usual slow methods like EM or Gibbs
sampling. We provide a new spectral method which significantly reduces the
number of model parameters that need to be estimated, and generates a sample
complexity that does not depend on the size of the observation vocabulary. We
present an elementary proof giving bounds on the relative accuracy of
probability estimates from our model. (Corollaries show our bounds can be
weakened to provide either L1 bounds or KL bounds which provide easier direct
comparisons to previous work.) Our theorem uses conditions that are checkable
from the data, instead of putting conditions on the unobservable Markov
transition matrix.
| [
"Dean P. Foster, Jordan Rodu, Lyle H. Ungar",
"['Dean P. Foster' 'Jordan Rodu' 'Lyle H. Ungar']"
] |
cond-mat.dis-nn cond-mat.stat-mech cs.IT cs.LG math.IT | 10.1209/0295-5075/103/28008 | 1203.6178 | null | null | http://arxiv.org/abs/1203.6178v3 | 2013-01-25T13:01:29Z | 2012-03-28T07:01:29Z | Statistical Mechanics of Dictionary Learning | Finding a basis matrix (dictionary) by which objective signals are
represented sparsely is of major relevance in various scientific and
technological fields. We consider a problem to learn a dictionary from a set of
training signals. We employ techniques of statistical mechanics of disordered
systems to evaluate the size of the training set necessary to typically succeed
in the dictionary learning. The results indicate that the necessary size is
much smaller than previously estimated, which theoretically supports and/or
encourages the use of dictionary learning in practical situations.
| [
"['Ayaka Sakata' 'Yoshiyuki Kabashima']",
"Ayaka Sakata and Yoshiyuki Kabashima"
] |
stat.ML cs.AI cs.LG cs.SI | null | 1204.0033 | null | null | http://arxiv.org/pdf/1204.0033v1 | 2012-03-30T21:38:52Z | 2012-03-30T21:38:52Z | Transforming Graph Representations for Statistical Relational Learning | Relational data representations have become an increasingly important topic
due to the recent proliferation of network datasets (e.g., social, biological,
information networks) and a corresponding increase in the application of
statistical relational learning (SRL) algorithms to these domains. In this
article, we examine a range of representation issues for graph-based relational
data. Since the choice of relational data representation for the nodes, links,
and features can dramatically affect the capabilities of SRL algorithms, we
survey approaches and opportunities for relational representation
transformation designed to improve the performance of these algorithms. This
leads us to introduce an intuitive taxonomy for data representation
transformations in relational domains that incorporates link transformation and
node transformation as symmetric representation tasks. In particular, the
transformation tasks for both nodes and links include (i) predicting their
existence, (ii) predicting their label or type, (iii) estimating their weight
or importance, and (iv) systematically constructing their relevant features. We
motivate our taxonomy through detailed examples and use it to survey and
compare competing approaches for each of these tasks. We also discuss general
conditions for transforming links, nodes, and features. Finally, we highlight
challenges that remain to be addressed.
| [
"Ryan A. Rossi, Luke K. McDowell, David W. Aha and Jennifer Neville",
"['Ryan A. Rossi' 'Luke K. McDowell' 'David W. Aha' 'Jennifer Neville']"
] |
cs.LG stat.ML | null | 1204.0047 | null | null | http://arxiv.org/pdf/1204.0047v2 | 2013-07-16T18:03:20Z | 2012-03-30T23:39:29Z | A Lipschitz Exploration-Exploitation Scheme for Bayesian Optimization | The problem of optimizing unknown costly-to-evaluate functions has been
studied for a long time in the context of Bayesian Optimization. Algorithms in
this field aim to find the optimizer of the function by asking only a few
function evaluations at locations carefully selected based on a posterior
model. In this paper, we assume the unknown function is Lipschitz continuous.
Leveraging the Lipschitz property, we propose an algorithm with a distinct
exploration phase followed by an exploitation phase. The exploration phase aims
to select samples that shrink the search space as much as possible. The
exploitation phase then focuses on the reduced search space and selects samples
closest to the optimizer. Considering the Expected Improvement (EI) as a
baseline, we empirically show that the proposed algorithm significantly
outperforms EI.
| [
"['Ali Jalali' 'Javad Azimi' 'Xiaoli Fern' 'Ruofei Zhang']",
"Ali Jalali, Javad Azimi, Xiaoli Fern and Ruofei Zhang"
] |
cs.LG cs.DS | null | 1204.0136 | null | null | http://arxiv.org/pdf/1204.0136v1 | 2012-03-31T21:15:28Z | 2012-03-31T21:15:28Z | Near-Optimal Algorithms for Online Matrix Prediction | In several online prediction problems of recent interest the comparison class
is composed of matrices with bounded entries. For example, in the online
max-cut problem, the comparison class is matrices which represent cuts of a
given graph and in online gambling the comparison class is matrices which
represent permutations over n teams. Another important example is online
collaborative filtering in which a widely used comparison class is the set of
matrices with a small trace norm. In this paper we isolate a property of
matrices, which we call (beta,tau)-decomposability, and derive an efficient
online learning algorithm that enjoys a regret bound of O*(sqrt(beta tau T))
for all problems in which the comparison class is composed of
(beta,tau)-decomposable matrices. By analyzing the decomposability of cut
matrices, triangular matrices, and low trace-norm matrices, we derive near
optimal regret bounds for online max-cut, online gambling, and online
collaborative filtering. In particular, this resolves (in the affirmative) an
open problem posed by Abernethy (2010); Kleinberg et al (2010). Finally, we
derive lower bounds for the three problems and show that our upper bounds are
optimal up to logarithmic factors. In particular, our lower bound for the
online collaborative filtering problem resolves another open problem posed by
Shamir and Srebro (2011).
| [
"['Elad Hazan' 'Satyen Kale' 'Shai Shalev-Shwartz']",
"Elad Hazan, Satyen Kale, Shai Shalev-Shwartz"
] |
cs.LG cs.IR | null | 1204.0170 | null | null | http://arxiv.org/pdf/1204.0170v2 | 2014-04-08T02:17:47Z | 2012-04-01T07:07:27Z | A New Approach to Speeding Up Topic Modeling | Latent Dirichlet allocation (LDA) is a widely-used probabilistic topic
modeling paradigm, and recently finds many applications in computer vision and
computational biology. In this paper, we propose a fast and accurate batch
algorithm, active belief propagation (ABP), for training LDA. Usually batch LDA
algorithms require repeated scanning of the entire corpus and searching the
complete topic space. To process massive corpora having a large number of
topics, the training iteration of batch LDA algorithms is often inefficient and
time-consuming. To accelerate the training speed, ABP actively scans the subset
of corpus and searches the subset of topic space for topic modeling, therefore
saves enormous training time in each iteration. To ensure accuracy, ABP selects
only those documents and topics that contribute to the largest residuals within
the residual belief propagation (RBP) framework. On four real-world corpora,
ABP performs around $10$ to $100$ times faster than state-of-the-art batch LDA
algorithms with a comparable topic modeling accuracy.
| [
"Jia Zeng, Zhi-Qiang Liu and Xiao-Qin Cao",
"['Jia Zeng' 'Zhi-Qiang Liu' 'Xiao-Qin Cao']"
] |
cs.LG cs.CV | null | 1204.0171 | null | null | http://arxiv.org/pdf/1204.0171v5 | 2013-08-12T21:13:37Z | 2012-04-01T07:16:47Z | A New Fuzzy Stacked Generalization Technique and Analysis of its
Performance | In this study, a new Stacked Generalization technique called Fuzzy Stacked
Generalization (FSG) is proposed to minimize the difference between N-sample
and large-sample classification error of the Nearest Neighbor classifier. The
proposed FSG employs a new hierarchical distance learning strategy to minimize
the error difference. For this purpose, we first construct an ensemble of
base-layer fuzzy k-Nearest Neighbor (k-NN) classifiers, each of which receives
a different feature set extracted from the same sample set. The fuzzy
membership values computed at the decision space of each fuzzy k-NN classifier
are concatenated to form the feature vectors of a fusion space. Finally, the
feature vectors are fed to a meta-layer classifier to learn the degree of
accuracy of the decisions of the base-layer classifiers for meta-layer
classification. Rather than the power of the individual base-layer classifiers,
diversity and cooperation of the classifiers become an important issue to
improve the overall performance of the proposed FSG. A weak base-layer
classifier may boost the overall performance more than a strong classifier, if
it is capable of recognizing the samples, which are not recognized by the rest
of the classifiers, in its own feature space. The experiments explore the type
of the collaboration among the individual classifiers required for an improved
performance of the suggested architecture. Experiments on multiple feature
real-world datasets show that the proposed FSG performs better than the state
of the art ensemble learning algorithms such as Adaboost, Random Subspace and
Rotation Forest. On the other hand, compatible performances are observed in the
experiments on single feature multi-attribute datasets.
| [
"['Mete Ozay' 'Fatos T. Yarman Vural']",
"Mete Ozay, Fatos T. Yarman Vural"
] |
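A rough sketch of the stacked-generalization pipeline described above: each base k-NN classifier sees a different feature subset, their class-membership probabilities are concatenated into a fusion-space vector, and a meta-layer classifier is trained on it. Plain k-NN probabilities stand in for the paper's fuzzy memberships and hierarchical distance learning, the meta-layer is trained on in-sample base predictions for brevity, and the feature split of the Iris data is arbitrary.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

feature_sets = [[0, 1], [2, 3]]            # two hypothetical feature views
bases = [KNeighborsClassifier(n_neighbors=5).fit(X_tr[:, fs], y_tr)
         for fs in feature_sets]

def fusion_space(X_part):
    # concatenate the base classifiers' class-probability vectors
    return np.hstack([b.predict_proba(X_part[:, fs])
                      for b, fs in zip(bases, feature_sets)])

meta = LogisticRegression(max_iter=1000).fit(fusion_space(X_tr), y_tr)
print("meta-layer accuracy:", meta.score(fusion_space(X_te), y_te))
```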
cs.LG | null | 1204.0566 | null | null | http://arxiv.org/pdf/1204.0566v2 | 2012-06-21T12:14:24Z | 2012-04-03T00:33:53Z | The Kernelized Stochastic Batch Perceptron | We present a novel approach for training kernel Support Vector Machines,
establish learning runtime guarantees for our method that are better than those
of any other known kernelized SVM optimization approach, and show that our
method works well in practice compared to existing alternatives.
| [
"Andrew Cotter, Shai Shalev-Shwartz, Nathan Srebro",
"['Andrew Cotter' 'Shai Shalev-Shwartz' 'Nathan Srebro']"
] |
cs.LG cs.AI cs.CV stat.ML | 10.1007/s11063-012-9220-6 | 1204.0684 | null | null | http://arxiv.org/abs/1204.0684v1 | 2012-04-03T13:22:07Z | 2012-04-03T13:22:07Z | Validation of nonlinear PCA | Linear principal component analysis (PCA) can be extended to a nonlinear PCA
by using artificial neural networks. But the benefit of curved components
requires a careful control of the model complexity. Moreover, standard
techniques for model selection, including cross-validation and more generally
the use of an independent test set, fail when applied to nonlinear PCA because
of its inherent unsupervised characteristics. This paper presents a new
approach for validating the complexity of nonlinear PCA models by using the
error in missing data estimation as a criterion for model selection. It is
motivated by the idea that only the model of optimal complexity is able to
predict missing values with the highest accuracy. While standard test set
validation usually favours over-fitted nonlinear PCA models, the proposed model
validation approach correctly selects the optimal model complexity.
| [
"['Matthias Scholz']",
"Matthias Scholz"
] |
cs.LG cs.GT stat.ML | null | 1204.0870 | null | null | http://arxiv.org/pdf/1204.0870v1 | 2012-04-04T05:49:56Z | 2012-04-04T05:49:56Z | Relax and Localize: From Value to Algorithms | We show a principled way of deriving online learning algorithms from a
minimax analysis. Various upper bounds on the minimax value, previously thought
to be non-constructive, are shown to yield algorithms. This allows us to
seamlessly recover known methods and to derive new ones. Our framework also
captures such "unorthodox" methods as Follow the Perturbed Leader and the R^2
forecaster. We emphasize that understanding the inherent complexity of the
learning problem leads to the development of algorithms.
We define local sequential Rademacher complexities and associated algorithms
that allow us to obtain faster rates in online learning, similarly to
statistical learning theory. Based on these localized complexities we build a
general adaptive method that can take advantage of the suboptimality of the
observed sequence.
We present a number of new algorithms, including a family of randomized
methods that use the idea of a "random playout". Several new versions of the
Follow-the-Perturbed-Leader algorithms are presented, as well as methods based
on the Littlestone's dimension, efficient methods for matrix completion with
trace norm, and algorithms for the problems of transductive learning and
prediction with static experts.
| [
"Alexander Rakhlin, Ohad Shamir, Karthik Sridharan",
"['Alexander Rakhlin' 'Ohad Shamir' 'Karthik Sridharan']"
] |
cs.SY cs.LG cs.NE | null | 1204.0885 | null | null | http://arxiv.org/pdf/1204.0885v1 | 2012-04-04T08:17:32Z | 2012-04-04T08:17:32Z | PID Parameters Optimization by Using Genetic Algorithm | Time delays are components that introduce a lag into a system's response. They arise
in physical, chemical, biological and economic systems, as well as in the
process of measurement and computation. In this work, we implement Genetic
Algorithm (GA) in determining PID controller parameters to compensate the delay
in First Order Lag plus Time Delay (FOLPD) and compare the results with
Iterative Method and Ziegler-Nichols rule results.
| [
"['Andri Mirzal' 'Shinichiro Yoshii' 'Masashi Furukawa']",
"Andri Mirzal, Shinichiro Yoshii, Masashi Furukawa"
] |
cs.LG cs.IR cs.NA | 10.1007/978-3-642-33486-3_5 | 1204.1259 | null | null | http://arxiv.org/abs/1204.1259v2 | 2013-04-04T15:33:31Z | 2012-04-05T15:34:30Z | Fast ALS-based tensor factorization for context-aware recommendation
from implicit feedback | Although the implicit feedback based recommendation problem - when only the
user history is available but there are no ratings - is the most typical
setting in real-world applications, it is much less researched than the
explicit feedback case. State-of-the-art algorithms that are efficient on the
explicit case cannot be straightforwardly transformed to the implicit case if
scalability should be maintained. There are few if any implicit feedback
benchmark datasets, therefore new ideas are usually experimented on explicit
benchmarks. In this paper, we propose a generic context-aware implicit feedback
recommender algorithm, coined iTALS. iTALS applies a fast, ALS-based tensor
factorization learning method that scales linearly with the number of non-zero
elements in the tensor. The method also allows us to incorporate diverse
context information into the model while maintaining its computational
efficiency. In particular, we present two such context-aware implementation
variants of iTALS. The first incorporates seasonality and enables to
distinguish user behavior in different time intervals. The other views the user
history as sequential information and has the ability to recognize usage
patterns typical of certain groups of items, e.g. to automatically tell apart
product types or categories that are typically purchased repetitively
(collectibles, grocery goods) or once (household appliances). Experiments
performed on three implicit datasets (two proprietary ones and an implicit
variant of the Netflix dataset) show that by integrating context-aware
information with our factorization framework into the state-of-the-art implicit
recommender algorithm the recommendation quality improves significantly.
| [
"['Balázs Hidasi' 'Domonkos Tikk']",
"Bal\\'azs Hidasi, Domonkos Tikk"
] |
stat.ML cs.LG | null | 1204.1276 | null | null | http://arxiv.org/pdf/1204.1276v4 | 2013-09-18T13:17:05Z | 2012-04-05T16:59:29Z | Distribution-Dependent Sample Complexity of Large Margin Learning | We obtain a tight distribution-specific characterization of the sample
complexity of large-margin classification with L2 regularization: We introduce
the margin-adapted dimension, which is a simple function of the second order
statistics of the data distribution, and show distribution-specific upper and
lower bounds on the sample complexity, both governed by the margin-adapted
dimension of the data distribution. The upper bounds are universal, and the
lower bounds hold for the rich family of sub-Gaussian distributions with
independent features. We conclude that this new quantity tightly characterizes
the true sample complexity of large-margin classification. To prove the lower
bound, we develop several new tools of independent interest. These include new
connections between shattering and hardness of learning, new properties of
shattering with linear classifiers, and a new lower bound on the smallest
eigenvalue of a random Gram matrix generated by sub-Gaussian variables. Our
results can be used to quantitatively compare large margin learning to other
learning rules, and to improve the effectiveness of methods that use sample
complexity bounds, such as active learning.
| [
"Sivan Sabato, Nathan Srebro and Naftali Tishby",
"['Sivan Sabato' 'Nathan Srebro' 'Naftali Tishby']"
] |
stat.ML cs.LG math.OC | null | 1204.1437 | null | null | http://arxiv.org/pdf/1204.1437v1 | 2012-04-06T08:55:38Z | 2012-04-06T08:55:38Z | Fast projections onto mixed-norm balls with applications | Joint sparsity offers powerful structural cues for feature selection,
especially for variables that are expected to demonstrate a "grouped" behavior.
Such behavior is commonly modeled via group-lasso, multitask lasso, and related
methods where feature selection is effected via mixed-norms. Several mixed-norm
based sparse models have received substantial attention, and for some cases
efficient algorithms are also available. Surprisingly, several constrained
sparse models seem to be lacking scalable algorithms. We address this
deficiency by presenting batch and online (stochastic-gradient) optimization
methods, both of which rely on efficient projections onto mixed-norm balls. We
illustrate our methods by applying them to the multitask lasso. We conclude by
mentioning some open problems.
| [
"Suvrit Sra",
"['Suvrit Sra']"
] |
cs.DS cs.LG | null | 1204.1467 | null | null | http://arxiv.org/pdf/1204.1467v1 | 2012-04-06T12:42:24Z | 2012-04-06T12:42:24Z | Learning Fuzzy {\beta}-Certain and {\beta}-Possible rules from
incomplete quantitative data by rough sets | The rough-set theory proposed by Pawlak has been widely used in dealing with
data classification problems. The original rough-set model is, however, quite
sensitive to noisy data. Tzung therefore proposed a model that deals with the
problem of producing a set of fuzzy certain and fuzzy possible rules from
quantitative data with a predefined tolerance degree of uncertainty and
misclassification; that model, which combines the variable precision rough-set
model and fuzzy set theory, solves this problem. This paper in turn deals with
the problem of producing a set of fuzzy certain and fuzzy possible rules from
incomplete quantitative data with a predefined tolerance degree of uncertainty
and misclassification. A new method, which combines a rough-set model for
incomplete quantitative data with fuzzy set theory, is thus proposed to solve
this problem. It first transforms each quantitative value into a fuzzy set of
linguistic terms using membership functions and derives the fuzzy lower and
fuzzy upper approximations from the incomplete quantitative data. It then
calculates the fuzzy {\beta}-lower and the fuzzy {\beta}-upper approximations.
The certain and possible rules are then generated based on these fuzzy
approximations. These rules can then be used to classify unknown objects.
| [
"['Ali Soltan Mohammadi' 'L. Asadzadeh' 'D. D. Rezaee']",
"Ali Soltan Mohammadi and L. Asadzadeh and D. D. Rezaee"
] |
q-bio.NC cs.LG | 10.1016/j.jmp.2012.11.002 | 1204.1564 | null | null | http://arxiv.org/abs/1204.1564v4 | 2012-12-17T16:58:04Z | 2012-04-06T20:57:07Z | Minimal model of associative learning for cross-situational lexicon
acquisition | An explanation for the acquisition of word-object mappings is the associative
learning in a cross-situational scenario. Here we present analytical results of
the performance of a simple associative learning algorithm for acquiring a
one-to-one mapping between $N$ objects and $N$ words based solely on the
co-occurrence between objects and words. In particular, a learning trial in our
learning scenario consists of the presentation of $C + 1 < N$ objects together
with a target word, which refers to one of the objects in the context. We find
that the learning times are distributed exponentially and the learning rates
are given by $\ln{[\frac{N(N-1)}{C + (N-1)^{2}}]}$ in the case the $N$ target
words are sampled randomly and by $\frac{1}{N} \ln [\frac{N-1}{C}] $ in the
case they follow a deterministic presentation sequence. This learning
performance is much superior to those exhibited by humans and more realistic
learning algorithms in cross-situational experiments. We show that introducing
discrimination limitations via Weber's law, together with forgetting, reduces
the performance of the associative algorithm to the human level.
| [
"Paulo F. C. Tilles and Jose F. Fontanari",
"['Paulo F. C. Tilles' 'Jose F. Fontanari']"
] |
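
The co-occurrence-based associative learner analyzed above can be illustrated with a short simulation. The sampling scheme below (random target words, C random confounding objects per trial) and the success criterion (argmax of co-occurrence counts) are assumptions about the simplest version of the setup, not the authors' exact protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C, trials = 20, 3, 2000            # N word-object pairs, C confounders per trial
counts = np.zeros((N, N))             # counts[w, o] = co-occurrences of word w and object o

for _ in range(trials):
    target = rng.integers(N)                                    # target word / object index
    others = rng.choice(np.delete(np.arange(N), target), size=C, replace=False)
    context = np.append(others, target)                         # the C + 1 objects shown
    counts[target, context] += 1                                # associate word with all shown objects

# the learner maps each word to the object it has co-occurred with most often
learned = counts.argmax(axis=1)
print("fraction of correctly learned words:", np.mean(learned == np.arange(N)))
```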
stat.ML cs.LG | null | 1204.1624 | null | null | http://arxiv.org/pdf/1204.1624v1 | 2012-04-07T12:17:03Z | 2012-04-07T12:17:03Z | UCB Algorithm for Exponential Distributions | We introduce in this paper a new algorithm for Multi-Armed Bandit (MAB)
problems, a machine learning paradigm popular within Cognitive Network related
topics (e.g., Spectrum Sensing and Allocation). We focus on the case where the
rewards are exponentially distributed, which is common when dealing with
Rayleigh fading channels. This strategy, named Multiplicative Upper Confidence
Bound (MUCB), associates a utility index to every available arm, and then
selects the arm with the highest index. For every arm, the associated index is
equal to the product of a multiplicative factor by the sample mean of the
rewards collected by this arm. We show that the MUCB policy has a low
complexity and is order optimal.
| [
"Wassim Jouini and Christophe Moy",
"['Wassim Jouini' 'Christophe Moy']"
] |
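
A hedged sketch of a multiplicative index policy in the spirit of MUCB: each arm's index is the product of its empirical mean reward and a multiplicative confidence factor that shrinks with the number of pulls. The specific factor 1 + sqrt(a * log t / n_i) below is an assumption for illustration; the paper's factor may differ.

```python
import math, random

def mucb(pull, n_arms, horizon, a=2.0):
    """Multiplicative-UCB-style bandit policy: play each arm once, then always pull
    the arm maximizing (multiplicative confidence factor) * (sample mean)."""
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                                    # initialization: one pull per arm
        else:
            arm = max(range(n_arms),
                      key=lambda i: means[i] * (1.0 + math.sqrt(a * math.log(t) / counts[i])))
        r = pull(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]       # running sample mean
    return means, counts

# toy usage: exponentially distributed rewards (Rayleigh-fading-style), rates 1.0 and 0.5
means, counts = mucb(lambda i: random.expovariate([1.0, 0.5][i]), n_arms=2, horizon=5000)
print(means, counts)
```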
cs.AI cs.LG stat.ML | null | 1204.1681 | null | null | http://arxiv.org/pdf/1204.1681v1 | 2012-04-07T21:09:48Z | 2012-04-07T21:09:48Z | The threshold EM algorithm for parameter learning in bayesian network
with incomplete data | Bayesian networks (BN) are used in a wide range of applications, but
parameter learning remains an issue: in real applications, training data are
often incomplete or some nodes are hidden. To deal with this problem, several
parameter-learning algorithms have been suggested, such as the EM, Gibbs
sampling and RBE algorithms. In order to limit the search space and escape
from the local maxima produced by the EM algorithm, this paper presents a
parameter-learning algorithm that is a fusion of the EM and RBE algorithms.
This algorithm incorporates the range of each parameter into the EM
algorithm; the range is calculated by the first step of the RBE algorithm,
allowing a regularization of each parameter of the Bayesian network after the
maximization step of the EM algorithm. The threshold EM algorithm is applied
to brain tumor diagnosis and shows some advantages and disadvantages over the
EM algorithm.
| [
"['Fradj Ben Lamine' 'Karim Kalti' 'Mohamed Ali Mahjoub']",
"Fradj Ben Lamine, Karim Kalti, Mohamed Ali Mahjoub"
] |
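
A minimal sketch of the regularization step described above, under the assumption that it amounts to clipping each EM-estimated conditional probability to an interval supplied by the first step of RBE and renormalizing. The bounds below are hypothetical, and the renormalization is a simplification of whatever projection the paper actually uses.

```python
import numpy as np

def constrain_cpt(theta, lower, upper):
    """Threshold-step sketch: clip an EM-estimated conditional distribution to
    per-parameter intervals (assumed to come from the first step of RBE), then
    renormalize so it remains a probability distribution (a simplification)."""
    clipped = np.clip(theta, lower, upper)
    return clipped / clipped.sum()

# hypothetical CPT column after an EM maximization step, with hypothetical RBE bounds
theta_em = np.array([0.05, 0.70, 0.25])
lower = np.array([0.10, 0.40, 0.10])
upper = np.array([0.30, 0.80, 0.50])
print(constrain_cpt(theta_em, lower, upper))  # approx. [0.095 0.667 0.238]
```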
math.ST cs.LG stat.ML stat.TH | 10.1214/13-AOS1092 | 1204.1685 | null | null | http://arxiv.org/abs/1204.1685v2 | 2013-05-24T13:14:50Z | 2012-04-07T21:49:22Z | Density-sensitive semisupervised inference | Semisupervised methods are techniques for using labeled data
$(X_1,Y_1),\ldots,(X_n,Y_n)$ together with unlabeled data $X_{n+1},\ldots,X_N$
to make predictions. These methods invoke some assumptions that link the
marginal distribution $P_X$ of X to the regression function f(x). For example,
it is common to assume that f is very smooth over high density regions of
$P_X$. Many of the methods are ad-hoc and have been shown to work in specific
examples but are lacking a theoretical foundation. We provide a minimax
framework for analyzing semisupervised methods. In particular, we study methods
based on metrics that are sensitive to the distribution $P_X$. Our model
includes a parameter $\alpha$ that controls the strength of the semisupervised
assumption. We then use the data to adapt to $\alpha$.
| [
"Martin Azizyan, Aarti Singh, Larry Wasserman",
"['Martin Azizyan' 'Aarti Singh' 'Larry Wasserman']"
] |
math.ST cs.LG stat.ML stat.TH | 10.1214/13-AOS1142 | 1204.1688 | null | null | http://arxiv.org/abs/1204.1688v3 | 2013-11-26T09:25:03Z | 2012-04-07T22:33:22Z | The asymptotics of ranking algorithms | We consider the predictive problem of supervised ranking, where the task is
to rank sets of candidate items returned in response to queries. Although there
exist statistical procedures that come with guarantees of consistency in this
setting, these procedures require that individuals provide a complete ranking
of all items, which is rarely feasible in practice. Instead, individuals
routinely provide partial preference information, such as pairwise comparisons
of items, and more practical approaches to ranking have aimed at modeling this
partial preference data directly. As we show, however, such an approach raises
serious theoretical challenges. Indeed, we demonstrate that many commonly used
surrogate losses for pairwise comparison data do not yield consistency;
surprisingly, we show inconsistency even in low-noise settings. With these
negative results as motivation, we present a new approach to supervised ranking
based on aggregation of partial preferences, and we develop $U$-statistic-based
empirical risk minimization procedures. We present an asymptotic analysis of
these new procedures, showing that they yield consistency results that parallel
those available for classification. We complement our theoretical results with
an experiment studying the new procedures in a large-scale web-ranking task.
| [
"['John C. Duchi' 'Lester Mackey' 'Michael I. Jordan']",
"John C. Duchi, Lester Mackey, Michael I. Jordan"
] |
cs.LG cs.IT math.IT stat.ML | null | 1204.1800 | null | null | http://arxiv.org/pdf/1204.1800v2 | 2013-04-01T07:12:43Z | 2012-04-09T05:53:27Z | On Power-law Kernels, corresponding Reproducing Kernel Hilbert Space and
Applications | The role of kernels is central to machine learning. Motivated by the
importance of power-law distributions in statistical modeling, in this paper,
we propose the notion of power-law kernels to investigate power-laws in
learning problems. We propose two power-law kernels by generalizing Gaussian and
Laplacian kernels. This generalization is based on distributions, arising out
of maximization of a generalized information measure known as nonextensive
entropy that is very well studied in statistical mechanics. We prove that the
proposed kernels are positive definite, and provide some insights regarding the
corresponding Reproducing Kernel Hilbert Space (RKHS). We also study practical
significance of both kernels in classification and regression, and present some
simulation results.
| [
"['Debarghya Ghoshdastidar' 'Ambedkar Dukkipati']",
"Debarghya Ghoshdastidar and Ambedkar Dukkipati"
] |
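
One plausible way to realize a power-law generalization of the Gaussian kernel is through the Tsallis q-exponential, which replaces the exponential decay with a polynomial tail for q > 1. The specific form below is an assumption for illustration and is not necessarily the kernel proposed in the paper.

```python
import numpy as np

def q_exp(u, q):
    """Tsallis q-exponential: reduces to exp(u) as q -> 1; for q > 1 and u <= 0
    the value decays polynomially rather than exponentially."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(u)
    return np.maximum(1.0 + (1.0 - q) * u, 0.0) ** (1.0 / (1.0 - q))

def powerlaw_gaussian_kernel(X, Y, sigma=1.0, q=1.5):
    """Power-law analogue of the Gaussian kernel:
    k(x, y) = exp_q(-||x - y||^2 / (2 sigma^2))."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return q_exp(-sq_dists / (2.0 * sigma ** 2), q)

X = np.random.randn(4, 3)
K = powerlaw_gaussian_kernel(X, X, sigma=1.0, q=1.5)
print(K.shape, np.allclose(K, K.T))  # (4, 4) True
```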
cs.AI cs.LG | null | 1204.1909 | null | null | http://arxiv.org/pdf/1204.1909v1 | 2012-04-09T15:56:56Z | 2012-04-09T15:56:56Z | Knapsack based Optimal Policies for Budget-Limited Multi-Armed Bandits | In budget-limited multi-armed bandit (MAB) problems, the learner's actions
are costly and constrained by a fixed budget. Consequently, an optimal
exploitation policy may not be to pull the optimal arm repeatedly, as is the
case in other variants of MAB, but rather to pull the sequence of different
arms that maximises the agent's total reward within the budget. This difference
from existing MABs means that new approaches to maximising the total reward are
required. Given this, we develop two pulling policies, namely: (i) KUBE; and
(ii) fractional KUBE. Whereas the former provides up to 40% better performance
in our experimental settings, the latter is computationally less expensive. We
also prove logarithmic upper bounds for the regret of both policies, and show
that these bounds are asymptotically optimal (i.e. they only differ from the
best possible regret by a constant factor).
| [
"['Long Tran-Thanh' 'Archie Chapman' 'Alex Rogers' 'Nicholas R. Jennings']",
"Long Tran-Thanh, Archie Chapman, Alex Rogers, Nicholas R. Jennings"
] |
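
A hedged sketch of the knapsack intuition behind these policies: treat each arm's estimated reward per unit cost (its density, here with a simple UCB adjustment) as a knapsack item and greedily pull the densest arm that still fits in the remaining budget. This is a simplification in the spirit of fractional KUBE, not the paper's exact index.

```python
import math, random

def budgeted_greedy(pull, costs, budget):
    """Budget-limited bandit sketch: pull each affordable arm once, then repeatedly
    pull the affordable arm with the highest UCB-adjusted reward density."""
    n = len(costs)
    counts, means = [0] * n, [0.0] * n
    t, total_reward = 0, 0.0
    while budget >= min(costs):
        t += 1
        untried = [i for i in range(n) if counts[i] == 0 and costs[i] <= budget]
        if untried:
            arm = untried[0]
        else:
            feasible = [i for i in range(n) if costs[i] <= budget]
            arm = max(feasible,
                      key=lambda i: (means[i] + math.sqrt(2 * math.log(t) / counts[i])) / costs[i])
        r = pull(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]
        total_reward += r
        budget -= costs[arm]
    return total_reward, counts

# toy usage: arm i pays Uniform(0, (i + 1) / 3) and costs 1, 2, or 3 units of budget
reward, pulls = budgeted_greedy(lambda i: random.random() * (i + 1) / 3,
                                costs=[1.0, 2.0, 3.0], budget=200.0)
print(reward, pulls)
```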
cs.LG cs.DS cs.IR | null | 1204.1956 | null | null | http://arxiv.org/pdf/1204.1956v2 | 2012-04-10T01:08:52Z | 2012-04-09T19:33:47Z | Learning Topic Models - Going beyond SVD | Topic Modeling is an approach used for automatic comprehension and
classification of data in a variety of settings, and perhaps the canonical
application is in uncovering thematic structure in a corpus of documents. A
number of foundational works both in machine learning and in theory have
suggested a probabilistic model for documents, whereby documents arise as a
convex combination of (i.e. distribution on) a small number of topic vectors,
each topic vector being a distribution on words (i.e. a vector of
word-frequencies). Similar models have since been used in a variety of
application areas; the Latent Dirichlet Allocation or LDA model of Blei et al.
is especially popular.
Theoretical studies of topic modeling focus on learning the model's
parameters assuming the data is actually generated from it. Existing approaches
for the most part rely on Singular Value Decomposition(SVD), and consequently
have one of two limitations: these works need to either assume that each
document contains only one topic, or else can only recover the span of the
topic vectors instead of the topic vectors themselves.
This paper formally justifies Nonnegative Matrix Factorization(NMF) as a main
tool in this context, which is an analog of SVD where all vectors are
nonnegative. Using this tool we give the first polynomial-time algorithm for
learning topic models without the above two limitations. The algorithm uses a
fairly mild assumption about the underlying topic matrix called separability,
which is usually found to hold in real-life data. A compelling feature of our
algorithm is that it generalizes to models that incorporate topic-topic
correlations, such as the Correlated Topic Model and the Pachinko Allocation
Model.
We hope that this paper will motivate further theoretical results that use
NMF as a replacement for SVD - just as NMF has come to replace SVD in many
applications.
| [
"Sanjeev Arora, Rong Ge, Ankur Moitra",
"['Sanjeev Arora' 'Rong Ge' 'Ankur Moitra']"
] |
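
A hedged illustration of the generative setup described above, not of the paper's provably-correct separability-based algorithm: documents are generated as convex combinations of topic vectors and then factorized with an off-the-shelf NMF as a baseline. The toy vocabulary size, topic count, and Dirichlet parameters are made up for the example.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
V, K, D = 50, 3, 200                              # vocabulary size, topics, documents

topics = rng.dirichlet(np.ones(V) * 0.1, size=K)  # K topic distributions over words
weights = rng.dirichlet(np.ones(K), size=D)       # per-document topic proportions
doc_word = weights @ topics                       # expected word frequencies per document

# baseline: recover nonnegative factors with standard NMF (not the separability-based method)
model = NMF(n_components=K, init="nndsvd", max_iter=500)
W = model.fit_transform(doc_word)                 # document-topic loadings
H = model.components_                             # topic-word matrix
print(np.linalg.norm(doc_word - W @ H) / np.linalg.norm(doc_word))  # small relative error
```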
cs.IR cs.LG | 10.5121/ijdkp.2012.2201 | 1204.2058 | null | null | http://arxiv.org/abs/1204.2058v1 | 2012-04-10T06:59:48Z | 2012-04-10T06:59:48Z | A technical study and analysis on fuzzy similarity based models for text
classification | Efficient and effective text document classification, i.e., capably
categorizing text documents into mutually exclusive categories, is becoming a
challenging and highly important problem. Fuzzy similarity provides a way to
measure the similarity of features among various documents. In this paper, a
technical review of various fuzzy similarity based models is given. These
models are discussed and compared to frame out their use and necessity, and a
tour of the different fuzzy-similarity-based methodologies is provided,
showing how text and web documents are categorized efficiently into different
categories. Various experimental results of these models are also discussed,
and the technical comparisons among each model's parameters are shown in the
form of a 3-D chart. This study and technical review provide a strong base
for research work on fuzzy similarity based text document categorization.
| [
"Shalini Puri and Sona Kaushik",
"['Shalini Puri' 'Sona Kaushik']"
] |
cs.IR cs.LG | null | 1204.2061 | null | null | http://arxiv.org/pdf/1204.2061v1 | 2012-04-10T07:05:20Z | 2012-04-10T07:05:20Z | A Fuzzy Similarity Based Concept Mining Model for Text Classification | Text Classification is a challenging and a red hot field in the current
scenario and has great importance in text categorization applications. A lot
of research work has been done in this field, but there is still a need to
categorize a collection of text documents into mutually exclusive categories
by extracting the concepts or features using a supervised learning paradigm
and different classification algorithms. In this paper, a new Fuzzy
Similarity Based Concept Mining Model (FSCMM) is proposed to classify a set
of text documents into pre-defined Category Groups (CG) by training and
preparing them at the sentence, document and integrated-corpora levels, with
feature reduction and ambiguity removal at each level to achieve high system
performance. A Fuzzy Feature Category Similarity Analyzer (FFCSA) is used to
analyze each extracted feature of the Integrated Corpora Feature Vector
(ICFV) against the corresponding categories or classes. The model uses a
Support Vector Machine Classifier (SVMC) to classify the training data
patterns correctly into two groups, i.e., +1 and -1, thereby producing
accurate and correct results. The proposed model works efficiently and
effectively, with strong performance and high-accuracy results.
| [
"['Shalini Puri']",
"Shalini Puri"
] |
stat.ML cs.LG | null | 1204.2069 | null | null | http://arxiv.org/pdf/1204.2069v4 | 2014-02-20T00:46:44Z | 2012-04-10T07:50:07Z | Asymptotic Accuracy of Distribution-Based Estimation for Latent
Variables | Hierarchical statistical models are widely employed in information science
and data engineering. The models consist of two types of variables: observable
variables that represent the given data and latent variables for the
unobservable labels. An asymptotic analysis of the models plays an important
role in evaluating the learning process; the result of the analysis is applied
not only to theoretical but also to practical situations, such as optimal model
selection and active learning. There are many studies of generalization errors,
which measure the prediction accuracy of the observable variables. However, the
accuracy of estimating the latent variables has not yet been elucidated. For a
quantitative evaluation of this, the present paper formulates
distribution-based functions for the errors in the estimation of the latent
variables. The asymptotic behavior is analyzed for both the maximum likelihood
and the Bayes methods.
| [
"Keisuke Yamazaki",
"['Keisuke Yamazaki']"
] |
cs.LG cs.CV stat.ML | null | 1204.2311 | null | null | http://arxiv.org/pdf/1204.2311v1 | 2012-04-11T01:03:03Z | 2012-04-11T01:03:03Z | Robust Nonnegative Matrix Factorization via $L_1$ Norm Regularization | Nonnegative Matrix Factorization (NMF) is a widely used technique in many
applications such as face recognition, motion segmentation, etc. It
approximates the nonnegative data in an original high dimensional space with a
linear representation in a low dimensional space by using the product of two
nonnegative matrices. In many applications data are often partially corrupted
with large additive noise. When the positions of noise are known, some existing
variants of NMF can be applied by treating these corrupted entries as missing
values. However, the positions are often unknown in many real world
applications, which prevents the usage of traditional NMF or other existing
variants of NMF. This paper proposes a Robust Nonnegative Matrix Factorization
(RobustNMF) algorithm that explicitly models the partial corruption as large
additive noise without requiring the information of positions of noise. In
practice, large additive noise can be used to model outliers. In particular,
the proposed method jointly approximates the clean data matrix with the product
of two nonnegative matrices and estimates the positions and values of
outliers/noise. An efficient iterative optimization algorithm with a solid
theoretical justification has been proposed to learn the desired matrix
factorization. Experimental results demonstrate the advantages of the proposed
algorithm.
| [
"Bin Shen, Luo Si, Rongrong Ji, Baodi Liu",
"['Bin Shen' 'Luo Si' 'Rongrong Ji' 'Baodi Liu']"
] |
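
A hedged sketch of the robust-NMF idea described above: model the data as W H + S with S a sparse outlier matrix, and alternate standard multiplicative NMF updates on the outlier-corrected data with an L1 soft-thresholding update for S. This is a generic alternating scheme written for illustration, not necessarily the paper's exact update rules or guarantees.

```python
import numpy as np

def robust_nmf(X, rank, lam=0.1, iters=200, eps=1e-9):
    """Approximate nonnegative X as W @ H + S, with S sparse (L1-penalized) outliers.
    Alternates multiplicative NMF updates on (X - S) with soft-thresholding for S."""
    rng = np.random.default_rng(0)
    m, n = X.shape
    W, H, S = rng.random((m, rank)), rng.random((rank, n)), np.zeros((m, n))
    for _ in range(iters):
        R = np.maximum(X - S, 0)                               # outlier-corrected data
        W *= (R @ H.T) / (W @ H @ H.T + eps)                   # Lee & Seung multiplicative updates
        H *= (W.T @ R) / (W.T @ W @ H + eps)
        E = X - W @ H
        S = np.sign(E) * np.maximum(np.abs(E) - lam, 0)        # soft-threshold residual -> sparse outliers
    return W, H, S

# toy usage: low-rank nonnegative data with a few large corruptions
rng = np.random.default_rng(1)
clean = rng.random((30, 4)) @ rng.random((4, 20))
X = clean.copy()
X[rng.random(X.shape) < 0.05] += 5.0                           # sparse large additive noise
W, H, S = robust_nmf(X, rank=4, lam=0.5)
print(np.linalg.norm(clean - W @ H) / np.linalg.norm(clean))   # error against the clean data
```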