categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | sequence
---|---|---|---|---|---|---|---|---|---|---
stat.ME cs.LG stat.ML | null | 1204.2477 | null | null | http://arxiv.org/pdf/1204.2477v1 | 2012-04-11T15:35:43Z | 2012-04-11T15:35:43Z | A Simple Explanation of A Spectral Algorithm for Learning Hidden Markov
Models | A simple linear algebraic explanation of the algorithm in "A Spectral
Algorithm for Learning Hidden Markov Models" (COLT 2009). Most of the content
is in Figure 2; the text just makes everything precise in four nearly-trivial
claims.
| [
"Matthew James Johnson",
"['Matthew James Johnson']"
] |
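The construction summarized in the abstract above can be sketched compactly. Below is a minimal NumPy sketch of the observable-operator form of the Hsu-Kakade-Zhang spectral algorithm, assuming the unigram, bigram, and trigram probability tables have already been estimated from data; all function and variable names are illustrative, and this is not the author's code.

```python
import numpy as np

def spectral_hmm(P1, P21, P3x1, k):
    """Observable-operator parameters of the spectral HMM algorithm.

    P1   : (n,) unigram probabilities,    P1[i]         = Pr[x1 = i]
    P21  : (n, n) bigram matrix,          P21[j, i]     = Pr[x2 = j, x1 = i]
    P3x1 : list of (n, n) trigram slices, P3x1[x][j, i] = Pr[x3 = j, x2 = x, x1 = i]
    k    : number of hidden states
    """
    U = np.linalg.svd(P21)[0][:, :k]          # top-k left singular vectors of P21
    b1 = U.T @ P1                              # initial parameter
    binf = np.linalg.pinv(P21.T @ U) @ P1      # normalization parameter
    B = [U.T @ Px @ np.linalg.pinv(U.T @ P21) for Px in P3x1]  # one operator per symbol
    return b1, binf, B

def sequence_probability(b1, binf, B, obs):
    """Estimated Pr[x1..xt] = binf^T B[x_t] ... B[x_1] b1."""
    v = b1
    for x in obs:
        v = B[x] @ v
    return float(binf @ v)
```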
stat.ML cs.CL cs.IR cs.LG | null | 1204.2523 | null | null | http://arxiv.org/pdf/1204.2523v1 | 2012-04-11T18:53:58Z | 2012-04-11T18:53:58Z | Concept Modeling with Superwords | In information retrieval, a fundamental goal is to transform a document into
concepts that are representative of its content. The term "representative" is
in itself challenging to define, and various tasks require different
granularities of concepts. In this paper, we aim to model concepts that are
sparse over the vocabulary, and that flexibly adapt their content based on
other relevant semantic information such as textual structure or associated
image features. We explore a Bayesian nonparametric model based on nested beta
processes that allows for inferring an unknown number of strictly sparse
concepts. The resulting model provides an inherently different representation
of concepts than a standard LDA (or HDP) based topic model, and allows for
direct incorporation of semantic features. We demonstrate the utility of this
representation on multilingual blog data and the Congressional Record.
| [
"Khalid El-Arini, Emily B. Fox, Carlos Guestrin",
"['Khalid El-Arini' 'Emily B. Fox' 'Carlos Guestrin']"
] |
cs.DS cs.LG stat.ML | null | 1204.2581 | null | null | http://arxiv.org/pdf/1204.2581v1 | 2012-04-11T22:14:05Z | 2012-04-11T22:14:05Z | Modeling Relational Data via Latent Factor Blockmodel | In this paper we address the problem of modeling relational data, which
appear in many applications such as social network analysis, recommender
systems and bioinformatics. Previous studies either consider latent feature
based models while disregarding the local structure of the network, or focus
exclusively on capturing the local structure of objects via latent blockmodels
without coupling it with the latent characteristics of objects. To combine the
benefits of the previous work, we propose a novel model that can simultaneously
incorporate the effect of latent features and covariates, if any, as well as the
effect of latent structure that may exist in the data. To achieve this, we
model the relation graph as a function of both latent feature factors and
latent cluster memberships of objects to collectively discover globally
predictive intrinsic properties of objects and capture latent block structure
in the network to improve prediction performance. We also develop an
optimization transfer algorithm based on the generalized EM-style strategy to
learn the latent factors. We demonstrate the efficacy of our proposed model on
the link prediction and cluster analysis tasks, and extensive experiments
on synthetic data and several real-world datasets suggest that our proposed
LFBM model outperforms other state-of-the-art approaches in the evaluated
tasks.
| [
"['Sheng Gao' 'Ludovic Denoyer' 'Patrick Gallinari']",
"Sheng Gao and Ludovic Denoyer and Patrick Gallinari"
] |
cs.SI cs.LG stat.ML | null | 1204.2588 | null | null | http://arxiv.org/pdf/1204.2588v1 | 2012-04-11T22:58:46Z | 2012-04-11T22:58:46Z | Probabilistic Latent Tensor Factorization Model for Link Pattern
Prediction in Multi-relational Networks | This paper aims at the problem of link pattern prediction in collections of
objects connected by multiple relation types, where each type may play a
distinct role. While common link analysis models are limited to single-type
link prediction, we attempt here to capture the correlations among different
relation types and reveal the impact of various relation types on performance
quality. For that, we define the overall relations between object pairs as a
\textit{link pattern}, which consists of the interaction pattern and connection
structure in the network, and then use tensor formalization to jointly model
and predict the link patterns, which we refer to as \textit{Link Pattern
Prediction} (LPP) problem. To address the issue, we propose a Probabilistic
Latent Tensor Factorization (PLTF) model by introducing another latent factor
for multiple relation types, and provide a hierarchical Bayesian treatment of
the proposed probabilistic model to avoid overfitting when solving the LPP
problem. To learn the proposed model, we develop an efficient Markov Chain Monte
Carlo sampling method. Extensive experiments are conducted on several real
world datasets and demonstrate significant improvements over several existing
state-of-the-art methods.
| [
"['Sheng Gao' 'Ludovic Denoyer' 'Patrick Gallinari']",
"Sheng Gao and Ludovic Denoyer and Patrick Gallinari"
] |
cs.LG | null | 1204.2609 | null | null | http://arxiv.org/pdf/1204.2609v2 | 2012-04-16T02:44:25Z | 2012-04-12T03:49:15Z | Stochastic Feature Mapping for PAC-Bayes Classification | Probabilistic generative modeling of data distributions can potentially
exploit hidden information which is useful for discriminative classification.
This observation has motivated the development of approaches that couple
generative and discriminative models for classification. In this paper, we
propose a new approach to couple generative and discriminative models in a
unified framework based on PAC-Bayes risk theory. We first derive the
model-parameter-independent stochastic feature mapping from a practical MAP
classifier operating on generative models. Then we construct a linear
stochastic classifier equipped with the feature mapping, and derive the
explicit PAC-Bayes risk bounds for such a classifier for both supervised and
semi-supervised learning. Minimizing the risk bound, using an EM-like iterative
procedure, results in a new posterior over hidden variables (E-step) and the
update rules of model parameters (M-step). The derivation of the posterior is
always feasible due to the way the feature mapping is constructed and the
explicit form of the risk bound. The derived posterior allows the tuning of generative
models and subsequently the feature mappings for better classification. The
derived update rules of the model parameters are the same as those of the
uncoupled models, since the feature mapping is model-parameter-independent. Our
experiments show that the coupling between the generative data model and the
discriminative classifier via a stochastic feature mapping in this framework
leads to a general classification tool with state-of-the-art performance.
| [
"['Xiong Li' 'Tai Sing Lee' 'Yuncai Liu']",
"Xiong Li and Tai Sing Lee and Yuncai Liu"
] |
cs.LG stat.ME | null | 1204.3251 | null | null | http://arxiv.org/pdf/1204.3251v2 | 2012-06-28T09:36:27Z | 2012-04-15T10:21:57Z | Plug-in martingales for testing exchangeability on-line | A standard assumption in machine learning is the exchangeability of data,
which is equivalent to assuming that the examples are generated from the same
probability distribution independently. This paper is devoted to testing the
assumption of exchangeability on-line: the examples arrive one by one, and
after receiving each example we would like to have a valid measure of the
degree to which the assumption of exchangeability has been falsified. Such
measures are provided by exchangeability martingales. We extend known
techniques for constructing exchangeability martingales and show that our new
method is competitive with the martingales introduced before. Finally, we
investigate the performance of our testing method on two benchmark datasets,
USPS and Statlog Satellite data; for the former, the known techniques give
satisfactory results, but for the latter our new more flexible method becomes
necessary.
| [
"Valentina Fedorova, Alex Gammerman, Ilia Nouretdinov, and Vladimir\n Vovk",
"['Valentina Fedorova' 'Alex Gammerman' 'Ilia Nouretdinov' 'Vladimir Vovk']"
] |
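For intuition, the simplest exchangeability martingale of the kind discussed above can be built from conformal p-values, which are i.i.d. uniform when the data are exchangeable. A sketch assuming a 1-NN nonconformity score and the classical "power" betting function; both choices are illustrative, not the paper's plug-in construction.

```python
import numpy as np

def conformal_p_values(xs, rng):
    """Smoothed conformal p-values with a 1-NN nonconformity score.
    Under exchangeability these are i.i.d. uniform on [0, 1]."""
    xs = np.asarray(xs, dtype=float)
    ps = []
    for n in range(1, len(xs) + 1):
        z = xs[:n]
        if n == 1:
            ps.append(rng.random())
            continue
        D = np.abs(z[:, None] - z[None, :])
        np.fill_diagonal(D, np.inf)
        a = D.min(axis=1)                      # distance to nearest other example
        gt = np.sum(a > a[-1])
        eq = np.sum(a == a[-1])
        ps.append((gt + rng.random() * eq) / n)
    return ps

def log_power_martingale(ps, eps=0.1):
    """Power martingale M_n = prod_i eps * p_i^(eps-1); large values
    are evidence against exchangeability."""
    logs = np.log(eps) + (eps - 1.0) * np.log(np.maximum(ps, 1e-12))
    return np.cumsum(logs)

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(4, 1, 200)])  # change-point
logM = log_power_martingale(conformal_p_values(data, rng))
print("final log10(M):", logM[-1] / np.log(10))  # typically large and positive here
```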
cs.LG cs.DS | null | 1204.3514 | null | null | http://arxiv.org/pdf/1204.3514v3 | 2012-05-25T15:53:51Z | 2012-04-16T15:10:32Z | Distributed Learning, Communication Complexity and Privacy | We consider the problem of PAC-learning from distributed data and analyze
fundamental communication complexity questions involved. We provide general
upper and lower bounds on the amount of communication needed to learn well,
showing that in addition to VC-dimension and covering number, quantities such
as the teaching-dimension and mistake-bound of a class play an important role.
We also present tight results for a number of common concept classes including
conjunctions, parity functions, and decision lists. For linear separators, we
show that for non-concentrated distributions, we can use a version of the
Perceptron algorithm to learn with much less communication than the number of
updates given by the usual margin bound. We also show how boosting can be
performed in a generic manner in the distributed setting to achieve
communication with only logarithmic dependence on 1/epsilon for any concept
class, and demonstrate how recent work on agnostic learning from
class-conditional queries can be used to achieve low communication in agnostic
settings as well. We additionally present an analysis of privacy, considering
both differential privacy and a notion of distributional privacy that is
especially appealing in this context.
| [
"Maria-Florina Balcan, Avrim Blum, Shai Fine, and Yishay Mansour",
"['Maria-Florina Balcan' 'Avrim Blum' 'Shai Fine' 'Yishay Mansour']"
] |
cs.LG stat.ML | null | 1204.3523 | null | null | http://arxiv.org/pdf/1204.3523v1 | 2012-04-16T15:25:50Z | 2012-04-16T15:25:50Z | Efficient Protocols for Distributed Classification and Optimization | In distributed learning, the goal is to perform a learning task over data
distributed across multiple nodes with minimal (expensive) communication. Prior
work (Daume III et al., 2012) proposes a general model that bounds the
communication required for learning classifiers while allowing for $\epsilon$
training error on linearly separable data adversarially distributed across
nodes.
In this work, we develop key improvements and extensions to this basic model.
Our first result is a two-party multiplicative-weight-update based protocol
that uses $O(d^2 \log{1/\epsilon})$ words of communication to classify distributed
data in arbitrary dimension $d$, $\epsilon$-optimally. This readily extends to
classification over $k$ nodes with $O(kd^2 \log{1/\epsilon})$ words of
communication. Our proposed protocol is simple to implement and is considerably
more efficient than the baselines we compare against, as demonstrated by our empirical
results.
In addition, we illustrate general algorithm design paradigms for doing
efficient learning over distributed data. We show how to solve
fixed-dimensional and high-dimensional linear programming efficiently in a
distributed setting where constraints may be distributed across nodes. Since
many learning problems can be viewed as convex optimization problems where
constraints are generated by individual points, this models many typical
distributed learning scenarios. Our techniques make use of a novel connection
from multipass streaming, as well as adapting the multiplicative-weight-update
framework more generally to a distributed setting. As a consequence, our
methods extend to the wide range of problems solvable using these techniques.
| [
"['Hal Daume III' 'Jeff M. Phillips' 'Avishek Saha'\n 'Suresh Venkatasubramanian']",
"Hal Daume III, Jeff M. Phillips, Avishek Saha, Suresh\n Venkatasubramanian"
] |
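The protocol described in the abstract above is built around multiplicative-weight updates over training examples. The sketch below shows the generic single-machine version of that primitive (reweight the examples a weak separator gets wrong); the two-party communication pattern, the weak learner, and all constants here are simplifying assumptions rather than the paper's exact protocol.

```python
import numpy as np

def weak_separator(X, y, w):
    """Illustrative weak learner: weighted least-squares hyperplane."""
    sw = np.sqrt(w)
    return np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]

def mwu_train(X, y, rounds=30, eta=0.5):
    """Multiplicative-weights sketch: keep a distribution over examples
    and upweight the ones the current weak hypothesis misclassifies."""
    n = len(y)
    w = np.ones(n) / n
    hyps = []
    for _ in range(rounds):
        h = weak_separator(X, y, w)
        hyps.append(h)
        miss = np.sign(X @ h) != y
        w *= np.where(miss, 1.0 + eta, 1.0)   # multiplicative update
        w /= w.sum()
    return hyps

def mwu_predict(hyps, X):
    """Majority vote over the collected weak hypotheses."""
    votes = np.sum([np.sign(X @ h) for h in hyps], axis=0)
    return np.sign(votes)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0]))   # linearly separable labels
hyps = mwu_train(X, y)
print("training accuracy:", np.mean(mwu_predict(hyps, X) == y))
```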
cs.SI cs.LG | null | 1204.3611 | null | null | http://arxiv.org/pdf/1204.3611v1 | 2012-04-16T19:39:13Z | 2012-04-16T19:39:13Z | Learning to Predict the Wisdom of Crowds | The problem of "approximating the crowd" is that of estimating the crowd's
majority opinion by querying only a subset of it. Algorithms that approximate
the crowd can intelligently stretch a limited budget for a crowdsourcing task.
We present an algorithm, "CrowdSense," that works in an online fashion to
dynamically sample subsets of labelers based on an exploration/exploitation
criterion. The algorithm produces a weighted combination of a subset of the
labelers' votes that approximates the crowd's opinion.
| [
"['Seyda Ertekin' 'Haym Hirsh' 'Cynthia Rudin']",
"Seyda Ertekin, Haym Hirsh, Cynthia Rudin"
] |
cs.CV cs.LG cs.NE | null | 1204.3968 | null | null | http://arxiv.org/pdf/1204.3968v1 | 2012-04-18T03:48:38Z | 2012-04-18T03:48:38Z | Convolutional Neural Networks Applied to House Numbers Digit
Classification | We classify digits of real-world house numbers using convolutional neural
networks (ConvNets). ConvNets are hierarchical feature learning neural networks
whose structure is biologically inspired. Unlike many popular vision approaches
that are hand-designed, ConvNets can automatically learn a unique set of
features optimized for a given task. We augment the traditional ConvNet
architecture by learning multi-stage features and by using Lp pooling, and
establish a new state-of-the-art of 94.85% accuracy on the SVHN dataset (45.2%
error improvement). Furthermore, we analyze the benefits of different pooling
methods and multi-stage features in ConvNets. The source code and a tutorial
are available at eblearn.sf.net.
| [
"Pierre Sermanet, Soumith Chintala, Yann LeCun",
"['Pierre Sermanet' 'Soumith Chintala' 'Yann LeCun']"
] |
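Lp pooling, used in the abstract above, interpolates between average pooling (p = 1) and max pooling (p tending to infinity). A minimal NumPy sketch for non-overlapping windows on a 2-D feature map; the window size and the value of p are illustrative, and the paper may weight the window (e.g., with a Gaussian) rather than take a plain mean.

```python
import numpy as np

def lp_pool2d(x, k=2, p=4.0):
    """Non-overlapping Lp pooling: ( mean over window of |x|^p )^(1/p).

    x : (H, W) feature map with H and W divisible by k
    """
    H, W = x.shape
    blocks = x.reshape(H // k, k, W // k, k)          # split into k-by-k blocks
    return (np.abs(blocks) ** p).mean(axis=(1, 3)) ** (1.0 / p)

fm = np.arange(16.0).reshape(4, 4)
print(lp_pool2d(fm, k=2, p=1.0))    # equals average pooling of |x|
print(lp_pool2d(fm, k=2, p=64.0))   # approaches max pooling
```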
cs.LG stat.CO stat.ML | null | 1204.3972 | null | null | http://arxiv.org/pdf/1204.3972v3 | 2013-03-13T21:55:59Z | 2012-04-18T04:43:24Z | EigenGP: Sparse Gaussian process models with data-dependent
eigenfunctions | Gaussian processes (GPs) provide a nonparametric representation of functions.
However, classical GP inference suffers from high computational cost and it is
difficult to design nonstationary GP priors in practice. In this paper, we
propose a sparse Gaussian process model, EigenGP, based on the Karhunen-Loeve
(KL) expansion of a GP prior. We use the Nystrom approximation to obtain data
dependent eigenfunctions and select these eigenfunctions by evidence
maximization. This selection reduces the number of eigenfunctions in our model
and provides a nonstationary covariance function. To handle nonlinear
likelihoods, we develop an efficient expectation propagation (EP) inference
algorithm, and couple it with expectation maximization for eigenfunction
selection. Because the eigenfunctions of a Gaussian kernel are associated with
clusters of samples (including both the labeled and unlabeled ones), selecting
relevant eigenfunctions enables EigenGP to conduct semi-supervised learning.
Our experimental results demonstrate improved predictive performance of EigenGP
over alternative state-of-the-art sparse GP and semi-supervised learning methods
for regression, classification, and semi-supervised classification.
| [
"Yuan Qi and Bo Dai and Yao Zhu",
"['Yuan Qi' 'Bo Dai' 'Yao Zhu']"
] |
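The Nyström step described above can be summarized in a few lines: eigendecompose the kernel matrix on a small set of points, then extend the eigenvectors to approximate eigenfunctions that can be evaluated anywhere. A generic sketch under an RBF kernel; this is not the full EigenGP model (no evidence maximization or EP inference here), and all names are illustrative.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def nystrom_eigenfunctions(Z, ell=1.0):
    """Eigendecompose the kernel on points Z; return approximate eigenvalues
    and a function evaluating the induced eigenfunctions anywhere."""
    Kzz = rbf(Z, Z, ell)
    lam, V = np.linalg.eigh(Kzz)
    lam, V = lam[::-1], V[:, ::-1]            # sort descending
    m = len(Z)

    def phi(X):
        # Nystrom extension: phi_j(x) = (sqrt(m) / lam_j) * k(x, Z) @ v_j
        return rbf(X, Z, ell) @ V * (np.sqrt(m) / np.maximum(lam, 1e-12))

    return lam / m, phi

Z = np.random.default_rng(0).normal(size=(20, 1))
evals, phi = nystrom_eigenfunctions(Z, ell=1.0)
X = np.linspace(-3, 3, 5)[:, None]
print(phi(X)[:, :3])   # first three approximate eigenfunctions at X
```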
cs.LG cs.GT | null | 1204.4145 | null | null | http://arxiv.org/pdf/1204.4145v1 | 2012-04-18T17:17:56Z | 2012-04-18T17:17:56Z | Learning From An Optimization Viewpoint | In this dissertation we study statistical and online learning problems from
an optimization viewpoint. The dissertation is divided into two parts:
I. We first consider the question of learnability for statistical learning
problems in the general learning setting. The question of learnability is well
studied and fully characterized for binary classification and for real valued
supervised learning problems using the theory of uniform convergence. However,
we show that for the general learning setting uniform convergence theory fails
to characterize learnability. To fill this void we use stability of learning
algorithms to fully characterize statistical learnability in the general
setting. Next we consider the problem of online learning. Unlike the
statistical learning framework there is a dearth of generic tools that can be
used to establish learnability and rates for online learning problems in
general. We provide online analogs to classical tools from statistical learning
theory like Rademacher complexity, covering numbers, etc. We further use these
tools to fully characterize learnability for online supervised learning
problems.
II. In the second part, for general classes of convex learning problems, we
provide appropriate mirror descent (MD) updates for online and statistical
learning of these problems. Further, we show that MD is near optimal
for online convex learning and for most cases, is also near optimal for
statistical convex learning. We next consider the problem of convex
optimization and show that oracle complexity can be lower bounded by the so
called fat-shattering dimension of the associated linear class. Thus we
establish a strong connection between offline convex optimization problems and
statistical learning problems. We also show that for a large class of high
dimensional optimization problems, MD is in fact near optimal even for convex
optimization.
| [
"['Karthik Sridharan']",
"Karthik Sridharan"
] |
cs.LG stat.CO stat.ML | null | 1204.4166 | null | null | http://arxiv.org/pdf/1204.4166v2 | 2012-08-29T16:02:21Z | 2012-04-18T19:21:59Z | Message passing with relaxed moment matching | Bayesian learning is often hampered by large computational expense. As a
powerful generalization of popular belief propagation, expectation propagation
(EP) efficiently approximates the exact Bayesian computation. Nevertheless, EP
can be sensitive to outliers and suffer from divergence for difficult cases. To
address this issue, we propose a new approximate inference approach, relaxed
expectation propagation (REP). It relaxes the moment matching requirement of
expectation propagation by adding a relaxation factor into the KL minimization.
We penalize this relaxation with an $\ell_1$ penalty. As a result, when two
distributions in the relaxed KL divergence are similar, the relaxation factor
will be penalized to zero and, therefore, we obtain the original moment
matching; in the presence of outliers, these two distributions are
significantly different and the relaxation factor will be used to reduce the
contribution of the outlier. Based on this penalized KL minimization, REP is
robust to outliers and can greatly improve the posterior approximation quality
over EP. To examine the effectiveness of REP, we apply it to Gaussian process
classification, a task known to be suitable to EP. Our classification results
on synthetic and UCI benchmark datasets demonstrate significant improvement of
REP over EP and Power EP in terms of algorithmic stability, estimation
accuracy, and predictive performance.
| [
"Yuan Qi and Yandong Guo",
"['Yuan Qi' 'Yandong Guo']"
] |
cs.AI cs.LG cs.NE cs.SY | 10.1145/1569901.1570075 | 1204.4200 | null | null | http://arxiv.org/abs/1204.4200v2 | 2014-10-18T12:20:46Z | 2012-04-18T20:30:23Z | Discrete Dynamical Genetic Programming in XCS | A number of representation schemes have been presented for use within
Learning Classifier Systems, ranging from binary encodings to neural networks.
This paper presents results from an investigation into using a discrete
dynamical system representation within the XCS Learning Classifier System. In
particular, asynchronous random Boolean networks are used to represent the
traditional condition-action production system rules. It is shown to be
possible to use self-adaptive, open-ended evolution to design an ensemble of
such discrete dynamical systems within XCS to solve a number of well-known test
problems.
| [
"['Richard J. Preen' 'Larry Bull']",
"Richard J. Preen and Larry Bull"
] |
cs.AI cs.LG cs.NE cs.SY | 10.1145/2001858.2001952 | 1204.4202 | null | null | http://arxiv.org/abs/1204.4202v1 | 2012-04-18T20:40:18Z | 2012-04-18T20:40:18Z | Fuzzy Dynamical Genetic Programming in XCSF | A number of representation schemes have been presented for use within
Learning Classifier Systems, ranging from binary encodings to Neural Networks,
and more recently Dynamical Genetic Programming (DGP). This paper presents
results from an investigation into using a fuzzy DGP representation within the
XCSF Learning Classifier System. In particular, asynchronous Fuzzy Logic
Networks are used to represent the traditional condition-action production
system rules. It is shown to be possible to use self-adaptive, open-ended
evolution to design an ensemble of such fuzzy dynamical systems within XCSF to
solve several well-known continuous-valued test problems.
| [
"['Richard J. Preen' 'Larry Bull']",
"Richard J. Preen and Larry Bull"
] |
cs.LG cs.AI cs.CV | null | 1204.4294 | null | null | http://arxiv.org/pdf/1204.4294v1 | 2012-04-19T09:29:10Z | 2012-04-19T09:29:10Z | Learning in Riemannian Orbifolds | Learning in Riemannian orbifolds is motivated by existing machine learning
algorithms that directly operate on finite combinatorial structures such as
point patterns, trees, and graphs. These methods, however, lack statistical
justification. This contribution derives consistency results for learning
problems in structured domains and thereby generalizes learning in vector
spaces and manifolds.
| [
"Brijnesh J. Jain and Klaus Obermayer",
"['Brijnesh J. Jain' 'Klaus Obermayer']"
] |
cs.LG | null | 1204.4329 | null | null | http://arxiv.org/pdf/1204.4329v1 | 2012-04-19T12:03:20Z | 2012-04-19T12:03:20Z | Supervised feature evaluation by consistency analysis: application to
measure sets used to characterise geographic objects | Nowadays, supervised learning is commonly used in many domains. Indeed, many
works propose to learn new knowledge from examples that translate the expected
behaviour of the considered system. A key issue of supervised learning concerns
the description language used to represent the examples. In this paper, we
propose a method to evaluate the feature set used to describe them. Our method
is based on the computation of the consistency of the example base. We carried
out a case study in the domain of geomatics in order to evaluate the sets of
measures used to characterise geographic objects. The case study shows that our
method makes it possible to give relevant evaluations of measure sets.
| [
"Patrick Taillandier (UMMISCO), Alexis Drogoul (UMMISCO, MSI)",
"['Patrick Taillandier' 'Alexis Drogoul']"
] |
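One simple instance of the consistency computation the abstract above relies on: a feature set is inconsistent to the extent that examples with identical descriptions carry conflicting labels. The measure below (majority-agreement rate per duplicated feature vector) is an illustrative assumption; the authors' exact measure may differ.

```python
from collections import Counter, defaultdict

def consistency(examples):
    """Fraction of examples whose label agrees with the majority label
    among all examples sharing the same feature vector.

    examples: list of (features_tuple, label) pairs
    """
    groups = defaultdict(Counter)
    for feats, label in examples:
        groups[feats][label] += 1
    agree = sum(c.most_common(1)[0][1] for c in groups.values())
    return agree / len(examples)

data = [((1, 0), "a"), ((1, 0), "a"), ((1, 0), "b"), ((0, 1), "b")]
print(consistency(data))   # 0.75: one conflicting example out of four
```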
cs.HC cs.LG | null | 1204.4332 | null | null | http://arxiv.org/pdf/1204.4332v1 | 2012-04-19T12:10:10Z | 2012-04-19T12:10:10Z | Designing generalisation evaluation function through human-machine
dialogue | Automated generalisation has known important improvements these last few
years. However, an issue that still deserves more study concerns the automatic
evaluation of generalised data. Indeed, many automated generalisation systems
require the utilisation of an evaluation function to automatically assess
generalisation outcomes. In this paper, we propose a new approach dedicated to
the design of such a function. This approach allows an imperfectly defined
evaluation function to be revised through a man-machine dialogue. The user
gives their preferences to the system by comparing generalisation outcomes.
Machine Learning techniques are then used to improve the evaluation function.
An experiment carried out on buildings shows that our approach significantly
improves generalisation evaluation functions defined by users.
| [
"['Patrick Taillandier' 'Julien Gaffuri']",
"Patrick Taillandier (UMMISCO), Julien Gaffuri (COGIT)"
] |
cs.LG cs.CV stat.ML | null | 1204.4521 | null | null | http://arxiv.org/pdf/1204.4521v1 | 2012-04-20T03:01:56Z | 2012-04-20T03:01:56Z | A Privacy-Aware Bayesian Approach for Combining Classifier and Cluster
Ensembles | This paper introduces a privacy-aware Bayesian approach that combines
ensembles of classifiers and clusterers to perform semi-supervised and
transductive learning. We consider scenarios where instances and their
classification/clustering results are distributed across different data sites
and have sharing restrictions. As a special case, the privacy-aware computation
of the model when instances of the target data are distributed across different
data sites is also discussed. Experimental results show that the proposed
approach can provide good classification accuracies while adhering to the
data/model sharing constraints.
| [
"Ayan Acharya, Eduardo R. Hruschka, Joydeep Ghosh",
"['Ayan Acharya' 'Eduardo R. Hruschka' 'Joydeep Ghosh']"
] |
stat.ML cs.LG math.OC | null | 1204.4539 | null | null | http://arxiv.org/pdf/1204.4539v3 | 2013-08-29T13:12:00Z | 2012-04-20T06:24:37Z | Supervised Feature Selection in Graphs with Path Coding Penalties and
Network Flows | We consider supervised learning problems where the features are embedded in a
graph, such as gene expressions in a gene network. In this context, it is of
much interest to automatically select a subgraph with few connected components;
by exploiting prior knowledge, one can indeed improve the prediction
performance or obtain results that are easier to interpret. Regularization or
penalty functions for selecting features in graphs have recently been proposed,
but they raise new algorithmic challenges. For example, they typically require
solving a combinatorially hard selection problem among all connected subgraphs.
In this paper, we propose computationally feasible strategies to select a
sparse and well-connected subset of features sitting on a directed acyclic
graph (DAG). We introduce structured sparsity penalties over paths on a DAG
called "path coding" penalties. Unlike existing regularization functions that
model long-range interactions between features in a graph, path coding
penalties are tractable. The penalties and their proximal operators involve
path selection problems, which we efficiently solve by leveraging network flow
optimization. We experimentally show on synthetic, image, and genomic data that
our approach is scalable and leads to more connected subgraphs than other
regularization functions for graphs.
| [
"Julien Mairal and Bin Yu",
"['Julien Mairal' 'Bin Yu']"
] |
cs.LG stat.ML | null | 1204.4710 | null | null | http://arxiv.org/pdf/1204.4710v2 | 2013-03-29T22:04:06Z | 2012-04-20T19:26:05Z | Regret in Online Combinatorial Optimization | We address online linear optimization problems when the possible actions of
the decision maker are represented by binary vectors. The regret of the
decision maker is the difference between her realized loss and the best loss
she would have achieved by picking, in hindsight, the best possible action. Our
goal is to understand the magnitude of the best possible (minimax) regret. We
study the problem under three different assumptions for the feedback the
decision maker receives: full information, and the partial information models
of the so-called "semi-bandit" and "bandit" problems. Combining the Mirror
Descent algorithm and the INF (Implicitly Normalized Forecaster) strategy, we
are able to prove optimal bounds for the semi-bandit case. We also recover the
optimal bounds for the full information setting. In the bandit case we discuss
existing results in light of a new lower bound, and suggest a conjecture on the
optimal regret in that case. Finally we also prove that the standard
exponentially weighted average forecaster is provably suboptimal in the setting
of online combinatorial optimization.
| [
"Jean-Yves Audibert, S\\'ebastien Bubeck and G\\'abor Lugosi",
"['Jean-Yves Audibert' 'Sébastien Bubeck' 'Gábor Lugosi']"
] |
math.OC cs.LG cs.SY | null | 1204.4717 | null | null | http://arxiv.org/pdf/1204.4717v1 | 2012-04-20T19:55:30Z | 2012-04-20T19:55:30Z | Energy-Efficient Building HVAC Control Using Hybrid System LBMPC | Improving the energy-efficiency of heating, ventilation, and air-conditioning
(HVAC) systems has the potential to realize large economic and societal
benefits. This paper concerns the system identification of a hybrid system
model of a building-wide HVAC system and its subsequent control using a hybrid
system formulation of learning-based model predictive control (LBMPC). Here,
the learning refers to model updates to the hybrid system model that
incorporate the heating effects due to occupancy, solar effects, outside air
temperature (OAT), and equipment, in addition to integrator dynamics inherently
present in low-level control. Though we make significant modeling
simplifications, our corresponding controller that uses this model is able to
experimentally achieve a large reduction in energy usage without any
degradations in occupant comfort. It is in this way that we justify the
modeling simplifications that we have made. We conclude by presenting results
from experiments on our building HVAC testbed, which show an average of 1.5MWh
of energy savings per day (p = 0.002) with a 95% confidence interval of 1.0MWh
to 2.1MWh of energy savings.
| [
"['Anil Aswani' 'Neal Master' 'Jay Taneja' 'Andrew Krioukov' 'David Culler'\n 'Claire Tomlin']",
"Anil Aswani, Neal Master, Jay Taneja, Andrew Krioukov, David Culler,\n Claire Tomlin"
] |
cs.LG cs.AI cs.HC | null | 1204.4990 | null | null | http://arxiv.org/pdf/1204.4990v1 | 2012-04-23T08:02:19Z | 2012-04-23T08:02:19Z | Objective Function Designing Led by User Preferences Acquisition | Many real world problems can be defined as optimisation problems in which the
aim is to maximise an objective function. The quality of obtained solution is
directly linked to the pertinence of the used objective function. However,
designing such function, which has to translate the user needs, is usually
fastidious. In this paper, a method to help user objective functions designing
is proposed. Our approach, which is highly interactive, is based on man machine
dialogue and more particularly on the comparison of problem instance solutions
by the user. We propose an experiment in the domain of cartographic
generalisation that shows promising results.
| [
"['Patrick Taillandier' 'Julien Gaffuri']",
"Patrick Taillandier (UMMISCO), Julien Gaffuri (COGIT)"
] |
cs.AI cs.LG | 10.1145/1456223.1456281 | 1204.4991 | null | null | http://arxiv.org/abs/1204.4991v1 | 2012-04-23T08:03:06Z | 2012-04-23T08:03:06Z | Knowledge revision in systems based on an informed tree search strategy
: application to cartographic generalisation | Many real world problems can be expressed as optimisation problems. Solving
this kind of problems means to find, among all possible solutions, the one that
maximises an evaluation function. One approach to solve this kind of problem is
to use an informed search strategy. The principle of this kind of strategy is
to use problem-specific knowledge beyond the definition of the problem itself
to find solutions more efficiently than with an uninformed strategy. This kind
of strategy requires defining problem-specific knowledge (heuristics). The
efficiency and the effectiveness of systems based on it depend directly on the
quality of the knowledge used. Unfortunately, acquiring and maintaining such
knowledge can be tedious. The objective of the work presented in this paper is to
propose an automatic knowledge revision approach for systems based on an
informed tree search strategy. Our approach consists of analysing the system
execution logs and revising knowledge based on these logs by modelling the
revision problem as a knowledge space exploration problem. We present an
experiment we carried out in an application domain where informed search
strategies are often used: cartographic generalisation.
| [
"['Patrick Taillandier' 'Cécile Duchêne' 'Alexis Drogoul']",
"Patrick Taillandier (COGIT, UMMISCO), C\\'ecile Duch\\^ene (COGIT),\n Alexis Drogoul (UMMISCO, MSI)"
] |
stat.ML cs.LG | null | 1204.5043 | null | null | http://arxiv.org/pdf/1204.5043v2 | 2012-06-12T08:59:52Z | 2012-04-23T12:35:56Z | Sparse Prediction with the $k$-Support Norm | We derive a novel norm that corresponds to the tightest convex relaxation of
sparsity combined with an $\ell_2$ penalty. We show that this new {\em
$k$-support norm} provides a tighter relaxation than the elastic net and is
thus a good replacement for the Lasso or the elastic net in sparse prediction
problems. Through the study of the $k$-support norm, we also bound the
looseness of the elastic net, thus shedding new light on it and providing
justification for its use.
| [
"['Andreas Argyriou' 'Rina Foygel' 'Nathan Srebro']",
"Andreas Argyriou and Rina Foygel and Nathan Srebro"
] |
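A concrete handle on the k-support norm discussed above: its dual norm is simply the l2 norm of the k largest-magnitude coordinates, interpolating between the l-infinity norm (k = 1, the dual of the Lasso's l1) and the l2 norm (k = d). A small sketch of that dual, assuming the dual characterization from the paper; computing the primal norm takes a slightly longer closed form omitted here.

```python
import numpy as np

def k_support_dual_norm(u, k):
    """Dual of the k-support norm: l2 norm of the k largest |u_i|."""
    top = np.sort(np.abs(u))[-k:]
    return float(np.sqrt((top ** 2).sum()))

u = np.array([3.0, -1.0, 0.5, 2.0])
print(k_support_dual_norm(u, 1))   # == max |u_i|   (k = 1: dual of the l1 norm)
print(k_support_dual_norm(u, 4))   # == full l2 norm (k = d: dual of the l2 norm)
```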
cs.LG cs.CV | 10.1109/TIP.2013.2246175 | 1204.5309 | null | null | http://arxiv.org/abs/1204.5309v3 | 2013-03-26T11:51:49Z | 2012-04-24T08:56:42Z | Analysis Operator Learning and Its Application to Image Reconstruction | Exploiting a priori known structural information lies at the core of many
image reconstruction methods that can be stated as inverse problems. The
synthesis model, which assumes that images can be decomposed into a linear
combination of very few atoms of some dictionary, is now a well established
tool for the design of image reconstruction algorithms. An interesting
alternative is the analysis model, where the signal is multiplied by an
analysis operator and the outcome is assumed to be sparse. This approach
has only recently gained increasing interest. The quality of reconstruction
methods based on an analysis model depends heavily on the choice of a
suitable operator.
In this work, we present an algorithm for learning an analysis operator from
training images. Our method is based on an $\ell_p$-norm minimization on the
set of full rank matrices with normalized columns. We carefully introduce the
employed conjugate gradient method on manifolds, and explain the underlying
geometry of the constraints. Moreover, we compare our approach to
state-of-the-art methods for image denoising, inpainting, and single image
super-resolution. Our numerical results show competitive performance of our
general approach in all presented applications compared to the specialized
state-of-the-art techniques.
| [
"Simon Hawe, Martin Kleinsteuber, and Klaus Diepold",
"['Simon Hawe' 'Martin Kleinsteuber' 'Klaus Diepold']"
] |
cs.LG stat.ML | null | 1204.5721 | null | null | http://arxiv.org/pdf/1204.5721v2 | 2012-11-03T18:50:58Z | 2012-04-25T18:04:32Z | Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit
Problems | Multi-armed bandit problems are the most basic examples of sequential
decision problems with an exploration-exploitation trade-off. This is the
balance between staying with the option that gave highest payoffs in the past
and exploring new options that might give higher payoffs in the future.
Although the study of bandit problems dates back to the Thirties,
exploration-exploitation trade-offs arise in several modern applications, such
as ad placement, website optimization, and packet routing. Mathematically, a
multi-armed bandit is defined by the payoff process associated with each
option. In this survey, we focus on two extreme cases in which the analysis of
regret is particularly simple and elegant: i.i.d. payoffs and adversarial
payoffs. Besides the basic setting of finitely many actions, we also analyze
some of the most important variants and extensions, such as the contextual
bandit model.
| [
"S\\'ebastien Bubeck and Nicol\\`o Cesa-Bianchi",
"['Sébastien Bubeck' 'Nicolò Cesa-Bianchi']"
] |
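For the i.i.d. payoff case surveyed above, the canonical index policy is UCB1, whose regret grows logarithmically with the horizon. A self-contained sketch on Bernoulli arms; the arm means, horizon, and confidence constant are arbitrary illustrations rather than anything taken from the survey.

```python
import numpy as np

def ucb1(means, T, rng):
    """UCB1 on Bernoulli arms: play the arm maximizing
    empirical mean + sqrt(2 ln t / pulls); return cumulative pseudo-regret."""
    k = len(means)
    pulls = np.zeros(k)
    rewards = np.zeros(k)
    regret = 0.0
    for t in range(1, T + 1):
        if t <= k:
            a = t - 1                                      # play each arm once
        else:
            ucb = rewards / pulls + np.sqrt(2 * np.log(t) / pulls)
            a = int(np.argmax(ucb))
        r = float(rng.random() < means[a])                 # Bernoulli payoff
        pulls[a] += 1
        rewards[a] += r
        regret += max(means) - means[a]
    return regret

rng = np.random.default_rng(0)
print(ucb1([0.3, 0.5, 0.55], T=10_000, rng=rng))  # expected to grow ~ log T
```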
cs.LG math.CT | null | 1204.5802 | null | null | http://arxiv.org/abs/1204.5802v1 | 2012-04-26T01:35:10Z | 2012-04-26T01:35:10Z | Quantitative Concept Analysis | Formal Concept Analysis (FCA) begins from a context, given as a binary
relation between some objects and some attributes, and derives a lattice of
concepts, where each concept is given as a set of objects and a set of
attributes, such that the first set consists of all objects that satisfy all
attributes in the second, and vice versa. Many applications, though, provide
contexts with quantitative information, telling not just whether an object
satisfies an attribute, but also quantifying this satisfaction. Contexts in
this form arise as rating matrices in recommender systems, as occurrence
matrices in text analysis, as pixel intensity matrices in digital image
processing, etc. Such applications have attracted a lot of attention, and
several numeric extensions of FCA have been proposed. We propose the framework
of proximity sets (proxets), which subsume partially ordered sets (posets) as
well as metric spaces. One feature of this approach is that it extracts from
quantified contexts quantified concepts, and thus allows full use of the
available information. Another feature is that the categorical approach allows
analyzing any universal properties that the classical FCA and the new versions
may have, and thus provides structural guidance for aligning and combining the
approaches.
| [
"Dusko Pavlovic",
"['Dusko Pavlovic']"
] |
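The classical FCA construction that the paper above generalizes is compact in code: a concept is a pair (extent, intent) closed under the two derivation operators. A brute-force sketch over a toy binary context, exponential in the number of objects and for illustration only; real FCA algorithms are far more efficient.

```python
from itertools import combinations

def derive_attrs(objs, context):
    """Attributes shared by all objects in objs (all attributes if objs is empty)."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(ALL_ATTRS)

def derive_objs(attrs, context):
    """Objects possessing every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

# toy context: object -> set of attributes it satisfies
context = {"o1": {"a", "b"}, "o2": {"b", "c"}, "o3": {"a", "b", "c"}}
ALL_ATTRS = set().union(*context.values())

concepts = set()
for r in range(len(context) + 1):
    for objs in combinations(context, r):
        attrs = derive_attrs(set(objs), context)           # objs'
        extent = frozenset(derive_objs(attrs, context))    # objs'' (closure)
        concepts.add((extent, frozenset(attrs)))

for ext, inten in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(ext), "<->", sorted(inten))
```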
cs.DS cs.LG | null | 1204.5810 | null | null | http://arxiv.org/pdf/1204.5810v1 | 2012-04-26T02:06:44Z | 2012-04-26T02:06:44Z | Geometry of Online Packing Linear Programs | We consider packing LP's with $m$ rows where all constraint coefficients are
normalized to be in the unit interval. The $n$ columns arrive in random order and
the goal is to set the corresponding decision variables irrevocably when they
arrive so as to obtain a feasible solution maximizing the expected reward.
Previous $(1 - \epsilon)$-competitive algorithms require the right-hand side of
the LP to be $\Omega((m/\epsilon^2) \log (n/\epsilon))$, a bound that worsens with
the number of columns and rows. However, the dependence on the number of
columns is not required in the single-row case and known lower bounds for the
general case are also independent of $n$.
Our goal is to understand whether the dependence on $n$ is required in the
multi-row case, making it fundamentally harder than the single-row version. We
refute this by exhibiting an algorithm which is $(1 - \epsilon)$-competitive as
long as the right-hand sides are $\Omega((m^2/\epsilon^2) \log (m/\epsilon))$. Our
techniques refine previous PAC-learning based approaches which interpret the
online decisions as linear classifications of the columns based on sampled dual
prices. The key ingredient of our improvement comes from a non-standard
covering argument together with the realization that small such covers can be
obtained only when the columns of the LP belong to few 1-d subspaces; bounding
the size of the cover constructed also relies on the geometry of linear
classifiers. General packing LP's are handled by perturbing the input columns,
which can be seen as making the learning problem more robust.
| [
"['Marco Molinaro' 'R. Ravi']",
"Marco Molinaro and R. Ravi"
] |
cs.DB cs.LG | null | 1204.6078 | null | null | http://arxiv.org/pdf/1204.6078v1 | 2012-04-26T23:25:20Z | 2012-04-26T23:25:20Z | Distributed GraphLab: A Framework for Machine Learning in the Cloud | While high-level data parallel frameworks, like MapReduce, simplify the
design and implementation of large-scale data processing systems, they do not
naturally or efficiently support many important data mining and machine
learning algorithms and can lead to inefficient learning systems. To help fill
this critical void, we introduced the GraphLab abstraction which naturally
expresses asynchronous, dynamic, graph-parallel computation while ensuring data
consistency and achieving a high degree of parallel performance in the
shared-memory setting. In this paper, we extend the GraphLab framework to the
substantially more challenging distributed setting while preserving strong data
consistency guarantees. We develop graph based extensions to pipelined locking
and data versioning to reduce network congestion and mitigate the effect of
network latency. We also introduce fault tolerance to the GraphLab abstraction
using the classic Chandy-Lamport snapshot algorithm and demonstrate how it can
be easily implemented by exploiting the GraphLab abstraction itself. Finally,
we evaluate our distributed implementation of the GraphLab abstraction on a
large Amazon EC2 deployment and show 1-2 orders of magnitude performance gains
over Hadoop-based implementations.
| [
"['Yucheng Low' 'Joseph Gonzalez' 'Aapo Kyrola' 'Danny Bickson'\n 'Carlos Guestrin' 'Joseph M. Hellerstein']",
"Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Carlos\n Guestrin, Joseph M. Hellerstein"
] |
cs.SY cs.LG | null | 1204.6250 | null | null | http://arxiv.org/pdf/1204.6250v1 | 2011-11-29T04:52:31Z | 2011-11-29T04:52:31Z | Feature Selection for Generator Excitation Neurocontroller Development
Using Filter Technique | Essentially, motive behind using control system is to generate suitable
control signal for yielding desired response of a physical process. Control of
synchronous generator has always remained very critical in power system
operation and control. For certain well known reasons power generators are
normally operated well below their steady state stability limit. This raises
demand for efficient and fast controllers. Artificial intelligence has been
reported to give revolutionary outcomes in the field of control engineering.
Artificial Neural Network (ANN), a branch of artificial intelligence has been
used for nonlinear and adaptive control, utilizing its inherent observability.
The overall performance of a neurocontroller is dependent upon the input
features too. Selecting optimum features to train a neurocontroller optimally is
very critical. Both the quality and the size of the data are of equal importance
for better performance. In this work, a filter technique is employed to select
independent factors for ANN training.
| [
"Abdul Ghani Abro, Junita Mohamad Saleh",
"['Abdul Ghani Abro' 'Junita Mohamad Saleh']"
] |
cs.HC cs.LG | null | 1204.6325 | null | null | http://arxiv.org/pdf/1204.6325v2 | 2012-06-03T08:45:46Z | 2012-04-27T20:10:16Z | CELL: Connecting Everyday Life in an archipeLago | We explore the design of a seamless broadcast communication system that
brings together the distributed community of remote secondary education
schools. In contrast to higher education, primary and secondary education
establishments should remain distributed, in order to maintain a balance of
urban and rural life in the developing and the developed world. We plan to
deploy an ambient and social interactive TV platform (physical installation,
authoring tools, interactive content) that supports social communication in a
positive way. In particular, we present the physical design and the conceptual
model of the system.
| [
"Konstantinos Chorianopoulos, Vassiliki Tsaknaki",
"['Konstantinos Chorianopoulos' 'Vassiliki Tsaknaki']"
] |
stat.ML cs.LG | null | 1204.6509 | null | null | http://arxiv.org/pdf/1204.6509v1 | 2012-04-29T19:31:15Z | 2012-04-29T19:31:15Z | Dissimilarity Clustering by Hierarchical Multi-Level Refinement | We introduce in this paper a new way of optimizing the natural extension of
the quantization error used in k-means clustering to dissimilarity data. The
proposed method is based on hierarchical clustering analysis combined with
multi-level heuristic refinement. The method is computationally efficient and
achieves better quantization errors than the
| [
"['Brieuc Conan-Guez' 'Fabrice Rossi']",
"Brieuc Conan-Guez (LITA), Fabrice Rossi (SAMM)"
] |
stat.ML cs.LG | null | 1204.6583 | null | null | http://arxiv.org/pdf/1204.6583v1 | 2012-04-30T09:53:08Z | 2012-04-30T09:53:08Z | A Conjugate Property between Loss Functions and Uncertainty Sets in
Classification Problems | In binary classification problems, mainly two approaches have been proposed;
one is loss function approach and the other is uncertainty set approach. The
loss function approach is applied to major learning algorithms such as support
vector machine (SVM) and boosting methods. The loss function represents the
penalty of the decision function on the training samples. In the learning
algorithm, the empirical mean of the loss function is minimized to obtain the
classifier. Against a backdrop of the development of mathematical programming,
nowadays learning algorithms based on loss functions are widely applied to
real-world data analysis. In addition, statistical properties of such learning
algorithms are well understood, owing to a large body of theoretical work. On the
other hand, the learning method using the so-called uncertainty set is used in
hard-margin SVM, mini-max probability machine (MPM) and maximum margin MPM. In
the learning algorithm, firstly, the uncertainty set is defined for each binary
label based on the training samples. Then, the best separating hyperplane
between the two uncertainty sets is employed as the decision function. This is
regarded as an extension of the maximum-margin approach. The uncertainty set
approach has been studied as an application of robust optimization in the field
of mathematical programming. The statistical properties of learning algorithms
with uncertainty sets have not been intensively studied. In this paper, we
consider the relation between the above two approaches. We point out that the
uncertainty set is described by using the level set of the conjugate of the
loss function. Based on such relation, we study statistical properties of
learning algorithms using uncertainty sets.
| [
"['Takafumi Kanamori' 'Akiko Takeda' 'Taiji Suzuki']",
"Takafumi Kanamori, Akiko Takeda, Taiji Suzuki"
] |
cs.LG cs.IR | null | 1204.6610 | null | null | http://arxiv.org/pdf/1204.6610v1 | 2012-04-30T12:18:40Z | 2012-04-30T12:18:40Z | Residual Belief Propagation for Topic Modeling | Fast convergence speed is a desired property for training latent Dirichlet
allocation (LDA), especially in online and parallel topic modeling for massive
data sets. This paper presents a novel residual belief propagation (RBP)
algorithm to accelerate the convergence speed for training LDA. The proposed
RBP uses an informed scheduling scheme for asynchronous message passing, which
passes fast-convergent messages with a higher priority to influence those
slow-convergent messages at each learning iteration. Extensive empirical
studies confirm that RBP significantly reduces the training time until
convergence while achieving a much lower predictive perplexity than other
state-of-the-art training algorithms for LDA, including variational Bayes (VB),
collapsed Gibbs sampling (GS), loopy belief propagation (BP), and residual VB
(RVB).
| [
"Jia Zeng, Xiao-Qin Cao and Zhi-Qiang Liu",
"['Jia Zeng' 'Xiao-Qin Cao' 'Zhi-Qiang Liu']"
] |
cs.LG stat.ML | null | 1204.6703 | null | null | http://arxiv.org/pdf/1204.6703v4 | 2013-01-17T21:01:29Z | 2012-04-30T17:06:06Z | A Spectral Algorithm for Latent Dirichlet Allocation | The problem of topic modeling can be seen as a generalization of the
clustering problem, in that it posits that observations are generated due to
multiple latent factors (e.g., the words in each document are generated as a
mixture of several active topics, as opposed to just one). This increased
representational power comes at the cost of a more challenging unsupervised
learning problem of estimating the topic probability vectors (the distributions
over words for each topic), when only the words are observed and the
corresponding topics are hidden.
We provide a simple and efficient learning procedure that is guaranteed to
recover the parameters for a wide class of mixture models, including the
popular latent Dirichlet allocation (LDA) model. For LDA, the procedure
correctly recovers both the topic probability vectors and the prior over the
topics, using only trigram statistics (i.e., third order moments, which may be
estimated with documents containing just three words). The method, termed
Excess Correlation Analysis (ECA), is based on a spectral decomposition of low
order moments (third and fourth order) via two singular value decompositions
(SVDs). Moreover, the algorithm is scalable since the SVD operations are
carried out on $k\times k$ matrices, where $k$ is the number of latent factors
(e.g. the number of topics), rather than in the $d$-dimensional observed space
(typically $d \gg k$).
| [
"['Animashree Anandkumar' 'Dean P. Foster' 'Daniel Hsu' 'Sham M. Kakade'\n 'Yi-Kai Liu']",
"Animashree Anandkumar, Dean P. Foster, Daniel Hsu, Sham M. Kakade,\n Yi-Kai Liu"
] |
cs.DS cs.IR cs.LG | null | 1205.0044 | null | null | http://arxiv.org/pdf/1205.0044v1 | 2012-04-30T22:26:51Z | 2012-04-30T22:26:51Z | A Singly-Exponential Time Algorithm for Computing Nonnegative Rank | Here, we give an algorithm for deciding if the nonnegative rank of a matrix
$M$ of dimension $m \times n$ is at most $r$, which runs in time
$(nm)^{O(r^2)}$. This is the first exact algorithm that runs in time
singly-exponential in $r$. This algorithm (and earlier algorithms) is built on
methods for finding a solution to a system of polynomial inequalities (if one
exists). Notably, the best algorithms for this task run in time exponential in
the number of variables but polynomial in all of the other parameters (the
number of inequalities and the maximum degree).
Hence these algorithms motivate natural algebraic questions whose solutions
have immediate {\em algorithmic} implications: How many variables do we need to
represent the decision problem, does $M$ have nonnegative rank at most $r$? A
naive formulation uses $nr + mr$ variables and yields an algorithm that is
exponential in $n$ and $m$ even for constant $r$. (Arora, Ge, Kannan, Moitra,
STOC 2012) recently reduced the number of variables to $2r^2 2^r$, and here we
exponentially reduce the number of variables to $2r^2$ and this yields our main
algorithm. In fact, the algorithm that we obtain is nearly-optimal (under the
Exponential Time Hypothesis) since an algorithm that runs in time $(nm)^{o(r)}$
would yield a subexponential algorithm for 3-SAT.
Our main result is based on establishing a normal form for nonnegative matrix
factorization - which in turn allows us to exploit algebraic dependence among a
large collection of linear transformations with variable entries. Additionally,
we also demonstrate that nonnegative rank cannot be certified by even a very
large submatrix of $M$, and this property also follows from the intuition
gained from viewing nonnegative rank through the lens of systems of polynomial
inequalities.
| [
"Ankur Moitra",
"['Ankur Moitra']"
] |
stat.ML cs.LG cs.MA math.OC math.PR | 10.1109/TSP.2013.2241057 | 1205.0047 | null | null | http://arxiv.org/abs/1205.0047v2 | 2012-10-25T01:59:10Z | 2012-04-30T22:48:37Z | $QD$-Learning: A Collaborative Distributed Strategy for Multi-Agent
Reinforcement Learning Through Consensus + Innovations | The paper considers a class of multi-agent Markov decision processes (MDPs),
in which the network agents respond differently (as manifested by the
instantaneous one-stage random costs) to a global controlled state and the
control actions of a remote controller. The paper investigates a distributed
reinforcement learning setup with no prior information on the global state
transition and local agent cost statistics. Specifically, with the agents'
objective consisting of minimizing a network-averaged infinite horizon
discounted cost, the paper proposes a distributed version of $Q$-learning,
$\mathcal{QD}$-learning, in which the network agents collaborate by means of
local processing and mutual information exchange over a sparse (possibly
stochastic) communication network to achieve the network goal. Under the
assumption that each agent is only aware of its local online cost data and the
inter-agent communication network is \emph{weakly} connected, the proposed
distributed scheme is almost surely (a.s.) shown to yield asymptotically the
desired value function and the optimal stationary control policy at each
network agent. The analytical techniques developed in the paper to address the
mixed time-scale stochastic dynamics of the \emph{consensus + innovations}
form, which arise as a result of the proposed interactive distributed scheme,
are of independent interest.
| [
"Soummya Kar, Jose' M.F. Moura and H. Vincent Poor",
"['Soummya Kar' \"Jose' M. F. Moura\" 'H. Vincent Poor']"
] |
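The consensus + innovations structure of the $\mathcal{QD}$-learning update above can be written in one line per agent: mix the local Q-table toward the neighbors' tables (consensus) and take a local temporal-difference step on the locally observed cost (innovation). The sketch below shows a single synchronous step with fixed step sizes; the paper uses carefully decaying step-size sequences and a possibly stochastic network, so this is a simplification, not the paper's scheme.

```python
import numpy as np

def qd_step(Q, neighbors, s, a, s2, costs, gamma=0.95, alpha=0.1, beta=0.05):
    """One synchronous QD-learning step at the observed transition (s, a) -> s2.

    Q         : (N, S, A) array of per-agent Q tables (costs to be minimized)
    neighbors : list of neighbor-index lists (communication graph)
    costs     : (N,) local one-stage costs observed by each agent at (s, a)
    """
    Qn = Q.copy()
    for i in range(len(Q)):
        # consensus: pull the whole table toward the neighbors' tables
        consensus = sum(Q[i] - Q[j] for j in neighbors[i])
        Qn[i] -= beta * consensus
        # innovation: local temporal-difference step on the local cost
        td = costs[i] + gamma * Q[i, s2].min() - Q[i, s, a]
        Qn[i, s, a] += alpha * td
    return Qn
```

With beta = 0 this reduces to independent Q-learning on each agent's local cost; the consensus term is what drives all agents toward the Q-function of the network-averaged cost.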
stat.ML cs.LG math.OC | null | 1205.0079 | null | null | http://arxiv.org/pdf/1205.0079v2 | 2012-05-19T21:06:21Z | 2012-05-01T03:37:13Z | Complexity Analysis of the Lasso Regularization Path | The regularization path of the Lasso can be shown to be piecewise linear,
making it possible to "follow" and explicitly compute the entire path. We
analyze in this paper this popular strategy, and prove that its worst case
complexity is exponential in the number of variables. We then contrast this
pessimistic result with an (optimistic) approximate analysis: we show that an
approximate path with at most $O(1/\sqrt{\epsilon})$ linear segments can always be
obtained, where every point on the path is guaranteed to be optimal up to a
relative $\epsilon$-duality gap. We complete our theoretical analysis with a
practical algorithm to compute these approximate paths.
| [
"Julien Mairal and Bin Yu",
"['Julien Mairal' 'Bin Yu']"
] |
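The epsilon-duality-gap guarantee in the abstract above is cheap to check at any candidate point, via the standard dual-feasible rescaling of the residual. The sketch below evaluates the gap along a geometric grid of regularization parameters using scikit-learn's Lasso solver; the grid is an illustrative stand-in for the paper's approximate-path algorithm, not the algorithm itself.

```python
import numpy as np
from sklearn.linear_model import Lasso

def duality_gap(X, y, w, lam):
    """Gap for min_w 0.5*||y - Xw||^2 + lam*||w||_1 at the point w."""
    r = y - X @ w
    theta = r * min(1.0, lam / np.max(np.abs(X.T @ r)))   # dual-feasible point
    primal = 0.5 * r @ r + lam * np.abs(w).sum()
    dual = 0.5 * y @ y - 0.5 * (y - theta) @ (y - theta)
    return primal - dual

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=100)
n = len(y)
lam_max = np.max(np.abs(X.T @ y))     # smallest lam with all-zero solution

# geometric grid: an illustrative epsilon-approximate path
for lam in lam_max * np.geomspace(1.0, 1e-2, 10):
    # sklearn's objective is (1/(2n))||y - Xw||^2 + alpha*||w||_1, so alpha = lam/n
    w = Lasso(alpha=lam / n, fit_intercept=False, max_iter=50_000).fit(X, y).coef_
    print(f"lam={lam:9.3f}  gap={duality_gap(X, y, w, lam):.2e}")
```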
cs.LG math.OC | null | 1205.0088 | null | null | http://arxiv.org/pdf/1205.0088v2 | 2012-05-19T15:10:54Z | 2012-05-01T04:59:12Z | ProPPA: A Fast Algorithm for $\ell_1$ Minimization and Low-Rank Matrix
Completion | We propose a Projected Proximal Point Algorithm (ProPPA) for solving a class
of optimization problems. The algorithm iteratively computes the proximal point
of the last estimated solution projected into an affine space which is itself
parallel to, and approaching, the feasible set. We provide convergence analysis
theoretically supporting the general algorithm, and then apply it for solving
$\ell_1$-minimization problems and the matrix completion problem. These
problems arise in many applications including machine learning, image and
signal processing. We compare our algorithm with the existing state-of-the-art
algorithms. Experimental results on solving these problems show that our
algorithm is very efficient and competitive.
| [
"['Ranch Y. Q. Lai' 'Pong C. Yuen']",
"Ranch Y.Q. Lai and Pong C. Yuen"
] |
cs.LG stat.ML | null | 1205.0288 | null | null | http://arxiv.org/pdf/1205.0288v2 | 2013-01-07T17:42:46Z | 2012-05-01T23:42:57Z | A Randomized Mirror Descent Algorithm for Large Scale Multiple Kernel
Learning | We consider the problem of simultaneously learning to linearly combine a very
large number of kernels and learn a good predictor based on the learnt kernel.
When the number of kernels $d$ to be combined is very large, multiple kernel
learning methods whose computational cost scales linearly in $d$ are
intractable. We propose a randomized version of the mirror descent algorithm to
overcome this issue, under the objective of minimizing the group $p$-norm
penalized empirical risk. The key to achieve the required exponential speed-up
is the computationally efficient construction of low-variance estimates of the
gradient. We propose importance sampling based estimates, and find that the
ideal distribution samples a coordinate with a probability proportional to the
magnitude of the corresponding gradient. We show the surprising result that in
the case of learning the coefficients of a polynomial kernel, the combinatorial
structure of the base kernels to be combined allows the implementation of
sampling from this distribution to run in $O(\log(d))$ time, making the total
computational cost of reaching an $\epsilon$-optimal solution
$O(\log(d)/\epsilon^2)$, thereby allowing our method to operate for very
large values of $d$. Experiments with simulated and real data confirm that the
new algorithm is computationally more efficient than its state-of-the-art
alternatives.
| [
"['Arash Afkanpour' 'András György' 'Csaba Szepesvári' 'Michael Bowling']",
"Arash Afkanpour, Andr\\'as Gy\\\"orgy, Csaba Szepesv\\'ari, Michael\n Bowling"
] |
cs.LG | null | 1205.0406 | null | null | http://arxiv.org/pdf/1205.0406v1 | 2012-05-02T12:38:11Z | 2012-05-02T12:38:11Z | Minimax Classifier for Uncertain Costs | Many studies on cost-sensitive learning assume that a unique cost matrix
is known for a problem. However, this assumption may not hold for many
real-world problems. For example, a classifier might need to be applied in
several circumstances, each of which associates with a different cost matrix.
Or, different human experts have different opinions about the costs for a given
problem. Motivated by these facts, this study aims to seek the minimax
classifier over multiple cost matrices. In summary, we theoretically proved
that, no matter how many cost matrices are involved, the minimax problem can be
tackled by solving a number of standard cost-sensitive problems and
sub-problems that involve only two cost matrices. As a result, a general
framework for achieving minimax classifier over multiple cost matrices is
suggested and justified by preliminary empirical studies.
| [
"Rui Wang and Ke Tang",
"['Rui Wang' 'Ke Tang']"
] |
cs.LG stat.ME stat.ML | null | 1205.0411 | null | null | http://arxiv.org/pdf/1205.0411v2 | 2012-05-21T23:29:06Z | 2012-05-02T12:49:19Z | Hypothesis testing using pairwise distances and associated kernels (with
Appendix) | We provide a unifying framework linking two classes of statistics used in
two-sample and independence testing: on the one hand, the energy distances and
distance covariances from the statistics literature; on the other, distances
between embeddings of distributions to reproducing kernel Hilbert spaces
(RKHS), as established in machine learning. The equivalence holds when energy
distances are computed with semimetrics of negative type, in which case a
kernel may be defined such that the RKHS distance between distributions
corresponds exactly to the energy distance. We determine the class of
probability distributions for which kernels induced by semimetrics are
characteristic (that is, for which embeddings of the distributions to an RKHS
are injective). Finally, we investigate the performance of this family of
kernels in two-sample and independence tests: we show in particular that the
energy distance most commonly employed in statistics is just one member of a
parametric family of kernels, and that other choices from this family can yield
more powerful tests.
| [
"['Dino Sejdinovic' 'Arthur Gretton' 'Bharath Sriperumbudur'\n 'Kenji Fukumizu']",
"Dino Sejdinovic, Arthur Gretton, Bharath Sriperumbudur, Kenji Fukumizu"
] |
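A small numerical check of the kernel/energy-distance equivalence stated above, under the assumption of a Euclidean semimetric and an arbitrary base point $z_0$: with the distance-induced kernel, the (biased, V-statistic) squared MMD equals half the energy distance exactly.

```python
# Hedged check: with k(x,y) = 0.5*(d(x,z0) + d(y,z0) - d(x,y)) for Euclidean d,
# the V-statistic MMD^2 equals half the energy distance (an exact identity).
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
Y = rng.standard_normal((200, 3)) + 0.5
z0 = np.zeros((1, 3))                       # arbitrary base point; choice is immaterial

def energy_distance(X, Y):
    return 2 * cdist(X, Y).mean() - cdist(X, X).mean() - cdist(Y, Y).mean()

def mmd2(X, Y, z0):
    k = lambda A, B: 0.5 * (cdist(A, z0) + cdist(B, z0).T - cdist(A, B))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

print(energy_distance(X, Y) / 2, mmd2(X, Y, z0))   # the two numbers match
```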
cs.LG | null | 1205.0610 | null | null | http://arxiv.org/pdf/1205.0610v1 | 2012-05-03T04:09:19Z | 2012-05-03T04:09:19Z | Greedy Multiple Instance Learning via Codebook Learning and Nearest
Neighbor Voting | Multiple instance learning (MIL) has recently attracted great attention in
the machine learning community. However, most MIL algorithms are very slow and
cannot be applied to large datasets. In this paper, we propose a greedy
strategy to speed up the multiple instance learning process. Our contribution
is twofold. First, we propose a density ratio model and show that maximizing
a density ratio function is a lower bound of the diverse density (DD) model under certain
conditions. Secondly, we make use of a histogram ratio between positive bags
and negative bags to represent the density ratio function and find codebooks
separately for positive bags and negative bags by a greedy strategy. For
testing, we make use of a nearest neighbor strategy to classify new bags. We
test our method on both small benchmark datasets and the large TRECVID MED11
dataset. The experimental results show that our method yields comparable
accuracy to the current state of the art, while being at least one order
of magnitude faster.
| [
"['Gang Chen' 'Jason Corso']",
"Gang Chen and Jason Corso"
] |
cs.IT cs.LG math.IT | null | 1205.0651 | null | null | http://arxiv.org/pdf/1205.0651v3 | 2013-12-30T08:27:53Z | 2012-05-03T08:49:01Z | Generative Maximum Entropy Learning for Multiclass Classification | Maximum entropy approach to classification is very well studied in applied
statistics and machine learning, and almost all the methods that exist in the
literature are discriminative in nature. In this paper, we introduce a maximum
entropy classification method with feature selection for large dimensional data
such as text datasets that is generative in nature. To tackle the curse of
dimensionality of large data sets, we employ the conditional independence
assumption (naive Bayes) and perform feature selection simultaneously by
enforcing a `maximum discrimination' between estimated class conditional
densities. For two class problems, in the proposed method, we use Jeffreys
($J$) divergence to discriminate the class conditional densities. To extend our
method to the multi-class case, we propose a completely new approach by
considering a multi-distribution divergence: we replace Jeffreys divergence by
Jensen-Shannon ($JS$) divergence to discriminate conditional densities of
multiple classes. In order to reduce computational complexity, we employ a
modified Jensen-Shannon divergence ($JS_{GM}$), based on AM-GM inequality. We
show that the resulting divergence is a natural generalization of Jeffreys
divergence to the case of multiple distributions. As far as theoretical
justification is concerned, we show that when one intends to select the best
features in a generative maximum entropy approach, maximum discrimination using
the $J$-divergence emerges naturally in binary classification. The performance of
the proposed algorithms is demonstrated through a comparative study on
high-dimensional text and gene expression datasets, which shows that our methods
scale well with dataset dimensionality.
| [
"Ambedkar Dukkipati, Gaurav Pandey, Debarghya Ghoshdastidar, Paramita\n Koley, D. M. V. Satya Sriram",
"['Ambedkar Dukkipati' 'Gaurav Pandey' 'Debarghya Ghoshdastidar'\n 'Paramita Koley' 'D. M. V. Satya Sriram']"
] |
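A hedged sketch of the multi-distribution Jensen-Shannon divergence mentioned above for discrete class-conditional densities. The `js_gm_divergence` variant, which swaps the arithmetic-mean mixture for a normalized geometric mean, is one reading of the paper's $JS_{GM}$ idea and should be treated as an assumption, not the paper's exact definition.

```python
# Hedged sketch: JS(p_1..p_K) = H(mixture) - weighted mean of H(p_k), plus an
# assumed geometric-mean variant inspired by the AM-GM modification above.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def js_divergence(ps, ws=None):
    ps = np.asarray(ps, dtype=float)
    ws = np.full(len(ps), 1.0 / len(ps)) if ws is None else np.asarray(ws)
    mix = ws @ ps                                  # arithmetic-mean mixture
    return entropy(mix) - ws @ [entropy(p) for p in ps]

def js_gm_divergence(ps, ws=None):
    ps = np.asarray(ps, dtype=float)
    ws = np.full(len(ps), 1.0 / len(ps)) if ws is None else np.asarray(ws)
    gm = np.prod(ps ** ws[:, None], axis=0)
    gm /= gm.sum()                                 # normalized geometric mean (assumption)
    return entropy(gm) - ws @ [entropy(p) for p in ps]

p1 = np.array([0.7, 0.2, 0.1]); p2 = np.array([0.1, 0.6, 0.3]); p3 = np.array([0.3, 0.3, 0.4])
print(js_divergence([p1, p2, p3]), js_gm_divergence([p1, p2, p3]))
```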
cond-mat.dis-nn cs.LG cs.NE | 10.1103/PhysRevE.85.041925 | 1205.0908 | null | null | http://arxiv.org/abs/1205.0908v1 | 2012-05-04T10:33:22Z | 2012-05-04T10:33:22Z | Weighted Patterns as a Tool for Improving the Hopfield Model | We generalize the standard Hopfield model to the case when a weight is
assigned to each input pattern. The weight can be interpreted as the frequency
of the pattern occurrence at the input of the network. In the framework of the
statistical physics approach we obtain the saddle-point equation allowing us to
examine the memory of the network. In the case of unequal weights our model
does not lead to the catastrophic destruction of the memory through
overfilling (which is typical of the standard Hopfield model). The real memory
consists only of the patterns with weights exceeding a critical value that is
determined by the weight distribution. We present an algorithm for finding
this critical value for an arbitrary distribution of the weights, and
analyze in detail some particular weight distributions. It is shown that the
memory decreases as compared to the case of the standard Hopfield model.
However, in our model the network can learn online without the catastrophic
destruction of the memory.
| [
"['Iakov Karandashev' 'Boris Kryzhanovsky' 'Leonid Litinskii']",
"Iakov Karandashev, Boris Kryzhanovsky and Leonid Litinskii"
] |
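A minimal sketch of the weighted-pattern Hopfield construction described above: Hebbian couplings formed as a weighted sum of pattern outer products, with asynchronous sign-update recall. The weight schedule and network sizes are toy assumptions.

```python
# Hedged sketch: J = sum_mu w_mu xi_mu xi_mu^T / N with zero diagonal, then
# asynchronous recall. Weights stand in for pattern occurrence frequencies.
import numpy as np

rng = np.random.default_rng(0)
N, M = 100, 10
patterns = rng.choice([-1, 1], size=(M, N))
w = np.linspace(1.0, 0.1, M)                   # assumed pattern weights

J = (w[:, None] * patterns).T @ patterns / N   # weighted Hebbian couplings
np.fill_diagonal(J, 0.0)

def recall(s, n_sweeps=20):
    s = s.copy()
    for _ in range(n_sweeps):
        for i in rng.permutation(N):           # asynchronous sign updates
            s[i] = 1 if J[i] @ s >= 0 else -1
    return s

noisy = patterns[0] * rng.choice([1, -1], N, p=[0.9, 0.1])  # flip ~10% of bits
print(np.mean(recall(noisy) == patterns[0]))   # overlap with the heaviest pattern
```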
cs.LG stat.ML | null | 1205.1053 | null | null | http://arxiv.org/pdf/1205.1053v1 | 2012-05-04T03:14:18Z | 2012-05-04T03:14:18Z | Variable Selection for Latent Dirichlet Allocation | In latent Dirichlet allocation (LDA), topics are multinomial distributions
over the entire vocabulary. However, the vocabulary usually contains many words
that are not relevant in forming the topics. We adopt a variable selection
method widely used in statistical modeling as a dimension reduction tool and
combine it with LDA. In this variable selection model for LDA (vsLDA), topics
are multinomial distributions over a subset of the vocabulary, and by excluding
words that are not informative for finding the latent topic structure of the
corpus, vsLDA finds topics that are more robust and discriminative. We compare
three models, vsLDA, LDA with symmetric priors, and LDA with asymmetric priors,
on heldout likelihood, MCMC chain consistency, and document classification. The
performance of vsLDA is better than symmetric LDA for likelihood and
classification, better than asymmetric LDA for consistency and classification,
and about the same in the other comparisons.
| [
"Dongwoo Kim, Yeonseung Chung, Alice Oh",
"['Dongwoo Kim' 'Yeonseung Chung' 'Alice Oh']"
] |
cs.CC cs.DS cs.LG | null | 1205.1183 | null | null | http://arxiv.org/pdf/1205.1183v2 | 2013-04-18T05:39:19Z | 2012-05-06T06:03:27Z | On the Complexity of Trial and Error | Motivated by certain applications from physics, biochemistry, economics, and
computer science, in which the objects under investigation are not accessible
because of various limitations, we propose a trial-and-error model to examine
algorithmic issues in such situations. Given a search problem with a hidden
input, we are asked to find a valid solution, to find which we can propose
candidate solutions (trials), and use observed violations (errors), to prepare
future proposals. In accordance with our motivating applications, we consider
the fairly broad class of constraint satisfaction problems, and assume that
errors are signaled by a verification oracle in the form of the index of a
violated constraint (with the content of the constraint still hidden).
Our discoveries are summarized as follows. On one hand, despite the seemingly
very little information provided by the verification oracle, efficient
algorithms do exist for a number of important problems. For the Nash, Core,
Stable Matching, and SAT problems, the unknown-input versions are as hard as
the corresponding known-input versions, up to a polynomial factor. We
further give almost tight bounds on the latter two problems' trial
complexities. On the other hand, there are problems whose complexities are
substantially increased in the unknown-input model. In particular, no
time-efficient algorithms exist (under standard hardness assumptions) for Graph
Isomorphism and Group Isomorphism problems. The tools used to achieve these
results include order theory, strong ellipsoid method, and some non-standard
reductions.
Our model investigates the value of information, and our results demonstrate
that the lack of input information can introduce various levels of extra
difficulty. The model exhibits intimate connections with (and we hope can also
serve as a useful supplement to) certain existing learning and complexity
theories.
| [
"['Xiaohui Bei' 'Ning Chen' 'Shengyu Zhang']",
"Xiaohui Bei, Ning Chen, Shengyu Zhang"
] |
stat.ML cs.LG | null | 1205.1240 | null | null | http://arxiv.org/pdf/1205.1240v1 | 2012-05-06T19:54:33Z | 2012-05-06T19:54:33Z | Convex Relaxation for Combinatorial Penalties | In this paper, we propose a unifying view of several recently proposed
structured sparsity-inducing norms. We consider the situation of a model
simultaneously (a) penalized by a set-function defined on the support of the
unknown parameter vector which represents prior knowledge on supports, and (b)
regularized in Lp-norm. We show that the natural combinatorial optimization
problems obtained may be relaxed into convex optimization problems and
introduce a notion, the lower combinatorial envelope of a set-function, that
characterizes the tightness of our relaxations. We moreover establish links
with norms based on latent representations including the latent group Lasso and
block-coding, and with norms obtained from submodular functions.
| [
"Guillaume Obozinski (INRIA Paris - Rocquencourt, LIENS), Francis Bach\n (INRIA Paris - Rocquencourt, LIENS)",
"['Guillaume Obozinski' 'Francis Bach']"
] |
stat.ML cs.LG stat.CO | null | 1205.1245 | null | null | http://arxiv.org/pdf/1205.1245v2 | 2013-02-06T09:36:02Z | 2012-05-06T20:18:13Z | Sparse group lasso and high dimensional multinomial classification | The sparse group lasso optimization problem is solved using a coordinate
gradient descent algorithm. The algorithm is applicable to a broad class of
convex loss functions. Convergence of the algorithm is established, and the
algorithm is used to investigate the performance of the multinomial sparse
group lasso classifier. On three different real data examples, the multinomial
sparse group lasso clearly outperforms the multinomial lasso in terms of achieved
classification error rate and in terms of including fewer features in the
classification. The run-time of our sparse group lasso implementation is of the
same order of magnitude as the multinomial lasso algorithm implemented in the R
package glmnet. Our implementation scales well with the problem size. One of
the high dimensional examples considered is a 50 class classification problem
with 10k features, which amounts to estimating 500k parameters. The
implementation is available as the R package msgl.
| [
"Martin Vincent, Niels Richard Hansen",
"['Martin Vincent' 'Niels Richard Hansen']"
] |
stat.ML cs.LG stat.AP | 10.1109/TBME.2012.2226175 | 1205.1287 | null | null | http://arxiv.org/abs/1205.1287v7 | 2014-11-02T05:38:50Z | 2012-05-07T06:15:15Z | Compressed Sensing for Energy-Efficient Wireless Telemonitoring of
Noninvasive Fetal ECG via Block Sparse Bayesian Learning | Fetal ECG (FECG) telemonitoring is an important branch in telemedicine. The
design of a telemonitoring system via a wireless body-area network with low
energy consumption for ambulatory use is highly desirable. As an emerging
technique, compressed sensing (CS) shows great promise in
compressing/reconstructing data with low energy consumption. However, due to
some specific characteristics of raw FECG recordings such as non-sparsity and
strong noise contamination, current CS algorithms generally fail in this
application.
This work proposes to use the block sparse Bayesian learning (BSBL) framework
to compress/reconstruct non-sparse raw FECG recordings. Experimental results
show that the framework can reconstruct the raw recordings with high quality.
Especially, the reconstruction does not destroy the interdependence relation
among the multichannel recordings. This ensures that the independent component
analysis decomposition of the reconstructed recordings has high fidelity.
Furthermore, the framework allows the use of a sparse binary sensing matrix
with much fewer nonzero entries to compress recordings. Particularly, each
column of the matrix can contain only two nonzero entries. This shows the
framework, compared to other algorithms such as current CS algorithms and
wavelet algorithms, can greatly reduce CPU execution time in the data
compression stage.
| [
"['Zhilin Zhang' 'Tzyy-Ping Jung' 'Scott Makeig' 'Bhaskar D. Rao']",
"Zhilin Zhang, Tzyy-Ping Jung, Scott Makeig, Bhaskar D. Rao"
] |
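A hedged sketch of the sparse binary sensing matrix mentioned above, with exactly two nonzero entries per column, so compression reduces to a handful of additions per measurement. Dimensions and unit-valued entries are assumptions.

```python
# Hedged sketch: each column of Phi holds two nonzero (unit) entries at random
# rows, so y = Phi @ x on the sensor needs only additions.
import numpy as np

def sparse_binary_matrix(m, n, nnz_per_col=2, seed=None):
    rng = np.random.default_rng(seed)
    Phi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=nnz_per_col, replace=False)
        Phi[rows, j] = 1.0
    return Phi

rng = np.random.default_rng(0)
Phi = sparse_binary_matrix(m=128, n=512, seed=0)
x = rng.standard_normal(512)                   # raw (non-sparse) recording segment
y = Phi @ x                                    # cheap compression step
print(Phi.sum(axis=0)[:8])                     # exactly two nonzeros per column
```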
cs.CR cs.LG | null | 1205.1357 | null | null | http://arxiv.org/pdf/1205.1357v1 | 2012-05-07T12:16:27Z | 2012-05-07T12:16:27Z | Detecting Spammers via Aggregated Historical Data Set | The battle between email service providers and senders of mass unsolicited
emails (Spam) continues to gain traction. Vast numbers of Spam emails are sent
mainly from automatic botnets distributed over the world. One method for
mitigating Spam in a computationally efficient manner is fast and accurate
blacklisting of the senders. In this work we propose a new sender reputation
mechanism that is based on an aggregated historical data-set which encodes the
behavior of mail transfer agents over time. A historical data-set is created
from labeled logs of received emails. We use machine learning algorithms to
build a model that predicts the \emph{spammingness} of mail transfer agents in
the near future. The proposed mechanism is targeted mainly at large enterprises
and email service providers and can be used for updating both the black and the
white lists. We evaluate the proposed mechanism using 9.5M anonymized log
entries obtained from the biggest Internet service provider in Europe.
Experiments show that the proposed method detects more than 94% of the Spam emails
that escaped the blacklist (i.e., TPR), while having less than 0.5%
false alarms. Therefore, the effectiveness of the proposed method is much
higher than that of previously reported reputation mechanisms, which rely on email
logs. In addition, the proposed method, when used for updating both the black
and white lists, eliminated the need for automatic content inspection of 4 out
of 5 incoming emails, which resulted in a dramatic reduction in the filtering
computational load.
| [
"['Eitan Menahem' 'Rami Puzis']",
"Eitan Menahem and Rami Puzis"
] |
cs.SI cs.LG physics.soc-ph | null | 1205.1456 | null | null | http://arxiv.org/pdf/1205.1456v1 | 2012-05-07T16:45:09Z | 2012-05-07T16:45:09Z | Dynamic Multi-Relational Chinese Restaurant Process for Analyzing
Influences on Users in Social Media | We study the problem of analyzing influence of various factors affecting
individual messages posted in social media. The problem is challenging because
of various types of influences propagating through the social media network
that act simultaneously on any user. Additionally, the topic composition of the
influencing factors and the susceptibility of users to these influences evolve
over time. This problem has not been studied before, and off-the-shelf models are
unsuitable for this purpose. To capture the complex interplay of these various
factors, we propose a new non-parametric model called the Dynamic
Multi-Relational Chinese Restaurant Process. This accounts for the user network
for data generation and also allows the parameters to evolve over time.
Designing inference algorithms for this model suited for large scale
social-media data is another challenge. To this end, we propose a scalable and
multi-threaded inference algorithm based on online Gibbs Sampling. Extensive
evaluations on large-scale Twitter and Facebook data show that the extracted
topics when applied to authorship and commenting prediction outperform
state-of-the-art baselines. More importantly, our model produces valuable
insights on topic trends and user personality trends, beyond the capability of
existing approaches.
| [
"Himabindu Lakkaraju, Indrajit Bhattacharya, Chiranjib Bhattacharyya",
"['Himabindu Lakkaraju' 'Indrajit Bhattacharya' 'Chiranjib Bhattacharyya']"
] |
math.OC cs.IT cs.LG math.IT math.ST stat.ML stat.TH | null | 1205.1482 | null | null | http://arxiv.org/pdf/1205.1482v3 | 2012-11-01T20:28:03Z | 2012-05-07T18:55:04Z | Risk estimation for matrix recovery with spectral regularization | In this paper, we develop an approach to recursively estimate the quadratic
risk for matrix recovery problems regularized with spectral functions. Toward
this end, in the spirit of the SURE theory, a key step is to compute the (weak)
derivative and divergence of a solution with respect to the observations. As
such a solution is not available in closed form, but rather through a proximal
splitting algorithm, we propose to recursively compute the divergence from the
sequence of iterates. A second challenge that we address is the computation of
the (weak) derivative of the proximity operator of a spectral function. To show
the potential applicability of our approach, we exemplify it on a matrix
completion problem to objectively and automatically select the regularization
parameter.
| [
"Charles-Alban Deledalle (CEREMADE), Samuel Vaiter (CEREMADE), Gabriel\n Peyr\\'e (CEREMADE), Jalal Fadili (GREYC), Charles Dossal (IMB)",
"['Charles-Alban Deledalle' 'Samuel Vaiter' 'Gabriel Peyré' 'Jalal Fadili'\n 'Charles Dossal']"
] |
stat.ML cs.LG | null | 1205.1496 | null | null | http://arxiv.org/pdf/1205.1496v2 | 2012-05-08T18:27:52Z | 2012-05-07T19:55:31Z | Graph-based Learning with Unbalanced Clusters | Graph construction is a crucial step in spectral clustering (SC) and
graph-based semi-supervised learning (SSL). Spectral methods applied on
standard graphs such as full-RBF, $\epsilon$-graphs and $k$-NN graphs can lead
to poor performance in the presence of proximal and unbalanced data. This is
because spectral methods based on minimizing RatioCut or normalized cut on
these graphs tend to prioritize balancing cluster sizes over
reducing cut values. We propose a novel graph construction technique and show
that the RatioCut solution on this new graph is able to handle proximal and
unbalanced data. Our method is based on adaptively modulating the neighborhood
degrees in a $k$-NN graph, which tends to sparsify neighborhoods in low density
regions. Our method adapts to data with varying levels of unbalancedness and
can be naturally used for small cluster detection. We justify our ideas through
limit cut analysis. Unsupervised and semi-supervised experiments on synthetic
and real data sets demonstrate the superiority of our method.
| [
"['Jing Qian' 'Venkatesh Saligrama' 'Manqi Zhao']",
"Jing Qian, Venkatesh Saligrama, Manqi Zhao"
] |
stat.ML cs.LG | null | 1205.1782 | null | null | http://arxiv.org/pdf/1205.1782v2 | 2012-05-21T16:30:22Z | 2012-05-08T19:22:43Z | Approximate Dynamic Programming By Minimizing Distributionally Robust
Bounds | Approximate dynamic programming is a popular method for solving large Markov
decision processes. This paper describes a new class of approximate dynamic
programming (ADP) methods, distributionally robust ADP (DRADP), that address the curse
of dimensionality by minimizing a pessimistic bound on the policy loss. This
approach turns ADP into an optimization problem, for which we derive new
mathematical program formulations and analyze its properties. DRADP improves on
the theoretical guarantees of existing ADP methods: it guarantees convergence
and L1-norm-based error bounds. The empirical evaluation of DRADP shows that
the theoretical guarantees translate well into good performance on benchmark
problems.
| [
"Marek Petrik",
"['Marek Petrik']"
] |
cs.LG stat.ML | null | 1205.1828 | null | null | http://arxiv.org/pdf/1205.1828v1 | 2012-05-08T21:12:03Z | 2012-05-08T21:12:03Z | The Natural Gradient by Analogy to Signal Whitening, and Recipes and
Tricks for its Use | The natural gradient allows for more efficient gradient descent by removing
dependencies and biases inherent in a function's parameterization. Several
papers present the topic thoroughly and precisely. However, it remains a
difficult idea to get one's head around. The intent of this note is to provide
simple intuition for the natural gradient and its use. We review how an
ill-conditioned parameter space can undermine learning, introduce the natural
gradient by analogy to the more widely understood concept of signal whitening,
and present tricks and specific prescriptions for applying the natural gradient
to learning problems.
| [
"['Jascha Sohl-Dickstein']",
"Jascha Sohl-Dickstein"
] |
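A toy illustration of the whitening analogy above: on an ill-conditioned quadratic, plain gradient descent crawls along the low-curvature direction, while preconditioning by the inverse metric (here the quadratic's Hessian standing in for the Fisher information) equalizes progress. All values are illustrative.

```python
# Hedged sketch: plain vs. natural gradient steps on 0.5 * theta^T F theta.
import numpy as np

F = np.diag([100.0, 1.0])                      # badly scaled parameter space
loss_grad = lambda theta: F @ theta            # gradient of the quadratic loss

theta_sgd = np.array([1.0, 1.0])
theta_nat = np.array([1.0, 1.0])
for _ in range(50):
    theta_sgd = theta_sgd - 0.01 * loss_grad(theta_sgd)                    # plain step
    theta_nat = theta_nat - 0.5 * np.linalg.solve(F, loss_grad(theta_nat)) # natural step
print(np.linalg.norm(theta_sgd), np.linalg.norm(theta_nat))  # natural step wins
```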
cs.LG physics.data-an | null | 1205.1925 | null | null | http://arxiv.org/pdf/1205.1925v1 | 2012-05-09T09:49:37Z | 2012-05-09T09:49:37Z | Hamiltonian Annealed Importance Sampling for partition function
estimation | We introduce an extension to annealed importance sampling that uses
Hamiltonian dynamics to rapidly estimate normalization constants. We
demonstrate this method by computing log likelihoods in directed and undirected
probabilistic image models. We compare the performance of linear generative
models with both Gaussian and Laplace priors, product of experts models with
Laplace and Student's t experts, the mc-RBM, and a bilinear generative model.
We provide code to compare additional models.
| [
"['Jascha Sohl-Dickstein' 'Benjamin J. Culpepper']",
"Jascha Sohl-Dickstein and Benjamin J. Culpepper"
] |
math.FA cs.LG | null | 1205.1928 | null | null | http://arxiv.org/pdf/1205.1928v3 | 2012-07-17T10:36:07Z | 2012-05-09T10:01:09Z | The representer theorem for Hilbert spaces: a necessary and sufficient
condition | A family of regularization functionals is said to admit a linear representer
theorem if every member of the family admits minimizers that lie in a fixed
finite dimensional subspace. A recent characterization states that a general
class of regularization functionals with differentiable regularizer admits a
linear representer theorem if and only if the regularization term is a
non-decreasing function of the norm. In this report, we improve on this
result by replacing the differentiability assumption with lower semi-continuity
and deriving a proof that is independent of the dimensionality of the space.
| [
"Francesco Dinuzzo, Bernhard Sch\\\"olkopf",
"['Francesco Dinuzzo' 'Bernhard Schölkopf']"
] |
physics.data-an cs.LG | null | 1205.1939 | null | null | http://arxiv.org/pdf/1205.1939v1 | 2012-05-09T11:14:00Z | 2012-05-09T11:14:00Z | Hamiltonian Monte Carlo with Reduced Momentum Flips | Hamiltonian Monte Carlo (or hybrid Monte Carlo) with partial momentum
refreshment explores the state space more slowly than it otherwise would due to
the momentum reversals which occur on proposal rejection. These cause
trajectories to double back on themselves, leading to random-walk behavior on
timescales longer than the typical rejection time and to slower
mixing. I present a technique by which the number of momentum reversals can be
reduced. This is accomplished by maintaining the net exchange of probability
between states with opposite momenta, but reducing the rate of exchange in both
directions such that it is 0 in one direction. An experiment illustrates these
reduced momentum flips accelerating mixing for a particular distribution.
| [
"['Jascha Sohl-Dickstein']",
"Jascha Sohl-Dickstein"
] |
cs.SI cs.LG physics.soc-ph stat.ML | null | 1205.2056 | null | null | http://arxiv.org/pdf/1205.2056v1 | 2012-05-09T18:20:32Z | 2012-05-09T18:20:32Z | Dynamic Behavioral Mixed-Membership Model for Large Evolving Networks | The majority of real-world networks are dynamic and extremely large (e.g.,
Internet Traffic, Twitter, Facebook, ...). To understand the structural
behavior of nodes in these large dynamic networks, it may be necessary to model
the dynamics of behavioral roles representing the main connectivity patterns
over time. In this paper, we propose a dynamic behavioral mixed-membership
model (DBMM) that captures the roles of nodes in the graph and how they evolve
over time. Unlike other node-centric models, our model is scalable for
analyzing large dynamic networks. In addition, DBMM is flexible,
parameter-free, has no functional form or parameterization, and is
interpretable (identifies explainable patterns). The performance results
indicate our approach can be applied to very large networks while the
experimental results show that our model uncovers interesting patterns
underlying the dynamics of these networks.
| [
"['Ryan Rossi' 'Brian Gallagher' 'Jennifer Neville' 'Keith Henderson']",
"Ryan Rossi, Brian Gallagher, Jennifer Neville, Keith Henderson"
] |
cs.LG | null | 1205.2151 | null | null | http://arxiv.org/pdf/1205.2151v1 | 2012-05-10T03:31:39Z | 2012-05-10T03:31:39Z | A Converged Algorithm for Tikhonov Regularized Nonnegative Matrix
Factorization with Automatic Regularization Parameters Determination | We present a converged algorithm for Tikhonov regularized nonnegative matrix
factorization (NMF). We choose this regularization specifically because it is
known that Tikhonov regularized least squares (LS) is preferable to
conventional LS for solving linear inverse problems. Because an NMF
problem can be decomposed into LS subproblems, it can be expected that Tikhonov
regularized NMF will be the more appropriate approach to solving NMF problems.
The algorithm is derived using additive update rules which have been shown to
have convergence guarantee. We equip the algorithm with a mechanism to
automatically determine the regularization parameters based on the L-curve, a
well-known concept in the inverse problems community but rather unknown in
NMF research. The introduction of this algorithm thus solves two inherent
problems in Tikhonov regularized NMF algorithm research, i.e., convergence
guarantee and regularization parameters determination.
| [
"Andri Mirzal",
"['Andri Mirzal']"
] |
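The paper derives additive update rules with L-curve-based parameter selection; as a simpler, hedged stand-in, this sketch applies the common multiplicative updates to the Tikhonov-regularized objective $\|V-WH\|_F^2 + a\|W\|_F^2 + b\|H\|_F^2$ with fixed regularization parameters (an assumption, not the paper's automatic scheme).

```python
# Hedged stand-in: multiplicative updates for Tikhonov-regularized NMF with
# fixed a, b; the paper itself uses additive updates with L-curve selection.
import numpy as np

def tikhonov_nmf(V, r, a=0.1, b=0.1, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)); H = rng.random((r, n))
    eps = 1e-12                                # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + b * H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + a * W + eps)
    return W, H

V = np.abs(np.random.default_rng(1).standard_normal((30, 40)))
W, H = tikhonov_nmf(V, r=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative residual
```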
stat.ML cs.LG | null | 1205.2171 | null | null | http://arxiv.org/pdf/1205.2171v2 | 2015-07-15T18:42:11Z | 2012-05-10T07:01:00Z | A Generalized Kernel Approach to Structured Output Learning | We study the problem of structured output learning from a regression
perspective. We first provide a general formulation of the kernel dependency
estimation (KDE) problem using operator-valued kernels. We show that some of
the existing formulations of this problem are special cases of our framework.
We then propose a covariance-based operator-valued kernel that allows us to
take into account the structure of the kernel feature space. This kernel
operates on the output space and encodes the interactions between the outputs
without any reference to the input space. To address this issue, we introduce a
variant of our KDE method based on the conditional covariance operator that in
addition to the correlation between the outputs takes into account the effects
of the input variables. Finally, we evaluate the performance of our KDE
approach using both covariance and conditional covariance kernels on two
structured output problems, and compare it to the state-of-the-art kernel-based
structured output regression methods.
| [
"Hachem Kadri (INRIA Lille - Nord Europe), Mohammad Ghavamzadeh (INRIA\n Lille - Nord Europe), Philippe Preux (INRIA Lille - Nord Europe)",
"['Hachem Kadri' 'Mohammad Ghavamzadeh' 'Philippe Preux']"
] |
stat.ML cs.LG physics.data-an | null | 1205.2172 | null | null | http://arxiv.org/pdf/1205.2172v2 | 2012-10-05T06:22:14Z | 2012-05-10T07:02:20Z | Modularity-Based Clustering for Network-Constrained Trajectories | We present a novel clustering approach for moving object trajectories that
are constrained by an underlying road network. The approach builds a similarity
graph based on these trajectories, then uses modularity-optimization hierarchical
graph clustering to regroup trajectories with similar profiles. Our
experimental study shows the superiority of the proposed approach over classic
hierarchical clustering and gives a brief insight into visualization of the
clustering results.
| [
"['Mohamed Khalil El Mahrsi' 'Fabrice Rossi']",
"Mohamed Khalil El Mahrsi (LTCI), Fabrice Rossi (SAMM)"
] |
cs.LG | null | 1205.2265 | null | null | http://arxiv.org/pdf/1205.2265v2 | 2012-10-04T06:49:29Z | 2012-05-08T23:06:06Z | Efficient Constrained Regret Minimization | Online learning constitutes a compelling mathematical framework to
analyze sequential decision making problems in adversarial environments. The
learner repeatedly chooses an action, the environment responds with an outcome,
and then the learner receives a reward for the played action. The goal of the
learner is to maximize his total reward. However, there are situations in
which, in addition to maximizing the cumulative reward, there are some
additional constraints on the sequence of decisions that must be satisfied on
average by the learner. In this paper we study an extension of online
learning where the learner aims to maximize the total reward given that some
additional constraints need to be satisfied. By leveraging the theory of
Lagrangian methods in constrained optimization, we propose the Lagrangian
exponentially weighted average (LEWA) algorithm, which is a primal-dual variant
of the well known exponentially weighted average algorithm, to efficiently
solve constrained online decision making problems. Using novel theoretical
analysis, we establish the regret and the violation of the constraint bounds in
full information and bandit feedback models.
| [
"['Mehrdad Mahdavi' 'Tianbao Yang' 'Rong Jin']",
"Mehrdad Mahdavi, Tianbao Yang, Rong Jin"
] |
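A hedged schematic in the spirit of LEWA: actions are weighted by exponentiated Lagrangian-penalized rewards $r - \lambda c$, and the multiplier ascends on the running constraint violation. Step sizes, the reward/cost model, and the exact update order are assumptions rather than the paper's algorithm.

```python
# Hedged schematic of a primal-dual exponentially weighted average learner.
# Toy full-information setup with fixed expected rewards and costs.
import numpy as np

rng = np.random.default_rng(0)
K, T, eta, rho, budget = 5, 2000, 0.05, 0.05, 0.4
r = rng.random(K)                  # expected reward per action (assumption)
c = rng.random(K)                  # expected cost per action; constraint: avg cost <= budget

scores = np.zeros(K)
lam, total_cost = 0.0, 0.0
for t in range(T):
    z = eta * (scores - scores.max())           # stabilized exponentiation
    p = np.exp(z); p /= p.sum()                 # EWA distribution over actions
    a = rng.choice(K, p=p)
    scores += r - lam * c                       # Lagrangian-penalized update
    total_cost += c[a]
    lam = max(0.0, lam + rho * (total_cost / (t + 1) - budget))  # dual ascent
print("avg cost:", total_cost / T, "most-played action:", int(np.argmax(p)))
```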
stat.ML cs.DC cs.LG | null | 1205.2282 | null | null | http://arxiv.org/pdf/1205.2282v1 | 2012-05-10T14:44:31Z | 2012-05-10T14:44:31Z | A Discussion on Parallelization Schemes for Stochastic Vector
Quantization Algorithms | This paper studies parallelization schemes for stochastic Vector Quantization
algorithms in order to obtain time speed-ups using distributed resources. We
show that the most intuitive parallelization scheme does not lead to better
performances than the sequential algorithm. Another distributed scheme is
therefore introduced which obtains the expected speed-ups. Then, it is improved
to fit implementation on distributed architectures where communications are
slow and inter-machines synchronization too costly. The schemes are tested with
simulated distributed architectures and, for the last one, with the Microsoft
Windows Azure platform, obtaining speed-ups with up to 32 virtual machines.
| [
"Matthieu Durut (LTCI), Beno\\^it Patra (LSTA), Fabrice Rossi (SAMM)",
"['Matthieu Durut' 'Benoît Patra' 'Fabrice Rossi']"
] |
cs.LG math.OC stat.CO stat.ML | null | 1205.2334 | null | null | http://arxiv.org/pdf/1205.2334v2 | 2012-05-30T00:49:30Z | 2012-05-10T18:25:06Z | Sparse Approximation via Penalty Decomposition Methods | In this paper we consider sparse approximation problems, that is, general
$l_0$ minimization problems with the $l_0$-"norm" of a vector being part of the
constraints or the objective function. In particular, we first study the
first-order optimality conditions for these problems. We then propose penalty
decomposition (PD) methods for solving them in which a sequence of penalty
subproblems are solved by a block coordinate descent (BCD) method. Under some
suitable assumptions, we establish that any accumulation point of the sequence
generated by the PD methods satisfies the first-order optimality conditions of
the problems. Furthermore, for the problems in which the $l_0$ part is the only
nonconvex part, we show that such an accumulation point is a local minimizer of
the problems. In addition, we show that any accumulation point of the sequence
generated by the BCD method is a saddle point of the penalty subproblem.
Moreover, for the problems in which the $l_0$ part is the only nonconvex part,
we establish that such an accumulation point is a local minimizer of the
penalty subproblem. Finally, we test the performance of our PD methods by
applying them to sparse logistic regression, sparse inverse covariance
selection, and compressed sensing problems. The computational results
demonstrate that our methods generally outperform the existing methods in terms
of solution quality and/or speed.
| [
"['Zhaosong Lu' 'Yong Zhang']",
"Zhaosong Lu and Yong Zhang"
] |
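A hedged sketch of one penalty decomposition instance for $\min \tfrac12\|Ax-b\|^2$ s.t. $\|x\|_0 \le K$: block coordinate descent alternates a ridge-like $x$-update with a hard-thresholding $y$-update while the penalty parameter grows. The schedules and tolerances are assumptions.

```python
# Hedged sketch of penalty decomposition with BCD for the l0-constrained problem.
import numpy as np

def hard_threshold(v, K):
    """Keep the K largest-magnitude entries (the y-block minimizer)."""
    y = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-K:]
    y[idx] = v[idx]
    return y

def penalty_decomposition(A, b, K, rho=1.0, n_outer=30, n_inner=50):
    m, n = A.shape
    x = np.zeros(n); y = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(n_outer):
        for _ in range(n_inner):                 # BCD on the penalty subproblem
            x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * y)
            y = hard_threshold(x, K)
        rho *= 1.5                               # tighten the penalty (assumed schedule)
    return y

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 80))
x_true = np.zeros(80); x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
b = A @ x_true
print(np.nonzero(penalty_decomposition(A, b, K=3))[0])   # should recover {3, 17, 42}
```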
cs.NA cs.LG math.OC | null | 1205.2584 | null | null | http://arxiv.org/pdf/1205.2584v2 | 2012-09-13T03:14:12Z | 2012-05-11T17:26:21Z | Low Complexity Damped Gauss-Newton Algorithms for CANDECOMP/PARAFAC | The damped Gauss-Newton (dGN) algorithm for CANDECOMP/PARAFAC (CP)
decomposition can handle the challenges of collinearity of factors and
different magnitudes of factors; nevertheless, for factorization of an $N$-D
tensor of size $I_1\times \cdots \times I_N$ with rank $R$, the algorithm is computationally
demanding due to the construction of a large approximate Hessian of size $(RT \times
RT)$ and its inversion, where $T = \sum_n I_n$. In this paper, we propose a fast
implementation of the dGN algorithm which is based on novel expressions of the
inverse approximate Hessian in block form. The new implementation has lower
computational complexity, besides computation of the gradient (this part is
common to both methods), requiring the inversion of a matrix of size
$NR^2\times NR^2$, which is much smaller than the whole approximate Hessian, if
$T \gg NR$. In addition, the implementation has lower memory requirements,
because neither the Hessian nor its inverse ever needs to be stored in its
entirety. A variant of the algorithm working with complex-valued data is
proposed as well. The complexity and performance of the proposed algorithm are
compared with those of dGN and ALS with line search on examples of difficult
benchmark tensors.
| [
"['Anh Huy Phan' 'Petr Tichavský' 'Andrzej Cichocki']",
"Anh Huy Phan and Petr Tichavsk\\'y and Andrzej Cichocki"
] |
stat.ML cs.LG | null | 1205.2599 | null | null | http://arxiv.org/pdf/1205.2599v1 | 2012-05-09T18:49:04Z | 2012-05-09T18:49:04Z | On the Identifiability of the Post-Nonlinear Causal Model | By taking into account the nonlinear effect of the cause, the inner noise
effect, and the measurement distortion effect in the observed variables, the
post-nonlinear (PNL) causal model has demonstrated its excellent performance in
distinguishing the cause from effect. However, its identifiability has not been
properly addressed, and how to apply it in the case of more than two variables
is also a problem. In this paper, we conduct a systematic investigation on its
identifiability in the two-variable case. We show that this model is
identifiable in most cases; by enumerating all possible situations in which the
model is not identifiable, we provide sufficient conditions for its
identifiability. Simulations are given to support the theoretical results.
Moreover, in the case of more than two variables, we show that the whole causal
structure can be found by applying the PNL causal model to each structure in
the Markov equivalent class and testing if the disturbance is independent of
the direct causes for each variable. In this way the exhaustive search over all
possible causal structures is avoided.
| [
"['Kun Zhang' 'Aapo Hyvarinen']",
"Kun Zhang, Aapo Hyvarinen"
] |
cs.LG | null | 1205.2600 | null | null | http://arxiv.org/pdf/1205.2600v1 | 2012-05-09T18:48:23Z | 2012-05-09T18:48:23Z | A Uniqueness Theorem for Clustering | Despite the widespread use of Clustering, there is distressingly little
general theory of clustering available. Questions like "What distinguishes a
clustering of data from other data partitioning?", "Are there any principles
governing all clustering paradigms?", "How should a user choose an appropriate
clustering algorithm for a particular task?", etc. are almost completely
unanswered by the existing body of clustering literature. We consider an
axiomatic approach to the theory of Clustering. We adopt the framework of
Kleinberg [Kle03]. By relaxing one of Kleinberg's clustering axioms, we
sidestep his impossibility result and arrive at a consistent set of axioms. We
suggest to extend these axioms, aiming to provide an axiomatic taxonomy of
clustering paradigms. Such a taxonomy should provide users some guidance
concerning the choice of the appropriate clustering paradigm for a given task.
The main result of this paper is a set of abstract properties that characterize
the Single-Linkage clustering function. This characterization result provides
new insight into the properties of desired data groupings that make
Single-Linkage the appropriate choice. We conclude by considering a taxonomy of
clustering functions based on abstract properties that each satisfies.
| [
"['Reza Bosagh Zadeh' 'Shai Ben-David']",
"Reza Bosagh Zadeh, Shai Ben-David"
] |
cs.LG | null | 1205.2602 | null | null | http://arxiv.org/pdf/1205.2602v1 | 2012-05-09T18:46:51Z | 2012-05-09T18:46:51Z | The Entire Quantile Path of a Risk-Agnostic SVM Classifier | A quantile binary classifier uses the rule: Classify x as +1 if P(Y = 1|X =
x) >= t, and as -1 otherwise, for a fixed quantile parameter t ∈ [0, 1]. It has
been shown that Support Vector Machines (SVMs) in the limit are quantile
classifiers with t = 1/2. In this paper, we show that by using an asymmetric cost
of misclassification SVMs can be appropriately extended to recover, in the
limit, the quantile binary classifier for any t. We then present a principled
algorithm to solve the extended SVM classifier for all values of t
simultaneously. This has two implications: first, one can recover the entire
conditional distribution P(Y = 1|X = x) = t for t ∈ [0, 1]; second, we can build
a risk-agnostic SVM classifier where the cost of misclassification need not be
known a priori. Preliminary numerical experiments show the effectiveness of the
proposed algorithm.
| [
"Jin Yu, S. V.N. Vishwanatan, Jian Zhang",
"['Jin Yu' 'S. V. N. Vishwanatan' 'Jian Zhang']"
] |
stat.ML cs.LG | null | 1205.2604 | null | null | http://arxiv.org/pdf/1205.2604v1 | 2012-05-09T18:43:56Z | 2012-05-09T18:43:56Z | The Infinite Latent Events Model | We present the Infinite Latent Events Model, a nonparametric hierarchical
Bayesian distribution over infinite dimensional Dynamic Bayesian Networks with
binary state representations and noisy-OR-like transitions. The distribution
can be used to learn structure in discrete timeseries data by simultaneously
inferring a set of latent events, which events fired at each timestep, and how
those events are causally linked. We illustrate the model on a sound
factorization task, a network topology identification task, and a video game
task.
| [
"David Wingate, Noah Goodman, Daniel Roy, Joshua Tenenbaum",
"['David Wingate' 'Noah Goodman' 'Daniel Roy' 'Joshua Tenenbaum']"
] |
cs.LG stat.ML | null | 1205.2605 | null | null | http://arxiv.org/pdf/1205.2605v1 | 2012-05-09T18:42:06Z | 2012-05-09T18:42:06Z | Herding Dynamic Weights for Partially Observed Random Field Models | Learning the parameters of a (potentially partially observable) random field
model is intractable in general. Instead of focusing on a single optimal
parameter value, we propose to treat parameters as dynamical quantities. We
introduce an algorithm to generate complex dynamics for parameters and (both
visible and hidden) state vectors. We show that under certain conditions
averages computed over trajectories of the proposed dynamical system converge
to averages computed over the data. Our "herding dynamics" does not require
expensive operations such as exponentiation and is fully deterministic.
| [
"Max Welling",
"['Max Welling']"
] |
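A minimal sketch of herding dynamics for a fully observed toy model: the weight vector is nudged toward the data moments while states are generated by greedy maximization, so trajectory averages of the features track the data averages. The binary state space and identity features are assumptions.

```python
# Hedged sketch of herding: w_{t+1} = w_t + (data moments) - phi(s_t), where
# s_t greedily maximizes <w_t, phi(s)>. Fully deterministic, no exponentiation.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
states = np.array(list(product([-1, 1], repeat=4)))     # all 4-bit spin states
phi = lambda S: S                                        # identity features (assumption)

data = rng.choice([-1, 1], size=(500, 4), p=[0.3, 0.7])
target = phi(data).mean(axis=0)                          # data moments to match

w = np.zeros(4)
samples = []
for t in range(2000):
    s = states[np.argmax(states @ w)]                    # greedy maximization step
    samples.append(s)
    w += target - phi(s)                                 # herding weight update
print(target, np.mean(samples, axis=0))                  # trajectory average ~ target
```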
cs.LG cs.AI | null | 1205.2606 | null | null | http://arxiv.org/pdf/1205.2606v1 | 2012-05-09T18:40:40Z | 2012-05-09T18:40:40Z | Exploring compact reinforcement-learning representations with linear
regression | This paper presents a new algorithm for online linear regression whose
efficiency guarantees satisfy the requirements of the KWIK (Knows What It
Knows) framework. The algorithm improves on the complexity bounds of the
current state-of-the-art procedure in this setting. We explore several
applications of this algorithm for learning compact reinforcement-learning
representations. We show that KWIK linear regression can be used to learn the
reward function of a factored MDP and the probabilities of action outcomes in
Stochastic STRIPS and Object Oriented MDPs, none of which have been proven to
be efficiently learnable in the RL setting before. We also combine KWIK linear
regression with other KWIK learners to learn larger portions of these models,
including experiments on learning factored MDP transition and reward functions
together.
| [
"Thomas J. Walsh, Istvan Szita, Carlos Diuk, Michael L. Littman",
"['Thomas J. Walsh' 'Istvan Szita' 'Carlos Diuk' 'Michael L. Littman']"
] |
cs.LG stat.ML | null | 1205.2608 | null | null | http://arxiv.org/pdf/1205.2608v1 | 2012-05-09T18:38:39Z | 2012-05-09T18:38:39Z | Temporal-Difference Networks for Dynamical Systems with Continuous
Observations and Actions | Temporal-difference (TD) networks are a class of predictive state
representations that use well-established TD methods to learn models of
partially observable dynamical systems. Previous research with TD networks has
dealt only with dynamical systems with finite sets of observations and actions.
We present an algorithm for learning TD network representations of dynamical
systems with continuous observations and actions. Our results show that the
algorithm is capable of learning accurate and robust models of several noisy
continuous dynamical systems. The algorithm presented here is the first fully
incremental method for learning a predictive representation of a continuous
dynamical system.
| [
"Christopher M. Vigorito",
"['Christopher M. Vigorito']"
] |
stat.ML cs.LG | null | 1205.2609 | null | null | http://arxiv.org/pdf/1205.2609v1 | 2012-05-09T18:37:50Z | 2012-05-09T18:37:50Z | Which Spatial Partition Trees are Adaptive to Intrinsic Dimension? | Recent theory work has found that a special type of spatial partition tree -
called a random projection tree - is adaptive to the intrinsic dimension of the
data from which it is built. Here we examine this same question, with a
combination of theory and experiments, for a broader class of trees that
includes k-d trees, dyadic trees, and PCA trees. Our motivation is to get a
feel for (i) the kind of intrinsic low dimensional structure that can be
empirically verified, (ii) the extent to which a spatial partition can exploit
such structure, and (iii) the implications for standard statistical tasks such
as regression, vector quantization, and nearest neighbor search.
| [
"['Nakul Verma' 'Samory Kpotufe' 'Sanjoy Dasgupta']",
"Nakul Verma, Samory Kpotufe, Sanjoy Dasgupta"
] |
cs.LG | null | 1205.2610 | null | null | http://arxiv.org/pdf/1205.2610v1 | 2012-05-09T18:36:39Z | 2012-05-09T18:36:39Z | Probabilistic Structured Predictors | We consider MAP estimators for structured prediction with exponential family
models. In particular, we concentrate on the case that efficient algorithms for
uniform sampling from the output space exist. We show that under this
assumption (i) exact computation of the partition function remains a hard
problem, and (ii) the partition function and the gradient of the log partition
function can be approximated efficiently. Our main result is an approximation
scheme for the partition function based on Markov Chain Monte Carlo theory. We
also show that the efficient uniform sampling assumption holds in several
application settings that are of importance in machine learning.
| [
"['Shankar Vembu' 'Thomas Gartner' 'Mario Boley']",
"Shankar Vembu, Thomas Gartner, Mario Boley"
] |
cs.IR cs.LG | null | 1205.2611 | null | null | http://arxiv.org/pdf/1205.2611v1 | 2012-05-09T18:35:35Z | 2012-05-09T18:35:35Z | Ordinal Boltzmann Machines for Collaborative Filtering | Collaborative filtering is an effective recommendation technique wherein the
preference of an individual can potentially be predicted based on preferences
of other members. Early algorithms often relied on the strong locality in the
preference data, that is, it is enough to predict preference of a user on a
particular item based on a small subset of other users with similar tastes or
of other items with similar properties. More recently, dimensionality reduction
techniques have proved to be equally competitive, and these are based on the
co-occurrence patterns rather than locality. This paper explores and extends a
probabilistic model known as Boltzmann Machine for collaborative filtering
tasks. It seamlessly integrates both the similarity and co-occurrence in a
principled manner. In particular, we study parameterisation options to deal
with the ordinal nature of the preferences, and propose a joint modelling of
both the user-based and item-based processes. Experiments on moderate and
large-scale movie recommendation show that our framework rivals existing
well-known methods.
| [
"['Tran The Truyen' 'Dinh Q. Phung' 'Svetha Venkatesh']",
"Tran The Truyen, Dinh Q. Phung, Svetha Venkatesh"
] |
cs.LG stat.ML | null | 1205.2612 | null | null | http://arxiv.org/pdf/1205.2612v1 | 2012-05-09T18:33:52Z | 2012-05-09T18:33:52Z | Computing Posterior Probabilities of Structural Features in Bayesian
Networks | We study the problem of learning Bayesian network structures from data.
Koivisto and Sood (2004) and Koivisto (2006) presented algorithms that can
compute the exact marginal posterior probability of a subnetwork, e.g., a
single edge, in O(n 2^n) time and the posterior probabilities for all n(n-1)
potential edges in O(n 2^n) total time, assuming that the number of parents per
node or the indegree is bounded by a constant. One main drawback of their
algorithms is the requirement of a special structure prior that is non-uniform
and does not respect Markov equivalence. In this paper, we develop an algorithm
that can compute the exact posterior probability of a subnetwork in O(3^n) time
and the posterior probabilities for all n(n-1) potential edges in O(n 3^n) total
time. Our algorithm also assumes a bounded indegree but allows general
structure priors. We demonstrate the applicability of the algorithm on several
data sets with up to 20 variables.
| [
"['Jin Tian' 'Ru He']",
"Jin Tian, Ru He"
] |
cs.LG stat.ML | null | 1205.2614 | null | null | http://arxiv.org/pdf/1205.2614v1 | 2012-05-09T18:30:23Z | 2012-05-09T18:30:23Z | Products of Hidden Markov Models: It Takes N>1 to Tango | Products of Hidden Markov Models (PoHMMs) are an interesting class of
generative models which have received little attention since their
introduction. This may be due in part to their more computationally expensive
gradient-based learning algorithm, and the intractability of computing the log
likelihood of sequences under the model. In this paper, we demonstrate how the
partition function can be estimated reliably via Annealed Importance Sampling.
We perform experiments using contrastive divergence learning on rainfall data
and data captured from pairs of people dancing. Our results suggest that
advances in learning and evaluation for undirected graphical models and recent
increases in available computing power make PoHMMs worth considering for
complex time-series modeling tasks.
| [
"['Graham W Taylor' 'Geoffrey E. Hinton']",
"Graham W Taylor, Geoffrey E. Hinton"
] |
stat.ML cs.LG stat.ME | null | 1205.2617 | null | null | http://arxiv.org/pdf/1205.2617v1 | 2012-05-09T18:26:23Z | 2012-05-09T18:26:23Z | Modeling Discrete Interventional Data using Directed Cyclic Graphical
Models | We outline a representation for discrete multivariate distributions in terms
of interventional potential functions that are globally normalized. This
representation can be used to model the effects of interventions, and the
independence properties encoded in this model can be represented as a directed
graph that allows cycles. In addition to discussing inference and sampling with
this representation, we give an exponential family parametrization that allows
parameter estimation to be stated as a convex optimization problem; we also
give a convex relaxation of the task of simultaneous parameter and structure
learning using group l1-regularization. The model is evaluated on simulated
data and intracellular flow cytometry data.
| [
"['Mark Schmidt' 'Kevin Murphy']",
"Mark Schmidt, Kevin Murphy"
] |
cs.IR cs.LG stat.ML | null | 1205.2618 | null | null | http://arxiv.org/pdf/1205.2618v1 | 2012-05-09T18:25:09Z | 2012-05-09T18:25:09Z | BPR: Bayesian Personalized Ranking from Implicit Feedback | Item recommendation is the task of predicting a personalized ranking on a set
of items (e.g. websites, movies, products). In this paper, we investigate the
most common scenario with implicit feedback (e.g. clicks, purchases). There are
many methods for item recommendation from implicit feedback like matrix
factorization (MF) or adaptive k-nearest-neighbor (kNN). Even though these
methods are designed for the item prediction task of personalized ranking, none
of them is directly optimized for ranking. In this paper we present a generic
optimization criterion BPR-Opt for personalized ranking that is the maximum
posterior estimator derived from a Bayesian analysis of the problem. We also
provide a generic learning algorithm for optimizing models with respect to
BPR-Opt. The learning method is based on stochastic gradient descent with
bootstrap sampling. We show how to apply our method to two state-of-the-art
recommender models: matrix factorization and adaptive kNN. Our experiments
indicate that for the task of personalized ranking our optimization method
outperforms the standard learning techniques for MF and kNN. The results show
the importance of optimizing models for the right criterion.
| [
"['Steffen Rendle' 'Christoph Freudenthaler' 'Zeno Gantner'\n 'Lars Schmidt-Thieme']",
"Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, Lars\n Schmidt-Thieme"
] |
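A hedged sketch of the BPR-Opt learning loop described above: stochastic gradient ascent on $\ln\sigma(\hat{x}_{uij})$ with bootstrap-sampled (user, positive item, negative item) triples for a matrix factorization model. Dimensions, learning rate, and regularization constants are assumptions, not the paper's settings.

```python
# Hedged sketch of BPR with matrix factorization and bootstrap-sampled triples.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 50, 100, 8
U = 0.1 * rng.standard_normal((n_users, k))     # user factors
V = 0.1 * rng.standard_normal((n_items, k))     # item factors
seen = {u: set(rng.choice(n_items, 10, replace=False)) for u in range(n_users)}

lr, reg = 0.05, 0.01
for _ in range(50000):                          # bootstrap sampling of triples
    u = rng.integers(n_users)
    i = rng.choice(list(seen[u]))               # positive (observed) item
    j = rng.integers(n_items)
    while j in seen[u]:
        j = rng.integers(n_items)               # negative (unobserved) item
    x_uij = U[u] @ (V[i] - V[j])
    g = 1.0 / (1.0 + np.exp(x_uij))             # d/dx log sigmoid(x) = sigmoid(-x)
    uu = U[u].copy()                            # use pre-update factors in all steps
    U[u] += lr * (g * (V[i] - V[j]) - reg * uu)
    V[i] += lr * (g * uu - reg * V[i])
    V[j] += lr * (-g * uu - reg * V[j])

u0 = 0
i0 = next(iter(seen[u0]))
j0 = next(x for x in range(n_items) if x not in seen[u0])
print("score(seen) - score(unseen):", U[u0] @ (V[i0] - V[j0]))  # should be positive
```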
cs.LG cs.CE stat.ML | null | 1205.2622 | null | null | http://arxiv.org/pdf/1205.2622v1 | 2012-05-09T17:26:42Z | 2012-05-09T17:26:42Z | Using the Gene Ontology Hierarchy when Predicting Gene Function | The problem of multilabel classification when the labels are related through
a hierarchical categorization scheme occurs in many application domains such as
computational biology. For example, this problem arises naturally when trying
to automatically assign gene function using a controlled vocabulary like the Gene
Ontology. However, most existing approaches for predicting gene functions solve
independent classification problems to predict genes that are involved in a
given function category, independently of the rest. Here, we propose two simple
methods for incorporating information about the hierarchical nature of the
categorization scheme. In the first method, we use information about a gene's
previous annotation to set an initial prior on its label. In a second approach,
we extend a graph-based semi-supervised learning algorithm for predicting gene
function in a hierarchy. We show that we can efficiently solve this problem by
solving a linear system of equations. We compare these approaches with a
previous label reconciliation-based approach. Results show that using the
hierarchy information directly, compared to using reconciliation methods,
improves gene function prediction.
| [
"Sara Mostafavi, Quaid Morris",
"['Sara Mostafavi' 'Quaid Morris']"
] |
cs.LG stat.ML | null | 1205.2623 | null | null | http://arxiv.org/pdf/1205.2623v1 | 2012-05-09T17:24:52Z | 2012-05-09T17:24:52Z | Virtual Vector Machine for Bayesian Online Classification | In a typical online learning scenario, a learner is required to process a
large data stream using a small memory buffer. Such a requirement is usually in
conflict with a learner's primary pursuit of prediction accuracy. To address
this dilemma, we introduce a novel Bayesian online classification algorithm,
called the Virtual Vector Machine. The virtual vector machine allows one to
smoothly trade off prediction accuracy with memory size. The virtual vector
machine summarizes the information contained in the preceding data stream by a
Gaussian distribution over the classification weights plus a constant number of
virtual data points. The virtual data points are designed to add extra
non-Gaussian information about the classification weights. To maintain the
constant number of virtual points, the virtual vector machine adds the current
real data point into the virtual point set, merges two most similar virtual
points into a new virtual point or deletes a virtual point that is far from the
decision boundary. The information lost in this process is absorbed into the
Gaussian distribution. The extra information provided by the virtual points
leads to improved predictive accuracy over previous online classification
algorithms.
| [
"['Thomas P. Minka' 'Rongjing Xiang' 'Yuan' 'Qi']",
"Thomas P. Minka, Rongjing Xiang, Yuan (Alan) Qi"
] |
cs.AI cs.LG | null | 1205.2624 | null | null | http://arxiv.org/pdf/1205.2624v1 | 2012-05-09T17:23:13Z | 2012-05-09T17:23:13Z | Convexifying the Bethe Free Energy | The introduction of loopy belief propagation (LBP) revitalized the
application of graphical models in many domains. Many recent works present
improvements on the basic LBP algorithm in an attempt to overcome convergence
and local optima problems. Notable among these are convexified free energy
approximations that lead to inference procedures with provable convergence and
quality properties. However, empirically LBP still outperforms most of its
convex variants in a variety of settings, as we also demonstrate here.
Motivated by this fact we seek convexified free energies that directly
approximate the Bethe free energy. We show that the proposed approximations
compare favorably with state-of-the-art convex free energy approximations.
| [
"['Ofer Meshi' 'Ariel Jaimovich' 'Amir Globerson' 'Nir Friedman']",
"Ofer Meshi, Ariel Jaimovich, Amir Globerson, Nir Friedman"
] |
cs.AI cs.LG | null | 1205.2625 | null | null | http://arxiv.org/pdf/1205.2625v1 | 2012-05-09T17:21:25Z | 2012-05-09T17:21:25Z | Convergent message passing algorithms - a unifying view | Message-passing algorithms have emerged as powerful techniques for
approximate inference in graphical models. When these algorithms converge, they
can be shown to find local (or sometimes even global) optima of variational
formulations to the inference problem. But many of the most popular algorithms
are not guaranteed to converge. This has led to recent interest in convergent
message-passing algorithms. In this paper, we present a unified view of
convergent message-passing algorithms. We present a simple derivation of an
abstract algorithm, tree-consistency bound optimization (TCBO) that is provably
convergent in both its sum and max product forms. We then show that many of the
existing convergent algorithms are instances of our TCBO algorithm, and obtain
novel convergent algorithms "for free" by exchanging maximizations and
summations in existing algorithms. In particular, we show that Wainwright's
non-convergent sum-product algorithm for tree-based variational bounds is
actually convergent with the right update order for the case where trees are
monotonic chains.
| [
"Talya Meltzer, Amir Globerson, Yair Weiss",
"['Talya Meltzer' 'Amir Globerson' 'Yair Weiss']"
] |
stat.ML cs.LG | null | 1205.2626 | null | null | http://arxiv.org/pdf/1205.2626v1 | 2012-05-09T17:19:05Z | 2012-05-09T17:19:05Z | Group Sparse Priors for Covariance Estimation | Recently it has become popular to learn sparse Gaussian graphical models
(GGMs) by imposing l1 or group l1,2 penalties on the elements of the precision
matrix. This penalized likelihood approach results in a tractable convex
optimization problem. In this paper, we reinterpret these results as performing
MAP estimation under a novel prior which we call the group l1 and l1,2
positive definite matrix distributions. This enables us to build a hierarchical
model in which the l1 regularization terms vary depending on which group the
entries are assigned to, which in turn allows us to learn block structured
sparse GGMs with unknown group assignments. Exact inference in this
hierarchical model is intractable, due to the need to compute the normalization
constant of these matrix distributions. However, we derive upper bounds on the
partition functions, which lets us use fast variational inference (optimizing a
lower bound on the joint posterior). We show that on two real world data sets
(motion capture and financial data), our method which infers the block
structure outperforms a method that uses a fixed block structure, which in turn
outperforms baseline methods that ignore block structure.
| [
"Benjamin Marlin, Mark Schmidt, Kevin Murphy",
"['Benjamin Marlin' 'Mark Schmidt' 'Kevin Murphy']"
] |
cs.LG stat.ML | null | 1205.2627 | null | null | http://arxiv.org/pdf/1205.2627v1 | 2012-05-09T17:17:33Z | 2012-05-09T17:17:33Z | Domain Knowledge Uncertainty and Probabilistic Parameter Constraints | Incorporating domain knowledge into the modeling process is an effective way
to improve learning accuracy. However, as it is provided by humans, domain
knowledge can only be specified with some degree of uncertainty. We propose to
explicitly model such uncertainty through probabilistic constraints over the
parameter space. In contrast to hard parameter constraints, our approach is
effective also when the domain knowledge is inaccurate and generally results in
superior modeling accuracy. We focus on generative and conditional modeling
where the parameters are assigned a Dirichlet or Gaussian prior and demonstrate
the framework with experiments on both synthetic and real-world data.
| [
"Yi Mao, Guy Lebanon",
"['Yi Mao' 'Guy Lebanon']"
] |
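
As a minimal illustration of the difference between a hard constraint and a probabilistic one, consider MAP estimation of a multinomial parameter: the domain knowledge "category 0 has probability about 0.5" can be encoded as a Dirichlet prior whose strength reflects the expert's confidence. The numbers below are made up for illustration:

```python
import numpy as np

counts = np.array([12, 30, 18])           # observed category counts
prior_mean = np.array([0.5, 0.25, 0.25])  # uncertain domain knowledge

def map_multinomial(counts, prior_mean, strength):
    """MAP estimate under a Dirichlet(strength * prior_mean + 1) prior:
    theta_k proportional to n_k + strength * prior_mean_k."""
    return (counts + strength * prior_mean) / (counts.sum() + strength)

print(map_multinomial(counts, prior_mean, strength=0))    # pure MLE
print(map_multinomial(counts, prior_mean, strength=20))   # soft constraint
print(map_multinomial(counts, prior_mean, strength=1e6))  # ~ hard constraint
```

A hard constraint corresponds to the infinite-strength limit; when the expert's knowledge is inaccurate, a finite strength lets the data override it gracefully.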
cs.LG stat.ML | null | 1205.2628 | null | null | http://arxiv.org/pdf/1205.2628v1 | 2012-05-09T17:15:45Z | 2012-05-09T17:15:45Z | Multiple Source Adaptation and the Renyi Divergence | This paper presents a novel theoretical study of the general problem of
multiple source adaptation using the notion of Renyi divergence. Our results
build on our previous work [12], but significantly broaden the scope of that
work in several directions. We extend previous multiple source loss guarantees
based on distribution weighted combinations to arbitrary target distributions
P, not necessarily mixtures of the source distributions, analyze both known and
unknown target distribution cases, and prove a lower bound. We further extend
our bounds to deal with the case where the learner receives an approximate
distribution for each source instead of the exact one, and show that similar
loss guarantees can be achieved depending on the divergence between the
approximate and true distributions. We also analyze the case where the labeling
functions of the source domains are somewhat different. Finally, we report the
results of experiments with both an artificial data set and a sentiment
analysis task, showing the performance benefits of the distribution weighted
combinations and the quality of our bounds based on the Renyi divergence.
| [
"Yishay Mansour, Mehryar Mohri, Afshin Rostamizadeh",
"['Yishay Mansour' 'Mehryar Mohri' 'Afshin Rostamizadeh']"
] |
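
The Renyi divergence that drives these bounds has a simple closed form for discrete distributions; a minimal sketch with toy distributions:

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """D_alpha(P || Q) = 1/(alpha - 1) * log sum_x p(x)^alpha * q(x)^(1 - alpha);
    the limit alpha -> 1 recovers the KL divergence."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    assert alpha > 0 and alpha != 1
    return np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0)

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.4, 0.4, 0.2])
print(renyi_divergence(p, q, alpha=2.0))
print(renyi_divergence(p, q, alpha=1.001))   # close to KL(p || q)
print(np.sum(p * np.log(p / q)))             # KL(p || q) for comparison
```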
cs.LG stat.ML | null | 1205.2629 | null | null | http://arxiv.org/pdf/1205.2629v1 | 2012-05-09T17:14:10Z | 2012-05-09T17:14:10Z | Interpretation and Generalization of Score Matching | Score matching is a recently developed parameter learning method that is
particularly effective to complicated high dimensional density models with
intractable partition functions. In this paper, we study two issues that have
not been completely resolved for score matching. First, we provide a formal
link between maximum likelihood and score matching. Our analysis shows that
score matching finds model parameters that are more robust with noisy training
data. Second, we develop a generalization of score matching. Based on this
generalization, we further demonstrate an extension of score matching to models
of discrete data.
| [
"['Siwei Lyu']",
"Siwei Lyu"
] |
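
The link to maximum likelihood can be made concrete in the simplest case: for a 1-D Gaussian the empirical score matching objective J = mean(0.5 * psi(x)^2 + psi'(x)), with psi the data-score d log p / dx, is minimized by the same mean and variance the MLE would pick, without ever touching the partition function. A small numeric sketch on toy data (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=4000)

def score_matching_objective(mu, lam, x):
    """For N(mu, 1/lam): psi(x) = -lam * (x - mu) and psi'(x) = -lam, so
    J = mean(0.5 * psi^2 - lam). No normalization constant appears."""
    psi = -lam * (x - mu)
    return np.mean(0.5 * psi**2 - lam)

# Coarse grid search over (mu, lam); the minimizer matches the MLE.
mus = np.linspace(0.0, 4.0, 81)
lams = np.linspace(0.1, 2.0, 77)
J = np.array([[score_matching_objective(m, l, x) for l in lams] for m in mus])
i, j = np.unravel_index(np.argmin(J), J.shape)
print("score matching:", mus[i], 1.0 / lams[j])   # approx mean and variance
print("MLE:           ", x.mean(), x.var())
```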
cs.LG cs.CV stat.ML | null | 1205.2631 | null | null | http://arxiv.org/pdf/1205.2631v1 | 2012-05-09T17:09:42Z | 2012-05-09T17:09:42Z | Multi-Task Feature Learning Via Efficient l2,1-Norm Minimization | The problem of joint feature selection across a group of related tasks has
applications in many areas including biomedical informatics and computer
vision. We consider the l2,1-norm regularized regression model for joint
feature selection from multiple tasks, which can be derived in the
probabilistic framework by assuming a suitable prior from the exponential
family. One appealing feature of the l2,1-norm regularization is that it
encourages multiple predictors to share similar sparsity patterns. However, the
resulting optimization problem is challenging to solve due to the
non-smoothness of the l2,1-norm regularization. In this paper, we propose to
accelerate the computation by reformulating it as two equivalent smooth convex
optimization problems, which are then solved via Nesterov's method, an
optimal first-order black-box method for smooth convex optimization. A key
building block in solving the reformulations is the Euclidean projection. We
show that the Euclidean projection for the first reformulation can be
analytically computed, while the Euclidean projection for the second one can be
computed in linear time. Empirical evaluations on several data sets verify the
efficiency of the proposed algorithms.
| [
"Jun Liu, Shuiwang Ji, Jieping Ye",
"['Jun Liu' 'Shuiwang Ji' 'Jieping Ye']"
] |
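
The analytically computable Euclidean projection mentioned above is closely related to the proximal operator of the l2,1 norm, whose familiar row-wise shrinkage form is sketched below. This is a generic illustration of why the norm couples sparsity across tasks, not necessarily the paper's exact reformulation:

```python
import numpy as np

def prox_l21(W, tau):
    """Row-wise shrinkage: each row w_i (one feature across all tasks) is
    scaled by max(0, 1 - tau / ||w_i||_2), so entire rows are zeroed out
    jointly -- this is what makes the tasks share sparsity patterns."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * W

W = np.array([[ 3.0, -4.0],   # strong feature: shrunk but kept (norm 5)
              [ 0.3,  0.2],   # weak feature: zeroed for both tasks
              [-2.0,  1.0]])
print(prox_l21(W, tau=1.0))
```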
cs.DS cs.LG stat.ML | null | 1205.2632 | null | null | http://arxiv.org/pdf/1205.2632v1 | 2012-05-09T15:49:12Z | 2012-05-09T15:49:12Z | Improving Compressed Counting | Compressed Counting (CC) [22] was recently proposed for estimating the ath
frequency moments of data streams, where 0 < a <= 2. CC can be used for
estimating Shannon entropy, which can be approximated by certain functions of
the ath frequency moments as a -> 1. Monitoring Shannon entropy for anomaly
detection (e.g., DDoS attacks) in large networks is an important task. This
paper presents a new algorithm for improving CC. The improvement is most
substantial when a -> 1-. For example, when a = 0.99, the new algorithm
reduces the estimation variance roughly by 100-fold. This new algorithm would
make CC considerably more practical for estimating Shannon entropy.
Furthermore, the new algorithm is statistically optimal when a = 0.5.
| [
"Ping Li",
"['Ping Li']"
] |
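
The entropy approximation that CC enables can be sketched directly: with p_i = n_i / N, the Tsallis entropy (1 - sum_i p_i^a) / (a - 1) is a function of the a-th frequency moment F_a = sum_i n_i^a and converges to Shannon entropy as a -> 1. An illustrative check on a toy stream, using exact moments rather than sketched estimates:

```python
import numpy as np

counts = np.array([50, 30, 12, 5, 3], dtype=float)   # toy item frequencies
p = counts / counts.sum()

def tsallis_from_moment(counts, a):
    """Tsallis entropy (1 - F_a / N^a) / (a - 1), where F_a = sum n_i^a is
    the a-th frequency moment; it approaches Shannon entropy as a -> 1."""
    N = counts.sum()
    F_a = np.sum(counts**a)
    return (1.0 - F_a / N**a) / (a - 1.0)

shannon = -np.sum(p * np.log(p))
for a in [0.9, 0.99, 0.999]:
    print(a, tsallis_from_moment(counts, a), "vs Shannon", shannon)
```

In practice F_a comes from a sketch rather than exact counts, so the variance of the moment estimator near a = 1 (which the improved algorithm reduces) directly controls the entropy estimate's quality.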
stat.ML cs.LG | null | 1205.2640 | null | null | http://arxiv.org/pdf/1205.2640v1 | 2012-05-09T15:31:59Z | 2012-05-09T15:31:59Z | Identifying confounders using additive noise models | We propose a method for inferring the existence of a latent common cause
('confounder') of two observed random variables. The method assumes that the
two effects of the confounder are (possibly nonlinear) functions of the
confounder plus independent, additive noise. We discuss under which conditions
the model is identifiable (up to an arbitrary reparameterization of the
confounder) from the joint distribution of the effects. We state and prove a
theoretical result that provides evidence for the conjecture that the model is
generically identifiable under suitable technical conditions. In addition, we
propose a practical method to estimate the confounder from a finite i.i.d.
sample of the effects and illustrate that the method works well on both
simulated and real-world data.
| [
"['Dominik Janzing' 'Jonas Peters' 'Joris Mooij' 'Bernhard Schoelkopf']",
"Dominik Janzing, Jonas Peters, Joris Mooij, Bernhard Schoelkopf"
] |
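
The generative model assumed by the method is easy to state in code; the particular nonlinearities and noise scales below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Latent confounder c and two observed effects:
#   y1 = f1(c) + n1,  y2 = f2(c) + n2, with additive noise independent of c.
c = rng.uniform(-2.0, 2.0, size=n)
y1 = np.tanh(2.0 * c) + 0.1 * rng.normal(size=n)
y2 = c**3 - c + 0.2 * rng.normal(size=n)

# y1 and y2 are dependent only through c; a method that recovers c (up to
# reparameterization) explains their joint distribution.
print(np.corrcoef(y1, y2)[0, 1])
```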
stat.ML cs.LG stat.ME | null | 1205.2641 | null | null | http://arxiv.org/pdf/1205.2641v1 | 2012-05-09T15:30:07Z | 2012-05-09T15:30:07Z | Bayesian Discovery of Linear Acyclic Causal Models | Methods for automated discovery of causal relationships from
non-interventional data have received much attention recently. A widely used
and well understood model family is given by linear acyclic causal models
(recursive structural equation models). For Gaussian data both constraint-based
methods (Spirtes et al., 1993; Pearl, 2000) (which output a single equivalence
class) and Bayesian score-based methods (Geiger and Heckerman, 1994) (which
assign relative scores to the equivalence classes) are available. On the
contrary, all current methods able to utilize non-Gaussianity in the data
(Shimizu et al., 2006; Hoyer et al., 2008) always return only a single graph or
a single equivalence class, and so are fundamentally unable to express the
degree of certainty attached to that output. In this paper we develop a
Bayesian score-based approach able to take advantage of non-Gaussianity when
estimating linear acyclic causal models, and we empirically demonstrate that,
at least on very modest size networks, its accuracy is as good as or better
than existing methods. We provide a complete code package (in R) which
implements all algorithms and performs all of the analysis provided in the
paper, and hope that this will further the application of these methods to
solving causal inference problems.
| [
"Patrik O. Hoyer, Antti Hyttinen",
"['Patrik O. Hoyer' 'Antti Hyttinen']"
] |
cs.LG cs.SY math.OC stat.CO stat.ML | null | 1205.2643 | null | null | http://arxiv.org/pdf/1205.2643v1 | 2012-05-09T15:26:47Z | 2012-05-09T15:26:47Z | New inference strategies for solving Markov Decision Processes using
reversible jump MCMC | In this paper we build on previous work which uses inference techniques, in
particular Markov Chain Monte Carlo (MCMC) methods, to solve parameterized
control problems. We propose a number of modifications in order to make this
approach more practical in general, higher-dimensional spaces. We first
introduce a new target distribution which is able to incorporate more reward
information from sampled trajectories. We also show how to break strong
correlations between the policy parameters and sampled trajectories in order to
sample more freely. Finally, we show how to incorporate these techniques in a
principled manner to obtain estimates of the optimal policy.
| [
"['Matthias Hoffman' 'Hendrik Kueck' 'Nando de Freitas' 'Arnaud Doucet']",
"Matthias Hoffman, Hendrik Kueck, Nando de Freitas, Arnaud Doucet"
] |
cs.LG cs.GT | null | 1205.2646 | null | null | http://arxiv.org/pdf/1205.2646v1 | 2012-05-09T15:16:31Z | 2012-05-09T15:16:31Z | Censored Exploration and the Dark Pool Problem | We introduce and analyze a natural algorithm for multi-venue exploration from
censored data, which is motivated by the Dark Pool Problem of modern
quantitative finance. We prove that our algorithm converges in polynomial time
to a near-optimal allocation policy; prior results for similar problems in
stochastic inventory control guaranteed only asymptotic convergence and
examined variants in which each venue could be treated independently. Our
analysis bears a strong resemblance to that of efficient
exploration/exploitation schemes in the reinforcement learning literature. We describe an
extensive experimental evaluation of our algorithm on the Dark Pool Problem
using real trading data.
| [
"['Kuzman Ganchev' 'Michael Kearns' 'Yuriy Nevmyvaka'\n 'Jennifer Wortman Vaughan']",
"Kuzman Ganchev, Michael Kearns, Yuriy Nevmyvaka, Jennifer Wortman\n Vaughan"
] |
cs.SI cs.LG physics.soc-ph stat.ML | null | 1205.2648 | null | null | http://arxiv.org/pdf/1205.2648v1 | 2012-05-09T15:13:59Z | 2012-05-09T15:13:59Z | Learning Continuous-Time Social Network Dynamics | We demonstrate that a number of sociology models for social network dynamics
can be viewed as continuous time Bayesian networks (CTBNs). A sampling-based
approximate inference method for CTBNs can be used as the basis of an
expectation-maximization procedure that achieves better accuracy in estimating
the parameters of the model than the standard method of moments
algorithm from the sociology literature. We extend the existing social network
models to allow for indirect and asynchronous observations of the links. A
Markov chain Monte Carlo sampling algorithm for this new model permits
estimation and inference. We provide results on both a synthetic network (for
verification) and real social network data.
| [
"Yu Fan, Christian R. Shelton",
"['Yu Fan' 'Christian R. Shelton']"
] |
cs.LG stat.ML | null | 1205.2650 | null | null | http://arxiv.org/pdf/1205.2650v1 | 2012-05-09T15:09:51Z | 2012-05-09T15:09:51Z | Correlated Non-Parametric Latent Feature Models | We are often interested in explaining data through a set of hidden factors or
features. When the number of hidden features is unknown, the Indian Buffet
Process (IBP) is a nonparametric latent feature model that does not bound the
number of active features in a dataset. However, the IBP assumes that all latent
features are uncorrelated, making it inadequate for many real-world problems. We
introduce a framework for correlated nonparametric feature models, generalising
the IBP. We use this framework to generate several specific models and
demonstrate applications on real-world datasets.
| [
"Finale Doshi-Velez, Zoubin Ghahramani",
"['Finale Doshi-Velez' 'Zoubin Ghahramani']"
] |
cs.LG stat.ML | null | 1205.2653 | null | null | http://arxiv.org/pdf/1205.2653v1 | 2012-05-09T15:01:22Z | 2012-05-09T15:01:22Z | L2 Regularization for Learning Kernels | The choice of the kernel is critical to the success of many learning
algorithms but it is typically left to the user. Instead, the training data can
be used to learn the kernel by selecting it out of a given family, such as that
of non-negative linear combinations of p base kernels, constrained by a trace
or L1 regularization. This paper studies the problem of learning kernels with
the same family of kernels but with an L2 regularization instead, and for
regression problems. We analyze the problem of learning kernels with ridge
regression. We derive the form of the solution of the optimization problem and
give an efficient iterative algorithm for computing that solution. We present a
novel theoretical analysis of the problem based on stability and give learning
bounds for orthogonal kernels that contain only an additive term O(√p/m) when
compared to the standard kernel ridge regression stability bound. We also
report the results of experiments indicating that L1 regularization can lead to
modest improvements for a small number of kernels, but to performance
degradations in larger-scale cases. In contrast, L2 regularization never
degrades performance and in fact achieves significant improvements with a large
number of kernels.
| [
"['Corinna Cortes' 'Mehryar Mohri' 'Afshin Rostamizadeh']",
"Corinna Cortes, Mehryar Mohri, Afshin Rostamizadeh"
] |
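
One natural way to realize this setup, shown purely as a plausible sketch under assumed details rather than the paper's exact algorithm, alternates kernel ridge regression under the current kernel combination with an update of the combination weights rescaled onto the non-negative L2 ball:

```python
import numpy as np

def learn_kernel_l2(Ks, y, lam=1.0, Lambda=1.0, iters=50):
    """Alternate (a) kernel ridge regression under K = sum_k mu_k * K_k and
    (b) an update of mu toward (alpha' K_k alpha)_k, kept non-negative and
    scaled to ||mu||_2 <= Lambda. A fixed-point sketch, not the authors'
    derived solution."""
    p, m = len(Ks), len(y)
    mu = np.full(p, Lambda / np.sqrt(p))
    for _ in range(iters):
        K = sum(mu_k * K_k for mu_k, K_k in zip(mu, Ks))
        alpha = np.linalg.solve(K + lam * np.eye(m), y)
        g = np.array([alpha @ K_k @ alpha for K_k in Ks])
        g = np.maximum(g, 0.0)
        mu = Lambda * g / (np.linalg.norm(g) + 1e-12)
    return mu, alpha

# Toy usage: three Gaussian base kernels at different bandwidths.
rng = np.random.default_rng(2)
X = rng.normal(size=(40, 5))
Ks = [np.exp(-((X[:, None] - X[None, :])**2).sum(-1) / s) for s in (0.5, 2.0, 8.0)]
y = rng.normal(size=40)
mu, alpha = learn_kernel_l2(Ks, y, lam=0.5)
print(mu)
```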
cs.LG cs.IT math.IT stat.ML | null | 1205.2656 | null | null | http://arxiv.org/pdf/1205.2656v1 | 2012-05-09T14:54:51Z | 2012-05-09T14:54:51Z | Convex Coding | Inspired by recent work on convex formulations of clustering (Lashkari &
Golland, 2008; Nowozin & Bakir, 2008) we investigate a new formulation of the
Sparse Coding Problem (Olshausen & Field, 1997). In sparse coding we attempt to
simultaneously represent a sequence of data-vectors sparsely (i.e. sparse
approximation (Tropp et al., 2006)) in terms of a 'code' defined by a set of
basis elements, while also finding a code that enables such an approximation.
As existing alternating optimization procedures for sparse coding are
theoretically prone to severe local minima problems, we propose a convex
relaxation of the sparse coding problem and derive a boosting-style algorithm
that, as in (Nowozin & Bakir, 2008), serves as a convex 'master problem' which calls a
(potentially non-convex) sub-problem to identify the next code element to add.
Finally, we demonstrate the properties of our boosted coding algorithm on an
image denoising task.
| [
"['David M. Bradley' 'J Andrew Bagnell']",
"David M. Bradley, J Andrew Bagnell"
] |