categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence) |
---|---|---|---|---|---|---|---|---|---|---|
cs.AI cs.LG | null | 1201.6583 | null | null | http://arxiv.org/pdf/1201.6583v1 | 2012-01-31T15:46:27Z | 2012-01-31T15:46:27Z | Empowerment for Continuous Agent-Environment Systems | This paper develops generalizations of empowerment to continuous states.
Empowerment is a recently introduced information-theoretic quantity motivated
by hypotheses about the efficiency of the sensorimotor loop in biological
organisms, but also from considerations stemming from curiosity-driven
learning. Empowerment measures, for agent-environment systems with stochastic
transitions, how much influence an agent has on its environment, but only that
influence that can be sensed by the agent sensors. It is an
information-theoretic generalization of joint controllability (influence on
environment) and observability (measurement by sensors) of the environment by
the agent, both controllability and observability being usually defined in
control theory as the dimensionality of the control/observation spaces. Earlier
work has shown that empowerment has various interesting and relevant
properties, e.g., it allows us to identify salient states using only the
dynamics, and it can act as intrinsic reward without requiring an external
reward. However, in this previous work empowerment was limited to the case of
small-scale and discrete domains and furthermore state transition probabilities
were assumed to be known. The goal of this paper is to extend empowerment to
the significantly more important and relevant case of continuous vector-valued
state spaces and initially unknown state transition probabilities. The
continuous state space is addressed by Monte-Carlo approximation; the unknown
transitions are addressed by model learning and prediction for which we apply
Gaussian processes regression with iterated forecasting. In a number of
well-known continuous control tasks we examine the dynamics induced by
empowerment and include an application to exploration and online model
learning.
| [
"Tobias Jung and Daniel Polani and Peter Stone",
"['Tobias Jung' 'Daniel Polani' 'Peter Stone']"
] |
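
Empowerment, as described in the abstract above, is the Shannon channel capacity from an agent's action sequences to its subsequent sensor states. The paper's continuous-state Monte-Carlo and Gaussian-process machinery is not reproduced here; the sketch below only illustrates the underlying discrete quantity via the standard Blahut-Arimoto iteration, assuming an empirical transition matrix `p_s_given_a` has already been estimated from rollouts.

```python
import numpy as np

def empowerment_discrete(p_s_given_a, tol=1e-9, max_iter=500):
    """Empowerment as channel capacity (in nats) of a discrete action->sensor
    channel, estimated with the Blahut-Arimoto iteration.
    p_s_given_a: row-stochastic array, shape (n_action_sequences, n_sensor_states)."""
    n_a = p_s_given_a.shape[0]
    p_a = np.full(n_a, 1.0 / n_a)                     # distribution over action sequences
    log_p = np.log(np.where(p_s_given_a > 0, p_s_given_a, 1.0))
    for _ in range(max_iter):
        p_s = p_a @ p_s_given_a                        # marginal over sensor states
        log_ps = np.log(np.where(p_s > 0, p_s, 1.0))
        # D_KL( p(s|a) || p(s) ) for every action sequence a
        d = np.sum(p_s_given_a * (log_p - log_ps), axis=1)
        p_new = p_a * np.exp(d)
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p_a)) < tol:
            p_a = p_new
            break
        p_a = p_new
    p_s = p_a @ p_s_given_a
    log_ps = np.log(np.where(p_s > 0, p_s, 1.0))
    return float(np.sum(p_a * np.sum(p_s_given_a * (log_p - log_ps), axis=1)))

# Empowerment of a noisy binary action->sensor channel (about 0.37 nats here).
print(empowerment_discrete(np.array([[0.9, 0.1], [0.1, 0.9]])))
```
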
cs.AI cs.LG | null | 1201.6604 | null | null | http://arxiv.org/pdf/1201.6604v1 | 2012-01-31T16:36:51Z | 2012-01-31T16:36:51Z | Gaussian Processes for Sample Efficient Reinforcement Learning with
RMAX-like Exploration | We present an implementation of model-based online reinforcement learning
(RL) for continuous domains with deterministic transitions that is specifically
designed to achieve low sample complexity. To achieve low sample complexity,
since the environment is unknown, an agent must intelligently balance
exploration and exploitation, and must be able to rapidly generalize from
observations. While in the past a number of related sample efficient RL
algorithms have been proposed, to allow theoretical analysis, mainly
model-learners with weak generalization capabilities were considered. Here, we
separate function approximation in the model learner (which does require
samples) from the interpolation in the planner (which does not require
samples). For model-learning we apply Gaussian processes regression (GP) which
is able to automatically adjust itself to the complexity of the problem (via
Bayesian hyperparameter selection) and, in practice, often able to learn a
highly accurate model from very little data. In addition, a GP provides a
natural way to determine the uncertainty of its predictions, which allows us to
implement the "optimism in the face of uncertainty" principle used to
efficiently control exploration. Our method is evaluated on four common
benchmark domains.
| [
"['Tobias Jung' 'Peter Stone']",
"Tobias Jung and Peter Stone"
] |
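
A minimal sketch of the two ingredients named in the abstract above: Gaussian-process model learning and exploration via "optimism in the face of uncertainty". It uses scikit-learn's `GaussianProcessRegressor` (an implementation choice not taken from the paper), a toy one-dimensional dynamics target, and an assumed exploration weight `kappa`; the paper's planner, RMAX-style analysis, and benchmark domains are not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Fit a GP model of the (deterministic) dynamics from observed transitions,
# then rank candidate actions optimistically by mean + kappa * std.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))            # (state, action) pairs seen so far
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]             # observed next-state component

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[1.0, 1.0]),
                              normalize_y=True)
gp.fit(X, y)

candidates = np.column_stack([np.full(9, 0.3), np.linspace(-1, 1, 9)])  # one state, 9 actions
mean, std = gp.predict(candidates, return_std=True)
kappa = 2.0                                     # exploration weight (assumed value)
optimistic_value = mean + kappa * std           # "optimism in the face of uncertainty"
best_action = candidates[np.argmax(optimistic_value), 1]
```
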
cs.AI cs.LG | null | 1201.6615 | null | null | http://arxiv.org/pdf/1201.6615v1 | 2012-01-31T16:57:55Z | 2012-01-31T16:57:55Z | Feature Selection for Value Function Approximation Using Bayesian Model
Selection | Feature selection in reinforcement learning (RL), i.e. choosing basis
functions such that useful approximations of the unknown value function can be
obtained, is one of the main challenges in scaling RL to real-world
applications. Here we consider the Gaussian process based framework GPTD for
approximate policy evaluation, and propose feature selection through marginal
likelihood optimization of the associated hyperparameters. Our approach has two
appealing benefits: (1) given just sample transitions, we can solve the policy
evaluation problem fully automatically (without looking at the learning task,
and, in theory, independent of the dimensionality of the state space), and (2)
model selection allows us to consider more sophisticated kernels, which in turn
enable us to identify relevant subspaces and eliminate irrelevant state
variables such that we can achieve substantial computational savings and
improved prediction performance.
| [
"['Tobias Jung' 'Peter Stone']",
"Tobias Jung and Peter Stone"
] |
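
The abstract above proposes feature selection through marginal likelihood optimization of kernel hyperparameters in GPTD. The sketch below illustrates only the relevance-determination idea, using plain GP regression on synthetic data rather than GPTD: after maximizing the marginal likelihood of an anisotropic RBF kernel, state variables that end up with very large length-scales can be treated as irrelevant.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# ARD-style selection: one length-scale per state variable, tuned by
# maximizing the GP marginal likelihood.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))                     # 4 state variables, only 2 relevant
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=200)

kernel = RBF(length_scale=np.ones(4)) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

length_scales = gp.kernel_.k1.length_scale        # hyperparameters after optimization
relevance = 1.0 / np.asarray(length_scales)       # small length-scale = relevant variable
print(dict(enumerate(np.round(relevance, 3))))
```
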
cs.AI cs.LG cs.MA | null | 1201.6626 | null | null | http://arxiv.org/pdf/1201.6626v1 | 2012-01-31T17:26:17Z | 2012-01-31T17:26:17Z | Learning RoboCup-Keepaway with Kernels | We apply kernel-based methods to solve the difficult reinforcement learning
problem of 3vs2 keepaway in RoboCup simulated soccer. Key challenges in
keepaway are the high-dimensionality of the state space (rendering conventional
discretization-based function approximation like tilecoding infeasible), the
stochasticity due to noise and multiple learning agents needing to cooperate
(meaning that the exact dynamics of the environment are unknown) and real-time
learning (meaning that an efficient online implementation is required). We
employ the general framework of approximate policy iteration with
least-squares-based policy evaluation. As underlying function approximator we
consider the family of regularization networks with subset of regressors
approximation. The core of our proposed solution is an efficient recursive
implementation with automatic supervised selection of relevant basis functions.
Simulation results indicate that the behavior learned through our approach
clearly outperforms the best results obtained earlier with tilecoding by Stone
et al. (2005).
| [
"Tobias Jung and Daniel Polani",
"['Tobias Jung' 'Daniel Polani']"
] |
cs.LG stat.ML | null | 1202.0302 | null | null | null | null | null | Kernels on Sample Sets via Nonparametric Divergence Estimates | Most machine learning algorithms, such as classification or regression, treat
the individual data point as the object of interest. Here we consider extending
machine learning algorithms to operate on groups of data points. We suggest
treating a group of data points as an i.i.d. sample set from an underlying
feature distribution for that group. Our approach employs kernel machines with
a kernel on i.i.d. sample sets of vectors. We define certain kernel functions
on pairs of distributions, and then use a nonparametric estimator to
consistently estimate those functions based on sample sets. The projection of
the estimated Gram matrix to the cone of symmetric positive semi-definite
matrices enables us to use kernel machines for classification, regression,
anomaly detection, and low-dimensional embedding in the space of distributions.
We present several numerical experiments both on real and simulated datasets to
demonstrate the advantages of our new approach.
| [
"Danica J. Sutherland, Liang Xiong, Barnab\\'as P\\'oczos, and Jeff\n Schneider"
] |
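
The central objects in the abstract above are a Gram matrix of kernel values between sample sets and its projection onto the cone of positive semi-definite matrices. The sketch below shows that plumbing only; the mean-distance "divergence" is a placeholder and is not the nonparametric divergence estimator used in the paper.

```python
import numpy as np

def project_to_psd(gram):
    """Project a symmetric matrix onto the PSD cone by clipping negative eigenvalues."""
    sym = (gram + gram.T) / 2.0
    eigvals, eigvecs = np.linalg.eigh(sym)
    return (eigvecs * np.clip(eigvals, 0.0, None)) @ eigvecs.T

def sample_set_kernel(groups):
    """Toy kernel between sample sets: exp(-divergence), with the divergence
    replaced by a squared distance of group means (placeholder estimator)."""
    means = [g.mean(axis=0) for g in groups]
    n = len(groups)
    gram = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            gram[i, j] = np.exp(-np.sum((means[i] - means[j]) ** 2))
    return project_to_psd(gram)

rng = np.random.default_rng(2)
groups = [rng.normal(loc=m, size=(50, 3)) for m in (0.0, 0.1, 2.0)]
K = sample_set_kernel(groups)       # PSD Gram matrix usable by any kernel machine
```
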
stat.ML cs.LG math.ST stat.TH | null | 1202.0786 | null | null | http://arxiv.org/pdf/1202.0786v2 | 2012-02-06T01:19:43Z | 2012-02-03T17:44:36Z | Minimax Rates of Estimation for Sparse PCA in High Dimensions | We study sparse principal components analysis in the high-dimensional
setting, where $p$ (the number of variables) can be much larger than $n$ (the
number of observations). We prove optimal, non-asymptotic lower and upper
bounds on the minimax estimation error for the leading eigenvector when it
belongs to an $\ell_q$ ball for $q \in [0,1]$. Our bounds are sharp in $p$ and
$n$ for all $q \in [0, 1]$ over a wide class of distributions. The upper bound
is obtained by analyzing the performance of $\ell_q$-constrained PCA. In
particular, our results provide convergence rates for $\ell_1$-constrained PCA.
| [
"['Vincent Q. Vu' 'Jing Lei']",
"Vincent Q. Vu and Jing Lei"
] |
cs.LG stat.ML | null | 1202.0855 | null | null | http://arxiv.org/pdf/1202.0855v1 | 2012-02-04T01:41:36Z | 2012-02-04T01:41:36Z | A Reconstruction Error Formulation for Semi-Supervised Multi-task and
Multi-view Learning | A significant challenge to make learning techniques more suitable for general
purpose use is to move beyond i) complete supervision, ii) low dimensional
data, iii) a single task and single view per instance. Solving these challenges
allows working with "Big Data" problems that are typically high dimensional
with multiple (but possibly incomplete) labelings and views. While other work
has addressed each of these problems separately, in this paper we show how to
address them together, namely semi-supervised dimension reduction for
multi-task and multi-view learning (SSDR-MML), which performs optimization for
dimension reduction and label inference in semi-supervised setting. The
proposed framework is designed to handle both multi-task and multi-view
learning settings, and can be easily adapted to many useful applications.
Information obtained from all tasks and views is combined via reconstruction
errors in a linear fashion that can be efficiently solved using an alternating
optimization scheme. Our formulation has a number of advantages. We explicitly
model the information combining mechanism as a data structure (a
weight/nearest-neighbor matrix) which allows investigating fundamental
questions in multi-task and multi-view learning. We address one such question
by presenting a general measure to quantify the success of simultaneous
learning of multiple tasks or from multiple views. We show that our SSDR-MML
approach can outperform many state-of-the-art baseline methods and demonstrate
the effectiveness of connecting dimension reduction and learning.
| [
"Buyue Qian, Xiang Wang and Ian Davidson",
"['Buyue Qian' 'Xiang Wang' 'Ian Davidson']"
] |
cs.LG stat.ML | 10.1109/TSP.2012.2226165 | 1202.1119 | null | null | http://arxiv.org/abs/1202.1119v2 | 2012-10-18T04:17:59Z | 2012-02-06T12:39:37Z | Cramer Rao-Type Bounds for Sparse Bayesian Learning | In this paper, we derive Hybrid, Bayesian and Marginalized Cram\'{e}r-Rao
lower bounds (HCRB, BCRB and MCRB) for the single and multiple measurement
vector Sparse Bayesian Learning (SBL) problem of estimating compressible
vectors and their prior distribution parameters. We assume the unknown vector
to be drawn from a compressible Student-t prior distribution. We derive CRBs
that encompass the deterministic or random nature of the unknown parameters of
the prior distribution and the regression noise variance. We extend the MCRB to
the case where the compressible vector is distributed according to a general
compressible prior distribution, of which the generalized Pareto distribution
is a special case. We use the derived bounds to uncover the relationship
between the compressibility and Mean Square Error (MSE) in the estimates.
Further, we illustrate the tightness and utility of the bounds through
simulations, by comparing them with the MSE performance of two popular
SBL-based estimators. It is found that the MCRB is generally the tightest among
the bounds derived and that the MSE performance of the Expectation-Maximization
(EM) algorithm coincides with the MCRB for the compressible vector. Through
simulations, we demonstrate the dependence of the MSE performance of SBL-based
estimators on the compressibility of the vector for several values of the
number of observations and at different signal powers.
| [
"Ranjitha Prasad and Chandra R. Murthy",
"['Ranjitha Prasad' 'Chandra R. Murthy']"
] |
cs.LG stat.ML | null | 1202.1121 | null | null | http://arxiv.org/abs/1202.1121v2 | 2014-11-14T14:39:39Z | 2012-02-06T12:43:12Z | rFerns: An Implementation of the Random Ferns Method for General-Purpose
Machine Learning | In this paper I present an extended implementation of the Random ferns
algorithm contained in the R package rFerns. It differs from the original by
the ability to consume categorical and numerical attributes instead of only
binary ones. Also, instead of using a simple attribute subspace ensemble, it
employs bagging and thus produces an error approximation and a variable
importance measure modelled after the Random forest algorithm. I also present
benchmark results which show that although the accuracy of Random ferns is
mostly lower than that achieved by Random forest, its speed and the good
quality of the importance measure it provides make rFerns a reasonable choice
for specific applications.
| [
"Miron B. Kursa",
"['Miron B. Kursa']"
] |
cs.LG | null | 1202.1334 | null | null | http://arxiv.org/pdf/1202.1334v2 | 2012-03-02T15:02:28Z | 2012-02-07T02:27:55Z | Contextual Bandit Learning with Predictable Rewards | Contextual bandit learning is a reinforcement learning problem where the
learner repeatedly receives a set of features (context), takes an action and
receives a reward based on the action and context. We consider this problem
under a realizability assumption: there exists a function in a (known) function
class, always capable of predicting the expected reward, given the action and
context. Under this assumption, we show three things. We present a new
algorithm---Regressor Elimination--- with a regret similar to the agnostic
setting (i.e. in the absence of realizability assumption). We prove a new lower
bound showing no algorithm can achieve superior performance in the worst case
even with the realizability assumption. However, we do show that for any set of
policies (mapping contexts to actions), there is a distribution over rewards
(given context) such that our new algorithm has constant regret unlike the
previous approaches.
| [
"['Alekh Agarwal' 'Miroslav Dudík' 'Satyen Kale' 'John Langford'\n 'Robert E. Schapire']",
"Alekh Agarwal and Miroslav Dud\\'ik and Satyen Kale and John Langford\n and Robert E. Schapire"
] |
cs.LG stat.ML | null | 1202.1523 | null | null | http://arxiv.org/pdf/1202.1523v1 | 2012-02-07T14:54:59Z | 2012-02-07T14:54:59Z | Information Forests | We describe Information Forests, an approach to classification that
generalizes Random Forests by replacing the splitting criterion of non-leaf
nodes from a discriminative one -- based on the entropy of the label
distribution -- to a generative one -- based on maximizing the information
divergence between the class-conditional distributions in the resulting
partitions. The basic idea consists of deferring classification until a measure
of "classification confidence" is sufficiently high, and instead breaking down
the data so as to maximize this measure. In an alternative interpretation,
Information Forests attempt to partition the data into subsets that are "as
informative as possible" for the purpose of the task, which is to classify the
data. Classification confidence, or informative content of the subsets, is
quantified by the Information Divergence. Our approach relates to active
learning, semi-supervised learning, mixed generative/discriminative learning.
| [
"Zhao Yi, Stefano Soatto, Maneesh Dewan, Yiqiang Zhan",
"['Zhao Yi' 'Stefano Soatto' 'Maneesh Dewan' 'Yiqiang Zhan']"
] |
cs.LG | null | 1202.1558 | null | null | http://arxiv.org/pdf/1202.1558v1 | 2012-02-07T23:14:36Z | 2012-02-07T23:14:36Z | On the Performance of Maximum Likelihood Inverse Reinforcement Learning | Inverse reinforcement learning (IRL) addresses the problem of recovering a
task description given a demonstration of the optimal policy used to solve such
a task. The optimal policy is usually provided by an expert or teacher, making
IRL specially suitable for the problem of apprenticeship learning. The task
description is encoded in the form of a reward function of a Markov decision
process (MDP). Several algorithms have been proposed to find the reward
function corresponding to a set of demonstrations. One of the algorithms that
has provided best results in different applications is a gradient method to
optimize a policy squared error criterion. On a parallel line of research,
other authors have presented recently a gradient approximation of the maximum
likelihood estimate of the reward signal. In general, both approaches
approximate the gradient estimate and the criteria at different stages to make
the algorithm tractable and efficient. In this work, we provide a detailed
description of the different methods to highlight differences in terms of
reward estimation, policy similarity and computational costs. We also provide
experimental results to evaluate the differences in performance of the methods.
| [
"['Héctor Ratia' 'Luis Montesano' 'Ruben Martinez-Cantin']",
"H\\'ector Ratia and Luis Montesano and Ruben Martinez-Cantin"
] |
cs.AI cs.LG cs.RO | null | 1202.2112 | null | null | http://arxiv.org/pdf/1202.2112v1 | 2012-02-09T20:48:22Z | 2012-02-09T20:48:22Z | Predicting Contextual Sequences via Submodular Function Maximization | Sequence optimization, where the items in a list are ordered to maximize some
reward has many applications such as web advertisement placement, search, and
control libraries in robotics. Previous work in sequence optimization produces
a static ordering that does not take any features of the item or context of the
problem into account. In this work, we propose a general approach to order the
items within the sequence based on the context (e.g., perceptual information,
environment description, and goals). We take a simple, efficient,
reduction-based approach where the choice and order of the items is established
by repeatedly learning simple classifiers or regressors for each "slot" in the
sequence. Our approach leverages recent work on submodular function
maximization to provide a formal regret reduction from submodular sequence
optimization to simple cost-sensitive prediction. We apply our contextual
sequence prediction algorithm to optimize control libraries and demonstrate
results on two robotics problems: manipulator trajectory prediction and mobile
robot path planning.
| [
"['Debadeepta Dey' 'Tian Yu Liu' 'Martial Hebert' 'J. Andrew Bagnell']",
"Debadeepta Dey, Tian Yu Liu, Martial Hebert, J. Andrew Bagnell"
] |
stat.ME cs.LG stat.ML | null | 1202.2143 | null | null | http://arxiv.org/pdf/1202.2143v1 | 2012-02-09T22:31:01Z | 2012-02-09T22:31:01Z | Active Bayesian Optimization: Minimizing Minimizer Entropy | The ultimate goal of optimization is to find the minimizer of a target
function. However, typical criteria for active optimization often ignore the
uncertainty about the minimizer. We propose a novel criterion for global
optimization and an associated sequential active learning strategy using
Gaussian processes. Our criterion is the reduction of uncertainty in the
posterior distribution of the function minimizer. It can also flexibly
incorporate multiple global minimizers. We implement a tractable approximation
of the criterion and demonstrate that it obtains the global minimizer
accurately compared to conventional Bayesian optimization criteria.
| [
"['Il Memming Park' 'Marcel Nassar' 'Mijung Park']",
"Il Memming Park, Marcel Nassar, Mijung Park"
] |
cs.CV cs.LG | null | 1202.2160 | null | null | http://arxiv.org/pdf/1202.2160v2 | 2012-07-13T21:32:24Z | 2012-02-10T00:30:48Z | Scene Parsing with Multiscale Feature Learning, Purity Trees, and
Optimal Covers | Scene parsing, or semantic segmentation, consists in labeling each pixel in
an image with the category of the object it belongs to. It is a challenging
task that involves the simultaneous detection, segmentation and recognition of
all the objects in the image.
The scene parsing method proposed here starts by computing a tree of segments
from a graph of pixel dissimilarities. Simultaneously, a set of dense feature
vectors is computed which encodes regions of multiple sizes centered on each
pixel. The feature extractor is a multiscale convolutional network trained from
raw pixels. The feature vectors associated with the segments covered by each
node in the tree are aggregated and fed to a classifier which produces an
estimate of the distribution of object categories contained in the segment. A
subset of tree nodes that cover the image are then selected so as to maximize
the average "purity" of the class distributions, hence maximizing the overall
likelihood that each segment will contain a single object. The convolutional
network feature extractor is trained end-to-end from raw pixels, alleviating
the need for engineered features. After training, the system is parameter free.
The system yields record accuracies on the Stanford Background Dataset (8
classes), the Sift Flow Dataset (33 classes) and the Barcelona Dataset (170
classes) while being an order of magnitude faster than competing approaches,
producing a $320 \times 240$ image labeling in less than 1 second.
| [
"Cl\\'ement Farabet and Camille Couprie and Laurent Najman and Yann\n LeCun",
"['Clément Farabet' 'Camille Couprie' 'Laurent Najman' 'Yann LeCun']"
] |
cs.LG q-bio.TO | 10.1016/j.forsciint.2011.03.010 | 1202.2703 | null | null | http://arxiv.org/abs/1202.2703v1 | 2012-02-13T12:28:12Z | 2012-02-13T12:28:12Z | Craniofacial reconstruction as a prediction problem using a Latent Root
Regression model | In this paper, we present a computer-assisted method for facial
reconstruction. This method provides an estimation of the facial shape
associated with unidentified skeletal remains. Current computer-assisted
methods using a statistical framework rely on a common set of extracted points
located on the bone and soft-tissue surfaces. Most of the facial reconstruction
methods then consist of predicting the position of the soft-tissue surface
points, when the positions of the bone surface points are known. We propose to
use Latent Root Regression for prediction. The results obtained are then
compared to those given by Principal Components Analysis linear models. In
conjunction, we have evaluated the influence of the number of skull landmarks
used. Anatomical skull landmarks are completed iteratively by points located
upon geodesics which link these anatomical landmarks, thus enabling us to
artificially increase the number of skull points. Facial points are obtained
using a mesh-matching algorithm between a common reference mesh and individual
soft-tissue surface meshes. The proposed method is validated in terms of
accuracy, based on a leave-one-out cross-validation test applied to a
homogeneous database. Accuracy measures are obtained by computing the distance
between the original face surface and its reconstruction. Finally, these
results are discussed with reference to current computer-assisted facial
reconstruction techniques.
| [
"Maxime Berar (LITIS), Fran\\c{c}oise Tilotta, Joan Alexis Glaun\\`es\n (MAP5), Yves Rozenholc (MAP5)",
"['Maxime Berar' 'Françoise Tilotta' 'Joan Alexis Glaunès' 'Yves Rozenholc']"
] |
cs.LG stat.ML | null | 1202.3079 | null | null | http://arxiv.org/pdf/1202.3079v1 | 2012-02-14T16:12:09Z | 2012-02-14T16:12:09Z | Towards minimax policies for online linear optimization with bandit
feedback | We address the online linear optimization problem with bandit feedback. Our
contribution is twofold. First, we provide an algorithm (based on exponential
weights) with a regret of order $\sqrt{d n \log N}$ for any finite action set
with $N$ actions, under the assumption that the instantaneous loss is bounded
by 1. This shaves off an extraneous $\sqrt{d}$ factor compared to previous
works, and gives a regret bound of order $d \sqrt{n \log n}$ for any compact
set of actions. Without further assumptions on the action set, this last bound
is minimax optimal up to a logarithmic factor. Interestingly, our result also
shows that the minimax regret for bandit linear optimization with expert advice
in $d$ dimension is the same as for the basic $d$-armed bandit with expert
advice. Our second contribution is to show how to use the Mirror Descent
algorithm to obtain computationally efficient strategies with minimax optimal
regret bounds in specific examples. More precisely we study two canonical
action sets: the hypercube and the Euclidean ball. In the former case, we
obtain the first computationally efficient algorithm with a $d \sqrt{n}$
regret, thus improving by a factor $\sqrt{d \log n}$ over the best known result
for a computationally efficient algorithm. In the latter case, our approach
gives the first algorithm with a $\sqrt{d n \log n}$ regret, again shaving off
an extraneous $\sqrt{d}$ compared to previous works.
| [
"['Sébastien Bubeck' 'Nicolò Cesa-Bianchi' 'Sham M. Kakade']",
"S\\'ebastien Bubeck, Nicol\\`o Cesa-Bianchi, Sham M. Kakade"
] |
cs.LG stat.ML | null | 1202.3323 | null | null | http://arxiv.org/pdf/1202.3323v2 | 2012-09-27T19:39:42Z | 2012-02-15T14:39:42Z | Mirror Descent Meets Fixed Share (and feels no regret) | Mirror descent with an entropic regularizer is known to achieve shifting
regret bounds that are logarithmic in the dimension. This is done using either
a carefully designed projection or by a weight sharing technique. Via a novel
unified analysis, we show that these two approaches deliver essentially
equivalent bounds on a notion of regret generalizing shifting, adaptive,
discounted, and other related regrets. Our analysis also captures and extends
the generalized weight sharing technique of Bousquet and Warmuth, and can be
refined in several ways, including improvements for small losses and adaptive
tuning of parameters.
| [
"['Nicolò Cesa-Bianchi' 'Pierre Gaillard' 'Gabor Lugosi' 'Gilles Stoltz']",
"Nicol\\`o Cesa-Bianchi, Pierre Gaillard (INRIA Paris - Rocquencourt,\n DMA), Gabor Lugosi (ICREA), Gilles Stoltz (INRIA Paris - Rocquencourt, DMA,\n GREGH)"
] |
cs.AI cs.LG cs.SE | null | 1202.3335 | null | null | http://arxiv.org/pdf/1202.3335v1 | 2012-02-15T15:03:01Z | 2012-02-15T15:03:01Z | An efficient high-quality hierarchical clustering algorithm for
automatic inference of software architecture from the source code of a
software system | This is a high-quality algorithm for hierarchical clustering of large
software source code. It breaks down the complexity of tens of millions of
lines of source code, so that a human software engineer can comprehend a
software system at a high level by looking at its architectural diagram, which
is reconstructed automatically from the source code of the software system.
The architectural diagram shows a tree of subsystems with OOP classes in its
leaves (in other words, a nested software decomposition). The tool
reconstructs missing (inconsistent, incomplete, or nonexistent) architectural
documentation for a software system from its source code. This facilitates
software maintenance: change requests can be performed substantially faster.
Simply speaking, this tool lifts the comprehensible grain of object-oriented
software systems from the OOP class level to the subsystem level. It is
estimated that a commercial tool, developed on the basis of this work, will
reduce software maintenance expenses by a factor of ten relative to current
needs, and will make it possible to implement next-generation software systems
that are currently too complex for human comprehension and therefore cannot
yet be designed or implemented. Implemented prototype in Open Source:
http://sourceforge.net/p/insoar/code-0/1/tree/
| [
"Sarge Rogatch",
"['Sarge Rogatch']"
] |
cs.DS cs.LG | 10.1109/TIT.2013.2272457 | 1202.3505 | null | null | http://arxiv.org/abs/1202.3505v2 | 2013-06-21T20:58:43Z | 2012-02-16T03:07:35Z | Near-optimal Coresets For Least-Squares Regression | We study (constrained) least-squares regression as well as multiple response
least-squares regression and ask the question of whether a subset of the data,
a coreset, suffices to compute a good approximate solution to the regression.
We give deterministic, low order polynomial-time algorithms to construct such
coresets with approximation guarantees, together with lower bounds indicating
that there is not much room for improvement upon our results.
| [
"Christos Boutsidis, Petros Drineas, Malik Magdon-Ismail",
"['Christos Boutsidis' 'Petros Drineas' 'Malik Magdon-Ismail']"
] |
cs.DS cs.LG | null | 1202.3639 | null | null | http://arxiv.org/pdf/1202.3639v3 | 2013-09-07T17:09:32Z | 2012-02-16T16:40:56Z | Finding a most biased coin with fewest flips | We study the problem of learning a most biased coin among a set of coins by
tossing the coins adaptively. The goal is to minimize the number of tosses
until we identify a coin i* whose posterior probability of being most biased is
at least 1-delta for a given delta. Under a particular probabilistic model, we
give an optimal algorithm, i.e., an algorithm that minimizes the expected
number of future tosses. The problem is closely related to finding the best arm
in the multi-armed bandit problem using adaptive strategies. Our algorithm
employs an optimal adaptive strategy -- a strategy that performs the best
possible action at each step after observing the outcomes of all previous coin
tosses. Consequently, our algorithm is also optimal for any starting history of
outcomes. To our knowledge, this is the first algorithm that employs an optimal
adaptive strategy under a Bayesian setting for this problem. Our proof of
optimality employs tools from the field of Markov games.
| [
"Karthekeyan Chandrasekaran and Richard Karp",
"['Karthekeyan Chandrasekaran' 'Richard Karp']"
] |
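
The abstract above concerns adaptively tossing coins until one is identified as most biased with posterior probability at least 1-delta. The sketch below shows only the Bayesian bookkeeping for that stopping rule, with Beta posteriors and a Monte-Carlo estimate of the posterior probability of being most biased; the greedy toss-selection rule is an assumed heuristic, not the paper's optimal adaptive strategy.

```python
import numpy as np

rng = np.random.default_rng(7)
true_bias = np.array([0.45, 0.5, 0.6])     # hidden coin biases (simulation only)
delta = 0.05
heads = np.zeros(3)
tails = np.zeros(3)

while True:
    # Monte-Carlo estimate of P(coin i is most biased | tosses so far), Beta(1,1) priors
    samples = rng.beta(heads + 1, tails + 1, size=(5000, 3))
    p_best = np.bincount(samples.argmax(axis=1), minlength=3) / 5000
    if p_best.max() >= 1 - delta:
        break
    # Heuristic choice (not the paper's policy): toss the coin with highest posterior mean
    i = int(np.argmax((heads + 1) / (heads + tails + 2)))
    if rng.random() < true_bias[i]:
        heads[i] += 1
    else:
        tails[i] += 1

print("identified coin", int(p_best.argmax()), "after", int(heads.sum() + tails.sum()), "tosses")
```
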
math.OC cs.LG | 10.1007/s10107-013-0729-x | 1202.3663 | null | null | http://arxiv.org/abs/1202.3663v6 | 2013-11-18T22:47:23Z | 2012-02-16T18:41:42Z | Guaranteed clustering and biclustering via semidefinite programming | Identifying clusters of similar objects in data plays a significant role in a
wide range of applications. As a model problem for clustering, we consider the
densest k-disjoint-clique problem, whose goal is to identify the collection of
k disjoint cliques of a given weighted complete graph maximizing the sum of the
densities of the complete subgraphs induced by these cliques. In this paper, we
establish conditions ensuring exact recovery of the densest k cliques of a
given graph from the optimal solution of a particular semidefinite program. In
particular, the semidefinite relaxation is exact for input graphs corresponding
to data consisting of k large, distinct clusters and a smaller number of
outliers. This approach also yields a semidefinite relaxation for the
biclustering problem with similar recovery guarantees. Given a set of objects
and a set of features exhibited by these objects, biclustering seeks to
simultaneously group the objects and features according to their expression
levels. This problem may be posed as partitioning the nodes of a weighted
bipartite complete graph such that the sum of the densities of the resulting
bipartite complete subgraphs is maximized. As in our analysis of the densest
k-disjoint-clique problem, we show that the correct partition of the objects
and features can be recovered from the optimal solution of a semidefinite
program in the case that the given data consists of several disjoint sets of
objects exhibiting similar features. Empirical evidence from numerical
experiments supporting these theoretical guarantees is also provided.
| [
"['Brendan P. W. Ames']",
"Brendan P. W. Ames"
] |
cs.LG cs.AI stat.ML | null | 1202.3701 | null | null | http://arxiv.org/pdf/1202.3701v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Active Diagnosis via AUC Maximization: An Efficient Approach for
Multiple Fault Identification in Large Scale, Noisy Networks | The problem of active diagnosis arises in several applications such as
disease diagnosis, and fault diagnosis in computer networks, where the goal is
to rapidly identify the binary states of a set of objects (e.g., faulty or
working) by sequentially selecting, and observing, (noisy) responses to binary
valued queries. Current algorithms in this area rely on loopy belief
propagation for active query selection. These algorithms have an exponential
time complexity, making them slow and even intractable in large networks. We
propose a rank-based greedy algorithm that sequentially chooses queries such
that the area under the ROC curve of the rank-based output is maximized. The
AUC criterion allows us to make a simplifying assumption that significantly
reduces the complexity of active query selection (from exponential to near
quadratic), with little or no compromise on the performance quality.
| [
"Gowtham Bellala, Jason Stanley, Clayton Scott, Suresh K. Bhavnani",
"['Gowtham Bellala' 'Jason Stanley' 'Clayton Scott' 'Suresh K. Bhavnani']"
] |
cs.LG stat.ML | null | 1202.3702 | null | null | http://arxiv.org/pdf/1202.3702v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Semi-supervised Learning with Density Based Distances | We present a simple, yet effective, approach to Semi-Supervised Learning. Our
approach is based on estimating density-based distances (DBD) using a shortest
path calculation on a graph. These Graph-DBD estimates can then be used in any
distance-based supervised learning method, such as Nearest Neighbor methods and
SVMs with RBF kernels. In order to apply the method to very large data sets, we
also present a novel algorithm which integrates nearest neighbor computations
into the shortest path search and can find exact shortest paths even in
extremely large dense graphs. Significant runtime improvement over the commonly
used Laplacian regularization method is then shown on a large scale dataset.
| [
"['Avleen S. Bijral' 'Nathan Ratliff' 'Nathan Srebro']",
"Avleen S. Bijral, Nathan Ratliff, Nathan Srebro"
] |
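
A rough sketch of the graph-based density-based distance (Graph-DBD) idea from the abstract above: with edge weights that grow super-linearly in Euclidean length, shortest paths prefer many small hops through dense regions. The squared-distance weighting and the 1-NN classifier are illustrative assumptions; the paper's exact edge weights and its shortest-path algorithm with integrated nearest-neighbor computations are not reproduced.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

# Two well-separated clusters; squared Euclidean edge weights make paths that
# hop through dense regions cheaper than single long jumps.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, size=(100, 2)),
               rng.normal(3, 0.3, size=(100, 2))])

edge_weights = cdist(X, X) ** 2                  # exponent > 1 penalizes long jumps (assumed choice)
dbd = shortest_path(edge_weights, method="D")    # all-pairs shortest paths (Dijkstra)

# The DBD matrix can feed any distance-based learner, e.g. 1-NN from two labeled points:
labeled = np.array([0, 100])                     # one labeled point per cluster
labels = np.array([0, 1])
pred = labels[np.argmin(dbd[:, labeled], axis=1)]
```
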
cs.LG stat.ML | null | 1202.3704 | null | null | http://arxiv.org/pdf/1202.3704v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Near-Optimal Target Learning With Stochastic Binary Signals | We study learning in a noisy bisection model: specifically, Bayesian
algorithms to learn a target value V given access only to noisy realizations of
whether V is less than or greater than a threshold theta. At step t = 0, 1, 2,
..., the learner sets threshold theta t and observes a noisy realization of
sign(V - theta t). After T steps, the goal is to output an estimate V^ which is
within an eta-tolerance of V . This problem has been studied, predominantly in
environments with a fixed error probability q < 1/2 for the noisy realization
of sign(V - theta t). In practice, it is often the case that q can approach
1/2, especially as theta -> V, and little is known about this case. We
give a pseudo-Bayesian algorithm which provably converges to V. When the true
prior matches our algorithm's Gaussian prior, we show near-optimal expected
performance. Our methods extend to the general multiple-threshold setting where
the observation noisily indicates which of k >= 2 regions V belongs to.
| [
"['Mithun Chakraborty' 'Sanmay Das' 'Malik Magdon-Ismail']",
"Mithun Chakraborty, Sanmay Das, Malik Magdon-Ismail"
] |
cs.LG stat.ML | null | 1202.3708 | null | null | http://arxiv.org/pdf/1202.3708v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Smoothing Proximal Gradient Method for General Structured Sparse
Learning | We study the problem of learning high dimensional regression models
regularized by a structured-sparsity-inducing penalty that encodes prior
structural information on either input or output sides. We consider two widely
adopted types of such penalties as our motivating examples: 1) overlapping
group lasso penalty, based on the l1/l2 mixed-norm penalty, and 2) graph-guided
fusion penalty. For both types of penalties, due to their non-separability,
developing an efficient optimization method has remained a challenging problem.
In this paper, we propose a general optimization approach, called smoothing
proximal gradient method, which can solve the structured sparse regression
problems with a smooth convex loss and a wide spectrum of
structured-sparsity-inducing penalties. Our approach is based on a general
smoothing technique of Nesterov. It achieves a convergence rate faster than the
standard first-order method, subgradient method, and is much more scalable than
the most widely used interior-point method. Numerical results are reported to
demonstrate the efficiency and scalability of the proposed method.
| [
"['Xi Chen' 'Qihang Lin' 'Seyoung Kim' 'Jaime G. Carbonell' 'Eric P. Xing']",
"Xi Chen, Qihang Lin, Seyoung Kim, Jaime G. Carbonell, Eric P. Xing"
] |
cs.LG stat.ML | null | 1202.3712 | null | null | http://arxiv.org/pdf/1202.3712v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Ensembles of Kernel Predictors | This paper examines the problem of learning with a finite and possibly large
set of p base kernels. It presents a theoretical and empirical analysis of an
approach addressing this problem based on ensembles of kernel predictors. This
includes novel theoretical guarantees based on the Rademacher complexity of the
corresponding hypothesis sets, the introduction and analysis of a learning
algorithm based on these hypothesis sets, and a series of experiments using
ensembles of kernel predictors with several data sets. Both convex combinations
of kernel-based hypotheses and more general Lq-regularized nonnegative
combinations are analyzed. These theoretical, algorithmic, and empirical
results are compared with those achieved by using learning kernel techniques,
which can be viewed as another approach for solving the same problem.
| [
"['Corinna Cortes' 'Mehryar Mohri' 'Afshin Rostamizadeh']",
"Corinna Cortes, Mehryar Mohri, Afshin Rostamizadeh"
] |
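
An illustrative sketch of an ensemble of kernel predictors as discussed in the abstract above: one kernel ridge model per base kernel, combined by a nonnegative, normalized weighting. The inverse-validation-error weighting is an assumption made for illustration; it is not the specific convex or Lq-regularized combination analyzed in the paper.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sinc(X[:, 0]) + 0.1 * rng.normal(size=300)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# p = 3 base kernels (assumed): RBF kernels with different bandwidths
base_models = [KernelRidge(kernel="rbf", gamma=g, alpha=1e-2).fit(X_tr, y_tr)
               for g in (0.1, 1.0, 10.0)]
preds = np.column_stack([m.predict(X_val) for m in base_models])

# Weight each hypothesis by inverse validation error, normalized to a convex combination.
errors = np.mean((preds - y_val[:, None]) ** 2, axis=0)
weights = 1.0 / errors
weights /= weights.sum()
ensemble_pred = preds @ weights
```
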
cs.LG stat.ML | null | 1202.3714 | null | null | http://arxiv.org/pdf/1202.3714v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Active Learning for Developing Personalized Treatment | The personalization of treatment via bio-markers and other risk categories
has drawn increasing interest among clinical scientists. Personalized treatment
strategies can be learned using data from clinical trials, but such trials are
very costly to run. This paper explores the use of active learning techniques
to design more efficient trials, addressing issues such as whom to recruit, at
what point in the trial, and which treatment to assign, throughout the duration
of the trial. We propose a minimax bandit model with two different optimization
criteria, and discuss the computational challenges and issues pertaining to
this approach. We evaluate our active learning policies using both simulated
data, and data modeled after a clinical trial for treating depressed
individuals, and contrast our methods with other plausible active learning
policies.
| [
"Kun Deng, Joelle Pineau, Susan A. Murphy",
"['Kun Deng' 'Joelle Pineau' 'Susan A. Murphy']"
] |
cs.LG stat.ML | null | 1202.3716 | null | null | http://arxiv.org/pdf/1202.3716v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Boosting as a Product of Experts | In this paper, we derive a novel probabilistic model of boosting as a Product
of Experts. We re-derive the boosting algorithm as a greedy incremental model
selection procedure which ensures that addition of new experts to the ensemble
does not decrease the likelihood of the data. These learning rules lead to a
generic boosting algorithm - POEBoost - which turns out to be similar to the
AdaBoost algorithm under certain assumptions on the expert probabilities. The
paper then extends the POEBoost algorithm to POEBoost.CS which handles
hypotheses that produce probabilistic predictions. This new algorithm is shown
to have better generalization performance compared to other state-of-the-art
algorithms.
| [
"Narayanan U. Edakunni, Gary Brown, Tim Kovacs",
"['Narayanan U. Edakunni' 'Gary Brown' 'Tim Kovacs']"
] |
cs.LG stat.ML | null | 1202.3717 | null | null | http://arxiv.org/pdf/1202.3717v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | PAC-Bayesian Policy Evaluation for Reinforcement Learning | Bayesian priors offer a compact yet general means of incorporating domain
knowledge into many learning tasks. The correctness of the Bayesian analysis
and inference, however, largely depends on accuracy and correctness of these
priors. PAC-Bayesian methods overcome this problem by providing bounds that
hold regardless of the correctness of the prior distribution. This paper
introduces the first PAC-Bayesian bound for the batch reinforcement learning
problem with function approximation. We show how this bound can be used to
perform model-selection in a transfer learning scenario. Our empirical results
confirm that PAC-Bayesian policy evaluation is able to leverage prior
distributions when they are informative and, unlike standard Bayesian RL
approaches, ignore them when they are misleading.
| [
"['Mahdi MIlani Fard' 'Joelle Pineau' 'Csaba Szepesvari']",
"Mahdi MIlani Fard, Joelle Pineau, Csaba Szepesvari"
] |
cs.LG cs.AI stat.ML | null | 1202.3722 | null | null | http://arxiv.org/pdf/1202.3722v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Hierarchical Affinity Propagation | Affinity propagation is an exemplar-based clustering algorithm that finds a
set of data-points that best exemplify the data, and associates each datapoint
with one exemplar. We extend affinity propagation in a principled way to solve
the hierarchical clustering problem, which arises in a variety of domains
including biology, sensor networks and decision making in operational research.
We derive an inference algorithm that operates by propagating information up
and down the hierarchy, and is efficient despite the high-order potentials
required for the graphical model formulation. We demonstrate that our method
outperforms greedy techniques that cluster one layer at a time. We show that on
an artificial dataset designed to mimic the HIV-strain mutation dynamics, our
method outperforms related methods. For real HIV sequences, where the ground
truth is not available, we show our method achieves better results, in terms of
the underlying objective function, and show the results correspond meaningfully
to geographical location and strain subtypes. Finally we report results on
using the method for the analysis of mass spectra, showing it performs
favorably compared to state-of-the-art methods.
| [
"['Inmar Givoni' 'Clement Chung' 'Brendan J. Frey']",
"Inmar Givoni, Clement Chung, Brendan J. Frey"
] |
cs.LG stat.ML | null | 1202.3725 | null | null | http://arxiv.org/pdf/1202.3725v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Generalized Fisher Score for Feature Selection | Fisher score is one of the most widely used supervised feature selection
methods. However, it selects each feature independently according to their
scores under the Fisher criterion, which leads to a suboptimal subset of
features. In this paper, we present a generalized Fisher score to jointly
select features. It aims at finding a subset of features which maximizes the
lower bound of the traditional Fisher score. The resulting feature selection
problem is a mixed integer program, which can be reformulated as a
quadratically constrained linear program (QCLP). It is solved by a cutting
plane algorithm, in each iteration of which a multiple kernel learning problem
is solved alternately by multivariate ridge regression and projected gradient
descent. Experiments on benchmark data sets indicate that the proposed method
outperforms Fisher score as well as many other state-of-the-art feature
selection methods.
| [
"['Quanquan Gu' 'Zhenhui Li' 'Jiawei Han']",
"Quanquan Gu, Zhenhui Li, Jiawei Han"
] |
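
For reference, the classical per-feature Fisher score that the abstract above generalizes can be computed as below; the paper's joint QCLP formulation and cutting plane solver are not reproduced, and the data here are synthetic.

```python
import numpy as np

def fisher_score(X, y):
    """Classical per-feature Fisher score: between-class variance of feature
    means divided by the (weighted) within-class variance."""
    classes, counts = np.unique(y, return_counts=True)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c, n_c in zip(classes, counts):
        Xc = X[y == c]
        between += n_c * (Xc.mean(axis=0) - overall_mean) ** 2
        within += n_c * Xc.var(axis=0)
    return between / within

rng = np.random.default_rng(5)
y = rng.integers(0, 2, size=400)
X = np.column_stack([y + 0.5 * rng.normal(size=400),      # informative feature
                     rng.normal(size=400)])               # noise feature
print(fisher_score(X, y))   # first score should be much larger than the second
```
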
cs.LG stat.ML | null | 1202.3726 | null | null | http://arxiv.org/pdf/1202.3726v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Active Semi-Supervised Learning using Submodular Functions | We consider active, semi-supervised learning in an offline transductive
setting. We show that a previously proposed error bound for active learning on
undirected weighted graphs can be generalized by replacing graph cut with an
arbitrary symmetric submodular function. Arbitrary non-symmetric submodular
functions can be used via symmetrization. Different choices of submodular
functions give different versions of the error bound that are appropriate for
different kinds of problems. Moreover, the bound is deterministic and holds for
adversarially chosen labels. We show exactly minimizing this error bound is
NP-complete. However, we also introduce for any submodular function an
associated active semi-supervised learning method that approximately minimizes
the corresponding error bound. We show that the error bound is tight in the
sense that there is no other bound of the same form which is better. Our
theoretical results are supported by experiments on real data.
| [
"Andrew Guillory, Jeff A. Bilmes",
"['Andrew Guillory' 'Jeff A. Bilmes']"
] |
cs.LG stat.ML | null | 1202.3727 | null | null | http://arxiv.org/pdf/1202.3727v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Bregman divergence as general framework to estimate unnormalized
statistical models | We show that the Bregman divergence provides a rich framework to estimate
unnormalized statistical models for continuous or discrete random variables,
that is, models which do not integrate or sum to one, respectively. We prove
that recent estimation methods such as noise-contrastive estimation, ratio
matching, and score matching belong to the proposed framework, and explain
their interconnection based on supervised learning. Further, we discuss the
role of boosting in unsupervised learning.
| [
"Michael Gutmann, Jun-ichiro Hirayama",
"['Michael Gutmann' 'Jun-ichiro Hirayama']"
] |
cs.LG stat.ML | null | 1202.3730 | null | null | http://arxiv.org/pdf/1202.3730v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Sequential Inference for Latent Force Models | Latent force models (LFMs) are hybrid models combining mechanistic principles
with non-parametric components. In this article, we shall show how LFMs can be
equivalently formulated and solved using the state variable approach. We shall
also show how the Gaussian process prior used in LFMs can be equivalently
formulated as a linear state-space model driven by a white noise process and
how inference on the resulting model can be efficiently implemented using a
Kalman filter and smoother. Then we shall show how the recently proposed
switching LFM
can be reformulated using the state variable approach, and how we can construct
a probabilistic model for the switches by formulating a similar switching LFM
as a switching linear dynamic system (SLDS). We illustrate the performance of
the proposed methodology in simulated scenarios and apply it to inferring the
switching points in GPS data collected from car movements in an urban
environment.
| [
"['Jouni Hartikainen' 'Simo Sarkka']",
"Jouni Hartikainen, Simo Sarkka"
] |
cs.LG stat.ML | null | 1202.3731 | null | null | http://arxiv.org/pdf/1202.3731v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | What Cannot be Learned with Bethe Approximations | We address the problem of learning the parameters in graphical models when
inference is intractable. A common strategy in this case is to replace the
partition function with its Bethe approximation. We show that there exists a
regime of empirical marginals where such Bethe learning will fail. By failure
we mean that the empirical marginals cannot be recovered from the approximated
maximum likelihood parameters (i.e., moment matching is not achieved). We
provide several conditions on empirical marginals that yield outer and inner
bounds on the set of Bethe learnable marginals. An interesting implication of
our results is that there exists a large class of marginals that cannot be
obtained as stable fixed points of belief propagation. Taken together our
results provide a novel approach to analyzing learning with Bethe
approximations and highlight when it can be expected to work or fail.
| [
"['Uri Heinemann' 'Amir Globerson']",
"Uri Heinemann, Amir Globerson"
] |
cs.LG cs.AI stat.ML | null | 1202.3732 | null | null | http://arxiv.org/pdf/1202.3732v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Sum-Product Networks: A New Deep Architecture | The key limiting factor in graphical model inference and learning is the
complexity of the partition function. We thus ask the question: what are
general conditions under which the partition function is tractable? The answer
leads to a new kind of deep architecture, which we call sum-product networks
(SPNs). SPNs are directed acyclic graphs with variables as leaves, sums and
products as internal nodes, and weighted edges. We show that if an SPN is
complete and consistent it represents the partition function and all marginals
of some graphical model, and give semantics to its nodes. Essentially all
tractable graphical models can be cast as SPNs, but SPNs are also strictly more
general. We then propose learning algorithms for SPNs, based on backpropagation
and EM. Experiments show that inference and learning with SPNs can be both
faster and more accurate than with standard deep networks. For example, SPNs
perform image completion better than state-of-the-art deep networks for this
task. SPNs also have intriguing potential connections to the architecture of
the cortex.
| [
"['Hoifung Poon' 'Pedro Domingos']",
"Hoifung Poon, Pedro Domingos"
] |
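
A toy example of the sum-product network structure described in the abstract above: leaves are distributions over single binary variables, internal nodes are weighted sums and products, and both joint probabilities and marginals come from a single bottom-up pass. The numbers are arbitrary.

```python
import numpy as np

# A tiny SPN over two binary variables X1, X2:
#   root = 0.6 * (P1(X1) * P2(X2)) + 0.4 * (Q1(X1) * Q2(X2))
# Each product multiplies distributions over disjoint variables, so the SPN is
# complete and consistent and its root value is a normalized probability.
def bernoulli_leaf(p):
    # leaf value given evidence: x in {0, 1}, or None to marginalize the variable out
    return lambda x: 1.0 if x is None else (p if x == 1 else 1 - p)

p1, p2 = bernoulli_leaf(0.9), bernoulli_leaf(0.2)
q1, q2 = bernoulli_leaf(0.1), bernoulli_leaf(0.7)
weights = np.array([0.6, 0.4])

def spn(x1, x2):
    products = np.array([p1(x1) * p2(x2), q1(x1) * q2(x2)])
    return float(weights @ products)

print(spn(1, 1))          # joint probability P(X1=1, X2=1)
print(spn(1, None))       # marginal P(X1=1) in one bottom-up pass
print(sum(spn(a, b) for a in (0, 1) for b in (0, 1)))   # sums to 1: valid distribution
```
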
cs.LG stat.ML | null | 1202.3733 | null | null | http://arxiv.org/pdf/1202.3733v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Lipschitz Parametrization of Probabilistic Graphical Models | We show that the log-likelihood of several probabilistic graphical models is
Lipschitz continuous with respect to the lp-norm of the parameters. We discuss
several implications of Lipschitz parametrization. We present an upper bound of
the Kullback-Leibler divergence that allows understanding methods that penalize
the lp-norm of differences of parameters as the minimization of that upper
bound. The expected log-likelihood is lower bounded by the negative lp-norm,
which allows understanding the generalization ability of probabilistic models.
The exponential of the negative lp-norm is involved in the lower bound of the
Bayes error rate, which shows that it is reasonable to use parameters as
features in algorithms that rely on metric spaces (e.g. classification,
dimensionality reduction, clustering). Our results do not rely on specific
algorithms for learning the structure or parameters. We show preliminary
results for activity recognition and temporal segmentation.
| [
"['Jean Honorio']",
"Jean Honorio"
] |
cs.LG cs.AI stat.ML | null | 1202.3734 | null | null | http://arxiv.org/pdf/1202.3734v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Efficient Probabilistic Inference with Partial Ranking Queries | Distributions over rankings are used to model data in various settings such
as preference analysis and political elections. The factorial size of the space
of rankings, however, typically forces one to make structural assumptions, such
as smoothness, sparsity, or probabilistic independence about these underlying
distributions. We approach the modeling problem from the computational
principle that one should make structural assumptions which allow for efficient
calculation of typical probabilistic queries. For ranking models, "typical"
queries predominantly take the form of partial ranking queries (e.g., given a
user's top-k favorite movies, what are his preferences over the remaining movies?).
In this paper, we argue that riffled independence factorizations proposed in
recent literature [7, 8] are a natural structural assumption for ranking
distributions, allowing for particularly efficient processing of partial
ranking queries.
| [
"['Jonathan Huang' 'Ashish Kapoor' 'Carlos E. Guestrin']",
"Jonathan Huang, Ashish Kapoor, Carlos E. Guestrin"
] |
cs.LG stat.ML | null | 1202.3735 | null | null | http://arxiv.org/pdf/1202.3735v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Noisy-OR Models with Latent Confounding | Given a set of experiments in which varying subsets of observed variables are
subject to intervention, we consider the problem of identifiability of causal
models exhibiting latent confounding. While identifiability is trivial when
each experiment intervenes on a large number of variables, the situation is
more complicated when only one or a few variables are subject to intervention
per experiment. For linear causal models with latent variables Hyttinen et al.
(2010) gave precise conditions for when such data are sufficient to identify
the full model. While their result cannot be extended to discrete-valued
variables with arbitrary cause-effect relationships, we show that a similar
result can be obtained for the class of causal models whose conditional
probability distributions are restricted to a `noisy-OR' parameterization. We
further show that identification is preserved under an extension of the model
that allows for negative influences, and present learning algorithms that we
test for accuracy, scalability and robustness.
| [
"['Antti Hyttinen' 'Frederick Eberhardt' 'Patrik O. Hoyer']",
"Antti Hyttinen, Frederick Eberhardt, Patrik O. Hoyer"
] |
cs.LG stat.ML | null | 1202.3736 | null | null | http://arxiv.org/pdf/1202.3736v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Discovering causal structures in binary exclusive-or skew acyclic models | Discovering causal relations among observed variables in a given data set is
a main topic in studies of statistics and artificial intelligence. Recently,
some techniques to discover an identifiable causal structure have been explored
based on non-Gaussianity of the observed data distribution. However, most of
these are limited to continuous data. In this paper, we present a novel causal
model for binary data and propose a new approach to derive an identifiable
causal structure governing the data based on skew Bernoulli distributions of
external noise. Experimental evaluation shows excellent performance for both
artificial and real world data sets.
| [
"Takanori Inazumi, Takashi Washio, Shohei Shimizu, Joe Suzuki, Akihiro\n Yamamoto, Yoshinobu Kawahara",
"['Takanori Inazumi' 'Takashi Washio' 'Shohei Shimizu' 'Joe Suzuki'\n 'Akihiro Yamamoto' 'Yoshinobu Kawahara']"
] |
cs.LG stat.ML | null | 1202.3737 | null | null | http://arxiv.org/pdf/1202.3737v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Detecting low-complexity unobserved causes | We describe a method that infers whether statistical dependences between two
observed variables X and Y are due to a "direct" causal link or only due to a
connecting causal path that contains an unobserved variable of low complexity,
e.g., a binary variable. This problem is motivated by statistical genetics.
Given a genetic marker that is correlated with a phenotype of interest, we want
to detect whether this marker is causal or it only correlates with a causal
one. Our method is based on the analysis of the location of the conditional
distributions P(Y|x) in the simplex of all distributions of Y. We report
encouraging results on semi-empirical data.
| [
"['Dominik Janzing' 'Eleni Sgouritsa' 'Oliver Stegle' 'Jonas Peters'\n 'Bernhard Schoelkopf']",
"Dominik Janzing, Eleni Sgouritsa, Oliver Stegle, Jonas Peters,\n Bernhard Schoelkopf"
] |
cs.LG cs.AI stat.ML | null | 1202.3738 | null | null | http://arxiv.org/pdf/1202.3738v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Learning Determinantal Point Processes | Determinantal point processes (DPPs), which arise in random matrix theory and
quantum physics, are natural models for subset selection problems where
diversity is preferred. Among many remarkable properties, DPPs offer tractable
algorithms for exact inference, including computing marginal probabilities and
sampling; however, an important open question has been how to learn a DPP from
labeled training data. In this paper we propose a natural feature-based
parameterization of conditional DPPs, and show how it leads to a convex and
efficient learning formulation. We analyze the relationship between our model
and binary Markov random fields with repulsive potentials, which are
qualitatively similar but computationally intractable. Finally, we apply our
approach to the task of extractive summarization, where the goal is to choose a
small subset of sentences conveying the most important information from a set
of documents. In this task there is a fundamental tradeoff between sentences
that are highly relevant to the collection as a whole, and sentences that are
diverse and not repetitive. Our parameterization allows us to naturally balance
these two characteristics. We evaluate our system on data from the DUC 2003/04
multi-document summarization task, achieving state-of-the-art results.
| [
"['Alex Kulesza' 'Ben Taskar']",
"Alex Kulesza, Ben Taskar"
] |
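The record above concerns learning conditional DPPs; as background only, here is a minimal numpy sketch of exact inference in a fixed (non-learned) L-ensemble DPP, computing item inclusion probabilities from the marginal kernel K = L(L+I)^{-1}. The kernel below is synthetic, and nothing here reproduces the paper's feature-based parameterization or learning method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a synthetic PSD L-ensemble kernel over 5 items (assumed data).
B = rng.normal(size=(5, 3))
L = B @ B.T + 0.1 * np.eye(5)

# Marginal kernel: K = L (L + I)^{-1}; K_ii is the inclusion probability of item i,
# and det(K_A) gives the marginal probability that subset A is included.
K = L @ np.linalg.inv(L + np.eye(5))

print("P(item i in sample):", np.round(np.diag(K), 3))

A = [0, 2]
print("P(items 0 and 2 both in sample):", np.linalg.det(K[np.ix_(A, A)]))

# Normalization constant of the L-ensemble: sum over subsets A of det(L_A) = det(L + I).
print("det(L + I) =", np.linalg.det(L + np.eye(5)))
```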
cs.LG cs.AI cs.IT math.IT stat.ML | null | 1202.3742 | null | null | http://arxiv.org/pdf/1202.3742v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Variational Algorithms for Marginal MAP | Marginal MAP problems are notoriously difficult tasks for graphical models.
We derive a general variational framework for solving marginal MAP problems, in
which we apply analogues of the Bethe, tree-reweighted, and mean field
approximations. We then derive a "mixed" message passing algorithm and a
convergent alternative using CCCP to solve the BP-type approximations.
Theoretically, we give conditions under which the decoded solution is a global
or local optimum, and obtain novel upper bounds on solutions. Experimentally we
demonstrate that our algorithms outperform related approaches. We also show
that EM and variational EM comprise a special case of our framework.
| [
"Qiang Liu, Alexander T. Ihler",
"['Qiang Liu' 'Alexander T. Ihler']"
] |
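For reference alongside the marginal MAP record above, a hedged brute-force sketch on a tiny synthetic table: it sums out the nuisance variables and maximizes over the rest, the exact computation that the paper's variational approximations are designed to replace on large graphs. All probabilities below are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy joint over binary variables (A, B, C, D) as an explicit table (synthetic).
p = rng.random((2, 2, 2, 2))
p /= p.sum()

# Marginal MAP: maximize over (A, B) after summing out the nuisance variables (C, D).
marg = p.sum(axis=(2, 3))                      # sum over C, D -> table over (A, B)
a_star, b_star = np.unravel_index(marg.argmax(), marg.shape)
print("argmax over (A, B):", (int(a_star), int(b_star)), "value:", marg.max())

# Contrast with the joint MAP over all variables, which generally differs.
joint_map = tuple(int(i) for i in np.unravel_index(p.argmax(), p.shape))
print("joint MAP assignment:", joint_map)
```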
cs.LG stat.ML | null | 1202.3746 | null | null | http://arxiv.org/pdf/1202.3746v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Asymptotic Efficiency of Deterministic Estimators for Discrete
Energy-Based Models: Ratio Matching and Pseudolikelihood | Standard maximum likelihood estimation cannot be applied to discrete
energy-based models in the general case because the computation of exact model
probabilities is intractable. Recent research has seen the proposal of several
new estimators designed specifically to overcome this intractability, but
virtually nothing is known about their theoretical properties. In this paper,
we present a generalized estimator that unifies many of the classical and
recently proposed estimators. We use results from the standard asymptotic
theory for M-estimators to derive a generic expression for the asymptotic
covariance matrix of our generalized estimator. We apply these results to study
the relative statistical efficiency of classical pseudolikelihood and the
recently-proposed ratio matching estimator.
| [
"['Benjamin Marlin' 'Nando de Freitas']",
"Benjamin Marlin, Nando de Freitas"
] |
cs.LG stat.ML | null | 1202.3747 | null | null | http://arxiv.org/pdf/1202.3747v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Reconstructing Pompeian Households | A database of objects discovered in houses in the Roman city of Pompeii
provides a unique view of ordinary life in an ancient city. Experts have used
this collection to study the structure of Roman households, exploring the
distribution and variability of tasks in architectural spaces, but such
approaches are necessarily affected by modern cultural assumptions. In this
study we present a data-driven approach to household archeology, treating it as
an unsupervised labeling problem. This approach scales to large data sets and
provides a more objective complement to human interpretation.
| [
"David Mimno",
"['David Mimno']"
] |
cs.LG stat.ML | null | 1202.3748 | null | null | http://arxiv.org/pdf/1202.3748v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Conditional Restricted Boltzmann Machines for Structured Output
Prediction | Conditional Restricted Boltzmann Machines (CRBMs) are rich probabilistic
models that have recently been applied to a wide range of problems, including
collaborative filtering, classification, and modeling motion capture data.
While much progress has been made in training non-conditional RBMs, these
algorithms are not applicable to conditional models and there has been almost
no work on training and generating predictions from conditional RBMs for
structured output problems. We first argue that standard Contrastive
Divergence-based learning may not be suitable for training CRBMs. We then
identify two distinct types of structured output prediction problems and
propose an improved learning algorithm for each. The first problem type is one
where the output space has arbitrary structure but the set of likely output
configurations is relatively small, such as in multi-label classification. The
second problem is one where the output space is arbitrarily structured but
where the output space variability is much greater, such as in image denoising
or pixel labeling. We show that the new learning algorithms can work much
better than Contrastive Divergence on both types of problems.
| [
"['Volodymyr Mnih' 'Hugo Larochelle' 'Geoffrey E. Hinton']",
"Volodymyr Mnih, Hugo Larochelle, Geoffrey E. Hinton"
] |
cs.LG stat.ML | null | 1202.3750 | null | null | http://arxiv.org/pdf/1202.3750v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Fractional Moments on Bandit Problems | Reinforcement learning addresses the dilemma between exploration to find
profitable actions and exploitation to act according to the best observations
already made. Bandit problems are one such class of problems in stateless
environments that represent this explore/exploit situation. We propose a
learning algorithm for bandit problems based on fractional expectation of
rewards acquired. The algorithm is theoretically shown to converge on an
eta-optimal arm and achieve O(n) sample complexity. Experimental results show
the algorithm incurs substantially lower regrets than parameter-optimized
eta-greedy and SoftMax approaches and other low sample complexity
state-of-the-art techniques.
| [
"Ananda Narayanan B, Balaraman Ravindran",
"['Ananda Narayanan B' 'Balaraman Ravindran']"
] |
cs.IR cs.CL cs.LG stat.ML | null | 1202.3752 | null | null | http://arxiv.org/pdf/1202.3752v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Multidimensional counting grids: Inferring word order from disordered
bags of words | Models of bags of words typically assume topic mixing so that the words in a
single bag come from a limited number of topics. We show here that many sets of
bags of words exhibit a very different pattern of variation from the patterns
that are efficiently captured by topic mixing. In many cases, from one bag of
words to the next, the words disappear and new ones appear as if the theme
slowly and smoothly shifted across documents (provided that the documents are
somehow ordered). Examples of latent structure that describe such ordering are
easily imagined. For example, the advancement of the date of the news stories
is reflected in a smooth change over the theme of the day as certain evolving
news stories fall out of favor and new events create new stories. Overlaps
among the stories of consecutive days can be modeled by using windows over
linearly arranged tight distributions over words. We show here that such a
strategy can be extended to multiple dimensions and to cases where the ordering of
data is not readily obvious. We demonstrate that this way of modeling
covariation in word occurrences outperforms standard topic models in
classification and prediction tasks in applications in biology, text modeling
and computer vision.
| [
"['Nebojsa Jojic' 'Alessandro Perina']",
"Nebojsa Jojic, Alessandro Perina"
] |
cs.LG stat.ML | null | 1202.3753 | null | null | http://arxiv.org/pdf/1202.3753v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Partial Order MCMC for Structure Discovery in Bayesian Networks | We present a new Markov chain Monte Carlo method for estimating posterior
probabilities of structural features in Bayesian networks. The method draws
samples from the posterior distribution of partial orders on the nodes; for
each sampled partial order, the conditional probabilities of interest are
computed exactly. We give both analytical and empirical results that suggest
the superiority of the new method compared to previous methods, which sample
either directed acyclic graphs or linear orders on the nodes.
| [
"['Teppo Niinimaki' 'Pekka Parviainen' 'Mikko Koivisto']",
"Teppo Niinimaki, Pekka Parviainen, Mikko Koivisto"
] |
cs.LG stat.ML | null | 1202.3757 | null | null | http://arxiv.org/pdf/1202.3757v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Identifiability of Causal Graphs using Functional Models | This work addresses the following question: Under what assumptions on the
data generating process can one infer the causal graph from the joint
distribution? The approach taken by conditional independence-based causal
discovery methods is based on two assumptions: the Markov condition and
faithfulness. It has been shown that under these assumptions the causal graph
can be identified up to Markov equivalence (some arrows remain undirected)
using methods like the PC algorithm. In this work we propose an alternative by
defining Identifiable Functional Model Classes (IFMOCs). As our main theorem we
prove that if the data generating process belongs to an IFMOC, one can identify
the complete causal graph. To the best of our knowledge this is the first
identifiability result of this kind that is not limited to linear functional
relationships. We discuss how the IFMOC assumption and the Markov and
faithfulness assumptions relate to each other and explain why we believe that
the IFMOC assumption can be tested more easily on given data. We further
provide a practical algorithm that recovers the causal graph from finitely many
data; experiments on simulated data support the theoretical findings.
| [
"['Jonas Peters' 'Joris Mooij' 'Dominik Janzing' 'Bernhard Schoelkopf']",
"Jonas Peters, Joris Mooij, Dominik Janzing, Bernhard Schoelkopf"
] |
cs.LG stat.ML | null | 1202.3758 | null | null | http://arxiv.org/pdf/1202.3758v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Nonparametric Divergence Estimation with Applications to Machine
Learning on Distributions | Low-dimensional embedding, manifold learning, clustering, classification, and
anomaly detection are among the most important problems in machine learning.
The existing methods usually consider the case when each instance has a fixed,
finite-dimensional feature representation. Here we consider a different
setting. We assume that each instance corresponds to a continuous probability
distribution. These distributions are unknown, but we are given some i.i.d.
samples from each distribution. Our goal is to estimate the distances between
these distributions and use these distances to perform low-dimensional
embedding, clustering/classification, or anomaly detection for the
distributions. We present estimation algorithms, describe how to apply them for
machine learning tasks on distributions, and show empirical results on
synthetic data, real-world images, and astronomical data sets.
| [
"['Barnabas Poczos' 'Liang Xiong' 'Jeff Schneider']",
"Barnabas Poczos, Liang Xiong, Jeff Schneider"
] |
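As a concrete companion to the divergence-estimation record above, this is a minimal sketch of one standard nonparametric estimator of the same flavour, a k-nearest-neighbour estimate of KL(p||q) from two sample sets. It is an illustration rather than the authors' estimator, and the Gaussian samples are assumed.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_kl_divergence(x, y, k=1):
    """k-NN estimate of KL(p || q) from samples x ~ p and y ~ q (both (n, d) arrays)."""
    n, d = x.shape
    m = y.shape[0]
    # Distance to the k-th neighbour within x (excluding the point itself) ...
    r = cKDTree(x).query(x, k=k + 1)[0][:, -1]
    # ... and to the k-th neighbour among the y samples.
    s = cKDTree(y).query(x, k=k)[0]
    s = s[:, -1] if k > 1 else s
    return d * np.mean(np.log(s / r)) + np.log(m / (n - 1.0))

rng = np.random.default_rng(0)
p_samples = rng.normal(0.0, 1.0, size=(2000, 1))
q_samples = rng.normal(1.0, 1.0, size=(2000, 1))
# The true KL between N(0,1) and N(1,1) is 0.5; the estimate should be close.
print("estimated KL:", knn_kl_divergence(p_samples, q_samples, k=5))
```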
stat.ME cs.LG stat.ML | null | 1202.3760 | null | null | http://arxiv.org/pdf/1202.3760v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Fast MCMC sampling for Markov jump processes and continuous time
Bayesian networks | Markov jump processes and continuous time Bayesian networks are important
classes of continuous time dynamical systems. In this paper, we tackle the
problem of inferring unobserved paths in these models by introducing a fast
auxiliary variable Gibbs sampler. Our approach is based on the idea of
uniformization, and sets up a Markov chain over paths by sampling a finite set
of virtual jump times and then running a standard hidden Markov model forward
filtering-backward sampling algorithm over states at the set of extant and
virtual jump times. We demonstrate significant computational benefits over a
state-of-the-art Gibbs sampler on a number of continuous time Bayesian
networks.
| [
"['Vinayak Rao' 'Yee Whye Teh']",
"Vinayak Rao, Yee Whye Teh"
] |
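The sampler in the record above is built on uniformization; the sketch below shows only that underlying idea on a toy 3-state Markov jump process: with B = I + Q/Omega and a Poisson number of (possibly virtual) jumps, the transient distribution matches the matrix exponential. The rate matrix is made up and no Gibbs sampling over paths is attempted.

```python
import numpy as np
from scipy.stats import poisson
from scipy.linalg import expm

# Toy 3-state Markov jump process: generator Q with rows summing to zero (assumed values).
Q = np.array([[-1.0,  0.7,  0.3],
              [ 0.4, -0.9,  0.5],
              [ 0.2,  0.6, -0.8]])

Omega = 2.0 * np.max(-np.diag(Q))   # uniformization rate, at least the largest exit rate
B = np.eye(3) + Q / Omega           # transition matrix of the subordinated discrete chain

# Transient distribution via uniformization:
# p(t) = sum_m Poisson(m; Omega*t) * p(0) B^m  (self-transitions of B act as "virtual" jumps).
p0, t = np.array([1.0, 0.0, 0.0]), 1.5
p_t, term = np.zeros(3), p0.copy()
for m in range(200):
    p_t += poisson.pmf(m, Omega * t) * term
    term = term @ B

print("p(t) by uniformization:", np.round(p_t, 4))
print("p(t) by expm(Q t):     ", np.round(p0 @ expm(Q * t), 4))
```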
cs.LG stat.ML | null | 1202.3761 | null | null | http://arxiv.org/pdf/1202.3761v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | New Probabilistic Bounds on Eigenvalues and Eigenvectors of Random
Kernel Matrices | Kernel methods are successful approaches for different machine learning
problems. This success is mainly rooted in using feature maps and kernel
matrices. Some methods rely on the eigenvalues/eigenvectors of the kernel
matrix, while for other methods the spectral information can be used to
estimate the excess risk. An important question remains on how close the sample
eigenvalues/eigenvectors are to the population values. In this paper, we
improve earlier results on concentration bounds for eigenvalues of general
kernel matrices. For distance and inner product kernel functions, e.g. radial
basis functions, we provide new concentration bounds, which are characterized
by the eigenvalues of the sample covariance matrix. Meanwhile, the obstacles
for sharper bounds are accounted for and partially addressed. As a case study,
we derive a concentration inequality for sample kernel target-alignment.
| [
"['Nima Reyhani' 'Hideitsu Hino' 'Ricardo Vigario']",
"Nima Reyhani, Hideitsu Hino, Ricardo Vigario"
] |
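The quantities addressed by the concentration bounds above are easy to compute empirically; below is a small numpy sketch (synthetic data assumed) of the sample kernel eigenvalues and the kernel-target alignment mentioned at the end of the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-labelled data (assumed).
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=200))

# RBF kernel matrix.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * X.shape[1]))

# Sample eigenvalues of the kernel matrix (the objects the concentration bounds address).
eigvals = np.linalg.eigvalsh(K)[::-1]
print("top 5 kernel eigenvalues:", np.round(eigvals[:5], 3))

# Kernel-target alignment: <K, yy^T>_F / (||K||_F * ||yy^T||_F).
Y = np.outer(y, y)
alignment = np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))
print("kernel-target alignment:", round(float(alignment), 4))
```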
cs.LG stat.ML | null | 1202.3763 | null | null | http://arxiv.org/pdf/1202.3763v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | An Efficient Algorithm for Computing Interventional Distributions in
Latent Variable Causal Models | Probabilistic inference in graphical models is the task of computing marginal
and conditional densities of interest from a factorized representation of a
joint probability distribution. Inference algorithms such as variable
elimination and belief propagation take advantage of constraints embedded in
this factorization to compute such densities efficiently. In this paper, we
propose an algorithm which computes interventional distributions in latent
variable causal models represented by acyclic directed mixed graphs(ADMGs). To
compute these distributions efficiently, we take advantage of a recursive
factorization which generalizes the usual Markov factorization for DAGs and the
more recent factorization for ADMGs. Our algorithm can be viewed as a
generalization of variable elimination to the mixed graph case. We show our
algorithm is exponential in the mixed graph generalization of treewidth.
| [
"Ilya Shpitser, Thomas S. Richardson, James M. Robins",
"['Ilya Shpitser' 'Thomas S. Richardson' 'James M. Robins']"
] |
stat.ME cs.LG stat.ML | null | 1202.3765 | null | null | http://arxiv.org/pdf/1202.3765v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Learning mixed graphical models from data with p larger than n | Structure learning of Gaussian graphical models is an extensively studied
problem in the classical multivariate setting where the sample size n is larger
than the number of random variables p, as well as in the more challenging
setting when p>>n. However, analogous approaches for learning the structure of
graphical models with mixed discrete and continuous variables when p>>n remain
largely unexplored. Here we describe a statistical learning procedure for this
problem based on limited-order correlations and assess its performance with
synthetic and real data.
| [
"['Inma Tur' 'Robert Castelo']",
"Inma Tur, Robert Castelo"
] |
cs.LG stat.ML | null | 1202.3766 | null | null | http://arxiv.org/pdf/1202.3766v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Robust learning Bayesian networks for prior belief | Recent reports have described that learning Bayesian networks is highly
sensitive to the chosen equivalent sample size (ESS) in the Bayesian Dirichlet
equivalence uniform (BDeu). This sensitivity often engenders some unstable or
undesirable results. This paper describes some asymptotic analyses of BDeu to
explain the reasons for the sensitivity and its effects. Furthermore, this
paper presents a proposal for a robust learning score for ESS by eliminating
the sensitive factors from the approximation of log-BDeu.
| [
"['Maomi Ueno']",
"Maomi Ueno"
] |
cs.LG stat.ML | null | 1202.3769 | null | null | http://arxiv.org/pdf/1202.3769v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Sparse matrix-variate Gaussian process blockmodels for network modeling | We face network data from various sources, such as protein interactions and
online social networks. A critical problem is to model network interactions and
identify latent groups of network nodes. This problem is challenging due to
many reasons. For example, the network nodes are interdependent instead of
independent of each other, and the data are known to be very noisy (e.g.,
missing edges). To address these challenges, we propose a new relational model
for network data, Sparse Matrix-variate Gaussian process Blockmodel (SMGB). Our
model generalizes popular bilinear generative models and captures nonlinear
network interactions using a matrix-variate Gaussian process with latent
membership variables. We also assign sparse prior distributions on the latent
membership variables to learn sparse group assignments for individual network
nodes. To estimate the latent variables efficiently from data, we develop an
efficient variational expectation maximization method. We compared our
approaches with several state-of-the-art network models on both synthetic and
real-world network datasets. Experimental results demonstrate that SMGB outperforms
the alternative approaches in terms of discovering latent classes or predicting
unknown interactions.
| [
"Feng Yan, Zenglin Xu, Yuan (Alan) Qi",
"['Feng Yan' 'Zenglin Xu' 'Yuan' 'Qi']"
] |
cs.LG stat.ML | null | 1202.3770 | null | null | http://arxiv.org/pdf/1202.3770v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Hierarchical Maximum Margin Learning for Multi-Class Classification | Due to myriads of classes, designing accurate and efficient classifiers
becomes very challenging for multi-class classification. Recent research has
shown that class structure learning can greatly facilitate multi-class
learning. In this paper, we propose a novel method to learn the class structure
for multi-class classification problems. The class structure is assumed to be a
binary hierarchical tree. To learn such a tree, we propose a maximum separating
margin method to determine the child nodes of any internal node. The proposed
method ensures that the two class groups represented by any two sibling nodes are
most separable. In the experiments, we evaluate the accuracy and efficiency of
the proposed method over other multi-class classification methods on real world
large-scale problems. The results show that the proposed method outperforms
benchmark methods in terms of accuracy for most datasets and performs
comparably with other class structure learning methods in terms of efficiency
for all datasets.
| [
"Jian-Bo Yang, Ivor W. Tsang",
"['Jian-Bo Yang' 'Ivor W. Tsang']"
] |
cs.LG stat.ML | null | 1202.3771 | null | null | http://arxiv.org/pdf/1202.3771v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Tightening MRF Relaxations with Planar Subproblems | We describe a new technique for computing lower-bounds on the minimum energy
configuration of a planar Markov Random Field (MRF). Our method successively
adds large numbers of constraints and enforces consistency over binary
projections of the original problem state space. These constraints are
represented in terms of subproblems in a dual-decomposition framework that is
optimized using subgradient techniques. The complete set of constraints we
consider enforces cycle consistency over the original graph. In practice we
find that the method converges quickly on most problems with the addition of a
few subproblems and outperforms existing methods for some interesting classes
of hard potentials.
| [
"['Julian Yarkony' 'Ragib Morshed' 'Alexander T. Ihler'\n 'Charless C. Fowlkes']",
"Julian Yarkony, Ragib Morshed, Alexander T. Ihler, Charless C. Fowlkes"
] |
cs.LG cs.NA stat.ML | null | 1202.3772 | null | null | http://arxiv.org/pdf/1202.3772v2 | 2012-10-09T21:00:59Z | 2012-02-14T16:41:17Z | Rank/Norm Regularization with Closed-Form Solutions: Application to
Subspace Clustering | When data is sampled from an unknown subspace, principal component analysis
(PCA) provides an effective way to estimate the subspace and hence reduce the
dimension of the data. At the heart of PCA is the Eckart-Young-Mirsky theorem,
which characterizes the best rank k approximation of a matrix. In this paper,
we prove a generalization of the Eckart-Young-Mirsky theorem under all
unitarily invariant norms. Using this result, we obtain closed-form solutions
for a set of rank/norm regularized problems, and derive closed-form solutions
for a general class of subspace clustering problems (where data is modelled by
unions of unknown subspaces). From these results we obtain new theoretical
insights and promising experimental results.
| [
"['Yao-Liang Yu' 'Dale Schuurmans']",
"Yao-Liang Yu, Dale Schuurmans"
] |
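A minimal numpy illustration of the classical Eckart-Young-Mirsky fact that the paper above generalizes: the truncated SVD gives the best rank-k approximation under unitarily invariant norms. The matrix is random, and the subspace-clustering extensions are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 6))
k = 2

# Truncated SVD: the Eckart-Young-Mirsky optimal rank-k approximation
# under any unitarily invariant norm (Frobenius, spectral, ...).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] * s[:k] @ Vt[:k]

print("rank of A_k:", np.linalg.matrix_rank(A_k))
print("Frobenius error:", np.linalg.norm(A - A_k))
# The error equals the energy of the discarded singular values: sqrt(sum_{i>k} s_i^2).
print("sqrt of tail energy:", np.sqrt((s[k:] ** 2).sum()))
```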
stat.ML cs.LG | null | 1202.3774 | null | null | http://arxiv.org/pdf/1202.3774v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Risk Bounds for Infinitely Divisible Distribution | In this paper, we study the risk bounds for samples independently drawn from
an infinitely divisible (ID) distribution. In particular, based on a martingale
method, we develop two deviation inequalities for a sequence of random
variables of an ID distribution with zero Gaussian component. By applying the
deviation inequalities, we obtain the risk bounds based on the covering number
for the ID distribution. Finally, we analyze the asymptotic convergence of the
risk bound derived from one of the two deviation inequalities and show that the
convergence rate of the bound is faster than the result for the generic i.i.d.
empirical process (Mendelson, 2003).
| [
"Chao Zhang, Dacheng Tao",
"['Chao Zhang' 'Dacheng Tao']"
] |
cs.LG stat.ML | null | 1202.3775 | null | null | http://arxiv.org/pdf/1202.3775v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Kernel-based Conditional Independence Test and Application in Causal
Discovery | Conditional independence testing is an important problem, especially in
Bayesian network learning and causal discovery. Due to the curse of
dimensionality, testing for conditional independence of continuous variables is
particularly challenging. We propose a Kernel-based Conditional Independence
test (KCI-test), by constructing an appropriate test statistic and deriving its
asymptotic distribution under the null hypothesis of conditional independence.
The proposed method is computationally efficient and easy to implement.
Experimental results show that it outperforms other methods, especially when
the conditioning set is large or the sample size is not very large, in which
case other methods encounter difficulties.
| [
"['Kun Zhang' 'Jonas Peters' 'Dominik Janzing' 'Bernhard Schoelkopf']",
"Kun Zhang, Jonas Peters, Dominik Janzing, Bernhard Schoelkopf"
] |
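Conditional independence testing is the contribution of the paper above; purely as a simpler point of comparison, the sketch below computes the unconditional kernel dependence statistic (a biased empirical HSIC) on synthetic data. It is not the KCI-test, and the kernel bandwidths are assumed.

```python
import numpy as np

def rbf_gram(x, sigma):
    sq = (x[:, None] - x[None, :]) ** 2
    return np.exp(-sq / (2 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC between 1-d samples x and y (larger => more dependent)."""
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    K, L = rbf_gram(x, sigma), rbf_gram(y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=300)
print("dependent pair:  ", hsic(x, x + 0.3 * rng.normal(size=300)))
print("independent pair:", hsic(x, rng.normal(size=300)))
```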
cs.LG stat.ML | null | 1202.3776 | null | null | http://arxiv.org/pdf/1202.3776v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Smoothing Multivariate Performance Measures | A Support Vector Method for multivariate performance measures was recently
introduced by Joachims (2005). The underlying optimization problem is currently
solved using cutting plane methods such as SVM-Perf and BMRM. One can show that
these algorithms converge to an eta accurate solution in O(1/(lambda*e))
iterations, where lambda is the trade-off parameter between the regularizer and
the loss function. We present a smoothing strategy for multivariate performance
scores, in particular precision/recall break-even point and ROCArea. When
combined with Nesterov's accelerated gradient algorithm our smoothing strategy
yields an optimization algorithm which converges to an eta accurate solution in
O(min{1/e,1/sqrt(lambda*e)}) iterations. Furthermore, the cost per iteration of
our scheme is the same as that of SVM-Perf and BMRM. Empirical evaluation on a
number of publicly available datasets shows that our method converges
significantly faster than cutting plane methods without sacrificing
generalization ability.
| [
"['Xinhua Zhang' 'Ankan Saha' 'S. V. N. Vishwanatan']",
"Xinhua Zhang, Ankan Saha, S. V.N. Vishwanatan"
] |
cs.LG stat.ML | null | 1202.3778 | null | null | http://arxiv.org/pdf/1202.3778v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Sparse Topical Coding | We present sparse topical coding (STC), a non-probabilistic formulation of
topic models for discovering latent representations of large collections of
data. Unlike probabilistic topic models, STC relaxes the normalization
constraint of admixture proportions and the constraint of defining a normalized
likelihood function. Such relaxations make STC amenable to: 1) directly control
the sparsity of inferred representations by using sparsity-inducing
regularizers; 2) be seamlessly integrated with a convex error function (e.g.,
SVM hinge loss) for supervised learning; and 3) be efficiently learned with a
simply structured coordinate descent algorithm. Our results demonstrate the
advantages of STC and supervised MedSTC on identifying topical meanings of
words and improving classification accuracy and time efficiency.
| [
"['Jun Zhu' 'Eric P. Xing']",
"Jun Zhu, Eric P. Xing"
] |
cs.LG stat.ML | null | 1202.3779 | null | null | http://arxiv.org/pdf/1202.3779v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Testing whether linear equations are causal: A free probability theory
approach | We propose a method that infers whether linear relations between two
high-dimensional variables X and Y are due to a causal influence from X to Y or
from Y to X. The earlier proposed so-called Trace Method is extended to the
regime where the dimension of the observed variables exceeds the sample size.
Based on previous work, we postulate conditions that characterize a causal
relation between X and Y. Moreover, we describe a statistical test and argue
that both causal directions are typically rejected if there is a common cause.
A full theoretical analysis is presented for the deterministic case but our
approach seems to be valid for the noisy case, too, for which we additionally
present an approach based on a sparsity constraint. The discussed method yields
promising results for both simulated and real world data.
| [
"Jakob Zscheischler, Dominik Janzing, Kun Zhang",
"['Jakob Zscheischler' 'Dominik Janzing' 'Kun Zhang']"
] |
cs.LG cs.AI stat.ML | null | 1202.3782 | null | null | http://arxiv.org/pdf/1202.3782v1 | 2012-02-14T16:41:17Z | 2012-02-14T16:41:17Z | Graphical Models for Bandit Problems | We introduce a rich class of graphical models for multi-armed bandit problems
that permit both the state or context space and the action space to be very
large, yet succinctly specify the payoffs for any context-action pair. Our main
result is an algorithm for such models whose regret is bounded by the number of
parameters and whose running time depends only on the treewidth of the graph
substructure induced by the action space.
| [
"Kareem Amin, Michael Kearns, Umar Syed",
"['Kareem Amin' 'Michael Kearns' 'Umar Syed']"
] |
cs.LG | null | 1202.3890 | null | null | http://arxiv.org/pdf/1202.3890v1 | 2012-02-17T11:59:55Z | 2012-02-17T11:59:55Z | PAC Bounds for Discounted MDPs | We study upper and lower bounds on the sample-complexity of learning
near-optimal behaviour in finite-state discounted Markov Decision Processes
(MDPs). For the upper bound we make the assumption that each action leads to at
most two possible next-states and prove a new bound for a UCRL-style algorithm
on the number of time-steps when it is not Probably Approximately Correct
(PAC). The new lower bound strengthens previous work by being both more general
(it applies to all policies) and tighter. The upper and lower bounds match up
to logarithmic factors.
| [
"Tor Lattimore and Marcus Hutter",
"['Tor Lattimore' 'Marcus Hutter']"
] |
cs.CV cs.LG | null | 1202.4002 | null | null | http://arxiv.org/pdf/1202.4002v1 | 2012-02-17T20:07:25Z | 2012-02-17T20:07:25Z | Generalized Principal Component Analysis (GPCA) | This paper presents an algebro-geometric solution to the problem of
segmenting an unknown number of subspaces of unknown and varying dimensions
from sample data points. We represent the subspaces with a set of homogeneous
polynomials whose degree is the number of subspaces and whose derivatives at a
data point give normal vectors to the subspace passing through the point. When
the number of subspaces is known, we show that these polynomials can be
estimated linearly from data; hence, subspace segmentation is reduced to
classifying one point per subspace. We select these points optimally from the
data set by minimizing a certain distance function, thus dealing automatically
with moderate noise in the data. A basis for the complement of each subspace is
then recovered by applying standard PCA to the collection of derivatives
(normal vectors). Extensions of GPCA that deal with data in a high-dimensional
space and with an unknown number of subspaces are also presented. Our
experiments on low-dimensional data show that GPCA outperforms existing
algebraic algorithms based on polynomial factorization and provides a good
initialization to iterative techniques such as K-subspaces and Expectation
Maximization. We also present applications of GPCA to computer vision problems
such as face clustering, temporal video segmentation, and 3D motion
segmentation from point correspondences in multiple affine views.
| [
"Rene Vidal, Yi Ma, Shankar Sastry",
"['Rene Vidal' 'Yi Ma' 'Shankar Sastry']"
] |
cs.LG stat.ML | null | 1202.4050 | null | null | http://arxiv.org/pdf/1202.4050v2 | 2012-10-08T00:07:13Z | 2012-02-18T02:28:49Z | On the Sample Complexity of Predictive Sparse Coding | The goal of predictive sparse coding is to learn a representation of examples
as sparse linear combinations of elements from a dictionary, such that a
learned hypothesis linear in the new representation performs well on a
predictive task. Predictive sparse coding algorithms recently have demonstrated
impressive performance on a variety of supervised tasks, but their
generalization properties have not been studied. We establish the first
generalization error bounds for predictive sparse coding, covering two
settings: 1) the overcomplete setting, where the number of features k exceeds
the original dimensionality d; and 2) the high or infinite-dimensional setting,
where only dimension-free bounds are useful. Both learning bounds intimately
depend on stability properties of the learned sparse encoder, as measured on
the training sample. Consequently, we first present a fundamental stability
result for the LASSO, a result characterizing the stability of the sparse codes
with respect to perturbations to the dictionary. In the overcomplete setting,
we present an estimation error bound that decays as \tilde{O}(sqrt(d k/m)) with
respect to d and k. In the high or infinite-dimensional setting, we show a
dimension-free bound that is \tilde{O}(sqrt(k^2 s / m)) with respect to k and
s, where s is an upper bound on the number of non-zeros in the sparse code for
any training data point.
| [
"Nishant A. Mehta and Alexander G. Gray",
"['Nishant A. Mehta' 'Alexander G. Gray']"
] |
cs.LG cs.DS | null | 1202.4473 | null | null | http://arxiv.org/pdf/1202.4473v1 | 2012-02-20T21:29:28Z | 2012-02-20T21:29:28Z | The best of both worlds: stochastic and adversarial bandits | We present a new bandit algorithm, SAO (Stochastic and Adversarial Optimal),
whose regret is, essentially, optimal both for adversarial rewards and for
stochastic rewards. Specifically, SAO combines the square-root worst-case
regret of Exp3 (Auer et al., SIAM J. on Computing 2002) and the
(poly)logarithmic regret of UCB1 (Auer et al., Machine Learning 2002) for
stochastic rewards. Adversarial rewards and stochastic rewards are the two main
settings in the literature on (non-Bayesian) multi-armed bandits. Prior work on
multi-armed bandits treats them separately, and does not attempt to jointly
optimize for both. Our result falls into a general theme of achieving good
worst-case performance while also taking advantage of "nice" problem instances,
an important issue in the design of algorithms with partially known inputs.
| [
"Sebastien Bubeck and Aleksandrs Slivkins",
"['Sebastien Bubeck' 'Aleksandrs Slivkins']"
] |
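The record above combines the guarantees of Exp3 and UCB1; for background, here is a short sketch of the UCB1 index policy on a synthetic Bernoulli bandit (arm means assumed). SAO itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.3, 0.5, 0.7])      # assumed Bernoulli arm means
T, K = 5000, len(means)

counts = np.zeros(K)
sums = np.zeros(K)

# Pull each arm once, then follow the UCB1 index.
for t in range(T):
    if t < K:
        a = t
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t + 1) / counts)
        a = int(np.argmax(ucb))
    reward = float(rng.random() < means[a])
    counts[a] += 1
    sums[a] += reward

regret = T * means.max() - sums.sum()
print("pulls per arm:", counts.astype(int))
print("cumulative regret:", round(regret, 1))
```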
cs.GT cs.AI cs.LG stat.ML | null | 1202.4478 | null | null | http://arxiv.org/pdf/1202.4478v1 | 2012-02-20T21:48:09Z | 2012-02-20T21:48:09Z | (weak) Calibration is Computationally Hard | We show that the existence of a computationally efficient calibration
algorithm, with a low weak calibration rate, would imply the existence of an
efficient algorithm for computing approximate Nash equilibria - thus implying
the unlikely conclusion that every problem in PPAD is solvable in polynomial
time.
| [
"['Elad Hazan' 'Sham Kakade']",
"Elad Hazan, Sham Kakade"
] |
q-bio.NC cs.LG nlin.AO | null | 1202.4482 | null | null | http://arxiv.org/pdf/1202.4482v2 | 2013-02-09T21:34:51Z | 2012-02-20T22:02:16Z | Metabolic cost as an organizing principle for cooperative learning | This paper investigates how neurons can use metabolic cost to facilitate
learning at a population level. Although decision-making by individual neurons
has been extensively studied, questions regarding how neurons should behave to
cooperate effectively remain largely unaddressed. Under assumptions that
capture a few basic features of cortical neurons, we show that constraining
reward maximization by metabolic cost aligns the information content of actions
with their expected reward. Thus, metabolic cost provides a mechanism whereby
neurons encode expected reward into their outputs. Further, aside from reducing
energy expenditures, imposing a tight metabolic constraint also increases the
accuracy of empirical estimates of rewards, increasing the robustness of
distributed learning. Finally, we present two implementations of metabolically
constrained learning that confirm our theoretical finding. These results
suggest that metabolic cost may be an organizing principle underlying the
neural code, and may also provide a useful guide to the design and analysis of
other cooperating populations.
| [
"David Balduzzi, Pedro A Ortega, Michel Besserve",
"['David Balduzzi' 'Pedro A Ortega' 'Michel Besserve']"
] |
cs.SY cs.LG | null | 1202.5298 | null | null | http://arxiv.org/pdf/1202.5298v2 | 2012-10-30T16:29:38Z | 2012-02-23T20:53:18Z | Min Max Generalization for Two-stage Deterministic Batch Mode
Reinforcement Learning: Relaxation Schemes | We study the minmax optimization problem introduced in [22] for computing
policies for batch mode reinforcement learning in a deterministic setting.
First, we show that this problem is NP-hard. In the two-stage case, we provide
two relaxation schemes. The first relaxation scheme works by dropping some
constraints in order to obtain a problem that is solvable in polynomial time.
The second relaxation scheme, based on a Lagrangian relaxation where all
constraints are dualized, leads to a conic quadratic programming problem. We
also theoretically prove and empirically illustrate that both relaxation
schemes provide better results than those given in [22].
| [
"['Raphael Fonteneau' 'Damien Ernst' 'Bernard Boigelot' 'Quentin Louveaux']",
"Raphael Fonteneau, Damien Ernst, Bernard Boigelot and Quentin Louveaux"
] |
stat.ML cs.LG | null | 1202.5514 | null | null | http://arxiv.org/pdf/1202.5514v2 | 2015-02-24T21:17:54Z | 2012-02-24T17:55:33Z | Classification approach based on association rules mining for unbalanced
data | This paper deals with the binary classification task when the target class
has the lower probability of occurrence. In such situation, it is not possible
to build a powerful classifier by using standard methods such as logistic
regression, classification tree, discriminant analysis, etc. To overcome this
shortcoming of these methods, which yield classifiers with low sensitivity, we
tackle the classification problem here through an approach based on the
association rules learning. This approach has the advantage of allowing the
identification of the patterns that are well correlated with the target class.
Association rules learning is a well known method in the area of data-mining.
It is used when dealing with large databases for unsupervised discovery of local
patterns that expresses hidden relationships between input variables. In
considering association rules from a supervised learning point of view, a
relevant set of weak classifiers is obtained from which one derives a
classifier that performs well.
| [
"Cheikh Ndour (1,2,3), Aliou Diop (1), Simplice Dossou-Gb\\'et\\'e (2)\n ((1) Universit\\'e Gaston Berger, Saint-Louis, S\\'en\\'egal (2) Universit\\'e de\n Pau et des Pays de l 'Adour, Pau, France (3) Universit\\'e de Bordeaux,\n Bordeaux, France)",
"['Cheikh Ndour' 'Aliou Diop' 'Simplice Dossou-Gbété']"
] |
cs.AI cs.LG | null | 1202.5597 | null | null | http://arxiv.org/pdf/1202.5597v3 | 2012-05-01T03:08:22Z | 2012-02-25T02:00:51Z | Hybrid Batch Bayesian Optimization | Bayesian Optimization aims at optimizing an unknown non-convex/concave
function that is costly to evaluate. We are interested in application scenarios
where concurrent function evaluations are possible. Under such a setting, BO
could choose to either sequentially evaluate the function, one input at a time
and wait for the output of the function before making the next selection, or
evaluate the function at a batch of multiple inputs at once. These two
different settings are commonly referred to as the sequential and batch
settings of Bayesian Optimization. In general, the sequential setting leads to
better optimization performance as each function evaluation is selected with
more information, whereas the batch setting has an advantage in terms of the
total experimental time (the number of iterations). In this work, our goal is
to combine the strength of both settings. Specifically, we systematically
analyze Bayesian optimization using Gaussian process as the posterior estimator
and provide a hybrid algorithm that, based on the current state, dynamically
switches between a sequential policy and a batch policy with variable batch
sizes. We provide theoretical justification for our algorithm and present
experimental results on eight benchmark BO problems. The results show that our
method achieves substantial speedup (up to 78%) compared to a pure sequential
policy, without suffering any significant performance loss.
| [
"Javad Azimi, Ali Jalali and Xiaoli Fern",
"['Javad Azimi' 'Ali Jalali' 'Xiaoli Fern']"
] |
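As context for the hybrid algorithm above, a hedged sketch of a purely sequential Bayesian optimization loop: a Gaussian process surrogate with an expected-improvement acquisition, using scikit-learn. The objective, kernel, and iteration budget are illustrative, and no batch policy or switching rule from the paper is implemented.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):
    """Expensive black-box objective, assumed for illustration only."""
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(4, 1))           # initial design
y = f(X).ravel()
grid = np.linspace(-2, 2, 400).reshape(-1, 1)

for it in range(10):                          # purely sequential policy
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)].reshape(1, 1)          # next point to evaluate
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next).ravel())

print("best observed value:", round(float(y.max()), 4),
      "at x =", round(float(X[np.argmax(y), 0]), 3))
```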
cs.LG stat.ML | null | 1202.5598 | null | null | http://arxiv.org/pdf/1202.5598v4 | 2012-04-13T06:52:44Z | 2012-02-25T02:10:20Z | Clustering using Max-norm Constrained Optimization | We suggest using the max-norm as a convex surrogate constraint for
clustering. We show how this yields a better exact cluster recovery guarantee
than previously suggested nuclear-norm relaxation, and study the effectiveness
of our method, and other related convex relaxations, compared to other
clustering approaches.
| [
"Ali Jalali and Nathan Srebro",
"['Ali Jalali' 'Nathan Srebro']"
] |
cs.LG stat.ML | null | 1202.5695 | null | null | http://arxiv.org/pdf/1202.5695v2 | 2012-07-05T12:15:40Z | 2012-02-25T20:23:37Z | Training Restricted Boltzmann Machines on Word Observations | The restricted Boltzmann machine (RBM) is a flexible tool for modeling
complex data, however there have been significant computational difficulties in
using RBMs to model high-dimensional multinomial observations. In natural
language processing applications, words are naturally modeled by K-ary discrete
distributions, where K is determined by the vocabulary size and can easily be
in the hundreds of thousands. The conventional approach to training RBMs on
word observations is limited because it requires sampling the states of K-way
softmax visible units during block Gibbs updates, an operation that takes time
linear in K. In this work, we address this issue by employing a more general
class of Markov chain Monte Carlo operators on the visible units, yielding
updates with computational complexity independent of K. We demonstrate the
success of our approach by training RBMs on hundreds of millions of word
n-grams using larger vocabularies than previously feasible and using the
learned features to improve performance on chunking and sentiment
classification tasks, achieving state-of-the-art results on the latter.
| [
"['George E. Dahl' 'Ryan P. Adams' 'Hugo Larochelle']",
"George E. Dahl, Ryan P. Adams and Hugo Larochelle"
] |
stat.ML cs.LG | null | 1202.6001 | null | null | http://arxiv.org/pdf/1202.6001v2 | 2012-02-28T02:34:47Z | 2012-02-27T17:17:16Z | Efficiently Sampling Multiplicative Attribute Graphs Using a
Ball-Dropping Process | We introduce a novel and efficient sampling algorithm for the Multiplicative
Attribute Graph Model (MAGM - Kim and Leskovec (2010)). Our algorithm is
\emph{strictly} more efficient than the algorithm proposed by Yun and
Vishwanathan (2012), in the sense that our method extends the \emph{best} time
complexity guarantee of their algorithm to a larger fraction of parameter
space. Both in theory and in empirical evaluation on sparse graphs, our new
algorithm outperforms the previous one. To design our algorithm, we first
define a stochastic \emph{ball-dropping process} (BDP). Although a special case
of this process was introduced as an efficient approximate sampling algorithm
for the Kronecker Product Graph Model (KPGM - Leskovec et al. (2010)), neither
\emph{why} such an approximation works nor \emph{what} is the actual
distribution this process is sampling from has been addressed so far to the
best of our knowledge. Our rigorous treatment of the BDP enables us to clarify
the rationale behind a BDP approximation of KPGM, and design an efficient
sampling algorithm for the MAGM.
| [
"Hyokun Yun and S. V. N. Vishwanathan",
"['Hyokun Yun' 'S. V. N. Vishwanathan']"
] |
stat.ML cs.LG | null | 1202.6078 | null | null | http://arxiv.org/pdf/1202.6078v1 | 2012-02-27T21:33:32Z | 2012-02-27T21:33:32Z | Protocols for Learning Classifiers on Distributed Data | We consider the problem of learning classifiers for labeled data that has
been distributed across several nodes. Our goal is to find a single classifier,
with small approximation error, across all datasets while minimizing the
communication between nodes. This setting models real-world communication
bottlenecks in the processing of massive distributed datasets. We present
several very general sampling-based solutions as well as some two-way protocols
which have a provable exponential speed-up over any one-way protocol. We focus
on core problems for noiseless data distributed across two or more nodes. The
techniques we introduce are reminiscent of active learning, but rather than
actively probing labels, nodes actively communicate with each other, each node
simultaneously learning the important data from another node.
| [
"['Hal Daume III' 'Jeff M. Phillips' 'Avishek Saha'\n 'Suresh Venkatasubramanian']",
"Hal Daume III, Jeff M. Phillips, Avishek Saha, Suresh\n Venkatasubramanian"
] |
physics.data-an cs.LG | null | 1202.6103 | null | null | http://arxiv.org/pdf/1202.6103v2 | 2012-07-17T13:53:24Z | 2012-02-28T01:53:01Z | Nonlinear Laplacian spectral analysis: Capturing intermittent and
low-frequency spatiotemporal patterns in high-dimensional data | We present a technique for spatiotemporal data analysis called nonlinear
Laplacian spectral analysis (NLSA), which generalizes singular spectrum
analysis (SSA) to take into account the nonlinear manifold structure of complex
data sets. The key principle underlying NLSA is that the functions used to
represent temporal patterns should exhibit a degree of smoothness on the
nonlinear data manifold M, a constraint absent from classical SSA. NLSA
enforces such a notion of smoothness by requiring that temporal patterns belong
in low-dimensional Hilbert spaces V_l spanned by the leading l Laplace-Beltrami
eigenfunctions on M. These eigenfunctions can be evaluated efficiently in high
ambient-space dimensions using sparse graph-theoretic algorithms. Moreover,
they provide orthonormal bases to expand a family of linear maps, whose
singular value decomposition leads to sets of spatiotemporal patterns at
progressively finer resolution on the data manifold. The Riemannian measure of
M and an adaptive graph kernel width enhance the capability of NLSA to detect
important nonlinear processes, including intermittency and rare events. The
minimum dimension of V_l required to capture these features while avoiding
overfitting is estimated here using spectral entropy criteria.
| [
"Dimitrios Giannakis and Andrew J. Majda",
"['Dimitrios Giannakis' 'Andrew J. Majda']"
] |
cs.GT cs.AI cs.LG | null | 1202.6157 | null | null | http://arxiv.org/pdf/1202.6157v1 | 2012-02-28T09:51:29Z | 2012-02-28T09:51:29Z | Distributed Power Allocation with SINR Constraints Using Trial and Error
Learning | In this paper, we address the problem of global transmit power minimization
in a self-configuring network where radio devices are required to operate at a
minimum signal to interference plus noise ratio (SINR) level. We model the
network as a parallel Gaussian interference channel and we introduce a fully
decentralized algorithm (based on trial and error) able to statistically
achieve a congiguration where the performance demands are met. Contrary to
existing solutions, our algorithm requires only local information and can learn
stable and efficient working points by using only one bit feedback. We model
the network under two different game theoretical frameworks: normal form and
satisfaction form. We show that the converging points correspond to equilibrium
points, namely Nash and satisfaction equilibrium. Similarly, we provide
sufficient conditions for the algorithm to converge in both formulations.
Moreover, we provide analytical results to estimate the algorithm's
performance, as a function of the network parameters. Finally, numerical
results are provided to validate our theoretical conclusions. Keywords:
Learning, power control, trial and error, Nash equilibrium, spectrum sharing.
| [
"['Luca Rose' 'Samir M. Perlaza' 'Mérouane Debbah'\n 'Christophe J. Le Martret']",
"Luca Rose, Samir M. Perlaza, M\\'erouane Debbah, Christophe J. Le\n Martret"
] |
cs.LG | null | 1202.6221 | null | null | http://arxiv.org/pdf/1202.6221v2 | 2012-05-24T19:27:24Z | 2012-02-28T14:03:11Z | Confusion Matrix Stability Bounds for Multiclass Classification | In this paper, we provide new theoretical results on the generalization
properties of learning algorithms for multiclass classification problems. The
originality of our work is that we propose to use the confusion matrix of a
classifier as a measure of its quality; our contribution is in the line of work
which attempts to set up and study the statistical properties of new evaluation
measures such as, e.g. ROC curves. In the confusion-based learning framework we
propose, we claim that a targeted objective is to minimize the size of the
confusion matrix C, measured through its operator norm ||C||. We derive
generalization bounds on the (size of the) confusion matrix in an extended
framework of uniform stability, adapted to the case of matrix valued loss.
Pivotal to our study is a very recent matrix concentration inequality that
generalizes McDiarmid's inequality. As an illustration of the relevance of our
theoretical results, we show how two SVM learning procedures can be proved to
be confusion-friendly. To the best of our knowledge, the present paper is the
first that focuses on the confusion matrix from a theoretical point of view.
| [
"['Pierre Machart' 'Liva Ralaivola']",
"Pierre Machart (LIF, LSIS), Liva Ralaivola (LIF)"
] |
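The operator norm of the confusion matrix, the quantity bounded in the paper above, is easy to inspect in practice; the sketch below builds a row-normalized confusion matrix for a toy multiclass SVM (scikit-learn toy data, not the paper's procedure), zeroes the diagonal, and reports ||C||.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

clf = LinearSVC(max_iter=5000).fit(Xtr, ytr)

# Row-normalized confusion matrix; after zeroing the diagonal, the remaining
# mass in each row is that class's error rate, and ||C|| summarizes it.
C = confusion_matrix(yte, clf.predict(Xte), normalize="true")
np.fill_diagonal(C, 0.0)

print("||C||_op =", round(float(np.linalg.norm(C, 2)), 4))
```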
stat.ML cs.LG | null | 1202.6228 | null | null | http://arxiv.org/pdf/1202.6228v6 | 2013-10-22T08:25:52Z | 2012-02-28T14:13:01Z | PAC-Bayesian Generalization Bound on Confusion Matrix for Multi-Class
Classification | In this work, we propose a PAC-Bayes bound for the generalization risk of the
Gibbs classifier in the multi-class classification framework. The novelty of
our work is the critical use of the confusion matrix of a classifier as an
error measure; this puts our contribution in the line of work aiming at dealing
with performance measures that are richer than a mere scalar criterion such as the
misclassification rate. Thanks to very recent and beautiful results on matrix
concentration inequalities, we derive two bounds showing that the true
confusion risk of the Gibbs classifier is upper-bounded by its empirical risk
plus a term depending on the number of training examples in each class. To the
best of our knowledge, this is the first PAC-Bayes bounds based on confusion
matrices.
| [
"Emilie Morvant (LIF), Sokol Ko\\c{c}o (LIF), Liva Ralaivola (LIF)",
"['Emilie Morvant' 'Sokol Koço' 'Liva Ralaivola']"
] |
math.OC cs.LG | null | 1202.6258 | null | null | http://arxiv.org/pdf/1202.6258v4 | 2013-03-11T19:54:48Z | 2012-02-28T15:42:51Z | A Stochastic Gradient Method with an Exponential Convergence Rate for
Finite Training Sets | We propose a new stochastic gradient method for optimizing the sum of a
finite set of smooth functions, where the sum is strongly convex. While
standard stochastic gradient methods converge at sublinear rates for this
problem, the proposed method incorporates a memory of previous gradient values
in order to achieve a linear convergence rate. In a machine learning context,
numerical experiments indicate that the new algorithm can dramatically
outperform standard algorithms, both in terms of optimizing the training error
and reducing the test error quickly.
| [
"['Nicolas Le Roux' 'Mark Schmidt' 'Francis Bach']",
"Nicolas Le Roux (INRIA Paris - Rocquencourt, LIENS), Mark Schmidt\n (INRIA Paris - Rocquencourt, LIENS), Francis Bach (INRIA Paris -\n Rocquencourt, LIENS)"
] |
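A minimal sketch of the mechanism described in the abstract above, applied to L2-regularized logistic regression on synthetic data: keep a memory of the most recent gradient of every training example and step along their running average. The step size and problem constants are illustrative rather than the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float) * 2 - 1   # labels in {-1, +1}
lam = 1.0 / n                                                            # L2 regularization

def grad_i(w, i):
    """Gradient of the i-th regularized logistic loss term."""
    margin = y[i] * (X[i] @ w)
    return -y[i] * X[i] / (1.0 + np.exp(margin)) + lam * w

w = np.zeros(d)
grad_memory = np.zeros((n, d))      # last-seen gradient of each example
grad_sum = grad_memory.sum(axis=0)
step = 0.1                          # illustrative constant step size

for it in range(20 * n):
    i = rng.integers(n)
    g = grad_i(w, i)
    grad_sum += g - grad_memory[i]  # refresh the running sum with the new gradient
    grad_memory[i] = g
    w -= step * grad_sum / n        # step along the average of the stored gradients

loss = np.mean(np.log1p(np.exp(-y * (X @ w)))) + 0.5 * lam * w @ w
print("training loss after SAG-style updates:", round(float(loss), 4))
```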
stat.ML cs.LG | null | 1202.6504 | null | null | http://arxiv.org/pdf/1202.6504v2 | 2013-01-12T12:43:09Z | 2012-02-29T10:09:26Z | Learning from Distributions via Support Measure Machines | This paper presents a kernel-based discriminative learning framework on
probability measures. Rather than relying on large collections of vectorial
training examples, our framework learns using a collection of probability
distributions that have been constructed to meaningfully represent training
data. By representing these probability distributions as mean embeddings in the
reproducing kernel Hilbert space (RKHS), we are able to apply many standard
kernel-based learning techniques in straightforward fashion. To accomplish
this, we construct a generalization of the support vector machine (SVM) called
a support measure machine (SMM). Our analyses of SMMs provides several insights
into their relationship to traditional SVMs. Based on such insights, we propose
a flexible SVM (Flex-SVM) that places different kernel functions on each
training example. Experimental results on both synthetic and real-world data
demonstrate the effectiveness of our proposed framework.
| [
"Krikamol Muandet, Kenji Fukumizu, Francesco Dinuzzo, Bernhard\n Sch\\\"olkopf",
"['Krikamol Muandet' 'Kenji Fukumizu' 'Francesco Dinuzzo'\n 'Bernhard Schölkopf']"
] |
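A small sketch of the ingredient the support measure machine builds on: the RKHS inner product between empirical mean embeddings of two sample sets, which reduces to an average of pairwise kernel values. The samples and bandwidth are assumed, and no SVM training is shown.

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mean_embedding_kernel(Xs, Ys, gamma=0.5):
    """<mu_P, mu_Q> in the RKHS, estimated from samples Xs ~ P and Ys ~ Q."""
    return rbf(Xs, Ys, gamma).mean()

rng = np.random.default_rng(0)
P = rng.normal(0.0, 1.0, size=(300, 2))
Q = rng.normal(0.0, 1.0, size=(300, 2))
R = rng.normal(3.0, 1.0, size=(300, 2))

# Distributions that are close have mean embeddings with a large inner product.
print("k(P, Q):", round(float(mean_embedding_kernel(P, Q)), 4))
print("k(P, R):", round(float(mean_embedding_kernel(P, R)), 4))

# Squared MMD between P and R, a distance induced by the same embeddings.
mmd2 = (mean_embedding_kernel(P, P) + mean_embedding_kernel(R, R)
        - 2 * mean_embedding_kernel(P, R))
print("MMD^2(P, R):", round(float(mmd2), 4))
```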
cs.MS cs.LG stat.ML | null | 1202.6548 | null | null | http://arxiv.org/pdf/1202.6548v2 | 2012-03-01T13:31:54Z | 2012-02-29T13:49:10Z | mlpy: Machine Learning Python | mlpy is a Python Open Source Machine Learning library built on top of
NumPy/SciPy and the GNU Scientific Libraries. mlpy provides a wide range of
state-of-the-art machine learning methods for supervised and unsupervised
problems and it is aimed at finding a reasonable compromise among modularity,
maintainability, reproducibility, usability and efficiency. mlpy is
multiplatform, it works with Python 2 and 3 and it is distributed under GPL3 at
the website http://mlpy.fbk.eu.
| [
"Davide Albanese and Roberto Visintainer and Stefano Merler and\n Samantha Riccadonna and Giuseppe Jurman and Cesare Furlanello",
"['Davide Albanese' 'Roberto Visintainer' 'Stefano Merler'\n 'Samantha Riccadonna' 'Giuseppe Jurman' 'Cesare Furlanello']"
] |
stat.ML cs.LG | 10.1109/LSP.2012.2184795 | 1203.0038 | null | null | http://arxiv.org/abs/1203.0038v1 | 2012-02-29T22:40:56Z | 2012-02-29T22:40:56Z | Inference in Hidden Markov Models with Explicit State Duration
Distributions | In this letter we borrow from the inference techniques developed for
unbounded state-cardinality (nonparametric) variants of the HMM and use them to
develop a tuning-parameter free, black-box inference procedure for
Explicit-state-duration hidden Markov models (EDHMM). EDHMMs are HMMs that have
latent states consisting of both discrete state-indicator and discrete
state-duration random variables. In contrast to the implicit geometric state
duration distribution possessed by the standard HMM, EDHMMs allow the direct
parameterisation and estimation of per-state duration distributions. As most
duration distributions are defined over the positive integers, truncation or
other approximations are usually required to perform EDHMM inference.
| [
"Michael Dewar, Chris Wiggins, Frank Wood",
"['Michael Dewar' 'Chris Wiggins' 'Frank Wood']"
] |
cs.DB cs.LG | null | 1203.0058 | null | null | http://arxiv.org/pdf/1203.0058v1 | 2012-03-01T00:17:31Z | 2012-03-01T00:17:31Z | A Bayesian Approach to Discovering Truth from Conflicting Sources for
Data Integration | In practical data integration systems, it is common for the data sources
being integrated to provide conflicting information about the same entity.
Consequently, a major challenge for data integration is to derive the most
complete and accurate integrated records from diverse and sometimes conflicting
sources. We term this challenge the truth finding problem. We observe that some
sources are generally more reliable than others, and therefore a good model of
source quality is the key to solving the truth finding problem. In this work,
we propose a probabilistic graphical model that can automatically infer true
records and source quality without any supervision. In contrast to previous
methods, our principled approach leverages a generative process of two types of
errors (false positive and false negative) by modeling two different aspects of
source quality. In so doing, ours is also the first approach designed to merge
multi-valued attribute types. Our method is scalable, due to an efficient
sampling-based inference algorithm that needs very few iterations in practice
and enjoys linear time complexity, with an even faster incremental variant.
Experiments on two real world datasets show that our new method outperforms
existing state-of-the-art approaches to the truth finding problem.
| [
"['Bo Zhao' 'Benjamin I. P. Rubinstein' 'Jim Gemmell' 'Jiawei Han']",
"Bo Zhao, Benjamin I. P. Rubinstein, Jim Gemmell, Jiawei Han"
] |
cs.DB cs.LG cs.PF | null | 1203.0160 | null | null | http://arxiv.org/pdf/1203.0160v2 | 2012-03-02T10:14:58Z | 2012-03-01T11:43:43Z | Scaling Datalog for Machine Learning on Big Data | In this paper, we present the case for a declarative foundation for
data-intensive machine learning systems. Instead of creating a new system for
each specific flavor of machine learning task, or hardcoding new optimizations,
we argue for the use of recursive queries to program a variety of machine
learning systems. By taking this approach, database query optimization
techniques can be utilized to identify effective execution plans, and the
resulting runtime plans can be executed on a single unified data-parallel query
processing engine. As a proof of concept, we consider two programming
models--Pregel and Iterative Map-Reduce-Update--from the machine learning
domain, and show how they can be captured in Datalog, tuned for a specific
task, and then compiled into an optimized physical plan. Experiments performed
on a large computing cluster with real data demonstrate that this declarative
approach can provide very good performance while offering both increased
generality and programming ease.
| [
"Yingyi Bu, Vinayak Borkar, Michael J. Carey, Joshua Rosen, Neoklis\n Polyzotis, Tyson Condie, Markus Weimer, Raghu Ramakrishnan",
"['Yingyi Bu' 'Vinayak Borkar' 'Michael J. Carey' 'Joshua Rosen'\n 'Neoklis Polyzotis' 'Tyson Condie' 'Markus Weimer' 'Raghu Ramakrishnan']"
] |
cs.LG stat.ML | null | 1203.0203 | null | null | http://arxiv.org/pdf/1203.0203v1 | 2012-02-29T17:23:15Z | 2012-02-29T17:23:15Z | Fast Reinforcement Learning with Large Action Sets using
Error-Correcting Output Codes for MDP Factorization | The use of Reinforcement Learning in real-world scenarios is strongly limited
by issues of scale. Most RL algorithms are unable to deal with
problems composed of hundreds or sometimes even dozens of possible actions, and
therefore cannot be applied to many real-world problems. We consider the RL
problem in the supervised classification framework where the optimal policy is
obtained through a multiclass classifier, the set of classes being the set of
actions of the problem. We introduce error-correcting output codes (ECOCs) in
this setting and propose two new methods for reducing complexity when using
rollout-based approaches. The first method consists of using an ECOC-based
classifier as the multiclass classifier, reducing the learning complexity from
O(A^2) to O(A log(A)). We then propose a novel method that profits from the
ECOC's coding dictionary to split the initial MDP into O(log(A)) separate
two-action MDPs. This second method reduces learning complexity even further,
from O(A^2) to O(log(A)), thus rendering problems with large action sets
tractable. We finish by experimentally demonstrating the advantages of our
approach on a set of benchmark problems, both in speed and performance.
| [
"['Gabriel Dulac-Arnold' 'Ludovic Denoyer' 'Philippe Preux'\n 'Patrick Gallinari']",
"Gabriel Dulac-Arnold, Ludovic Denoyer, Philippe Preux, Patrick\n Gallinari"
] |
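To make the ECOC construction above concrete, here is a small sketch of the coding/decoding step it relies on: each action gets a binary codeword, one binary classifier is trained per code bit, and an action is recovered by nearest-codeword (Hamming) decoding. The codebook, training data, and classifiers are hypothetical placeholders; the rollout-based policy learning and the MDP factorization from the paper are not reproduced.

```python
# Minimal sketch of ECOC-coded actions: O(log A) binary classifiers represent a
# policy over A actions, decoded by nearest codeword. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_actions, code_len, n_features = 16, 8, 5

# one codeword per action; a practical dictionary would be chosen with large
# pairwise Hamming distance, here it is simply drawn at random
codebook = rng.integers(0, 2, size=(n_actions, code_len))

# hypothetical training data: states labelled with a chosen action
X = rng.normal(size=(500, n_features))
y = rng.integers(0, n_actions, size=500)

# one binary classifier per code bit
classifiers = [LogisticRegression().fit(X, codebook[y, bit]) for bit in range(code_len)]

def predict_action(x):
    """Decode by choosing the action whose codeword is closest in Hamming distance."""
    bits = np.array([clf.predict(x.reshape(1, -1))[0] for clf in classifiers])
    distances = np.abs(codebook - bits).sum(axis=1)
    return int(np.argmin(distances))

print(predict_action(X[0]))
```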
cs.LG | null | 1203.0298 | null | null | http://arxiv.org/pdf/1203.0298v2 | 2012-03-06T09:01:19Z | 2012-03-01T14:40:02Z | Application of Gist SVM in Cancer Detection | In this paper, we study the application of GIST SVM in disease prediction
(detection of cancer). Pattern classification problems can be effectively
solved by support vector machines. Here we propose a classifier that can
differentiate between patients with benign and malignant cancer cells. To improve the
accuracy of classification, we propose to determine the optimal size of the
training set and to perform feature selection. To find the optimal size of the
training set, different training-set sizes are evaluated and the one
with the highest classification rate is selected. The optimal features are selected
according to their F-scores.
| [
"S. Aruna, S. P. Rajagopalan and L. V. Nandakishore",
"['S. Aruna' 'S. P. Rajagopalan' 'L. V. Nandakishore']"
] |
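The two ingredients mentioned above, F-score feature selection and choosing a training-set size by trying several candidates, might look roughly like the following sketch. The dataset (scikit-learn's breast cancer data), the candidate sizes, and the SVM settings are illustrative assumptions, not the configuration used in the paper.

```python
# Hedged sketch: rank features by F-score, then pick the training-set size with
# the best held-out accuracy. All concrete choices below are placeholders.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def f_scores(X, y):
    """Fisher-style F-score per feature for a binary problem (labels 0/1)."""
    pos, neg = X[y == 1], X[y == 0]
    num = (pos.mean(0) - X.mean(0)) ** 2 + (neg.mean(0) - X.mean(0)) ** 2
    den = pos.var(0, ddof=1) + neg.var(0, ddof=1)
    return num / (den + 1e-12)

X, y = load_breast_cancer(return_X_y=True)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# keep the top-k features by F-score
k = 10
top = np.argsort(f_scores(X_pool, y_pool))[::-1][:k]

# try several training-set sizes and keep the most accurate one
best = (0.0, 0)
for size in (50, 100, 200, len(X_pool)):
    clf = SVC(kernel="rbf", gamma="scale").fit(X_pool[:size][:, top], y_pool[:size])
    best = max(best, (clf.score(X_test[:, top], y_test), size))
print("best (accuracy, training size):", best)
```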
stat.ML cs.LG stat.ME | 10.1016/j.neunet.2013.01.012 | 1203.0453 | null | null | http://arxiv.org/abs/1203.0453v2 | 2013-01-16T06:44:58Z | 2012-03-02T13:12:03Z | Change-Point Detection in Time-Series Data by Relative Density-Ratio
Estimation | The objective of change-point detection is to discover abrupt property
changes lying behind time-series data. In this paper, we present a novel
statistical change-point detection algorithm based on non-parametric divergence
estimation between time-series samples from two retrospective segments. Our
method uses the relative Pearson divergence as a divergence measure, and it is
accurately and efficiently estimated by a method of direct density-ratio
estimation. Through experiments on artificial and real-world datasets including
human-activity sensing, speech, and Twitter messages, we demonstrate the
usefulness of the proposed method.
| [
"['Song Liu' 'Makoto Yamada' 'Nigel Collier' 'Masashi Sugiyama']",
"Song Liu, Makoto Yamada, Nigel Collier, Masashi Sugiyama"
] |
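As a rough sketch of the idea above, the code below scores each time point by an estimate of the relative Pearson divergence between the samples before and after it, using a RuLSIF-style least-squares density-ratio fit. The kernel width, regularization, and mixing parameter are fixed by hand here, whereas the paper's method selects them by cross-validation and works on multi-dimensional subsequence samples.

```python
# Toy change scoring via a relative density-ratio (RuLSIF-style) estimate of the
# alpha-relative Pearson divergence between two retrospective windows.
import numpy as np

def gaussian_kernel(X, C, sigma):
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def relative_pe_divergence(Xp, Xq, alpha=0.1, sigma=1.0, lam=0.1):
    """Estimate PE(P || alpha*P + (1-alpha)*Q) with a least-squares ratio fit."""
    C = Xp                                             # kernel centres on P samples
    Phi_p, Phi_q = gaussian_kernel(Xp, C, sigma), gaussian_kernel(Xq, C, sigma)
    H = alpha * Phi_p.T @ Phi_p / len(Xp) + (1 - alpha) * Phi_q.T @ Phi_q / len(Xq)
    h = Phi_p.mean(axis=0)
    theta = np.linalg.solve(H + lam * np.eye(len(C)), h)
    g_p, g_q = Phi_p @ theta, Phi_q @ theta            # estimated relative ratio
    return (-alpha * (g_p ** 2).mean() / 2
            - (1 - alpha) * (g_q ** 2).mean() / 2
            + g_p.mean() - 0.5)

# toy 1-D series with a mean shift; score each point by the divergence between
# the window before it and the window after it
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0, 1, 200), rng.normal(3, 1, 200)])
w = 50
scores = [relative_pe_divergence(series[t - w:t, None], series[t:t + w, None])
          for t in range(w, len(series) - w)]
print("peak change score near index", w + int(np.argmax(scores)))
```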
cs.LG cs.AI | null | 1203.0550 | null | null | http://arxiv.org/pdf/1203.0550v3 | 2024-04-29T18:15:29Z | 2012-03-02T19:20:42Z | Algorithms for Learning Kernels Based on Centered Alignment | This paper presents new and effective algorithms for learning kernels. In
particular, as shown by our empirical results, these algorithms consistently
outperform the so-called uniform combination solution that has proven to be
difficult to improve upon in the past, as well as other algorithms for learning
kernels based on convex combinations of base kernels in both classification and
regression. Our algorithms are based on the notion of centered alignment which
is used as a similarity measure between kernels or kernel matrices. We present
a number of novel algorithmic, theoretical, and empirical results for learning
kernels based on our notion of centered alignment. In particular, we describe
efficient algorithms for learning a maximum alignment kernel by showing that
the problem can be reduced to a simple QP and discuss a one-stage algorithm for
learning both a kernel and a hypothesis based on that kernel using an
alignment-based regularization. Our theoretical results include a novel
concentration bound for centered alignment between kernel matrices, the proof
of the existence of effective predictors for kernels with high alignment, both
for classification and for regression, and the proof of stability-based
generalization bounds for a broad family of algorithms for learning kernels
based on centered alignment. We also report the results of experiments with our
centered alignment-based algorithms in both classification and regression.
| [
"['Corinna Cortes' 'Mehryar Mohri' 'Afshin Rostamizadeh']",
"Corinna Cortes, Mehryar Mohri, Afshin Rostamizadeh"
] |
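The similarity measure underlying the abstract, centered alignment between kernel matrices, is the normalized Frobenius inner product of the centered Gram matrices. The sketch below computes it and uses it to rank a few Gaussian kernels against the ideal target kernel built from labels; the learning algorithms in the paper (the QP for a maximum-alignment combination and the one-stage method) are not reproduced, and the toy data are hypothetical.

```python
# Centered alignment between two Gram matrices: centre with H = I - 11^T/n,
# then take a normalized Frobenius inner product. Toy ranking example below.
import numpy as np

def center(K):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def centered_alignment(K1, K2):
    K1c, K2c = center(K1), center(K2)
    return np.sum(K1c * K2c) / (np.linalg.norm(K1c) * np.linalg.norm(K2c))

# rank Gaussian kernels of different widths by alignment with the target y y^T
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=100))
K_target = np.outer(y, y)
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
for sigma in (0.5, 1.0, 2.0, 4.0):
    K = np.exp(-d2 / (2 * sigma ** 2))
    print(sigma, round(centered_alignment(K, K_target), 3))
```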
cs.LG cs.CC cs.DS | null | 1203.0594 | null | null | http://arxiv.org/pdf/1203.0594v3 | 2013-04-03T05:14:46Z | 2012-03-03T00:43:08Z | Learning DNF Expressions from Fourier Spectrum | Since its introduction by Valiant in 1984, PAC learning of DNF expressions
remains one of the central problems in learning theory. We consider this
problem in the setting where the underlying distribution is uniform, or more
generally, a product distribution. Kalai, Samorodnitsky and Teng (2009) showed
that in this setting a DNF expression can be efficiently approximated from its
"heavy" low-degree Fourier coefficients alone. This is in contrast to previous
approaches where boosting was used and thus Fourier coefficients of the target
function modified by various distributions were needed. This property is
crucial for learning of DNF expressions over smoothed product distributions, a
learning model introduced by Kalai et al. (2009) and inspired by the seminal
smoothed analysis model of Spielman and Teng (2001).
We introduce a new approach to learning (or approximating) polynomial
threshold functions, which is based on creating a function with range [-1,1]
that approximately agrees with the unknown function on low-degree Fourier
coefficients. We then describe conditions under which this is sufficient for
learning polynomial threshold functions. Our approach yields a new, simple
algorithm for approximating any polynomial-size DNF expression from its "heavy"
low-degree Fourier coefficients alone. Our algorithm greatly simplifies the
proof of learnability of DNF expressions over smoothed product distributions.
We also describe an application of our algorithm to learning monotone DNF
expressions over product distributions. Building on the work of Servedio
(2001), we give an algorithm that runs in time $\mathrm{poly}((s \cdot
\log{(s/\epsilon)})^{\log{(s/\epsilon)}}, n)$, where $s$ is the size of the target DNF
expression and $\epsilon$ is the accuracy. This improves on the $\mathrm{poly}((s \cdot
\log{(ns/\epsilon)})^{\log{(s/\epsilon)} \cdot \log{(1/\epsilon)}}, n)$ bound of Servedio
(2001).
| [
"['Vitaly Feldman']",
"Vitaly Feldman"
] |
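The basic primitive behind the abstract is estimating the low-degree Fourier coefficients $\hat{f}(S) = \mathrm{E}_x[f(x)\chi_S(x)]$ of a Boolean function under the uniform distribution and keeping the "heavy" ones. The sketch below does exactly that by sampling, for a small example DNF; the threshold, degree bound, and target function are arbitrary choices, and the step that assembles the coefficients into a DNF approximator is omitted.

```python
# Estimate low-degree Fourier coefficients of a Boolean function on {-1,+1}^n
# from uniform samples and keep the "heavy" ones. Illustrative parameters only.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, degree, n_samples, threshold = 8, 2, 20000, 0.05

def f(x):
    """Example target DNF over the first four coordinates, output in {-1,+1}."""
    b = (x + 1) // 2                       # map {-1,+1} -> {0,1}
    return 2 * int((b[0] and b[1]) or (b[2] and not b[3])) - 1

X = rng.choice([-1, 1], size=(n_samples, n))
fx = np.array([f(x) for x in X])

heavy = {}
for d in range(degree + 1):
    for S in itertools.combinations(range(n), d):
        chi = X[:, S].prod(axis=1) if S else np.ones(n_samples)
        coeff = float(np.mean(fx * chi))   # empirical estimate of \hat{f}(S)
        if abs(coeff) >= threshold:
            heavy[S] = round(coeff, 3)
print(heavy)
```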
cs.DM cs.CC cs.LG | null | 1203.0631 | null | null | http://arxiv.org/pdf/1203.0631v3 | 2012-05-28T18:16:35Z | 2012-03-03T09:02:40Z | Checking Tests for Read-Once Functions over Arbitrary Bases | A Boolean function is called read-once over a basis B if it can be expressed
by a formula over B where no variable appears more than once. A checking test
for a read-once function f over B depending on all its variables is a set of
input vectors distinguishing f from all other read-once functions of the same
variables. We show that every read-once function f over B has a checking test
containing O(n^l) vectors, where n is the number of relevant variables of f and
l is the largest arity of functions in B. For some functions, this bound cannot
be improved by more than a constant factor. The employed technique involves
reconstructing f from its l-variable projections and provides a stronger form
of Kuznetsov's classic theorem on read-once representations.
| [
"Dmitry V. Chistikov",
"['Dmitry V. Chistikov']"
] |
cs.LG stat.ML | null | 1203.0683 | null | null | http://arxiv.org/pdf/1203.0683v3 | 2012-09-05T21:41:59Z | 2012-03-03T20:55:54Z | A Method of Moments for Mixture Models and Hidden Markov Models | Mixture models are a fundamental tool in applied statistics and machine
learning for treating data taken from multiple subpopulations. The current
practice for estimating the parameters of such models relies on local search
heuristics (e.g., the EM algorithm) which are prone to failure, and existing
consistent methods are unfavorable due to their high computational and sample
complexity, which typically scales exponentially with the number of mixture
components. This work develops an efficient method of moments approach to
parameter estimation for a broad class of high-dimensional mixture models with
many components, including multi-view mixtures of Gaussians (such as mixtures
of axis-aligned Gaussians) and hidden Markov models. The new method leads to
rigorous unsupervised learning results for mixture models that were not
achieved by previous works; and, because of its simplicity, it offers a viable
alternative to EM for practical deployment.
| [
"['Animashree Anandkumar' 'Daniel Hsu' 'Sham M. Kakade']",
"Animashree Anandkumar, Daniel Hsu, Sham M. Kakade"
] |
stat.ML cs.AI cs.LG | null | 1203.0697 | null | null | http://arxiv.org/pdf/1203.0697v2 | 2012-06-30T18:54:30Z | 2012-03-04T01:19:25Z | Learning High-Dimensional Mixtures of Graphical Models | We consider unsupervised estimation of mixtures of discrete graphical models,
where the class variable corresponding to the mixture components is hidden and
each mixture component over the observed variables can have a potentially
different Markov graph structure and parameters. We propose a novel approach
for estimating the mixture components, and our output is a tree-mixture model
which serves as a good approximation to the underlying graphical model mixture.
Our method is efficient when the union graph, which is the union of the Markov
graphs of the mixture components, has sparse vertex separators between any pair
of observed variables. This includes tree mixtures and mixtures of bounded
degree graphs. For such models, we prove that our method correctly recovers the
union graph structure and the tree structures corresponding to
maximum-likelihood tree approximations of the mixture components. The sample
and computational complexities of our method scale as $\mathrm{poly}(p, r)$, for an
$r$-component mixture of $p$-variate graphical models. We further extend our
results to the case when the union graph has sparse local separators between
any pair of observed variables, such as mixtures of locally tree-like graphs,
and the mixture components are in the regime of correlation decay.
| [
"A. Anandkumar, D. Hsu, F. Huang and S. M. Kakade",
"['A. Anandkumar' 'D. Hsu' 'F. Huang' 'S. M. Kakade']"
] |
cs.LG astro-ph.IM stat.ML | null | 1203.0970 | null | null | http://arxiv.org/pdf/1203.0970v2 | 2013-05-20T04:07:12Z | 2012-03-05T17:07:10Z | Infinite Shift-invariant Grouped Multi-task Learning for Gaussian
Processes | Multi-task learning leverages shared information among data sets to improve
the learning performance of individual tasks. The paper applies this framework
to data where each task is a phase-shifted periodic time series. In
particular, we develop a novel Bayesian nonparametric model capturing a mixture
of Gaussian processes where each task is a sum of a group-specific function and
a component capturing individual variation, in addition to each task being
phase shifted. We develop an efficient EM algorithm to learn the
parameters of the model. As a special case we obtain the Gaussian mixture model
and EM algorithm for phase-shifted periodic time series. Furthermore,
we extend the proposed model with a Dirichlet process prior, leading to an
infinite mixture model that is capable of automatic model
selection. A variational Bayesian approach is developed for inference in this
model. Experiments in regression, classification and class discovery
demonstrate the performance of the proposed models using both synthetic data
and real-world time series data from astrophysics. Our methods are particularly
useful when the time series are sparsely and non-synchronously sampled.
| [
"['Yuyang Wang' 'Roni Khardon' 'Pavlos Protopapas']",
"Yuyang Wang, Roni Khardon, Pavlos Protopapas"
] |
cs.CV cs.IR cs.IT cs.LG math.IT math.OC stat.ML | null | 1203.1005 | null | null | http://arxiv.org/pdf/1203.1005v3 | 2013-02-05T03:22:00Z | 2012-03-05T18:58:32Z | Sparse Subspace Clustering: Algorithm, Theory, and Applications | In many real-world problems, we are dealing with collections of
high-dimensional data, such as images, videos, text and web documents, DNA
microarray data, and more. Often, high-dimensional data lie close to
low-dimensional structures corresponding to several classes or categories to
which the data belong. In this paper, we propose and study an algorithm, called
Sparse Subspace Clustering (SSC), to cluster data points that lie in a union of
low-dimensional subspaces. The key idea is that, among infinitely many possible
representations of a data point in terms of other points, a sparse
representation corresponds to selecting a few points from the same subspace.
This motivates solving a sparse optimization program whose solution is used in
a spectral clustering framework to infer the clustering of data into subspaces.
Since solving the sparse optimization program is in general NP-hard, we
consider a convex relaxation and show that, under appropriate conditions on the
arrangement of subspaces and the distribution of data, the proposed
minimization program succeeds in recovering the desired sparse representations.
The proposed algorithm can be solved efficiently and can handle data points
near the intersections of subspaces. Another key advantage of the proposed
algorithm with respect to the state of the art is that it can deal with data
nuisances, such as noise, sparse outlying entries, and missing entries,
directly by incorporating the model of the data into the sparse optimization
program. We demonstrate the effectiveness of the proposed algorithm through
experiments on synthetic data as well as the two real-world problems of motion
segmentation and face clustering.
| [
"Ehsan Elhamifar and Rene Vidal",
"['Ehsan Elhamifar' 'Rene Vidal']"
] |
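The SSC pipeline described above can be sketched in a few lines: express each point as a sparse combination of the other points (here via a Lasso relaxation of the sparse program), symmetrize the coefficient magnitudes into an affinity, and apply spectral clustering. The synthetic two-subspace data and the regularization weight are illustrative assumptions rather than the settings studied in the paper.

```python
# Compact sketch of sparse subspace clustering: sparse self-expression (Lasso),
# symmetric affinity from coefficient magnitudes, then spectral clustering.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)

# data drawn from two 2-dimensional subspaces of R^10, plus a little noise
def sample_subspace(n, ambient=10, dim=2):
    basis = np.linalg.qr(rng.normal(size=(ambient, dim)))[0]
    return (basis @ rng.normal(size=(dim, n))).T

X = np.vstack([sample_subspace(40), sample_subspace(40)])
X += 0.01 * rng.normal(size=X.shape)
n = len(X)

# sparse self-expression: x_i ~ sum_j c_ij x_j with c_ii = 0
C = np.zeros((n, n))
for i in range(n):
    others = np.delete(np.arange(n), i)
    lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=5000)
    lasso.fit(X[others].T, X[i])           # columns are the other data points
    C[i, others] = lasso.coef_

affinity = np.abs(C) + np.abs(C).T
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print(labels)
```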