categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence)
---|---|---|---|---|---|---|---|---|---|---|
cs.LG math.ST stat.ML stat.TH | null | 1206.4682 | null | null | http://arxiv.org/pdf/1206.4682v1 | 2012-06-18T15:40:32Z | 2012-06-18T15:40:32Z | Copula-based Kernel Dependency Measures | The paper presents a new copula based method for measuring dependence between
random variables. Our approach extends the Maximum Mean Discrepancy to the
copula of the joint distribution. We prove that this approach has several
advantageous properties. Similarly to Shannon mutual information, the proposed
dependence measure is invariant to any strictly increasing transformation of
the marginal variables. This is important in many applications, for example in
feature selection. The estimator is consistent, robust to outliers, and uses
rank statistics only. We derive upper bounds on the convergence rate and
propose independence tests too. We illustrate the theoretical contributions
through a series of experiments in feature selection and low-dimensional
embedding of distributions.
| [
"Barnabas Poczos (Carnegie Mellon University), Zoubin Ghahramani\n (University of Cambridge), Jeff Schneider (Carnegie Mellon University)",
"['Barnabas Poczos' 'Zoubin Ghahramani' 'Jeff Schneider']"
] |
cs.LG | null | 1206.4683 | null | null | http://arxiv.org/pdf/1206.4683v1 | 2012-06-18T15:40:50Z | 2012-06-18T15:40:50Z | Marginalized Denoising Autoencoders for Domain Adaptation | Stacked denoising autoencoders (SDAs) have been successfully used to learn
new representations for domain adaptation. Recently, they have attained record
accuracy on standard benchmark tasks of sentiment analysis across different
text domains. SDAs learn robust data representations by reconstruction,
recovering original features from data that are artificially corrupted with
noise. In this paper, we propose marginalized SDA (mSDA) that addresses two
crucial limitations of SDAs: high computational cost and lack of scalability to
high-dimensional features. In contrast to SDAs, our approach of mSDA
marginalizes noise and thus does not require stochastic gradient descent or
other optimization algorithms to learn parameters; in fact, they are computed
in closed-form. Consequently, mSDA, which can be implemented in only 20 lines
of MATLAB^{TM}, significantly speeds up SDAs by two orders of magnitude.
Furthermore, the representations learnt by mSDA are as effective as the
traditional SDAs, attaining almost identical accuracies in benchmark tasks.
| [
"Minmin Chen (Washington University), Zhixiang Xu (Washington\n University), Kilian Weinberger (Washington University), Fei Sha (University\n of Southern California)",
"['Minmin Chen' 'Zhixiang Xu' 'Kilian Weinberger' 'Fei Sha']"
] |
stat.ME cs.LG stat.AP | null | 1206.4685 | null | null | http://arxiv.org/pdf/1206.4685v1 | 2012-06-18T15:42:15Z | 2012-06-18T15:42:15Z | Sparse-GEV: Sparse Latent Space Model for Multivariate Extreme Value
Time Series Modeling | In many applications of time series models, such as climate analysis and
social media analysis, we are often interested in extreme events, such as
heatwave, wind gust, and burst of topics. These time series data usually
exhibit a heavy-tailed distribution rather than a Gaussian distribution. This
poses great challenges to existing approaches due to the significantly
different assumptions on the data distributions and the lack of sufficient past
data on extreme events. In this paper, we propose the Sparse-GEV model, a
latent state model based on the theory of extreme value modeling to
automatically learn sparse temporal dependence and make predictions. Our model
is theoretically significant because it is among the first models to learn
sparse temporal dependencies among multivariate extreme value time series. We
demonstrate the superior performance of our algorithm over the state-of-the-art
methods, including Granger causality, copula approach, and transfer entropy, on
one synthetic dataset, one climate dataset and two Twitter datasets.
| [
"['Yan Liu' 'Taha Bahadori' 'Hongfei Li']",
"Yan Liu (USC), Taha Bahadori (USC), Hongfei Li (IBM T.J. Watson\n Research Center)"
] |
cs.LG stat.ML | null | 1206.4686 | null | null | http://arxiv.org/pdf/1206.4686v1 | 2012-06-18T15:42:34Z | 2012-06-18T15:42:34Z | Discriminative Probabilistic Prototype Learning | In this paper we propose a simple yet powerful method for learning
representations in supervised learning scenarios where each original input
datapoint is described by a set of vectors and their associated outputs may be
given by soft labels indicating, for example, class probabilities. We represent
an input datapoint as a mixture of probabilities over the corresponding set of
feature vectors where each probability indicates how likely each vector is to
belong to an unknown prototype pattern. We propose a probabilistic model that
parameterizes these prototype patterns in terms of hidden variables and
therefore it can be trained with conventional approaches based on likelihood
maximization. More importantly, both the model parameters and the prototype
patterns can be learned from data in a discriminative way. We show that our
model can be seen as a probabilistic generalization of learning vector
quantization (LVQ). We apply our method to the problems of shape
classification, hyperspectral imaging classification and people's work class
categorization, showing the superior performance of our method compared to the
standard prototype-based classification approach and other competitive
benchmark methods.
| [
"['Edwin Bonilla' 'Antonio Robles-Kelly']",
"Edwin Bonilla (NICTA), Antonio Robles-Kelly (NICTA)"
] |
cs.LG cs.CE q-bio.QM | 10.1145/2382936.2383060 | 1206.4822 | null | null | http://arxiv.org/abs/1206.4822v3 | 2012-12-05T08:52:31Z | 2012-06-21T10:09:41Z | Feature extraction in protein sequences classification : a new stability
measure | Feature extraction is an unavoidable task, especially in the critical step of
preprocessing biological sequences. This step consists for example in
transforming the biological sequences into vectors of motifs where each motif
is a subsequence that can be seen as a property (or attribute) characterizing
the sequence. Hence, we obtain an object-property table where objects are
sequences and properties are motifs extracted from sequences. This output can
be used to apply standard machine learning tools to perform data mining tasks
such as classification. Several previous works have described feature
extraction methods for bio-sequence classification, but none of them discussed
the robustness of these methods when perturbing the input data. In this work,
we introduce the notion of stability of the generated motifs in order to study
the robustness of motif extraction methods. We express this robustness in terms
of the ability of the method to reveal any change occurring in the input data
and also its ability to target the interesting motifs. We use these criteria to
evaluate and experimentally compare four existing extraction methods for
biological sequences.
| [
"['Rabie Saidi' 'Sabeur Aridhi' 'Mondher Maddouri' 'Engelbert Mephu Nguifo']",
"Rabie Saidi, Sabeur Aridhi, Mondher Maddouri and Engelbert Mephu\n Nguifo"
] |
cs.IT cs.LG math.IT stat.ME | 10.1145/2628434 | 1206.4832 | null | null | http://arxiv.org/abs/1206.4832v6 | 2014-07-03T04:56:30Z | 2012-06-21T11:03:50Z | Smoothed Functional Algorithms for Stochastic Optimization using
q-Gaussian Distributions | Smoothed functional (SF) schemes for gradient estimation are known to be
efficient in stochastic optimization algorithms, especially when the objective
is to improve the performance of a stochastic system. However, the performance
of these methods depends on several parameters, such as the choice of a
suitable smoothing kernel. Different kernels have been studied in the literature,
which include Gaussian, Cauchy and uniform distributions among others. This
paper studies a new class of kernels based on the q-Gaussian distribution, that
has gained popularity in statistical physics over the last decade. Though the
importance of this family of distributions is attributed to its ability to
generalize the Gaussian distribution, we observe that this class encompasses
almost all existing smoothing kernels. This motivates us to study SF schemes
for gradient estimation using the q-Gaussian distribution. Using the derived
gradient estimates, we propose two-timescale algorithms for optimization of a
stochastic objective function in a constrained setting with projected gradient
search approach. We prove the convergence of our algorithms to the set of
stationary points of an associated ODE. We also demonstrate their performance
numerically through simulations on a queuing model.
| [
"Debarghya Ghoshdastidar, Ambedkar Dukkipati, Shalabh Bhatnagar",
"['Debarghya Ghoshdastidar' 'Ambedkar Dukkipati' 'Shalabh Bhatnagar']"
] |
stat.ML cs.LG | null | 1206.5036 | null | null | http://arxiv.org/pdf/1206.5036v2 | 2012-09-06T13:27:18Z | 2012-06-22T00:12:05Z | Estimating Densities with Non-Parametric Exponential Families | We propose a novel approach for density estimation with exponential families
for the case when the true density may not fall within the chosen family. Our
approach augments the sufficient statistics with features designed to
accumulate probability mass in the neighborhood of the observed points,
resulting in a non-parametric model similar to kernel density estimators. We
show that under mild conditions, the resulting model uses only the sufficient
statistics if the density is within the chosen exponential family, and
asymptotically, it approximates densities outside of the chosen exponential
family. Using the proposed approach, we modify the exponential random graph
model, commonly used for modeling small-size graph distributions, to address
the well-known issue of model degeneracy.
| [
"Lin Yuan, Sergey Kirshner and Robert Givan",
"['Lin Yuan' 'Sergey Kirshner' 'Robert Givan']"
] |
cs.LG math.ST stat.TH | null | 1206.5057 | null | null | http://arxiv.org/pdf/1206.5057v5 | 2012-10-11T16:01:22Z | 2012-06-22T05:29:48Z | The Robustness and Super-Robustness of L^p Estimation, when p < 1 | In robust statistics, the breakdown point of an estimator is the percentage
of outliers with which an estimator still generates reliable estimation. The
upper bound of breakdown point is 50%, which means it is not possible to
generate reliable estimation with more than half outliers.
In this paper, it is shown that in a majority of cases, when the
outliers exceed 50% but are distributed randomly enough, it is still
possible to generate a reliable estimation from the minority of good observations.
The phenomenon of a breakdown point larger than 50% is named super
robustness. Further, a robust estimator is called strictly robust if
it generates a perfect estimation when all the good observations are perfect.
More specifically, the super robustness of the maximum likelihood estimator
of the exponential power distribution, or L^p estimation, where p<1, is
investigated. This paper starts by proving that L^p (p<1) is a strictly robust
location estimator. Further, it is proved that L^p (p<1) has the property of
strict super-robustness under translation, rotation, and scaling transformations, and
robustness under Euclidean transformations.
| [
"Qinghuai Gao",
"['Qinghuai Gao']"
] |
stat.ML cs.LG stat.CO | null | 1206.5102 | null | null | http://arxiv.org/pdf/1206.5102v1 | 2012-06-22T10:24:55Z | 2012-06-22T10:24:55Z | Hidden Markov Models with mixtures as emission distributions | In unsupervised classification, Hidden Markov Models (HMM) are used to
account for a neighborhood structure between observations. The emission
distributions are often supposed to belong to some parametric family. In this
paper, a semiparametric model in which the emission distributions are a mixture
of parametric distributions is proposed to achieve higher flexibility. We show
that the classical EM algorithm can be adapted to infer the model parameters.
For the initialisation step, starting from a large number of components, a
hierarchical method to combine them into the hidden states is proposed. Three
likelihood-based criteria to select the components to be combined are
discussed. To estimate the number of hidden states, BIC-like criteria are
derived. A simulation study is carried out both to determine the best
combination between the merging criteria and the model selection criteria and
to evaluate the accuracy of classification. The proposed method is also
illustrated using a biological dataset from the model plant Arabidopsis
thaliana. An R package, HMMmix, is freely available on CRAN.
| [
"Stevenn Volant, Caroline B\\'erard, Marie-Laure Martin-Magniette and\n St\\'ephane Robin",
"['Stevenn Volant' 'Caroline Bérard' 'Marie-Laure Martin-Magniette'\n 'Stéphane Robin']"
] |
cs.LG stat.ML | null | 1206.5162 | null | null | http://arxiv.org/pdf/1206.5162v2 | 2012-12-04T19:35:34Z | 2012-06-22T14:36:15Z | Fast Variational Inference in the Conjugate Exponential Family | We present a general method for deriving collapsed variational inference
algorithms for probabilistic models in the conjugate exponential family. Our
method unifies many existing approaches to collapsed variational inference. Our
collapsed variational inference leads to a new lower bound on the marginal
likelihood. We exploit the information geometry of the bound to derive much
faster optimization methods based on conjugate gradients for these models. Our
approach is very general and is easily applied to any model where the mean
field update equations have been derived. Empirically we show significant
speed-ups for probabilistic models optimized using our bound.
| [
"James Hensman, Magnus Rattray and Neil D. Lawrence",
"['James Hensman' 'Magnus Rattray' 'Neil D. Lawrence']"
] |
q-fin.ST cs.LG | 10.1109/BWSS.2012.23 | 1206.5224 | null | null | http://arxiv.org/abs/1206.5224v4 | 2012-09-13T16:17:59Z | 2012-06-22T18:30:05Z | Stock prices assessment: proposal of a new index based on volume
weighted historical prices through the use of computer modeling | The importance of considering the volumes to analyze stock prices movements
can be considered as a well-accepted practice in the financial area. However,
when we look at the scientific production in this field, we still cannot find a
unified model that includes volume and price variations for stock assessment
purposes. In this paper we present a computer model that could fill this
gap, proposing a new index to evaluate stock prices based on their historical
prices and volumes traded. Although the model is mathematically
very simple, it was able to significantly improve the performance of agents
operating with real financial data. Based on the results obtained, and also on
the very intuitive logic of our model, we believe that the index proposed here
can be very useful in helping investors determine ideal price
ranges for buying and selling stocks in the financial market.
| [
"Tiago Colliri, Fernando F. Ferreira",
"['Tiago Colliri' 'Fernando F. Ferreira']"
] |
cs.LG stat.ML | null | 1206.5240 | null | null | http://arxiv.org/pdf/1206.5240v1 | 2012-06-20T14:52:04Z | 2012-06-20T14:52:04Z | Analysis of Semi-Supervised Learning with the Yarowsky Algorithm | The Yarowsky algorithm is a rule-based semi-supervised learning algorithm
that has been successfully applied to some problems in computational
linguistics. The algorithm was not mathematically well understood until (Abney
2004) which analyzed some specific variants of the algorithm, and also proposed
some new algorithms for bootstrapping. In this paper, we extend Abney's work
and show that some of his proposed algorithms actually optimize (an upper-bound
on) an objective function based on a new definition of cross-entropy which is
based on a particular instantiation of the Bregman distance between probability
distributions. Moreover, we suggest some new algorithms for rule-based
semi-supervised learning and show connections with harmonic functions and
minimum multi-way cuts in graph-based semi-supervised learning.
| [
"Gholam Reza Haffari, Anoop Sarkar",
"['Gholam Reza Haffari' 'Anoop Sarkar']"
] |
cs.LG stat.ML | null | 1206.5241 | null | null | http://arxiv.org/pdf/1206.5241v1 | 2012-06-20T14:52:49Z | 2012-06-20T14:52:49Z | Shift-Invariance Sparse Coding for Audio Classification | Sparse coding is an unsupervised learning algorithm that learns a succinct
high-level representation of the inputs given only unlabeled data; it
represents each input as a sparse linear combination of a set of basis
functions. Originally applied to modeling the human visual cortex, sparse
coding has also been shown to be useful for self-taught learning, in which the
goal is to solve a supervised classification task given access to additional
unlabeled data drawn from different classes than that in the supervised
learning problem. Shift-invariant sparse coding (SISC) is an extension of
sparse coding which reconstructs a (usually time-series) input using all of the
basis functions in all possible shifts. In this paper, we present an efficient
algorithm for learning SISC bases. Our method is based on iteratively solving
two large convex optimization problems: The first, which computes the linear
coefficients, is an L1-regularized linear least squares problem with
potentially hundreds of thousands of variables. Existing methods typically use
a heuristic to select a small subset of the variables to optimize, but we
present a way to efficiently compute the exact solution. The second, which
solves for bases, is a constrained linear least squares problem. By optimizing
over complex-valued variables in the Fourier domain, we reduce the coupling
between the different variables, allowing the problem to be solved efficiently.
We show that SISC's learned high-level representations of speech and music
provide useful features for classification tasks within those domains. When
applied to classification, under certain conditions the learned features
outperform state of the art spectral and cepstral features.
| [
"Roger Grosse, Rajat Raina, Helen Kwong, Andrew Y. Ng",
"['Roger Grosse' 'Rajat Raina' 'Helen Kwong' 'Andrew Y. Ng']"
] |
cs.LG stat.ML | null | 1206.5243 | null | null | http://arxiv.org/pdf/1206.5243v1 | 2012-06-20T14:53:26Z | 2012-06-20T14:53:26Z | Convergent Propagation Algorithms via Oriented Trees | Inference problems in graphical models are often approximated by casting them
as constrained optimization problems. Message passing algorithms, such as
belief propagation, have previously been suggested as methods for solving these
optimization problems. However, there are few convergence guarantees for such
algorithms, and the algorithms are therefore not guaranteed to solve the
corresponding optimization problem. Here we present an oriented tree
decomposition algorithm that is guaranteed to converge to the global optimum of
the Tree-Reweighted (TRW) variational problem. Our algorithm performs local
updates in the convex dual of the TRW problem - an unconstrained generalized
geometric program. Primal updates, also local, correspond to oriented
reparametrization operations that leave the distribution intact.
| [
"Amir Globerson, Tommi S. Jaakkola",
"['Amir Globerson' 'Tommi S. Jaakkola']"
] |
cs.AI cs.LG stat.ME | null | 1206.5245 | null | null | http://arxiv.org/pdf/1206.5245v1 | 2012-06-20T14:54:06Z | 2012-06-20T14:54:06Z | A new parameter Learning Method for Bayesian Networks with Qualitative
Influences | We propose a new method for parameter learning in Bayesian networks with
qualitative influences. This method extends our previous work from networks of
binary variables to networks of discrete variables with ordered values. The
specified qualitative influences correspond to certain order restrictions on
the parameters in the network. These parameters may therefore be estimated
using constrained maximum likelihood estimation. We propose an alternative
method, based on the isotonic regression. The constrained maximum likelihood
estimates are fairly complicated to compute, whereas computation of the
isotonic regression estimates only requires the repeated application of the
Pool Adjacent Violators algorithm for linear orders. Therefore, the isotonic
regression estimator is to be preferred from the viewpoint of computational
complexity. Through experiments on simulated and real data, we show that the
new learning method is competitive in performance to the constrained maximum
likelihood estimator, and that both estimators improve on the standard
estimator.
| [
"['Ad Feelders']",
"Ad Feelders"
] |
cs.LG stat.ML | null | 1206.5247 | null | null | http://arxiv.org/pdf/1206.5247v1 | 2012-06-20T14:54:43Z | 2012-06-20T14:54:43Z | Bayesian structure learning using dynamic programming and MCMC | MCMC methods for sampling from the space of DAGs can mix poorly due to the
local nature of the proposals that are commonly used. It has been shown that
sampling from the space of node orders yields better results [FK03, EW06].
Recently, Koivisto and Sood showed how one can analytically marginalize over
orders using dynamic programming (DP) [KS04, Koi06]. Their method computes the
exact marginal posterior edge probabilities, thus avoiding the need for MCMC.
Unfortunately, there are four drawbacks to the DP technique: it can only use
modular priors, it can only compute posteriors over modular features, it is
difficult to compute a predictive density, and it takes exponential time and
space. We show how to overcome the first three of these problems by using the
DP algorithm as a proposal distribution for MCMC in DAG space. We show that
this hybrid technique converges to the posterior faster than other methods,
resulting in more accurate structure learning and higher predictive likelihoods
on test data.
| [
"Daniel Eaton, Kevin Murphy",
"['Daniel Eaton' 'Kevin Murphy']"
] |
cs.LG cs.CV cs.IR stat.ML | null | 1206.5248 | null | null | http://arxiv.org/pdf/1206.5248v1 | 2012-06-20T14:55:04Z | 2012-06-20T14:55:04Z | Statistical Translation, Heat Kernels and Expected Distances | High dimensional structured data such as text and images is often poorly
understood and misrepresented in statistical modeling. The standard histogram
representation suffers from high variance and performs poorly in general. We
explore novel connections between statistical translation, heat kernels on
manifolds and graphs, and expected distances. These connections provide a new
framework for unsupervised metric learning for text documents. Experiments
indicate that the resulting distances are generally superior to their more
standard counterparts.
| [
"['Joshua Dillon' 'Yi Mao' 'Guy Lebanon' 'Jian Zhang']",
"Joshua Dillon, Yi Mao, Guy Lebanon, Jian Zhang"
] |
cs.CE cs.LG q-bio.QM stat.AP | null | 1206.5256 | null | null | http://arxiv.org/pdf/1206.5256v1 | 2012-06-20T14:58:18Z | 2012-06-20T14:58:18Z | Discovering Patterns in Biological Sequences by Optimal Segmentation | Computational methods for discovering patterns of local correlations in
sequences are important in computational biology. Here we show how to determine
the optimal partitioning of aligned sequences into non-overlapping segments
such that positions in the same segment are strongly correlated while positions
in different segments are not. Our approach involves discovering the hidden
variables of a Bayesian network that interact with observed sequences so as to
form a set of independent mixture models. We introduce a dynamic program to
efficiently discover the optimal segmentation, or equivalently the optimal set
of hidden variables. We evaluate our approach on two computational biology
tasks. One task is related to the design of vaccines against polymorphic
pathogens and the other task involves analysis of single nucleotide
polymorphisms (SNPs) in human DNA. We show how common tasks in these problems
naturally correspond to inference procedures in the learned models. Error rates
of our learned models for the prediction of missing SNPs are up to 1/3 less
than the error rates of a state-of-the-art SNP prediction method. Source code
is available at www.uwm.edu/~joebock/segmentation.
| [
"Joseph Bockhorst, Nebojsa Jojic",
"['Joseph Bockhorst' 'Nebojsa Jojic']"
] |
cs.LG cs.AI stat.ML | null | 1206.5261 | null | null | http://arxiv.org/pdf/1206.5261v1 | 2012-06-20T15:00:46Z | 2012-06-20T15:00:46Z | Mixture-of-Parents Maximum Entropy Markov Models | We present the mixture-of-parents maximum entropy Markov model (MoP-MEMM), a
class of directed graphical models extending MEMMs. The MoP-MEMM allows
tractable incorporation of long-range dependencies between nodes by restricting
the conditional distribution of each node to be a mixture of distributions
given the parents. We show how to efficiently compute the exact marginal
posterior node distributions, regardless of the range of the dependencies. This
enables us to model non-sequential correlations present within text documents,
as well as between interconnected documents, such as hyperlinked web pages. We
apply the MoP-MEMM to a named entity recognition task and a web page
classification task. In each, our model shows significant improvement over the
basic MEMM, and is competitive with other long-range sequence models that use
approximate inference.
| [
"David S. Rosenberg, Dan Klein, Ben Taskar",
"['David S. Rosenberg' 'Dan Klein' 'Ben Taskar']"
] |
cs.AI cs.LG stat.ML | null | 1206.5263 | null | null | http://arxiv.org/pdf/1206.5263v1 | 2012-06-20T15:01:43Z | 2012-06-20T15:01:43Z | Reading Dependencies from Polytree-Like Bayesian Networks | We present a graphical criterion for reading dependencies from the minimal
directed independence map G of a graphoid p when G is a polytree and p
satisfies composition and weak transitivity. We prove that the criterion is
sound and complete. We argue that assuming composition and weak transitivity is
not too restrictive.
| [
"['Jose M. Pena']",
"Jose M. Pena"
] |
cs.LG stat.ML | null | 1206.5264 | null | null | http://arxiv.org/pdf/1206.5264v1 | 2012-06-20T15:02:01Z | 2012-06-20T15:02:01Z | Apprenticeship Learning using Inverse Reinforcement Learning and
Gradient Methods | In this paper we propose a novel gradient algorithm to learn a policy from an
expert's observed behavior assuming that the expert behaves optimally with
respect to some unknown reward function of a Markovian Decision Problem. The
algorithm's aim is to find a reward function such that the resulting optimal
policy matches well the expert's observed behavior. The main difficulty is that
the mapping from the parameters to policies is both nonsmooth and highly
redundant. Resorting to subdifferentials solves the first difficulty, while the
second one is overcome by computing natural gradients. We tested the proposed
method in two artificial domains and found it to be more reliable and efficient
than some previous methods.
| [
"['Gergely Neu' 'Csaba Szepesvari']",
"Gergely Neu, Csaba Szepesvari"
] |
cs.LG cs.AI stat.ML | null | 1206.5265 | null | null | http://arxiv.org/pdf/1206.5265v1 | 2012-06-20T15:02:29Z | 2012-06-20T15:02:29Z | Consensus ranking under the exponential model | We analyze the generalized Mallows model, a popular exponential model over
rankings. Estimating the central (or consensus) ranking from data is NP-hard.
We obtain the following new results: (1) We show that search methods can
estimate both the central ranking pi0 and the model parameters theta exactly.
The search is n! in the worst case, but is tractable when the true distribution
is concentrated around its mode; (2) We show that the generalized Mallows model
is jointly exponential in (pi0; theta), and introduce the conjugate prior for
this model class; (3) The sufficient statistics are the pairwise marginal
probabilities that item i is preferred to item j. Preliminary experiments
confirm the theoretical predictions and compare the new algorithm and existing
heuristics.
| [
"['Marina Meila' 'Kapil Phadnis' 'Arthur Patterson' 'Jeff A. Bilmes']",
"Marina Meila, Kapil Phadnis, Arthur Patterson, Jeff A. Bilmes"
] |
cs.LG cs.IR stat.ML | null | 1206.5267 | null | null | http://arxiv.org/pdf/1206.5267v1 | 2012-06-20T15:03:41Z | 2012-06-20T15:03:41Z | Collaborative Filtering and the Missing at Random Assumption | Rating prediction is an important application, and a popular research topic
in collaborative filtering. However, both the validity of learning algorithms,
and the validity of standard testing procedures rest on the assumption that
missing ratings are missing at random (MAR). In this paper we present the
results of a user study in which we collect a random sample of ratings from
current users of an online radio service. An analysis of the rating data
collected in the study shows that the sample of random ratings has markedly
different properties than ratings of user-selected songs. When asked to report
on their own rating behaviour, a large number of users indicate they believe
their opinion of a song does affect whether they choose to rate that song, a
violation of the MAR condition. Finally, we present experimental results
showing that incorporating an explicit model of the missing data mechanism can
lead to significant improvements in prediction performance on the random sample
of ratings.
| [
"['Benjamin Marlin' 'Richard S. Zemel' 'Sam Roweis' 'Malcolm Slaney']",
"Benjamin Marlin, Richard S. Zemel, Sam Roweis, Malcolm Slaney"
] |
cs.IR cs.LG stat.ML | null | 1206.5270 | null | null | http://arxiv.org/pdf/1206.5270v1 | 2012-06-20T15:04:47Z | 2012-06-20T15:04:47Z | Nonparametric Bayes Pachinko Allocation | Recent advances in topic models have explored complicated structured
distributions to represent topic correlation. For example, the pachinko
allocation model (PAM) captures arbitrary, nested, and possibly sparse
correlations between topics using a directed acyclic graph (DAG). While PAM
provides more flexibility and greater expressive power than previous models
like latent Dirichlet allocation (LDA), it is also more difficult to determine
the appropriate topic structure for a specific dataset. In this paper, we
propose a nonparametric Bayesian prior for PAM based on a variant of the
hierarchical Dirichlet process (HDP). Although the HDP can capture topic
correlations defined by nested data structure, it does not automatically
discover such correlations from unstructured data. By assuming an HDP-based
prior for PAM, we are able to learn both the number of topics and how the
topics are correlated. We evaluate our model on synthetic and real-world text
datasets, and show that nonparametric PAM achieves performance matching the
best of PAM without manually tuning the number of topics.
| [
"['Wei Li' 'David Blei' 'Andrew McCallum']",
"Wei Li, David Blei, Andrew McCallum"
] |
cs.LG stat.ML | null | 1206.5274 | null | null | http://arxiv.org/pdf/1206.5274v1 | 2012-06-20T15:06:08Z | 2012-06-20T15:06:08Z | On Discarding, Caching, and Recalling Samples in Active Learning | We address challenges of active learning under scarce informational resources
in non-stationary environments. In real-world settings, data labeled and
integrated into a predictive model may become invalid over time. However, the
data can become informative again with switches in context and such changes may
indicate unmodeled cyclic or other temporal dynamics. We explore principles for
discarding, caching, and recalling labeled data points in active learning based
on computations of value of information. We review key concepts and study the
value of the methods via investigations of predictive performance and costs of
acquiring data for simulated and real-world data sets.
| [
"['Ashish Kapoor' 'Eric J. Horvitz']",
"Ashish Kapoor, Eric J. Horvitz"
] |
cs.AI cs.LG stat.ML | null | 1206.5277 | null | null | http://arxiv.org/pdf/1206.5277v1 | 2012-06-20T15:07:42Z | 2012-06-20T15:07:42Z | Accuracy Bounds for Belief Propagation | The belief propagation (BP) algorithm is widely applied to perform
approximate inference on arbitrary graphical models, in part due to its
excellent empirical properties and performance. However, little is known
theoretically about when this algorithm will perform well. Using recent
analysis of convergence and stability properties in BP and new results on
approximations in binary systems, we derive a bound on the error in BP's
estimates for pairwise Markov random fields over discrete valued random
variables. Our bound is relatively simple to compute, and compares favorably
with a previous method of bounding the accuracy of BP.
| [
"['Alexander T. Ihler']",
"Alexander T. Ihler"
] |
stat.ME cs.LG stat.ML | null | 1206.5278 | null | null | http://arxiv.org/pdf/1206.5278v1 | 2012-06-20T15:08:36Z | 2012-06-20T15:08:36Z | Fast Nonparametric Conditional Density Estimation | Conditional density estimation generalizes regression by modeling a full
density $f(y \mid x)$ rather than only the expected value $E(y \mid x)$. This is important
for many tasks, including handling multi-modality and generating prediction
intervals. Though fundamental and widely applicable, nonparametric conditional
density estimators have received relatively little attention from statisticians
and little or none from the machine learning community. None of that work has
been applied to greater than bivariate data, presumably due to the
computational difficulty of data-driven bandwidth selection. We describe the
double kernel conditional density estimator and derive fast dual-tree-based
algorithms for bandwidth selection using a maximum likelihood criterion. These
techniques give speedups of up to 3.8 million in our experiments, and enable
the first applications to previously intractable large multivariate datasets,
including a redshift prediction problem from the Sloan Digital Sky Survey.
| [
"Michael P. Holmes, Alexander G. Gray, Charles Lee Isbell",
"['Michael P. Holmes' 'Alexander G. Gray' 'Charles Lee Isbell']"
] |
cs.LG stat.ML | null | 1206.5281 | null | null | http://arxiv.org/pdf/1206.5281v1 | 2012-06-20T15:12:35Z | 2012-06-20T15:12:35Z | Learning Selectively Conditioned Forest Structures with Applications to
DBNs and Classification | Dealing with uncertainty in Bayesian Network structures using maximum a
posteriori (MAP) estimation or Bayesian Model Averaging (BMA) is often
intractable due to the superexponential number of possible directed, acyclic
graphs. When the prior is decomposable, two classes of graphs where efficient
learning can take place are tree structures, and fixed-orderings with limited
in-degree. We show how MAP estimates and BMA for selectively conditioned
forests (SCF), a combination of these two classes, can be computed efficiently
for ordered sets of variables. We apply SCFs to temporal data to learn Dynamic
Bayesian Networks having an intra-timestep forest and inter-timestep limited
in-degree structure, improving model accuracy over DBNs without the combination
of structures. We also apply SCFs to Bayes Net classification to learn
selective forest augmented Naive Bayes classifiers. We argue that the built-in
feature selection of selective augmented Bayes classifiers makes them
preferable to similar non-selective classifiers based on empirical evidence.
| [
"['Brian D. Ziebart' 'Anind K. Dey' 'J Andrew Bagnell']",
"Brian D. Ziebart, Anind K. Dey, J Andrew Bagnell"
] |
stat.ME cs.LG stat.ML | null | 1206.5282 | null | null | http://arxiv.org/pdf/1206.5282v1 | 2012-06-20T15:14:16Z | 2012-06-20T15:14:16Z | A Characterization of Markov Equivalence Classes for Directed Acyclic
Graphs with Latent Variables | Different directed acyclic graphs (DAGs) may be Markov equivalent in the
sense that they entail the same conditional independence relations among the
observed variables. Meek (1995) characterizes Markov equivalence classes for
DAGs (with no latent variables) by presenting a set of orientation rules that
can correctly identify all arrow orientations shared by all DAGs in a Markov
equivalence class, given a member of that class. For DAG models with latent
variables, maximal ancestral graphs (MAGs) provide a neat representation that
facilitates model search. Earlier work (Ali et al. 2005) has identified a set
of orientation rules sufficient to construct all arrowheads common to a Markov
equivalence class of MAGs. In this paper, we provide extra rules sufficient to
construct all common tails as well. We end up with a set of orientation rules
sound and complete for identifying commonalities across a Markov equivalence
class of MAGs, which is particularly useful for causal inference.
| [
"['Jiji Zhang']",
"Jiji Zhang"
] |
cs.LG stat.ML | null | 1206.5283 | null | null | http://arxiv.org/pdf/1206.5283v1 | 2012-06-20T15:14:55Z | 2012-06-20T15:14:55Z | Bayesian Active Distance Metric Learning | Distance metric learning is an important component for many tasks, such as
statistical classification and content-based image retrieval. Existing
approaches for learning distance metrics from pairwise constraints typically
suffer from two major problems. First, most algorithms only offer point
estimation of the distance metric and can therefore be unreliable when the
number of training examples is small. Second, since these algorithms generally
select their training examples at random, they can be inefficient if labeling
effort is limited. This paper presents a Bayesian framework for distance metric
learning that estimates a posterior distribution for the distance metric from
labeled pairwise constraints. We describe an efficient algorithm based on the
variational method for the proposed Bayesian approach. Furthermore, we apply
the proposed Bayesian framework to active distance metric learning by selecting
those unlabeled example pairs with the greatest uncertainty in relative
distance. Experiments in classification demonstrate that the proposed framework
achieves higher classification accuracy and identifies more informative
training examples than the non-Bayesian approach and state-of-the-art distance
metric learning algorithms.
| [
"['Liu Yang' 'Rong Jin' 'Rahul Sukthankar']",
"Liu Yang, Rong Jin, Rahul Sukthankar"
] |
cs.AI cs.LG stat.ML | null | 1206.5286 | null | null | http://arxiv.org/pdf/1206.5286v1 | 2012-06-20T15:16:08Z | 2012-06-20T15:16:08Z | MAP Estimation, Linear Programming and Belief Propagation with Convex
Free Energies | Finding the most probable assignment (MAP) in a general graphical model is
known to be NP hard but good approximations have been attained with max-product
belief propagation (BP) and its variants. In particular, it is known that using
BP on a single-cycle graph or tree reweighted BP on an arbitrary graph will
give the MAP solution if the beliefs have no ties. In this paper we extend the
setting under which BP can be used to provably extract the MAP. We define
Convex BP as BP algorithms based on a convex free energy approximation and show
that this class includes ordinary BP with single-cycle, tree reweighted BP and
many other BP variants. We show that when there are no ties, fixed-points of
convex max-product BP will provably give the MAP solution. We also show that
convex sum-product BP at sufficiently small temperatures can be used to solve
linear programs that arise from relaxing the MAP problem. Finally, we derive a
novel condition that allows us to derive the MAP solution even if some of the
convex BP beliefs have ties. In experiments, we show that our theorems allow us
to find the MAP in many real-world instances of graphical models where exact
inference using junction-tree is impossible.
| [
"['Yair Weiss' 'Chen Yanover' 'Talya Meltzer']",
"Yair Weiss, Chen Yanover, Talya Meltzer"
] |
cs.LG cs.AI stat.ML | null | 1206.5290 | null | null | http://arxiv.org/pdf/1206.5290v1 | 2012-06-20T15:18:02Z | 2012-06-20T15:18:02Z | Imitation Learning with a Value-Based Prior | The goal of imitation learning is for an apprentice to learn how to behave in
a stochastic environment by observing a mentor demonstrating the correct
behavior. Accurate prior knowledge about the correct behavior can reduce the
need for demonstrations from the mentor. We present a novel approach to
encoding prior knowledge about the correct behavior, where we assume that this
prior knowledge takes the form of a Markov Decision Process (MDP) that is used
by the apprentice as a rough and imperfect model of the mentor's behavior.
Specifically, taking a Bayesian approach, we treat the value of a policy in
this modeling MDP as the log prior probability of the policy. In other words,
we assume a priori that the mentor's behavior is likely to be a high value
policy in the modeling MDP, though quite possibly different from the optimal
policy. We describe an efficient algorithm that, given a modeling MDP and a set
of demonstrations by a mentor, provably converges to a stationary point of the
log posterior of the mentor's policy, where the posterior is computed with
respect to the "value based" prior. We also present empirical evidence that
this prior does in fact speed learning of the mentor's policy, and is an
improvement in our experiments over similar previous methods.
| [
"['Umar Syed' 'Robert E. Schapire']",
"Umar Syed, Robert E. Schapire"
] |
cs.LG cs.AI stat.ML | null | 1206.5291 | null | null | http://arxiv.org/pdf/1206.5291v1 | 2012-06-20T15:18:24Z | 2012-06-20T15:18:24Z | Improved Dynamic Schedules for Belief Propagation | Belief propagation and its variants are popular methods for approximate
inference, but their running time and even their convergence depend greatly on
the schedule used to send the messages. Recently, dynamic update schedules have
been shown to converge much faster on hard networks than static schedules,
namely the residual BP schedule of Elidan et al. [2006]. But that RBP algorithm
wastes message updates: many messages are computed solely to determine their
priority, and are never actually performed. In this paper, we show that
estimating the residual, rather than calculating it directly, leads to
significant decreases in the number of messages required for convergence, and
in the total running time. The residual is estimated using an upper bound based
on recent work on message errors in BP. On both synthetic and real-world
networks, this dramatically decreases the running time of BP, in some cases by
a factor of five, without affecting the quality of the solution.
| [
"Charles Sutton, Andrew McCallum",
"['Charles Sutton' 'Andrew McCallum']"
] |
cs.LG stat.ML | null | 1206.5293 | null | null | http://arxiv.org/pdf/1206.5293v1 | 2012-06-20T15:19:06Z | 2012-06-20T15:19:06Z | On Sensitivity of the MAP Bayesian Network Structure to the Equivalent
Sample Size Parameter | BDeu marginal likelihood score is a popular model selection criterion for
selecting a Bayesian network structure based on sample data. This
non-informative scoring criterion assigns the same score to network structures
that encode the same independence statements. However, before applying the BDeu
score, one must determine a single parameter, the equivalent sample size alpha.
Unfortunately no generally accepted rule for determining the alpha parameter
has been suggested. This is disturbing, since in this paper we show through a
series of concrete experiments that the solution of the network structure
optimization problem is highly sensitive to the chosen alpha parameter value.
Based on these results, we are able to give explanations for how and why this
phenomenon happens, and discuss ideas for solving this problem.
| [
"['Tomi Silander' 'Petri Kontkanen' 'Petri Myllymaki']",
"Tomi Silander, Petri Kontkanen, Petri Myllymaki"
] |
cs.LG | null | 1206.5345 | null | null | http://arxiv.org/pdf/1206.5345v4 | 2012-10-27T00:43:47Z | 2012-06-23T00:36:08Z | Dynamic Pricing under Finite Space Demand Uncertainty: A Multi-Armed
Bandit with Dependent Arms | We consider a dynamic pricing problem under unknown demand models. In this
problem a seller offers prices to a stream of customers and observes either
success or failure in each sale attempt. The underlying demand model is unknown
to the seller and can take one of N possible forms. In this paper, we show that
this problem can be formulated as a multi-armed bandit with dependent arms. We
propose a dynamic pricing policy based on the likelihood ratio test. We show
that the proposed policy achieves complete learning, i.e., it offers a bounded
regret where regret is defined as the revenue loss with respect to the case
with a known demand model. This is in sharp contrast with the logarithmically
growing regret in a multi-armed bandit with independent arms.
| [
"['Pouya Tehrani' 'Yixuan Zhai' 'Qing Zhao']",
"Pouya Tehrani, Yixuan Zhai, Qing Zhao"
] |
cs.LG cs.DS | null | 1206.5349 | null | null | http://arxiv.org/pdf/1206.5349v2 | 2012-11-12T01:42:37Z | 2012-06-23T01:33:37Z | Provable ICA with Unknown Gaussian Noise, and Implications for Gaussian
Mixtures and Autoencoders | We present a new algorithm for Independent Component Analysis (ICA) which has
provable performance guarantees. In particular, suppose we are given samples of
the form $y = Ax + \eta$ where $A$ is an unknown $n \times n$ matrix and $x$ is
a random variable whose components are independent and have a fourth moment
strictly less than that of a standard Gaussian random variable and $\eta$ is an
$n$-dimensional Gaussian random variable with unknown covariance $\Sigma$: We
give an algorithm that provably recovers $A$ and $\Sigma$ up to an additive
$\epsilon$ and whose running time and sample complexity are polynomial in $n$
and $1 / \epsilon$. To accomplish this, we introduce a novel "quasi-whitening"
step that may be useful in other contexts in which the covariance of Gaussian
noise is not known in advance. We also give a general framework for finding all
local optima of a function (given an oracle for approximately finding just one)
and this is a crucial step in our algorithm, one that has been overlooked in
previous attempts, and allows us to control the accumulation of error when we
find the columns of $A$ one by one via local search.
| [
"Sanjeev Arora, Rong Ge, Ankur Moitra, Sushant Sachdeva",
"['Sanjeev Arora' 'Rong Ge' 'Ankur Moitra' 'Sushant Sachdeva']"
] |
cs.LG | null | 1206.5533 | null | null | http://arxiv.org/pdf/1206.5533v2 | 2012-09-16T17:49:12Z | 2012-06-24T19:17:35Z | Practical recommendations for gradient-based training of deep
architectures | Learning algorithms related to artificial neural networks and in particular
for Deep Learning may seem to involve many bells and whistles, called
hyper-parameters. This chapter is meant as a practical guide with
recommendations for some of the most commonly used hyper-parameters, in
particular in the context of learning algorithms based on back-propagated
gradient and gradient-based optimization. It also discusses how to deal with
the fact that more interesting results can be obtained when allowing one to
adjust many hyper-parameters. Overall, it describes elements of the practice
used to successfully and efficiently train and debug large-scale and often deep
multi-layer neural networks. It closes with open questions about the training
difficulties observed with deeper architectures.
| [
"['Yoshua Bengio']",
"Yoshua Bengio"
] |
cs.LG | null | 1206.5538 | null | null | http://arxiv.org/pdf/1206.5538v3 | 2014-04-23T11:48:51Z | 2012-06-24T20:51:38Z | Representation Learning: A Review and New Perspectives | The success of machine learning algorithms generally depends on data
representation, and we hypothesize that this is because different
representations can entangle and hide more or less the different explanatory
factors of variation behind the data. Although specific domain knowledge can be
used to help design representations, learning with generic priors can also be
used, and the quest for AI is motivating the design of more powerful
representation-learning algorithms implementing such priors. This paper reviews
recent work in the area of unsupervised feature learning and deep learning,
covering advances in probabilistic models, auto-encoders, manifold learning,
and deep networks. This motivates longer-term unanswered questions about the
appropriate objectives for learning good representations, for computing
representations (i.e., inference), and the geometrical connections between
representation learning, density estimation and manifold learning.
| [
"['Yoshua Bengio' 'Aaron Courville' 'Pascal Vincent']",
"Yoshua Bengio and Aaron Courville and Pascal Vincent"
] |
cs.LG stat.ML | null | 1206.5580 | null | null | http://arxiv.org/pdf/1206.5580v2 | 2014-03-15T04:33:18Z | 2012-06-25T05:57:29Z | A Geometric Algorithm for Scalable Multiple Kernel Learning | We present a geometric formulation of the Multiple Kernel Learning (MKL)
problem. To do so, we reinterpret the problem of learning kernel weights as
searching for a kernel that maximizes the minimum (kernel) distance between two
convex polytopes. This interpretation combined with novel structural insights
from our geometric formulation allows us to reduce the MKL problem to a simple
optimization routine that yields provable convergence as well as quality
guarantees. As a result our method scales efficiently to much larger data sets
than most prior methods can handle. Empirical evaluation on eleven datasets
shows that we are significantly faster and even compare favorably with a
uniform unweighted combination of kernels.
| [
"['John Moeller' 'Parasaran Raman' 'Avishek Saha'\n 'Suresh Venkatasubramanian']",
"John Moeller, Parasaran Raman, Avishek Saha, Suresh Venkatasubramanian"
] |
cs.LG stat.ML | null | 1206.5766 | null | null | http://arxiv.org/pdf/1206.5766v4 | 2012-10-28T07:03:15Z | 2012-06-25T18:49:44Z | Learning mixtures of spherical Gaussians: moment methods and spectral
decompositions | This work provides a computationally efficient and statistically consistent
moment-based estimator for mixtures of spherical Gaussians. Under the condition
that component means are in general position, a simple spectral decomposition
technique yields consistent parameter estimates from low-order observable
moments, without additional minimum separation assumptions needed by previous
computationally efficient estimation procedures. Thus computational and
information-theoretic barriers to efficient estimation in mixture models are
precluded when the mixture components have means in general position and
spherical covariances. Some connections are made to estimation problems related
to independent component analysis.
| [
"['Daniel Hsu' 'Sham M. Kakade']",
"Daniel Hsu, Sham M. Kakade"
] |
cs.LG cs.IT math.IT | null | 1206.5882 | null | null | http://arxiv.org/pdf/1206.5882v1 | 2012-06-26T05:10:36Z | 2012-06-26T05:10:36Z | Exact Recovery of Sparsely-Used Dictionaries | We consider the problem of learning sparsely used dictionaries with an
arbitrary square dictionary and a random, sparse coefficient matrix. We prove
that $O (n \log n)$ samples are sufficient to uniquely determine the
coefficient matrix. Based on this proof, we design a polynomial-time algorithm,
called Exact Recovery of Sparsely-Used Dictionaries (ER-SpUD), and prove that
it probably recovers the dictionary and coefficient matrix when the coefficient
matrix is sufficiently sparse. Simulation results show that ER-SpUD reveals the
true dictionary as well as the coefficients with probability higher than many
state-of-the-art algorithms.
| [
"Daniel A. Spielman, Huan Wang, John Wright",
"['Daniel A. Spielman' 'Huan Wang' 'John Wright']"
] |
cs.LG | null | 1206.5915 | null | null | http://arxiv.org/pdf/1206.5915v1 | 2012-06-26T08:29:43Z | 2012-06-26T08:29:43Z | Graph Based Classification Methods Using Inaccurate External Classifier
Information | In this paper we consider the problem of collectively classifying entities
where relational information is available across the entities. In practice,
an inaccurate class distribution for each entity is often available from another
(external) classifier. For example this distribution could come from a
classifier built using content features or a simple dictionary. Given the
relational and inaccurate external classifier information, we consider two
graph based settings in which the problem of collective classification can be
solved. In the first setting the class distribution is used to fix labels to a
subset of nodes and the labels for the remaining nodes are obtained like in a
transductive setting. In the other setting the class distributions of all nodes
are used to define the fitting function part of a graph regularized objective
function. We define a generalized objective function that handles both the
settings. Methods like harmonic Gaussian field and local-global consistency
(LGC) reported in the literature can be seen as special cases. We extend the
LGC and weighted vote relational neighbor classification (WvRN) methods to
support usage of external classifier information. We also propose an efficient
least squares regularization (LSR) based method and relate it to information
regularization methods. All the methods are evaluated on several benchmark and
real world datasets. Considering together speed, robustness and accuracy,
experimental results indicate that the LSR and WvRN-extension methods perform
better than other methods.
| [
"Sundararajan Sellamanickam, Sathiya Keerthi Selvaraj",
"['Sundararajan Sellamanickam' 'Sathiya Keerthi Selvaraj']"
] |
cs.LG stat.ML | null | 1206.6015 | null | null | http://arxiv.org/pdf/1206.6015v1 | 2012-06-26T14:56:33Z | 2012-06-26T14:56:33Z | Transductive Classification Methods for Mixed Graphs | In this paper we provide a principled approach to solve a transductive
classification problem involving a similar graph (edges tend to connect nodes
with same labels) and a dissimilar graph (edges tend to connect nodes with
opposing labels). Most of the existing methods, e.g., Information
Regularization (IR), Weighted vote Relational Neighbor classifier (WvRN) etc,
assume that the given graph is only a similar graph. We extend the IR and WvRN
methods to deal with mixed graphs. We evaluate the proposed extensions on
several benchmark datasets as well as two real world datasets and demonstrate
the usefulness of our ideas.
| [
"Sundararajan Sellamanickam, Sathiya Keerthi Selvaraj",
"['Sundararajan Sellamanickam' 'Sathiya Keerthi Selvaraj']"
] |
cs.LG stat.ML | null | 1206.6030 | null | null | http://arxiv.org/pdf/1206.6030v1 | 2012-06-26T15:58:21Z | 2012-06-26T15:58:21Z | An Additive Model View to Sparse Gaussian Process Classifier Design | We consider the problem of designing a sparse Gaussian process classifier
(SGPC) that generalizes well. Viewing SGPC design as constructing an additive
model like in boosting, we present an efficient and effective SGPC design
method to perform a stage-wise optimization of a predictive loss function. We
introduce new methods for two key components viz., site parameter estimation
and basis vector selection in any SGPC design. The proposed adaptive sampling
based basis vector selection method aids in achieving improved generalization
performance at a reduced computational cost. This method can also be used in
conjunction with any other site parameter estimation methods. It has similar
computational and storage complexities as the well-known information vector
machine and is suitable for large datasets. The hyperparameters can be
determined by optimizing a predictive loss function. The experimental results
show better generalization performance of the proposed basis vector selection
method on several benchmark datasets, particularly for relatively smaller basis
vector set sizes or on difficult datasets.
| [
"['Sundararajan Sellamanickam' 'Shirish Shevade']",
"Sundararajan Sellamanickam, Shirish Shevade"
] |
cs.LG stat.ML | null | 1206.6038 | null | null | http://arxiv.org/pdf/1206.6038v1 | 2012-06-26T16:19:51Z | 2012-06-26T16:19:51Z | Predictive Approaches For Gaussian Process Classifier Model Selection | In this paper we consider the problem of Gaussian process classifier (GPC)
model selection with different Leave-One-Out (LOO) Cross Validation (CV) based
optimization criteria and provide a practical algorithm using LOO predictive
distributions with such criteria to select hyperparameters. Apart from the
standard average negative logarithm of predictive probability (NLP), we also
consider smoothed versions of criteria such as F-measure and Weighted Error
Rate (WER), which are useful for handling imbalanced data. Unlike the
regression case, LOO predictive distributions for the classifier case are
intractable. We use approximate LOO predictive distributions arrived from
Expectation Propagation (EP) approximation. We conduct experiments on several
real world benchmark datasets. When the NLP criterion is used for optimizing
the hyperparameters, the predictive approaches show better or comparable NLP
generalization performance with existing GPC approaches. On the other hand,
when the F-measure criterion is used, the F-measure generalization performance
improves significantly on several datasets. Overall, the EP-based predictive
algorithm comes out as an excellent choice for GP classifier model selection
with different optimization criteria.
| [
"Sundararajan Sellamanickam, Sathiya Keerthi Selvaraj",
"['Sundararajan Sellamanickam' 'Sathiya Keerthi Selvaraj']"
] |
cs.LG cs.SY stat.ML | null | 1206.6141 | null | null | http://arxiv.org/pdf/1206.6141v1 | 2012-06-26T23:39:00Z | 2012-06-26T23:39:00Z | Directed Time Series Regression for Control | We propose directed time series regression, a new approach to estimating
parameters of time-series models for use in certainty equivalent model
predictive control. The approach combines merits of least squares regression
and empirical optimization. Through a computational study involving a
stochastic version of a well-known inverted pendulum balancing problem, we
demonstrate that directed time series regression can generate significant
improvements in controller performance over either of the aforementioned
alternatives.
| [
"Yi-Hao Kao and Benjamin Van Roy",
"['Yi-Hao Kao' 'Benjamin Van Roy']"
] |
cs.LG cs.DB | 10.1109/TKDE.2012.131 | 1206.6196 | null | null | http://arxiv.org/abs/1206.6196v1 | 2012-06-27T07:44:15Z | 2012-06-27T07:44:15Z | Discrete Elastic Inner Vector Spaces with Application in Time Series and
Sequence Mining | This paper proposes a framework dedicated to the construction of what we call
discrete elastic inner product allowing one to embed sets of non-uniformly
sampled multivariate time series or sequences of varying lengths into inner
product space structures. This framework is based on a recursive definition
that covers the case of multiple embedded time elastic dimensions. We prove
that such inner products exist in our general framework and show how a simple
instance of this inner product class operates on some prospective applications,
while generalizing the Euclidean inner product. Classification experiments
on time series and symbolic sequence datasets demonstrate the benefits we
can expect by embedding time series or sequences into elastic inner spaces
rather than into classical Euclidean spaces. These experiments show good
accuracy when compared to the Euclidean distance or even dynamic programming
algorithms, while maintaining linear algorithmic complexity at the exploitation
stage, although a quadratic indexing phase is required beforehand.
| [
"['Pierre-François Marteau' 'Nicolas Bonnel' 'Gilbas Ménier']",
"Pierre-Fran\\c{c}ois Marteau (IRISA), Nicolas Bonnel (IRISA), Gilbas\n M\\'enier (IRISA)"
] |
cs.LG cs.AI cs.DC cs.MA cs.RO | null | 1206.6230 | null | null | http://arxiv.org/pdf/1206.6230v2 | 2012-06-28T04:21:18Z | 2012-06-27T11:11:55Z | Decentralized Data Fusion and Active Sensing with Mobile Sensors for
Modeling and Predicting Spatiotemporal Traffic Phenomena | The problem of modeling and predicting spatiotemporal traffic phenomena over
an urban road network is important to many traffic applications such as
detecting and forecasting congestion hotspots. This paper presents a
decentralized data fusion and active sensing (D2FAS) algorithm for mobile
sensors to actively explore the road network to gather and assimilate the most
informative data for predicting the traffic phenomenon. We analyze the time and
communication complexity of D2FAS and demonstrate that it can scale well with a
large number of observations and sensors. We provide a theoretical guarantee on
its predictive performance to be equivalent to that of a sophisticated
centralized sparse approximation for the Gaussian process (GP) model: The
computation of such a sparse approximate GP model can thus be parallelized and
distributed among the mobile sensors (in a Google-like MapReduce paradigm),
thereby achieving efficient and scalable prediction. We also theoretically
guarantee its active sensing performance that improves under various practical
environmental conditions. Empirical evaluation on real-world urban road network
data shows that our D2FAS algorithm is significantly more time-efficient and
scalable than state-of-the-art centralized algorithms while achieving
comparable predictive performance.
| [
"['Jie Chen' 'Kian Hsiang Low' 'Colin Keng-Yan Tan' 'Ali Oran'\n 'Patrick Jaillet' 'John M. Dolan' 'Gaurav S. Sukhatme']",
"Jie Chen, Kian Hsiang Low, Colin Keng-Yan Tan, Ali Oran, Patrick\n Jaillet, John M. Dolan and Gaurav S. Sukhatme"
] |
cs.AI cs.LG | null | 1206.6262 | null | null | http://arxiv.org/pdf/1206.6262v1 | 2012-06-27T13:27:56Z | 2012-06-27T13:27:56Z | Scaling Life-long Off-policy Learning | We pursue a life-long learning approach to artificial intelligence that makes
extensive use of reinforcement learning algorithms. We build on our prior work
with general value functions (GVFs) and the Horde architecture. GVFs have been
shown to be able to represent a wide variety of facts about the world's dynamics that
may be useful to a long-lived agent (Sutton et al. 2011). We have also
previously shown scaling - that thousands of on-policy GVFs can be learned
accurately in real-time on a mobile robot (Modayil, White & Sutton 2011). That
work was limited in that it learned about only one policy at a time, whereas
the greatest potential benefits of life-long learning come from learning about
many policies in parallel, as we explore in this paper. Many new challenges
arise in this off-policy learning setting. To deal with convergence and
efficiency challenges, we utilize the recently introduced GTD({\lambda})
algorithm. We show that GTD({\lambda}) with tile coding can simultaneously
learn hundreds of predictions for five simple target policies while following a
single random behavior policy, assessing accuracy with interspersed on-policy
tests. To escape the need for the tests, which preclude further scaling, we
introduce and empirically validate two online estimators of the off-policy
objective (MSPBE). Finally, we use the more efficient of the two estimators to
demonstrate off-policy learning at scale - the learning of value functions for
one thousand policies in real time on a physical robot. This ability
constitutes a significant step towards scaling life-long off-policy learning.
| [
"Adam White, Joseph Modayil, and Richard S. Sutton",
"['Adam White' 'Joseph Modayil' 'Richard S. Sutton']"
] |
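The abstract above (1206.6262) relies on the GTD(lambda) algorithm with linear, tile-coded features. The sketch below implements the commonly cited form of the GTD(lambda) update with importance-sampling ratios; the step sizes, the made-up features, and the class interface are assumptions for illustration, and the exact update used in the paper should be taken from the original GTD(lambda) reference.

```python
import numpy as np

class GTDLambda:
    """Illustrative GTD(lambda) learner with linear function approximation.
    The update equations follow the commonly cited form of the algorithm;
    the step sizes and discount are illustrative choices."""

    def __init__(self, n_features, alpha=0.01, beta=0.005, gamma=0.99, lam=0.9):
        self.theta = np.zeros(n_features)   # value-function weights
        self.w = np.zeros(n_features)       # auxiliary correction weights
        self.e = np.zeros(n_features)       # eligibility trace
        self.alpha, self.beta, self.gamma, self.lam = alpha, beta, gamma, lam

    def update(self, phi, reward, phi_next, rho):
        """One off-policy step; rho is the importance-sampling ratio
        pi(a|s) / mu(a|s) of the target over the behavior policy."""
        delta = reward + self.gamma * self.theta @ phi_next - self.theta @ phi
        self.e = rho * (self.gamma * self.lam * self.e + phi)
        self.theta += self.alpha * (delta * self.e
                                    - self.gamma * (1 - self.lam) * (self.w @ self.e) * phi_next)
        self.w += self.beta * (delta * self.e - (self.w @ phi) * phi)
        return delta

# Usage on made-up transitions with 8 binary (tile-coding-like) features.
learner = GTDLambda(n_features=8)
rng = np.random.default_rng(0)
phi = rng.integers(0, 2, 8).astype(float)
for _ in range(100):
    phi_next = rng.integers(0, 2, 8).astype(float)
    learner.update(phi, reward=rng.normal(), phi_next=phi_next, rho=rng.uniform(0.5, 1.5))
    phi = phi_next
```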
stat.ML cs.LG | null | 1206.6361 | null | null | http://arxiv.org/pdf/1206.6361v1 | 2012-06-27T18:37:50Z | 2012-06-27T18:37:50Z | Learning Markov Network Structure using Brownian Distance Covariance | In this paper, we present a simple non-parametric method for learning the
structure of undirected graphs from data drawn from an underlying unknown
distribution. We propose to use Brownian distance covariance to estimate the
conditional independences between the random variables and to encode the
pairwise Markov graph. This framework can be applied in the high-dimensional
setting, where the number of parameters may be much larger than the sample size.
| [
"['Ehsan Khoshgnauz']",
"Ehsan Khoshgnauz"
] |
cs.LG stat.CO stat.ML | null | 1206.6380 | null | null | http://arxiv.org/pdf/1206.6380v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring | In this paper we address the following question: Can we approximately sample
from a Bayesian posterior distribution if we are only allowed to touch a small
mini-batch of data-items for every sample we generate? An algorithm based on
the Langevin equation with stochastic gradients (SGLD) was previously proposed
to solve this, but its mixing rate was slow. By leveraging the Bayesian Central
Limit Theorem, we extend the SGLD algorithm so that at high mixing rates it
will sample from a normal approximation of the posterior, while for slow mixing
rates it will mimic the behavior of SGLD with a pre-conditioner matrix. As a
bonus, the proposed algorithm is reminiscent of Fisher scoring (with stochastic
gradients) and is, as such, an efficient optimizer during burn-in.
| [
"Sungjin Ahn (UC Irvine), Anoop Korattikara (UC Irvine), Max Welling\n (UC Irvine)",
"['Sungjin Ahn' 'Anoop Korattikara' 'Max Welling']"
] |
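For context on the SGLD baseline that the abstract above (1206.6380) extends, the sketch below runs plain stochastic gradient Langevin dynamics on a toy conjugate model (a Gaussian mean with a Gaussian prior). The model, step size, and batch size are illustrative choices; the paper's contribution, the Fisher-scoring preconditioner, is not implemented here.

```python
import numpy as np

def sgld_gaussian_mean(x, n_steps=5000, batch_size=10, eps=1e-3, prior_var=10.0):
    """Plain SGLD baseline: Langevin dynamics with mini-batch gradients.
    Toy model: x_i ~ N(theta, 1), theta ~ N(0, prior_var)."""
    N = len(x)
    theta = 0.0
    samples = []
    rng = np.random.default_rng(0)
    for _ in range(n_steps):
        batch = rng.choice(x, size=batch_size, replace=False)
        grad_log_prior = -theta / prior_var
        grad_log_lik = (N / batch_size) * np.sum(batch - theta)   # rescaled mini-batch gradient
        # Half-step of the (stochastic) gradient plus injected Gaussian noise.
        theta += 0.5 * eps * (grad_log_prior + grad_log_lik) + rng.normal(0.0, np.sqrt(eps))
        samples.append(theta)
    return np.array(samples)

data = np.random.default_rng(1).normal(2.0, 1.0, size=1000)
draws = sgld_gaussian_mean(data)
print(draws[1000:].mean(), draws[1000:].std())   # roughly the posterior mean and spread
```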
cs.LG stat.ML | null | 1206.6381 | null | null | http://arxiv.org/pdf/1206.6381v2 | 2012-07-09T08:36:42Z | 2012-06-27T19:59:59Z | Shortest path distance in random k-nearest neighbor graphs | Consider a weighted or unweighted k-nearest neighbor graph that has been
built on n data points drawn randomly according to some density p on R^d. We
study the convergence of the shortest path distance in such graphs as the
sample size tends to infinity. We prove that for unweighted kNN graphs, this
distance converges to an unpleasant distance function on the underlying space
whose properties are detrimental to machine learning. We also study the
behavior of the shortest path distance in weighted kNN graphs.
| [
"['Morteza Alamgir' 'Ulrike von Luxburg']",
"Morteza Alamgir (Max Planck Institute for Intelligent Systems), Ulrike\n von Luxburg (Max Planck Institute for Intelligent Systems and University of\n Hamburg)"
] |
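A minimal way to reproduce the objects studied in the abstract above (1206.6381): build weighted and unweighted kNN graphs on a random sample and compare their shortest-path distances. The sample size, dimension, and number of neighbors are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))               # n points drawn from a density on R^2

# Weighted kNN graph: edge weights are Euclidean distances between neighbors.
G_w = kneighbors_graph(X, n_neighbors=10, mode='distance')
# Unweighted kNN graph: every edge has weight 1, so path length counts hops.
G_u = kneighbors_graph(X, n_neighbors=10, mode='connectivity')

d_weighted = shortest_path(G_w, method='D', directed=False)
d_unweighted = shortest_path(G_u, method='D', directed=False)

print("weighted SP distance 0->1:", d_weighted[0, 1])
print("unweighted SP (hops) 0->1:", d_unweighted[0, 1])
```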
cs.LG stat.ML | null | 1206.6382 | null | null | http://arxiv.org/pdf/1206.6382v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | High-Dimensional Covariance Decomposition into Sparse Markov and
Independence Domains | In this paper, we present a novel framework incorporating a combination of
sparse models in different domains. We posit the observed data as generated
from a linear combination of a sparse Gaussian Markov model (with a sparse
precision matrix) and a sparse Gaussian independence model (with a sparse
covariance matrix). We provide efficient methods for decomposition of the data
into two domains, viz., the Markov and independence domains. We characterize a set
of sufficient conditions for identifiability and model consistency. Our
decomposition method is based on a simple modification of the popular
$\ell_1$-penalized maximum-likelihood estimator ($\ell_1$-MLE). We establish
that our estimator is consistent in both the domains, i.e., it successfully
recovers the supports of both Markov and independence models, when the number
of samples $n$ scales as $n = \Omega(d^2 \log p)$, where $p$ is the number of
variables and $d$ is the maximum node degree in the Markov model. Our
conditions for recovery are comparable to those of $\ell_1$-MLE for consistent
estimation of a sparse Markov model, and thus, we guarantee successful
high-dimensional estimation of a richer class of models under comparable
conditions. Our experiments validate these results and also demonstrate that
our models have better inference accuracy under simple algorithms such as loopy
belief propagation.
| [
"['Majid Janzamin' 'Animashree Anandkumar']",
"Majid Janzamin (UC Irvine), Animashree Anandkumar (UC Irvine)"
] |
cs.LG stat.ML | null | 1206.6383 | null | null | http://arxiv.org/pdf/1206.6383v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Feature Selection via Probabilistic Outputs | This paper investigates two feature-scoring criteria that make use of
estimated class probabilities: one method proposed by \citet{shen} and a
complementary approach proposed below. We develop a theoretical framework to
analyze each criterion and show that both estimate the spread (across all
values of a given feature) of the probability that an example belongs to the
positive class. Based on our analysis, we predict when each scoring technique
will be advantageous over the other and give empirical results validating our
predictions.
| [
"Andrea Danyluk (Williams College), Nicholas Arnosti (Stanford\n University)",
"['Andrea Danyluk' 'Nicholas Arnosti']"
] |
cs.LG stat.ML | null | 1206.6384 | null | null | http://arxiv.org/pdf/1206.6384v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Efficient and Practical Stochastic Subgradient Descent for Nuclear Norm
Regularization | We describe novel subgradient methods for a broad class of matrix
optimization problems involving nuclear norm regularization. Unlike existing
approaches, our method executes very cheap iterations by combining low-rank
stochastic subgradients with efficient incremental SVD updates, made possible
by highly optimized and parallelizable dense linear algebra operations on small
matrices. Our practical algorithms always maintain a low-rank factorization of
iterates that can be conveniently held in memory and efficiently multiplied to
generate predictions in matrix completion settings. Empirical comparisons
confirm that our approach is highly competitive with several recently proposed
state-of-the-art solvers for such problems.
| [
"['Haim Avron' 'Satyen Kale' 'Shiva Kasiviswanathan' 'Vikas Sindhwani']",
"Haim Avron (IBM T.J. Watson Research Center), Satyen Kale (IBM T.J.\n Watson Research Center), Shiva Kasiviswanathan (IBM T.J. Watson Research\n Center), Vikas Sindhwani (IBM T.J. Watson Research Center)"
] |
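To make the nuclear-norm subgradient concrete for the setting of the abstract above (1206.6384), the sketch below takes dense, unoptimized subgradient steps on a small matrix-completion objective, using the fact that U V^T from a thin SVD is a subgradient of the nuclear norm. The paper's actual algorithm uses low-rank stochastic subgradients with incremental SVD updates; only the underlying subgradient step is shown here.

```python
import numpy as np

def nuclear_norm_subgradient_step(X, M, mask, lam, step):
    """One dense subgradient step for
        min_X 0.5 * || P_Omega(X - M) ||_F^2 + lam * ||X||_*,
    where a subgradient of ||X||_* at X is U @ Vt from a thin SVD of X."""
    residual = mask * (X - M)                  # gradient of the smooth data-fit term
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return X - step * (residual + lam * U @ Vt)

rng = np.random.default_rng(0)
M = rng.normal(size=(30, 5)) @ rng.normal(size=(5, 30))    # rank-5 ground truth
mask = rng.uniform(size=M.shape) < 0.5                     # observed entries
X = np.zeros_like(M)
for t in range(200):
    X = nuclear_norm_subgradient_step(X, M, mask, lam=1.0, step=0.1 / np.sqrt(t + 1))
print("RMSE on unobserved entries:", np.sqrt(np.mean((X - M)[~mask] ** 2)))
```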
cs.LG stat.ME stat.ML | null | 1206.6385 | null | null | http://arxiv.org/pdf/1206.6385v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Improved Estimation in Time Varying Models | Locally adapted parameterizations of a model (such as locally weighted
regression) are expressive but often suffer from high variance. We describe an
approach for reducing the variance, based on the idea of estimating
simultaneously a transformed space for the model, as well as locally adapted
parameterizations in this new space. We present a new problem formulation that
captures this idea and illustrate it in the important context of time varying
models. We develop an algorithm for learning a set of bases for approximating a
time varying sparse network; each learned basis constitutes an archetypal
sparse network structure. We also provide an extension for learning task-driven
bases. We present empirical results on synthetic data sets, as well as on a BCI
EEG classification task.
| [
"['Doina Precup' 'Philip Bachman']",
"Doina Precup (McGill University), Philip Bachman (McGill University)"
] |
cs.LG cs.AI stat.ML | null | 1206.6386 | null | null | http://arxiv.org/pdf/1206.6386v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | How To Grade a Test Without Knowing the Answers --- A Bayesian Graphical
Model for Adaptive Crowdsourcing and Aptitude Testing | We propose a new probabilistic graphical model that jointly models the
difficulties of questions, the abilities of participants and the correct
answers to questions in aptitude testing and crowdsourcing settings. We devise
an active learning/adaptive testing scheme based on a greedy minimization of
expected model entropy, which allows a more efficient resource allocation by
dynamically choosing the next question to be asked based on the previous
responses. We present experimental results that confirm the ability of our
model to infer the required parameters and demonstrate that the adaptive
testing scheme requires fewer questions to obtain the same accuracy as a static
test scenario.
| [
"['Yoram Bachrach' 'Thore Graepel' 'Tom Minka' 'John Guiver']",
"Yoram Bachrach (Microsoft Research), Thore Graepel (Microsoft\n Research), Tom Minka (Microsoft Research), John Guiver (Microsoft Research)"
] |
cs.LG stat.ML | null | 1206.6387 | null | null | http://arxiv.org/pdf/1206.6387v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Fast classification using sparse decision DAGs | In this paper we propose an algorithm that builds sparse decision DAGs
(directed acyclic graphs) from a list of base classifiers provided by an
external learning method such as AdaBoost. The basic idea is to cast the DAG
design task as a Markov decision process. Each instance can decide to use or to
skip each base classifier, based on the current state of the classifier being
built. The result is a sparse decision DAG where the base classifiers are
selected in a data-dependent way. The method has a single hyperparameter with a
clear semantics of controlling the accuracy/speed trade-off. The algorithm is
competitive with state-of-the-art cascade detectors on three object-detection
benchmarks, and it clearly outperforms them when there is a small number of
base classifiers. Unlike cascades, it is also readily applicable for
multi-class classification. Using the multi-class setup, we show on a benchmark
web page ranking data set that we can significantly improve the decision speed
without harming the performance of the ranker.
| [
"['Djalel Benbouzid' 'Robert Busa-Fekete' 'Balazs Kegl']",
"Djalel Benbouzid (University of Paris-Sud / CNRS / IN2P3), Robert\n Busa-Fekete (LAL, CNRS), Balazs Kegl (CNRS / University of Paris-Sud)"
] |
cs.LG cs.SI stat.ML | null | 1206.6388 | null | null | http://arxiv.org/pdf/1206.6388v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Canonical Trends: Detecting Trend Setters in Web Data | Much information available on the web is copied, reused or rephrased. The
phenomenon that multiple web sources pick up certain information is often
called trend. A central problem in the context of web data mining is to detect
those web sources that are first to publish information which will give rise to
a trend. We present a simple and efficient method for finding trends dominating
a pool of web sources and identifying those web sources that publish the
information relevant to a trend before others. We validate our approach on real
data collected from influential technology news feeds.
| [
"['Felix Biessmann' 'Jens-Michalis Papaioannou' 'Mikio Braun'\n 'Andreas Harth']",
"Felix Biessmann (TU Berlin), Jens-Michalis Papaioannou (TU Berlin),\n Mikio Braun (TU Berlin), Andreas Harth (Karlsruhe Institue of Technology)"
] |
cs.LG cs.CR stat.ML | null | 1206.6389 | null | null | http://arxiv.org/pdf/1206.6389v3 | 2013-03-25T10:16:36Z | 2012-06-27T19:59:59Z | Poisoning Attacks against Support Vector Machines | We investigate a family of poisoning attacks against Support Vector Machines
(SVM). Such attacks inject specially crafted training data that increases the
SVM's test error. Central to the motivation for these attacks is the fact that
most learning algorithms assume that their training data comes from a natural
or well-behaved distribution. However, this assumption does not generally hold
in security-sensitive settings. As we demonstrate, an intelligent adversary
can, to some extent, predict the change of the SVM's decision function due to
malicious input and use this ability to construct malicious data. The proposed
attack uses a gradient ascent strategy in which the gradient is computed based
on properties of the SVM's optimal solution. This method can be kernelized and
enables the attack to be constructed in the input space even for non-linear
kernels. We experimentally demonstrate that our gradient ascent procedure
reliably identifies good local maxima of the non-convex validation error
surface, which significantly increases the classifier's test error.
| [
"['Battista Biggio' 'Blaine Nelson' 'Pavel Laskov']",
"Battista Biggio (University of Cagliari), Blaine Nelson (University of\n Tuebingen), Pavel Laskov (University of Tuebingen)"
] |
cs.AI cs.CE cs.LG | null | 1206.6390 | null | null | http://arxiv.org/pdf/1206.6390v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Incorporating Causal Prior Knowledge as Path-Constraints in Bayesian
Networks and Maximal Ancestral Graphs | We consider the incorporation of causal knowledge about the presence or
absence of (possibly indirect) causal relations into a causal model. Such
causal relations correspond to directed paths in a causal model. This type of
knowledge naturally arises from experimental data, among others. Specifically,
we consider the formalisms of Causal Bayesian Networks and Maximal Ancestral
Graphs and their Markov equivalence classes: Partially Directed Acyclic Graphs
and Partially Oriented Ancestral Graphs. We introduce sound and complete
procedures which are able to incorporate causal prior knowledge in such models.
In simulated experiments, we show that often considering even a few causal
facts leads to a significant number of new inferences. In a case study, we also
show how to use real experimental data to infer causal knowledge and
incorporate it into a real biological causal network. The code is available at
mensxmachina.org.
| [
"Giorgos Borboudakis (ICS FORTH), Ioannis Tsamardinos (University of\n Crete)",
"['Giorgos Borboudakis' 'Ioannis Tsamardinos']"
] |
stat.ME cs.LG stat.AP | null | 1206.6391 | null | null | http://arxiv.org/pdf/1206.6391v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Gaussian Process Quantile Regression using Expectation Propagation | Direct quantile regression involves estimating a given quantile of a response
variable as a function of input variables. We present a new framework for
direct quantile regression where a Gaussian process model is learned,
minimising the expected tilted loss function. The integration required in
learning is not analytically tractable so to speed up the learning we employ
the Expectation Propagation algorithm. We describe how this work relates to
other quantile regression methods and apply the method on both synthetic and
real data sets. The method is shown to be competitive with state of the art
methods whilst allowing for the leverage of the full Gaussian process
probabilistic framework.
| [
"['Alexis Boukouvalas' 'Remi Barillec' 'Dan Cornford']",
"Alexis Boukouvalas (Aston University), Remi Barillec (Aston\n University), Dan Cornford (Aston University)"
] |
cs.LG cs.SD stat.ML | null | 1206.6392 | null | null | http://arxiv.org/pdf/1206.6392v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Modeling Temporal Dependencies in High-Dimensional Sequences:
Application to Polyphonic Music Generation and Transcription | We investigate the problem of modeling symbolic sequences of polyphonic music
in a completely general piano-roll representation. We introduce a probabilistic
model based on distribution estimators conditioned on a recurrent neural
network that is able to discover temporal dependencies in high-dimensional
sequences. Our approach outperforms many traditional models of polyphonic music
on a variety of realistic datasets. We show how our musical language model can
serve as a symbolic prior to improve the accuracy of polyphonic transcription.
| [
"['Nicolas Boulanger-Lewandowski' 'Yoshua Bengio' 'Pascal Vincent']",
"Nicolas Boulanger-Lewandowski (Universite de Montreal), Yoshua Bengio\n (Universite de Montreal), Pascal Vincent (Universite de Montreal)"
] |
cs.LG stat.ML | null | 1206.6393 | null | null | http://arxiv.org/pdf/1206.6393v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Local Loss Optimization in Operator Models: A New Insight into Spectral
Learning | This paper re-visits the spectral method for learning latent variable models
defined in terms of observable operators. We give a new perspective on the
method, showing that operators can be recovered by minimizing a loss defined on
a finite subset of the domain. A non-convex optimization similar to the
spectral method is derived. We also propose a regularized convex relaxation of
this optimization. We show that in practice the availability of a continuous
regularization parameter (in contrast with the discrete number of states in the
original method) allows a better trade-off between accuracy and model
complexity. We also prove that in general, a randomized strategy for choosing
the local loss will succeed with high probability.
| [
"['Borja Balle' 'Ariadna Quattoni' 'Xavier Carreras']",
"Borja Balle (UPC), Ariadna Quattoni (UPC), Xavier Carreras (UPC)"
] |
cs.LG cs.SI stat.ML | null | 1206.6394 | null | null | http://arxiv.org/pdf/1206.6394v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Nonparametric Link Prediction in Dynamic Networks | We propose a non-parametric link prediction algorithm for a sequence of graph
snapshots over time. The model predicts links based on the features of its
endpoints, as well as those of the local neighborhood around the endpoints.
This allows for different types of neighborhoods in a graph, each with its own
dynamics (e.g., growing or shrinking communities). We prove the consistency of
our estimator, and give a fast implementation based on locality-sensitive
hashing. Experiments with simulated as well as five real-world dynamic graphs
show that we outperform the state of the art, especially when sharp
fluctuations or non-linearities are present.
| [
"['Purnamrita Sarkar' 'Deepayan Chakrabarti' 'Michael Jordan']",
"Purnamrita Sarkar (UC Berkeley), Deepayan Chakrabarti (Facebook),\n Michael Jordan (UC Berkeley)"
] |
cs.LG cs.CR stat.ML | null | 1206.6395 | null | null | http://arxiv.org/pdf/1206.6395v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Convergence Rates for Differentially Private Statistical Estimation | Differential privacy is a cryptographically-motivated definition of privacy
which has gained significant attention over the past few years. Differentially
private solutions enforce privacy by adding random noise to a function computed
over the data, and the challenge in designing such algorithms is to control the
added noise in order to optimize the privacy-accuracy-sample size tradeoff.
This work studies differentially-private statistical estimation, and shows
upper and lower bounds on the convergence rates of differentially private
approximations to statistical estimators. Our results reveal a formal
connection between differential privacy and the notion of Gross Error
Sensitivity (GES) in robust statistics, by showing that the convergence rate of
any differentially private approximation to an estimator that is accurate over
a large class of distributions has to grow with the GES of the estimator. We
then provide an upper bound on the convergence rate of a differentially private
approximation to an estimator with bounded range and bounded GES. We show that
the bounded range condition is necessary if we wish to ensure a strict form of
differential privacy.
| [
"['Kamalika Chaudhuri' 'Daniel Hsu']",
"Kamalika Chaudhuri (UCSD), Daniel Hsu (Microsoft Research)"
] |
cs.LG stat.ML | null | 1206.6396 | null | null | http://arxiv.org/pdf/1206.6396v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Joint Optimization and Variable Selection of High-dimensional Gaussian
Processes | Maximizing high-dimensional, non-convex functions through noisy observations
is a notoriously hard problem, but one that arises in many applications. In
this paper, we tackle this challenge by modeling the unknown function as a
sample from a high-dimensional Gaussian process (GP) distribution. Assuming
that the unknown function only depends on few relevant variables, we show that
it is possible to perform joint variable selection and GP optimization. We
provide strong performance guarantees for our algorithm, bounding the sample
complexity of variable selection, as well as providing cumulative regret
bounds. We further provide empirical evidence on the effectiveness of our
algorithm on several benchmark optimization problems.
| [
"['Bo Chen' 'Rui Castro' 'Andreas Krause']",
"Bo Chen (Caltech), Rui Castro (Eindhoven University of Technology),\n Andreas Krause (ETH Zurich)"
] |
cs.LG stat.ML | null | 1206.6397 | null | null | http://arxiv.org/pdf/1206.6397v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Communications Inspired Linear Discriminant Analysis | We study the problem of supervised linear dimensionality reduction, taking an
information-theoretic viewpoint. The linear projection matrix is designed by
maximizing the mutual information between the projected signal and the class
label (based on a Shannon entropy measure). By harnessing a recent theoretical
result on the gradient of mutual information, the above optimization problem
can be solved directly using gradient descent, without requiring simplification
of the objective function. Theoretical analysis and empirical comparison are
made between the proposed method and two closely related methods (Linear
Discriminant Analysis and Information Discriminant Analysis), and comparisons
are also made with a method in which Renyi entropy is used to define the mutual
information (in this case the gradient may be computed simply, under a special
parameter setting). Relative to these alternative approaches, the proposed
method achieves promising results on real datasets.
| [
"['Minhua Chen' 'William Carson' 'Miguel Rodrigues' 'Robert Calderbank'\n 'Lawrence Carin']",
"Minhua Chen (Duke University), William Carson (PA Consulting Group,\n Cambridge Technology Centre), Miguel Rodrigues (University College London),\n Robert Calderbank (Duke University), Lawrence Carin (Duke University)"
] |
cs.LG stat.ML | null | 1206.6398 | null | null | http://arxiv.org/pdf/1206.6398v2 | 2012-09-03T16:05:45Z | 2012-06-27T19:59:59Z | Learning Parameterized Skills | We introduce a method for constructing skills capable of solving tasks drawn
from a distribution of parameterized reinforcement learning problems. The
method draws example tasks from a distribution of interest and uses the
corresponding learned policies to estimate the topology of the
lower-dimensional piecewise-smooth manifold on which the skill policies lie.
This manifold models how policy parameters change as task parameters vary. The
method identifies the number of charts that compose the manifold and then
applies non-linear regression in each chart to construct a parameterized skill
by predicting policy parameters from task parameters. We evaluate our method on
an underactuated simulated robotic arm tasked with learning to accurately throw
darts at a parameterized target location.
| [
"['Bruno Da Silva' 'George Konidaris' 'Andrew Barto']",
"Bruno Da Silva (UMass Amherst), George Konidaris (MIT), Andrew Barto\n (UMass Amherst)"
] |
cs.LG cs.AI stat.ML | null | 1206.6399 | null | null | http://arxiv.org/pdf/1206.6399v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Demand-Driven Clustering in Relational Domains for Predicting Adverse
Drug Events | Learning from electronic medical records (EMR) is challenging due to their
relational nature and the uncertain dependence between a patient's past and
future health status. Statistical relational learning is a natural fit for
analyzing EMRs but is less adept at handling their inherent latent structure,
such as connections between related medications or diseases. One way to capture
the latent structure is via a relational clustering of objects. We propose a
novel approach that, instead of pre-clustering the objects, performs a
demand-driven clustering during learning. We evaluate our algorithm on three
real-world tasks where the goal is to use EMRs to predict whether a patient
will have an adverse reaction to a medication. We find that our approach is
more accurate than performing no clustering, pre-clustering, and using
expert-constructed medical heterarchies.
| [
"['Jesse Davis' 'Vitor Santos Costa' 'Peggy Peissig' 'Michael Caldwell'\n 'Elizabeth Berg' 'David Page']",
"Jesse Davis (KU Leuven), Vitor Santos Costa (University of Porto),\n Peggy Peissig (Marshfield Clinic), Michael Caldwell (Marshfield Clinic),\n Elizabeth Berg (University of Wisconsin - Madison), David Page (University of\n Wisconsin - Madison)"
] |
cs.LG stat.ML | null | 1206.6400 | null | null | http://arxiv.org/pdf/1206.6400v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Online Bandit Learning against an Adaptive Adversary: from Regret to
Policy Regret | Online learning algorithms are designed to learn even when their input is
generated by an adversary. The widely-accepted formal definition of an online
algorithm's ability to learn is the game-theoretic notion of regret. We argue
that the standard definition of regret becomes inadequate if the adversary is
allowed to adapt to the online algorithm's actions. We define the alternative
notion of policy regret, which attempts to provide a more meaningful way to
measure an online algorithm's performance against adaptive adversaries.
Focusing on the online bandit setting, we show that no bandit algorithm can
guarantee a sublinear policy regret against an adaptive adversary with
unbounded memory. On the other hand, if the adversary's memory is bounded, we
present a general technique that converts any bandit algorithm with a sublinear
regret bound into an algorithm with a sublinear policy regret bound. We extend
this result to other variants of regret, such as switching regret, internal
regret, and swap regret.
| [
"Raman Arora (TTIC), Ofer Dekel (Microsoft Research), Ambuj Tewari\n (University of Texas)",
"['Raman Arora' 'Ofer Dekel' 'Ambuj Tewari']"
] |
cs.LG stat.ML | null | 1206.6401 | null | null | http://arxiv.org/pdf/1206.6401v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Consistent Multilabel Ranking through Univariate Losses | We consider the problem of rank loss minimization in the setting of
multilabel classification, which is usually tackled by means of convex
surrogate losses defined on pairs of labels. Very recently, this approach was
put into question by a negative result showing that commonly used pairwise
surrogate losses, such as exponential and logistic losses, are inconsistent. In
this paper, we show a positive result which is arguably surprising in light of
the previous one: the simpler univariate variants of exponential and logistic
surrogates (i.e., defined on single labels) are consistent for rank loss
minimization. Instead of directly proving convergence, we give a much stronger
result by deriving regret bounds and convergence rates. The proposed losses
suggest efficient and scalable algorithms, which are tested experimentally.
| [
"['Krzysztof Dembczynski' 'Wojciech Kotlowski' 'Eyke Huellermeier']",
"Krzysztof Dembczynski (Poznan University of Technology), Wojciech\n Kotlowski (Poznan University of Technology), Eyke Huellermeier (Marburg\n University)"
] |
cs.LG stat.ML | null | 1206.6402 | null | null | http://arxiv.org/pdf/1206.6402v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Parallelizing Exploration-Exploitation Tradeoffs with Gaussian Process
Bandit Optimization | Can one parallelize complex exploration-exploitation tradeoffs? As an
example, consider the problem of optimal high-throughput experimental design,
where we wish to sequentially design batches of experiments in order to
simultaneously learn a surrogate function mapping stimulus to response and
identify the maximum of the function. We formalize the task as a multi-armed
bandit problem, where the unknown payoff function is sampled from a Gaussian
process (GP), and instead of a single arm, in each round we pull a batch of
several arms in parallel. We develop GP-BUCB, a principled algorithm for
choosing batches, based on the GP-UCB algorithm for sequential GP optimization.
We prove a surprising result: as compared to the sequential approach, the
cumulative regret of the parallel algorithm only increases by a constant factor
independent of the batch size B. Our results provide rigorous theoretical
support for exploiting parallelism in Bayesian global optimization. We
demonstrate the effectiveness of our approach on two real-world applications.
| [
"Thomas Desautels (California Inst. of Technology), Andreas Krause (ETH\n Zurich), Joel Burdick (California Inst. of Technology)",
"['Thomas Desautels' 'Andreas Krause' 'Joel Burdick']"
] |
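The batch-selection idea in the abstract above (1206.6402) can be sketched as UCB selection with hallucinated observations: each point chosen within a batch is fed back at its current posterior mean, which leaves the mean unchanged but shrinks the predictive variance for later picks. The kernel, the exploration parameter beta, and the helper name `gp_bucb_batch` are assumptions for illustration, not the paper's exact GP-BUCB specification.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gp_bucb_batch(X_obs, y_obs, candidates, batch_size=5, beta=4.0):
    """Select a batch of points UCB-style with hallucinated observations."""
    X_h, y_h = list(X_obs), list(y_obs)
    batch = []
    for _ in range(batch_size):
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6)
        gp.fit(np.array(X_h), np.array(y_h))
        mu, sigma = gp.predict(candidates, return_std=True)
        best = int(np.argmax(mu + np.sqrt(beta) * sigma))
        batch.append(candidates[best])
        X_h.append(candidates[best])           # hallucinate the observation ...
        y_h.append(mu[best])                   # ... at the current posterior mean
    return np.array(batch)

rng = np.random.default_rng(0)
X_obs = rng.uniform(size=(8, 1))
y_obs = np.sin(6 * X_obs[:, 0]) + 0.1 * rng.normal(size=8)
candidates = np.linspace(0, 1, 200).reshape(-1, 1)
print(gp_bucb_batch(X_obs, y_obs, candidates))
```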
cs.CL cs.LG | null | 1206.6403 | null | null | http://arxiv.org/pdf/1206.6403v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Two Step CCA: A new spectral method for estimating vector models of
words | Unlabeled data is often used to learn representations which can be used to
supplement baseline features in a supervised learner. For example, for text
applications where the words lie in a very high dimensional space (the size of
the vocabulary), one can learn a low rank "dictionary" by an
eigen-decomposition of the word co-occurrence matrix (e.g. using PCA or CCA).
In this paper, we present a new spectral method based on CCA to learn an
eigenword dictionary. Our improved procedure computes two sets of CCAs, the
first one between the left and right contexts of the given word and the second
one between the projections resulting from this CCA and the word itself. We
prove theoretically that this two-step procedure has lower sample complexity
than the simple single step procedure and also illustrate the empirical
efficacy of our approach and the richness of representations learned by our Two
Step CCA (TSCCA) procedure on the tasks of POS tagging and sentiment
classification.
| [
"Paramveer Dhillon (University of Pennsylvania), Jordan Rodu\n (University of Pennsylvania), Dean Foster (University of Pennsylvania), Lyle\n Ungar (University of Pennsylvania)",
"['Paramveer Dhillon' 'Jordan Rodu' 'Dean Foster' 'Lyle Ungar']"
] |
cs.LG cs.CY math.OC stat.ML | null | 1206.6404 | null | null | http://arxiv.org/pdf/1206.6404v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Policy Gradients with Variance Related Risk Criteria | Managing risk in dynamic decision problems is of cardinal importance in many
fields such as finance and process control. The most common approach to
defining risk is through various variance related criteria such as the Sharpe
Ratio or the standard deviation adjusted reward. It is known that optimizing
many of the variance related risk criteria is NP-hard. In this paper we devise
a framework for local policy gradient style algorithms for reinforcement
learning for variance related criteria. Our starting point is a new formula for
the variance of the cost-to-go in episodic tasks. Using this formula we develop
policy gradient algorithms for criteria that involve both the expected cost and
the variance of the cost. We prove the convergence of these algorithms to local
minima and demonstrate their applicability in a portfolio planning problem.
| [
"['Dotan Di Castro' 'Aviv Tamar' 'Shie Mannor']",
"Dotan Di Castro (Technion), Aviv Tamar (Technion), Shie Mannor\n (Technion)"
] |
cs.LG cs.AI stat.ML | null | 1206.6405 | null | null | http://arxiv.org/pdf/1206.6405v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Bounded Planning in Passive POMDPs | In Passive POMDPs actions do not affect the world state, but still incur
costs. When the agent is bounded by information-processing constraints, it can
only keep an approximation of the belief. We present a variational principle
for the problem of maintaining the information which is most useful for
minimizing the cost, and introduce an efficient and simple algorithm for
finding an optimum.
| [
"['Roy Fox' 'Naftali Tishby']",
"Roy Fox (Hebrew University), Naftali Tishby (Hebrew University)"
] |
cs.LG stat.ML | null | 1206.6406 | null | null | http://arxiv.org/pdf/1206.6406v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Bayesian Optimal Active Search and Surveying | We consider two active binary-classification problems with atypical
objectives. In the first, active search, our goal is to actively uncover as
many members of a given class as possible. In the second, active surveying, our
goal is to actively query points to ultimately predict the proportion of a
given class. Numerous real-world problems can be framed in these terms, and in
either case typical model-based concerns such as generalization error are only
of secondary importance.
We approach these problems via Bayesian decision theory; after choosing
natural utility functions, we derive the optimal policies. We provide three
contributions. In addition to introducing the active surveying problem, we
extend previous work on active search in two ways. First, we prove a novel
theoretical result, that less-myopic approximations to the optimal policy can
outperform more-myopic approximations by any arbitrary degree. We then derive
bounds that for certain models allow us to reduce (in practice dramatically)
the exponential search space required by a naive implementation of the optimal
policy, enabling further lookahead while still ensuring that optimal decisions
are always made.
| [
"['Roman Garnett' 'Yamuna Krishnamurthy' 'Xuehan Xiong' 'Jeff Schneider'\n 'Richard Mann']",
"Roman Garnett (Carnegie Mellon University), Yamuna Krishnamurthy\n (Carnegie Mellon University), Xuehan Xiong (Carnegie Mellon University), Jeff\n Schneider (Carnegie Mellon University), Richard Mann (Uppsala Universitet)"
] |
cs.LG stat.ML | null | 1206.6407 | null | null | http://arxiv.org/pdf/1206.6407v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Large-Scale Feature Learning With Spike-and-Slab Sparse Coding | We consider the problem of object recognition with a large number of classes.
In order to overcome the low amount of labeled examples available in this
setting, we introduce a new feature learning and extraction procedure based on
a factor model we call spike-and-slab sparse coding (S3C). Prior work on S3C
has not prioritized the ability to exploit parallel architectures and scale S3C
to the enormous problem sizes needed for object recognition. We present a novel
inference procedure appropriate for use with GPUs, which allows us to
dramatically increase both the training set size and the amount of latent
factors that S3C may be trained with. We demonstrate that this approach
improves upon the supervised learning capabilities of both sparse coding and
the spike-and-slab Restricted Boltzmann Machine (ssRBM) on the CIFAR-10
dataset. We use the CIFAR-100 dataset to demonstrate that our method scales to
large numbers of classes better than previous methods. Finally, we use our
method to win the NIPS 2011 Workshop on Challenges In Learning Hierarchical
Models' Transfer Learning Challenge.
| [
"['Ian Goodfellow' 'Aaron Courville' 'Yoshua Bengio']",
"Ian Goodfellow (Universite de Montreal), Aaron Courville (Universite\n de Montreal), Yoshua Bengio (Universite de Montreal)"
] |
stat.ME astro-ph.IM cs.LG | null | 1206.6408 | null | null | http://arxiv.org/pdf/1206.6408v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Sequential Nonparametric Regression | We present algorithms for nonparametric regression in settings where the data
are obtained sequentially. While traditional estimators select bandwidths that
depend upon the sample size, for sequential data the effective sample size is
dynamically changing. We propose a linear time algorithm that adjusts the
bandwidth for each new data point, and show that the estimator achieves the
optimal minimax rate of convergence. We also propose the use of online expert
mixing algorithms to adapt to unknown smoothness of the regression function. We
provide simulations that confirm the theoretical results, and demonstrate the
effectiveness of the methods.
| [
"Haijie Gu (Carnegie Mellon University), John Lafferty (University of\n Chicago)",
"['Haijie Gu' 'John Lafferty']"
] |
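One way to picture the bandwidth adaptation discussed in the abstract above (1206.6408) is an online Nadaraya-Watson estimator in which each arriving point is assigned the bandwidth appropriate to the sample size at its arrival time. The sketch below is only a caricature of that idea, not the paper's estimator or its minimax-optimal tuning.

```python
import numpy as np

class OnlineKernelRegressor:
    """Sequential Nadaraya-Watson sketch: point t receives bandwidth
    h_t ~ t^(-1/(2+d)), mimicking adaptation to the growing sample size."""

    def __init__(self, d=1, c=1.0):
        self.xs, self.ys, self.hs = [], [], []
        self.d, self.c = d, c

    def add(self, x, y):
        t = len(self.xs) + 1
        self.xs.append(x); self.ys.append(y)
        self.hs.append(self.c * t ** (-1.0 / (2 + self.d)))

    def predict(self, x):
        xs, ys, hs = map(np.asarray, (self.xs, self.ys, self.hs))
        w = np.exp(-0.5 * ((x - xs) / hs) ** 2) / hs       # Gaussian kernel weights
        return np.sum(w * ys) / (np.sum(w) + 1e-12)

rng = np.random.default_rng(0)
reg = OnlineKernelRegressor()
for t in range(1, 2001):
    x = rng.uniform()
    reg.add(x, np.sin(2 * np.pi * x) + 0.3 * rng.normal())
print(reg.predict(0.25))   # should be close to sin(pi/2) = 1
```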
cs.LG cs.DC stat.ML | null | 1206.6409 | null | null | http://arxiv.org/pdf/1206.6409v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Scaling Up Coordinate Descent Algorithms for Large $\ell_1$
Regularization Problems | We present a generic framework for parallel coordinate descent (CD)
algorithms that includes, as special cases, the original sequential algorithms
Cyclic CD and Stochastic CD, as well as the recent parallel Shotgun algorithm.
We introduce two novel parallel algorithms that are also special
cases---Thread-Greedy CD and Coloring-Based CD---and give performance
measurements for an OpenMP implementation of these.
| [
"Chad Scherrer (Pacific Northwest National Lab), Mahantesh Halappanavar\n (Pacific Northwest National Lab), Ambuj Tewari (University of Texas), David\n Haglin (Pacific Northwest National Lab)",
"['Chad Scherrer' 'Mahantesh Halappanavar' 'Ambuj Tewari' 'David Haglin']"
] |
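For reference against the parallel framework in the abstract above (1206.6409), the sketch below implements the sequential Cyclic CD special case for the lasso, with the standard soft-thresholding coordinate update. Problem sizes and the regularization level are arbitrary; the parallel variants (Shotgun, Thread-Greedy, Coloring-Based) are not shown.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cyclic_cd_lasso(A, b, lam, n_sweeps=100):
    """Sequential cyclic coordinate descent for
        min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    n, p = A.shape
    x = np.zeros(p)
    col_sq = (A ** 2).sum(axis=0)
    r = b - A @ x                        # residual maintained incrementally
    for _ in range(n_sweeps):
        for j in range(p):
            r += A[:, j] * x[j]          # remove coordinate j's contribution
            x[j] = soft_threshold(A[:, j] @ r, lam) / col_sq[j]
            r -= A[:, j] * x[j]          # add it back with the new value
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 50))
x_true = np.zeros(50); x_true[:5] = 3 * rng.normal(size=5)
b = A @ x_true + 0.1 * rng.normal(size=200)
x_hat = cyclic_cd_lasso(A, b, lam=5.0)
print("nonzeros recovered:", np.nonzero(np.abs(x_hat) > 1e-6)[0])
```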
cs.LG stat.ML | null | 1206.6410 | null | null | http://arxiv.org/pdf/1206.6410v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | On the Partition Function and Random Maximum A-Posteriori Perturbations | In this paper we relate the partition function to the max-statistics of
random variables. In particular, we provide a novel framework for approximating
and bounding the partition function using MAP inference on randomly perturbed
models. As a result, we can use efficient MAP solvers such as graph-cuts to
evaluate the corresponding partition function. We show that our method excels
in the typical "high signal - high coupling" regime that results in ragged
energy landscapes difficult for alternative approaches.
| [
"['Tamir Hazan' 'Tommi Jaakkola']",
"Tamir Hazan (TTIC), Tommi Jaakkola (MIT)"
] |
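The identity behind the abstract above (1206.6410) can be checked directly on a model small enough to enumerate: with i.i.d. zero-mean Gumbel perturbations over full configurations, the expected value of the perturbed maximum equals log Z. The toy Ising-style model below is an illustrative assumption; the paper's point is to replace full perturbations with cheaper low-dimensional ones solved by MAP.

```python
import numpy as np
from itertools import product

# Tiny Ising-style model on 4 binary variables; small enough to enumerate.
rng = np.random.default_rng(0)
n = 4
unary = rng.normal(size=n)
pair = rng.normal(size=(n, n)); pair = np.triu(pair, 1)

def score(x):
    x = np.asarray(x, float)
    return unary @ x + x @ pair @ x          # theta(x): higher means more probable

configs = list(product([0, 1], repeat=n))
scores = np.array([score(c) for c in configs])
log_Z_exact = np.log(np.sum(np.exp(scores)))

# Gumbel-max identity: log Z = E[ max_x ( theta(x) + gamma_x ) ] with i.i.d.
# zero-mean Gumbel perturbations over full configurations.
n_samples = 20000
gumbel = rng.gumbel(size=(n_samples, len(configs))) - np.euler_gamma   # shift to zero mean
log_Z_perturb = np.mean(np.max(scores + gumbel, axis=1))

print("exact log Z:", log_Z_exact, " perturbation estimate:", log_Z_perturb)
```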
cs.LG cs.DB cs.IR stat.ML | null | 1206.6411 | null | null | http://arxiv.org/pdf/1206.6411v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | On the Difficulty of Nearest Neighbor Search | Fast approximate nearest neighbor (NN) search in large databases is becoming
popular. Several powerful learning-based formulations have been proposed
recently. However, not much attention has been paid to a more fundamental
question: how difficult is (approximate) nearest neighbor search in a given
data set? And which data properties affect the difficulty of nearest neighbor
search and how? This paper introduces the first concrete measure called
Relative Contrast that can be used to evaluate the influence of several crucial
data characteristics such as dimensionality, sparsity, and database size
simultaneously in arbitrary normed metric spaces. Moreover, we present a
theoretical analysis to prove how the difficulty measure (relative contrast)
determines/affects the complexity of Locality Sensitive Hashing, a popular
approximate NN search method. Relative contrast also provides an explanation
for a family of heuristic hashing algorithms with good practical performance
based on PCA. Finally, we show that most of the previous works in measuring NN
search meaningfulness/difficulty can be derived as special asymptotic cases for
dense vectors of the proposed measure.
| [
"['Junfeng He' 'Sanjiv Kumar' 'Shih-Fu Chang']",
"Junfeng He (Columbia University), Sanjiv Kumar (Google Research),\n Shih-Fu Chang (Columbia University)"
] |
cs.LG stat.ML | null | 1206.6412 | null | null | http://arxiv.org/pdf/1206.6412v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | A Simple Algorithm for Semi-supervised Learning with Improved
Generalization Error Bound | In this work, we develop a simple algorithm for semi-supervised regression.
The key idea is to use the top eigenfunctions of integral operator derived from
both labeled and unlabeled examples as the basis functions and learn the
prediction function by a simple linear regression. We show that under
appropriate assumptions about the integral operator, this approach is able to
achieve a regression error bound better than existing bounds for
supervised learning. We also verify the effectiveness of the proposed algorithm
by an empirical study.
| [
"Ming Ji (UIUC), Tianbao Yang (Michigan State University), Binbin Lin\n (Zhejiang University), Rong Jin (Michigan State University), Jiawei Han\n (UIUC)",
"['Ming Ji' 'Tianbao Yang' 'Binbin Lin' 'Rong Jin' 'Jiawei Han']"
] |
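A literal reading of the recipe in the abstract above (1206.6412) is easy to sketch: form a kernel matrix over labeled and unlabeled points, keep its top eigenvectors as basis functions, and fit them to the labels with ordinary least squares. The Gaussian kernel, its bandwidth, and the number of basis functions below are illustrative assumptions.

```python
import numpy as np

def semisup_spectral_regression(X_lab, y_lab, X_unlab, k=10, sigma=0.5):
    """Top eigenvectors of a kernel (integral-operator) matrix built from
    labeled AND unlabeled points, used as a basis for linear regression."""
    X = np.vstack([X_lab, X_unlab])
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma ** 2))
    vals, vecs = np.linalg.eigh(K)                  # ascending eigenvalues
    basis = vecs[:, -k:]                            # top-k eigenvectors as features
    n_lab = len(X_lab)
    coef, *_ = np.linalg.lstsq(basis[:n_lab], y_lab, rcond=None)
    return basis[n_lab:] @ coef                     # predictions for unlabeled points

rng = np.random.default_rng(0)
X_unlab = rng.uniform(-3, 3, size=(400, 1))
X_lab = rng.uniform(-3, 3, size=(20, 1))
y_lab = np.sin(X_lab[:, 0]) + 0.1 * rng.normal(size=20)
preds = semisup_spectral_regression(X_lab, y_lab, X_unlab)
print("RMSE on unlabeled:", np.sqrt(np.mean((preds - np.sin(X_unlab[:, 0])) ** 2)))
```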
cs.LG stat.ML | null | 1206.6413 | null | null | http://arxiv.org/pdf/1206.6413v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | A Convex Relaxation for Weakly Supervised Classifiers | This paper introduces a general multi-class approach to weakly supervised
classification. Inferring the labels and learning the parameters of the model
is usually done jointly through a block-coordinate descent algorithm such as
expectation-maximization (EM), which may lead to local minima. To avoid this
problem, we propose a cost function based on a convex relaxation of the
soft-max loss. We then propose an algorithm specifically designed to
efficiently solve the corresponding semidefinite program (SDP). Empirically,
our method compares favorably to standard ones on different datasets for
multiple instance learning and semi-supervised learning as well as on
clustering tasks.
| [
"Armand Joulin (INRIA - Ecole Normale Superieure), Francis Bach (INRIA\n - Ecole Normale Superieure)",
"['Armand Joulin' 'Francis Bach']"
] |
cs.LG cs.SI stat.ML | null | 1206.6414 | null | null | http://arxiv.org/pdf/1206.6414v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | The Nonparametric Metadata Dependent Relational Model | We introduce the nonparametric metadata dependent relational (NMDR) model, a
Bayesian nonparametric stochastic block model for network data. The NMDR allows
the entities associated with each node to have mixed membership in an unbounded
collection of latent communities. Learned regression models allow these
memberships to depend on, and be predicted from, arbitrary node metadata. We
develop efficient MCMC algorithms for learning NMDR models from partially
observed node relationships. Retrospective MCMC methods allow our sampler to
work directly with the infinite stick-breaking representation of the NMDR,
avoiding the need for finite truncations. Our results demonstrate recovery of
useful latent communities from real-world social and ecological networks, and
the usefulness of metadata in link prediction tasks.
| [
"Dae Il Kim (Brown University), Michael Hughes (Brown University), Erik\n Sudderth (Brown University)",
"['Dae Il Kim' 'Michael Hughes' 'Erik Sudderth']"
] |
cs.LG stat.ML | null | 1206.6415 | null | null | http://arxiv.org/pdf/1206.6415v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | The Big Data Bootstrap | The bootstrap provides a simple and powerful means of assessing the quality
of estimators. However, in settings involving large datasets, the computation
of bootstrap-based quantities can be prohibitively demanding. As an
alternative, we present the Bag of Little Bootstraps (BLB), a new procedure
which incorporates features of both the bootstrap and subsampling to obtain a
robust, computationally efficient means of assessing estimator quality. BLB is
well suited to modern parallel and distributed computing architectures and
retains the generic applicability, statistical efficiency, and favorable
theoretical properties of the bootstrap. We provide the results of an extensive
empirical and theoretical investigation of BLB's behavior, including a study of
its statistical correctness, its large-scale implementation and performance,
selection of hyperparameters, and performance on real data.
| [
"['Ariel Kleiner' 'Ameet Talwalkar' 'Purnamrita Sarkar' 'Michael Jordan']",
"Ariel Kleiner (UC Berkeley), Ameet Talwalkar (UC Berkeley), Purnamrita\n Sarkar (UC Berkeley), Michael Jordan (UC Berkeley)"
] |
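The Bag of Little Bootstraps procedure summarized above (1206.6415) is compact enough to sketch directly: draw small subsamples of b = n^gamma distinct points, give each bootstrap replicate full size n via multinomial weights, compute the quality measure within each subsample, and average. The choice of estimator (the sample mean) and of gamma below are illustrative.

```python
import numpy as np

def blb_std_of_mean(x, gamma=0.6, n_subsamples=10, n_boot=100, rng=None):
    """BLB sketch for the standard error of the sample mean.  Each subsample
    holds only b = n**gamma distinct points, but every bootstrap replicate is
    given full size n via multinomial weights."""
    rng = rng or np.random.default_rng(0)
    n = len(x)
    b = int(n ** gamma)
    per_subsample = []
    for _ in range(n_subsamples):
        sub = rng.choice(x, size=b, replace=False)
        stats = []
        for _ in range(n_boot):
            counts = rng.multinomial(n, np.ones(b) / b)        # weights summing to n
            stats.append(np.dot(counts, sub) / n)              # weighted mean
        per_subsample.append(np.std(stats))
    return np.mean(per_subsample)

x = np.random.default_rng(1).normal(size=100_000)
print("BLB std of mean:", blb_std_of_mean(x))
print("analytic std   :", 1 / np.sqrt(len(x)))
```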
cs.LG stat.ML | null | 1206.6416 | null | null | http://arxiv.org/pdf/1206.6416v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | An Infinite Latent Attribute Model for Network Data | Latent variable models for network data extract a summary of the relational
structure underlying an observed network. The simplest possible models
subdivide nodes of the network into clusters; the probability of a link between
any two nodes then depends only on their cluster assignment. Currently
available models can be classified by whether clusters are disjoint or are
allowed to overlap. These models can explain a "flat" clustering structure.
Hierarchical Bayesian models provide a natural approach to capture more complex
dependencies. We propose a model in which objects are characterised by a latent
feature vector. Each feature is itself partitioned into disjoint groups
(subclusters), corresponding to a second layer of hierarchy. In experimental
comparisons, the model achieves significantly improved predictive performance
on social and biological link prediction tasks. The results indicate that
models with a single layer hierarchy over-simplify real networks.
| [
"['Konstantina Palla' 'David Knowles' 'Zoubin Ghahramani']",
"Konstantina Palla (University of Cambridge), David Knowles (University\n of Cambridge), Zoubin Ghahramani (University of Cambridge)"
] |
cs.LG stat.ML | null | 1206.6417 | null | null | http://arxiv.org/pdf/1206.6417v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Learning Task Grouping and Overlap in Multi-task Learning | In the paradigm of multi-task learning, multiple related prediction tasks
are learned jointly, sharing information across the tasks. We propose a
framework for multi-task learning that enables one to selectively share the
information across the tasks. We assume that each task parameter vector is a
linear combination of a finite number of underlying basis tasks. The
coefficients of the linear combination are sparse in nature and the overlap
in the sparsity patterns of two tasks controls the amount of sharing across
these. Our model is based on the assumption that task parameters within a
group lie in a low-dimensional subspace but allows the tasks in different
groups to overlap with each other in one or more bases. Experimental results on
four datasets show that our approach outperforms competing methods.
| [
"['Abhishek Kumar' 'Hal Daume III']",
"Abhishek Kumar (University of Maryland), Hal Daume III (University of\n Maryland)"
] |
cs.LG cs.CV stat.ML | null | 1206.6418 | null | null | http://arxiv.org/pdf/1206.6418v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Learning Invariant Representations with Local Transformations | Learning invariant representations is an important problem in machine
learning and pattern recognition. In this paper, we present a novel framework
of transformation-invariant feature learning by incorporating linear
transformations into the feature learning algorithms. For example, we present
the transformation-invariant restricted Boltzmann machine that compactly
represents data by its weights and their transformations, which achieves
invariance of the feature representation via probabilistic max pooling. In
addition, we show that our transformation-invariant feature learning framework
can also be extended to other unsupervised learning methods, such as
autoencoders or sparse coding. We evaluate our method on several image
classification benchmark datasets, such as MNIST variations, CIFAR-10, and
STL-10, and show competitive or superior classification performance when
compared to the state-of-the-art. Furthermore, our method achieves
state-of-the-art performance on phone classification tasks with the TIMIT
dataset, which demonstrates wide applicability of our proposed algorithms to
other domains.
| [
"['Kihyuk Sohn' 'Honglak Lee']",
"Kihyuk Sohn (University of Michigan), Honglak Lee (University of\n Michigan)"
] |
cs.LG stat.ML | null | 1206.6419 | null | null | http://arxiv.org/pdf/1206.6419v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Cross-Domain Multitask Learning with Latent Probit Models | Learning multiple tasks across heterogeneous domains is a challenging problem
since the feature space may not be the same for different tasks. We assume the
data in multiple tasks are generated from a latent common domain via sparse
domain transforms and propose a latent probit model (LPM) to jointly learn the
domain transforms, and the shared probit classifier in the common domain. To
learn meaningful task relatedness and avoid over-fitting in classification, we
introduce sparsity in the domain transforms matrices, as well as in the common
classifier. We derive theoretical bounds for the estimation error of the
classifier in terms of the sparsity of domain transforms. An
expectation-maximization algorithm is derived for learning the LPM. The
effectiveness of the approach is demonstrated on several real datasets.
| [
"Shaobo Han (Duke University), Xuejun Liao (Duke University), Lawrence\n Carin (Duke University)",
"['Shaobo Han' 'Xuejun Liao' 'Lawrence Carin']"
] |
cs.LG cs.DC stat.ML | null | 1206.6420 | null | null | http://arxiv.org/pdf/1206.6420v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Distributed Parameter Estimation via Pseudo-likelihood | Estimating statistical models within sensor networks requires distributed
algorithms, in which both data and computation are distributed across the nodes
of the network. We propose a general approach for distributed learning based on
combining local estimators defined by pseudo-likelihood components,
encompassing a number of combination methods, and provide both theoretical and
experimental analysis. We show that simple linear combination or max-voting
methods, when combined with second-order information, are statistically
competitive with more advanced and costly joint optimization. Our algorithms
have many attractive properties including low communication and computational
cost and "any-time" behavior.
| [
"['Qiang Liu' 'Alexander Ihler']",
"Qiang Liu (UC Irvine), Alexander Ihler (UC Irvine)"
] |
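For the entry above (arXiv 1206.6420), here is a small sketch of one combination rule in the spirit the abstract describes: local parameter estimates are merged by a linear combination weighted by second-order (curvature) information. The precision-weighted rule and the names below are assumptions for illustration, not the paper's exact estimator.

```python
import numpy as np

def combine_local_estimates(thetas, hessians):
    """Precision-weighted linear combination of local parameter estimates.
    thetas: list of (p,) local estimates; hessians: list of (p, p) local
    observed-information matrices (second-order information)."""
    H_sum = sum(hessians)
    weighted = sum(H @ t for H, t in zip(hessians, thetas))
    return np.linalg.solve(H_sum, weighted)

# Toy example: three nodes estimating the same two-parameter model.
rng = np.random.default_rng(1)
truth = np.array([1.0, -2.0])
thetas, hessians = [], []
for _ in range(3):
    H = np.diag(rng.uniform(5, 50, size=2))                 # local curvature
    thetas.append(truth + rng.normal(scale=np.sqrt(1 / np.diag(H))))
    hessians.append(H)
print(combine_local_estimates(thetas, hessians))            # close to [1.0, -2.0]
```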
cs.LG stat.ML | null | 1206.6421 | null | null | http://arxiv.org/pdf/1206.6421v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Structured Learning from Partial Annotations | Structured learning is appropriate when predicting structured outputs such as
trees, graphs, or sequences. Most prior work requires the training set to
consist of complete trees, graphs or sequences. Specifying such detailed ground
truth can be tedious or infeasible for large outputs. Our main contribution is
a large margin formulation that makes structured learning from only partially
annotated data possible. The resulting optimization problem is non-convex, yet
can be solved efficiently by the concave-convex procedure (CCCP) with novel speedup
strategies. We apply our method to a challenging tracking-by-assignment problem
of a variable number of divisible objects. On this benchmark, using only 25% of
a full annotation, we achieve performance comparable to a model learned with a
full annotation. Finally, we offer a unifying perspective of previous work
using the hinge, ramp, or max loss for structured learning, followed by an
empirical comparison on their practical performance.
| [
"['Xinghua Lou' 'Fred Hamprecht']",
"Xinghua Lou (University of Heidelberg), Fred Hamprecht (University of\n Heidelberg)"
] |
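For the entry above (arXiv 1206.6421), here is a toy sketch of the concave-convex procedure (CCCP) the abstract invokes: to minimize a convex-plus-concave objective, the concave part is repeatedly linearized at the current iterate and the resulting convex surrogate is minimized. The structured-learning loss and the paper's speedup strategies are not shown; the one-dimensional example is purely illustrative.

```python
def cccp(v_grad, minimize_surrogate, x0, iters=30):
    """Minimize f(x) = u(x) + v(x) with u convex and v concave by iterating
    x_{t+1} = argmin_x u(x) + v'(x_t) * x (linearized concave part)."""
    x = x0
    for _ in range(iters):
        x = minimize_surrogate(v_grad(x))
    return x

# Toy instance: u(x) = x^2, v(x) = -(x - 3)^2 / 2, so f is minimized at x = -3.
v_grad = lambda x: -(x - 3.0)            # gradient of the concave part
minimize_surrogate = lambda g: -g / 2.0  # argmin_x x^2 + g*x
print(cccp(v_grad, minimize_surrogate, x0=0.0))   # converges toward -3.0
```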
cs.LG stat.ML | null | 1206.6422 | null | null | http://arxiv.org/pdf/1206.6422v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | An Online Boosting Algorithm with Theoretical Justifications | We study the task of online boosting--combining online weak learners into an
online strong learner. While batch boosting has a sound theoretical foundation,
online boosting deserves more study from the theoretical perspective. In this
paper, we carefully compare the differences between online and batch boosting,
and propose a novel and reasonable assumption for the online weak learner.
Based on the assumption, we design an online boosting algorithm with a strong
theoretical guarantee by adapting from the offline SmoothBoost algorithm that
matches the assumption closely. We further tackle the task of deciding the
number of weak learners using established theoretical results for online convex
programming and predicting with expert advice. Experiments on real-world data
sets demonstrate that the proposed algorithm compares favorably with existing
online boosting algorithms.
| [
"['Shang-Tse Chen' 'Hsuan-Tien Lin' 'Chi-Jen Lu']",
"Shang-Tse Chen (Academia Sinica), Hsuan-Tien Lin (National Taiwan\n University), Chi-Jen Lu (Academia Sinica)"
] |
cs.CL cs.LG cs.RO | null | 1206.6423 | null | null | http://arxiv.org/pdf/1206.6423v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | A Joint Model of Language and Perception for Grounded Attribute Learning | As robots become more ubiquitous and capable, it becomes ever more important
to enable untrained users to easily interact with them. Recently, this has led
to study of the language grounding problem, where the goal is to extract
representations of the meanings of natural language tied to perception and
actuation in the physical world. In this paper, we present an approach for
joint learning of language and perception models for grounded attribute
induction. Our perception model includes attribute classifiers, for example to
detect object color and shape, and the language model is based on a
probabilistic categorial grammar that enables the construction of rich,
compositional meaning representations. The approach is evaluated on the task of
interpreting sentences that describe sets of objects in a physical workspace.
We demonstrate accurate task performance and effective latent-variable concept
induction in physically grounded scenes.
| [
"Cynthia Matuszek (University of Washington), Nicholas FitzGerald\n (University of Washington), Luke Zettlemoyer (University of Washington),\n Liefeng Bo (University of Washington), Dieter Fox (University of Washington)",
"['Cynthia Matuszek' 'Nicholas FitzGerald' 'Luke Zettlemoyer' 'Liefeng Bo'\n 'Dieter Fox']"
] |
cs.LG stat.ML | null | 1206.6425 | null | null | http://arxiv.org/pdf/1206.6425v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Sparse Stochastic Inference for Latent Dirichlet allocation | We present a hybrid algorithm for Bayesian topic models that combines the
efficiency of sparse Gibbs sampling with the scalability of online stochastic
inference. We used our algorithm to analyze a corpus of 1.2 million books (33
billion words) with thousands of topics. Our approach reduces the bias of
variational inference and generalizes to many Bayesian hidden-variable models.
| [
"David Mimno (Princeton University), Matt Hoffman (Columbia\n University), David Blei (Princeton University)",
"['David Mimno' 'Matt Hoffman' 'David Blei']"
] |
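For the entry above (arXiv 1206.6425), here is a condensed sketch of the stochastic variational ingredient of such a hybrid: a local variational step on one document followed by a natural-gradient update of the global topic parameters, as in online variational LDA. The sparse Gibbs sampling half of the hybrid is not shown, and the update below is an illustrative assumption rather than the paper's algorithm.

```python
import numpy as np
from scipy.special import digamma

def stochastic_lda_update(lmbda, word_ids, word_cts, D, rho,
                          alpha=0.1, eta=0.01, inner=20):
    """One stochastic update of the topic parameters lambda (K x V) from a
    single document given by its unique word ids and counts; D is the corpus
    size and rho the step size."""
    K, V = lmbda.shape
    Elogbeta = digamma(lmbda) - digamma(lmbda.sum(axis=1, keepdims=True))
    expElogbeta = np.exp(Elogbeta[:, word_ids])             # (K, n_unique)
    gamma = np.ones(K)
    for _ in range(inner):                                  # local variational step
        Elogtheta = digamma(gamma) - digamma(gamma.sum())
        phi = np.exp(Elogtheta)[:, None] * expElogbeta
        phi /= phi.sum(axis=0, keepdims=True)
        gamma = alpha + (phi * word_cts).sum(axis=1)
    lam_hat = np.full((K, V), eta)                          # noisy global estimate
    lam_hat[:, word_ids] += D * phi * word_cts
    return (1 - rho) * lmbda + rho * lam_hat                # natural-gradient step

# Toy usage: 3 topics, 50-word vocabulary, one short document.
rng = np.random.default_rng(1)
lmbda = rng.gamma(1.0, 1.0, size=(3, 50))
word_ids = np.array([2, 7, 19, 33])
word_cts = np.array([1.0, 3.0, 2.0, 1.0])
lmbda = stochastic_lda_update(lmbda, word_ids, word_cts, D=1000, rho=0.1)
print(lmbda.shape)   # (3, 50)
```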
cs.CL cs.LG | null | 1206.6426 | null | null | http://arxiv.org/pdf/1206.6426v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | A Fast and Simple Algorithm for Training Neural Probabilistic Language
Models | In spite of their superior performance, neural probabilistic language models
(NPLMs) remain far less widely used than n-gram models due to their notoriously
long training times, which are measured in weeks even for moderately-sized
datasets. Training NPLMs is computationally expensive because they are
explicitly normalized, which leads to having to consider all words in the
vocabulary when computing the log-likelihood gradients.
We propose a fast and simple algorithm for training NPLMs based on
noise-contrastive estimation, a newly introduced procedure for estimating
unnormalized continuous distributions. We investigate the behaviour of the
algorithm on the Penn Treebank corpus and show that it reduces the training
times by more than an order of magnitude without affecting the quality of the
resulting models. The algorithm is also more efficient and much more stable
than importance sampling because it requires far fewer noise samples to perform
well.
We demonstrate the scalability of the proposed approach by training several
neural language models on a 47M-word corpus with an 80K-word vocabulary,
obtaining state-of-the-art results on the Microsoft Research Sentence
Completion Challenge dataset.
| [
"['Andriy Mnih' 'Yee Whye Teh']",
"Andriy Mnih (University College London), Yee Whye Teh (University\n College London)"
] |
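For the entry above (arXiv 1206.6426), here is a small numerical sketch of the noise-contrastive estimation objective the abstract refers to: the model's unnormalized score for the observed word is discriminated against k samples from a known noise distribution via logistic loss. The function below is an illustrative assumption about the per-context loss, not the authors' training code.

```python
import numpy as np

def nce_loss(score_data, scores_noise, log_noise_data, log_noise_noise, k):
    """Noise-contrastive estimation loss for one context.
    score_data: unnormalized log-score of the observed word,
    scores_noise: (k,) log-scores of k words sampled from the noise distribution,
    log_noise_*: log-probabilities of those words under the noise distribution."""
    # Posterior logit that the observed word came from the data distribution.
    logit_data = score_data - (np.log(k) + log_noise_data)
    # Posterior logits that each noise sample came from the data distribution.
    logit_noise = scores_noise - (np.log(k) + log_noise_noise)
    log_sigmoid = lambda z: -np.logaddexp(0.0, -z)
    return -(log_sigmoid(logit_data) + log_sigmoid(-logit_noise).sum())

# Toy numbers: one data word, k = 5 noise samples from a unigram distribution.
rng = np.random.default_rng(2)
k = 5
print(nce_loss(score_data=1.3,
               scores_noise=rng.normal(size=k),
               log_noise_data=np.log(0.01),
               log_noise_noise=np.log(rng.dirichlet(np.ones(50))[:k]),
               k=k))
```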
cs.LG stat.ML | null | 1206.6427 | null | null | http://arxiv.org/pdf/1206.6427v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Convergence of the EM Algorithm for Gaussian Mixtures with Unbalanced
Mixing Coefficients | The speed of convergence of the Expectation Maximization (EM) algorithm for
Gaussian mixture model fitting is known to be dependent on the amount of
overlap among the mixture components. In this paper, we study the impact of
mixing coefficients on the convergence of EM. We show that when the mixture
components exhibit some overlap, the convergence of EM becomes slower as the
dynamic range among the mixing coefficients increases. We propose a
deterministic anti-annealing algorithm that significantly improves the speed
of convergence of EM for such mixtures with unbalanced mixing coefficients. The
proposed algorithm is compared against other standard optimization techniques
like BFGS, Conjugate Gradient, and the traditional EM algorithm. Finally, we
propose a similar deterministic anti-annealing based algorithm for the
Dirichlet process mixture model and demonstrate its advantages over the
conventional variational Bayesian approach.
| [
"['Iftekhar Naim' 'Daniel Gildea']",
"Iftekhar Naim (University of Rochester), Daniel Gildea (University of\n Rochester)"
] |
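For the entry above (arXiv 1206.6427), here is a compact sketch of EM for a one-dimensional Gaussian mixture with a tempering exponent beta applied to the responsibilities; beta = 1 recovers standard EM, and beta > 1 gives an "anti-annealed" E-step in the spirit of the abstract. The exact schedule and parameterization used in the paper are not reproduced here.

```python
import numpy as np

def gauss_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def em_step(x, means, stds, weights, beta=1.0):
    """One EM step for a 1-D Gaussian mixture; responsibilities are raised to
    the power beta before normalization (beta = 1 is ordinary EM)."""
    dens = np.stack([w * gauss_pdf(x, m, s) for w, m, s in zip(weights, means, stds)])
    resp = dens ** beta
    resp /= resp.sum(axis=0, keepdims=True)
    Nk = resp.sum(axis=1)
    means = (resp @ x) / Nk
    stds = np.sqrt((resp * (x - means[:, None]) ** 2).sum(axis=1) / Nk)
    weights = Nk / x.size
    return means, stds, weights

# Unbalanced toy mixture: 95% of points from N(0,1), 5% from N(3,1).
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.0, 1.0, 950), rng.normal(3.0, 1.0, 50)])
params = (np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5]))
for _ in range(50):
    params = em_step(x, *params, beta=1.2)
print(np.round(params[0], 2), np.round(params[2], 2))   # component means and weights
```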
cs.LG stat.ML | null | 1206.6428 | null | null | http://arxiv.org/pdf/1206.6428v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | A Binary Classification Framework for Two-Stage Multiple Kernel Learning | With the advent of kernel methods, automating the task of specifying a
suitable kernel has become increasingly important. In this context, the
Multiple Kernel Learning (MKL) problem of finding a combination of
pre-specified base kernels that is suitable for the task at hand has received
significant attention from researchers. In this paper we show that Multiple
Kernel Learning can be framed as a standard binary classification problem with
additional constraints that ensure the positive definiteness of the learned
kernel. Framing MKL in this way has the distinct advantage that it makes it
easy to leverage the extensive research in binary classification to develop
better performing and more scalable MKL algorithms that are conceptually
simpler, and, arguably, more accessible to practitioners. Experiments on nine
data sets from different domains show that, despite its simplicity, the
proposed technique compares favorably with current leading MKL approaches.
| [
"['Abhishek Kumar' 'Alexandru Niculescu-Mizil' 'Koray Kavukcuoglu'\n 'Hal Daume III']",
"Abhishek Kumar (University of Maryland), Alexandru Niculescu-Mizil\n (NEC Laboratories America), Koray Kavukcuoglu (NEC Laboratories America), Hal\n Daume III (University of Maryland)"
] |
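For the entry above (arXiv 1206.6428), here is a sketch of one way to read the framing in the abstract: every example pair becomes a binary "same class / different class" instance whose features are the base-kernel values for that pair, and a linear model with nonnegative weights (so the combined kernel stays positive semidefinite) is fit by projected gradient descent on the logistic loss. This is an illustrative reduction, not the paper's precise formulation or solver.

```python
import numpy as np

def learn_kernel_weights(base_kernels, y, lr=0.1, iters=500):
    """Two-stage MKL sketch: pair (i, j) has features [K_1(i,j), ..., K_m(i,j)]
    and label +1 if y_i == y_j else -1; weights are kept nonnegative."""
    m = len(base_kernels)
    n = base_kernels[0].shape[0]
    iu = np.triu_indices(n, k=1)
    X = np.stack([K[iu] for K in base_kernels], axis=1)          # (n_pairs, m)
    t = np.where(y[iu[0]] == y[iu[1]], 1.0, -1.0)                # pair labels
    w = np.ones(m) / m
    for _ in range(iters):
        margins = t * (X @ w)
        grad = -(t / (1 + np.exp(margins))) @ X / len(t)         # logistic loss gradient
        w = np.maximum(w - lr * grad, 0.0)                       # projected step
    return w

# Toy data: two base kernels (linear and RBF) on a small labeled set.
rng = np.random.default_rng(4)
Xd = rng.normal(size=(30, 5)); y = (Xd[:, 0] > 0).astype(int)
K_lin = Xd @ Xd.T
sq = ((Xd[:, None, :] - Xd[None, :, :]) ** 2).sum(-1)
K_rbf = np.exp(-sq / 5.0)
print(learn_kernel_weights([K_lin, K_rbf], y))
```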
cs.LG cs.CV stat.ML | null | 1206.6429 | null | null | http://arxiv.org/pdf/1206.6429v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Incorporating Domain Knowledge in Matching Problems via Harmonic
Analysis | Matching one set of objects to another is a ubiquitous task in machine
learning and computer vision that often reduces to some form of the quadratic
assignment problem (QAP). The QAP is known to be notoriously hard, both in
theory and in practice. Here, we investigate if this difficulty can be
mitigated when some additional piece of information is available: (a) that all
QAP instances of interest come from the same application, and (b) the correct
solution for a set of such QAP instances is given. We propose a new approach to
accelerate the solution of QAPs based on learning parameters for a modified
objective function from prior QAP instances. A key feature of our approach is
that it takes advantage of the algebraic structure of permutations, in
conjunction with special methods for optimizing functions over the symmetric
group Sn in Fourier space. Experiments show that in practical domains the new
method can outperform existing approaches.
| [
"Deepti Pachauri (University of Wisconsin Madison), Maxwell Collins\n (University of Wisconsin Madison), Vikas SIngh (University of Wisconsin\n Madison), Risi Kondor (University of Chicago)",
"['Deepti Pachauri' 'Maxwell Collins' 'Vikas SIngh' 'Risi Kondor']"
] |
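For the entry above (arXiv 1206.6429), here is a tiny sketch of the quadratic assignment objective being matched, trace(A P B P^T), together with brute-force search over the symmetric group for very small n. The Fourier-space machinery and the learned objective modification from the paper are not shown; this only illustrates the underlying problem.

```python
import numpy as np
from itertools import permutations

def qap_objective(A, B, perm):
    """trace(A P B P^T) with P the permutation matrix sending i to perm[i]."""
    return float(np.sum(A * B[np.ix_(perm, perm)]))

def brute_force_qap(A, B):
    """Exhaustive search over S_n; only feasible for very small n."""
    n = A.shape[0]
    return max((np.array(p) for p in permutations(range(n))),
               key=lambda p: qap_objective(A, B, p))

rng = np.random.default_rng(3)
n = 5
A = rng.normal(size=(n, n)); A = (A + A.T) / 2
B = rng.normal(size=(n, n)); B = (B + B.T) / 2
best = brute_force_qap(A, B)
print(best, qap_objective(A, B, best))
```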
cs.LG stat.CO stat.ML | null | 1206.6430 | null | null | http://arxiv.org/pdf/1206.6430v1 | 2012-06-27T19:59:59Z | 2012-06-27T19:59:59Z | Variational Bayesian Inference with Stochastic Search | Mean-field variational inference is a method for approximate Bayesian
posterior inference. It approximates a full posterior distribution with a
factorized set of distributions by maximizing a lower bound on the marginal
likelihood. This requires the ability to integrate a sum of terms in the log
joint likelihood using this factorized distribution. Often not all integrals
are in closed form, which is typically handled by using a lower bound. We
present an alternative algorithm based on stochastic optimization that allows
for direct optimization of the variational lower bound. This method uses
control variates to reduce the variance of the stochastic search gradient, in
which existing lower bounds can play an important role. We demonstrate the
approach on two non-conjugate models: logistic regression and an approximation
to the HDP.
| [
"['John Paisley' 'David Blei' 'Michael Jordan']",
"John Paisley (UC Berkeley), David Blei (Princeton University), Michael\n Jordan (UC Berkeley)"
] |
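For the entry above (arXiv 1206.6430), here is a one-dimensional sketch of the ingredient the abstract highlights: a score-function (stochastic search) estimate of the variational lower bound's gradient, with a control variate built from the score itself (whose expectation under q is zero) scaled to reduce variance. The choice of control variate and the toy target are assumptions for illustration, not the paper's models.

```python
import numpy as np

def elbo_grad_mu(mu, sigma, log_joint, n_samples=200, rng=None):
    """Score-function estimate of d ELBO / d mu for q(z) = N(mu, sigma^2),
    with a scalar control variate derived from the score."""
    rng = rng or np.random.default_rng()
    z = rng.normal(mu, sigma, size=n_samples)
    score = (z - mu) / sigma**2                      # d log q / d mu
    log_q = -0.5 * ((z - mu) / sigma) ** 2
    f = score * (log_joint(z) - log_q)               # raw per-sample gradient terms
    h = score                                        # control variate, E_q[h] = 0
    a = np.cov(f, h)[0, 1] / (np.var(h) + 1e-12)     # variance-minimizing scale
    return np.mean(f - a * h)

# Toy target: unnormalized log density of N(2, 1); ascent should push mu toward 2.
log_joint = lambda z: -0.5 * (z - 2.0) ** 2
mu = 0.0
rng = np.random.default_rng(5)
for _ in range(200):
    mu += 0.05 * elbo_grad_mu(mu, 1.0, log_joint, rng=rng)
print(round(mu, 2))   # close to 2.0
```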