title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---
Revisiting the Training of Logic Models of Protein Signaling Networks
with a Formal Approach based on Answer Set Programming | q-bio.QM cs.AI cs.CE cs.LG | A fundamental question in systems biology is the construction and training to
data of mathematical models. Logic formalisms have become very popular to model
signaling networks because their simplicity allows us to model large systems
encompassing hundreds of proteins. An approach to train (Boolean) logic models
to high-throughput phospho-proteomics data was recently introduced and solved
using optimization heuristics based on stochastic methods. Here we demonstrate
how this problem can be solved using Answer Set Programming (ASP), a
declarative problem solving paradigm, in which a problem is encoded as a
logical program such that its answer sets represent solutions to the problem.
ASP offers significant improvements over heuristic methods in terms of efficiency
and scalability: it guarantees the global optimality of solutions and provides
the complete set of solutions. We illustrate the application of ASP with
in silico cases based on realistic networks and data.
| Santiago Videla (INRIA - IRISA), Carito Guziolowski (IRCCyN), Federica
Eduati (DEI, EBI), Sven Thiele (INRIA - IRISA), Niels Grabe, Julio
Saez-Rodriguez (EBI), Anne Siegel (INRIA - IRISA) | 10.1007/978-3-642-33636-2_20 | 1210.0690 | null | null |
TV-SVM: Total Variation Support Vector Machine for Semi-Supervised Data
Classification | cs.LG | We introduce semi-supervised data classification algorithms based on total
variation (TV), Reproducing Kernel Hilbert Space (RKHS), support vector machine
(SVM), Cheeger cut, labeled and unlabeled data points. We design binary and
multi-class semi-supervised classification algorithms. We compare the TV-based
classification algorithms with the related Laplacian-based algorithms, and show
that TV-based classification performs significantly better when the number of
labeled data points is small.
| Xavier Bresson and Ruiliang Zhang | null | 1210.0699 | null | null |
Evaluation of linear classifiers on articles containing pharmacokinetic
evidence of drug-drug interactions | stat.ML cs.LG q-bio.QM | Background. Drug-drug interaction (DDI) is a major cause of morbidity and
mortality. [...] Biomedical literature mining can aid DDI research by
extracting relevant DDI signals from either the published literature or large
clinical databases. However, though drug interaction is an ideal area for
translational research, the inclusion of literature mining methodologies in DDI
workflows is still very preliminary. One area that can benefit from literature
mining is the automatic identification of a large number of potential DDIs,
whose pharmacological mechanisms and clinical significance can then be studied
via in vitro pharmacology and in populo pharmaco-epidemiology. Experiments. We
implemented a set of classifiers for identifying published articles relevant to
experimental pharmacokinetic DDI evidence. These documents are important for
identifying causal mechanisms behind putative drug-drug interactions, an
important step in the extraction of large numbers of potential DDIs. We
evaluate the performance of several linear classifiers on PubMed abstracts under
different feature transformation and dimensionality reduction methods. In
addition, we investigate the performance benefits of including various
publicly-available named entity recognition features, as well as a set of
internally-developed pharmacokinetic dictionaries. Results. We found that
several classifiers performed well in distinguishing relevant and irrelevant
abstracts. We found that the combination of unigram and bigram textual features
gave better performance than unigram features alone, and also that
normalization transforms that adjusted for feature frequency and document
length improved classification. For some classifiers, such as linear
discriminant analysis (LDA), proper dimensionality reduction had a large impact
on performance. Finally, the inclusion of NER features and dictionaries was
found not to help classification.
| Artemy Kolchinsky, An\'alia Louren\c{c}o, Lang Li, Luis M. Rocha | null | 1210.0734 | null | null |
A fast compression-based similarity measure with applications to
content-based image retrieval | stat.ML cs.IR cs.LG | Compression-based similarity measures are effectively employed in
applications on diverse data types with a basically parameter-free approach.
Nevertheless, there are problems in applying these techniques to
medium-to-large datasets which have been seldom addressed. This paper proposes
a similarity measure based on compression with dictionaries, the Fast
Compression Distance (FCD), which reduces the complexity of these methods,
without degradations in performance. On its basis a content-based color image
retrieval system is defined, which can be compared to state-of-the-art methods
based on invariant color features. Through the FCD a better understanding of
compression-based techniques is achieved, by performing experiments on datasets
which are larger than the ones analyzed so far in the literature.
| Daniele Cerra and Mihai Datcu | 10.1016/j.jvcir.2011.10.009 | 1210.0758 | null | null |
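The abstract above builds on compression-based similarity. As a minimal illustrative sketch, the classical Normalized Compression Distance (NCD) below captures the underlying idea, using zlib as an off-the-shelf compressor; the paper's Fast Compression Distance instead works with explicit dictionaries, so this is background, not the authors' method.

```python
import zlib

def clen(data):
    """Compressed length of a byte string under zlib."""
    return len(zlib.compress(data, 9))

def ncd(x, y):
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

if __name__ == "__main__":
    s1 = b"the quick brown fox jumps over the lazy dog " * 20
    s2 = b"the quick brown fox leaps over the lazy cat " * 20
    s3 = b"completely unrelated text about matrix factorization " * 20
    print(ncd(s1, s2))  # small: similar strings compress well together
    print(ncd(s1, s3))  # larger: dissimilar strings share little structure
```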
Graph-Based Approaches to Clustering Network-Constrained Trajectory Data | cs.LG stat.ML | Even though clustering trajectory data has attracted considerable attention in
the last few years, most prior work assumed that moving objects can move
freely in a Euclidean space and did not consider the possible presence of an
underlying road network and its influence on evaluating the similarity between
trajectories. In this paper, we present two approaches to clustering
network-constrained trajectory data. The first approach discovers clusters of
trajectories that traveled along the same parts of the road network. The second
approach is segment-oriented and aims to group together road segments based on
trajectories that they have in common. Both approaches use a graph model to
depict the interactions between observations w.r.t. their similarity and
cluster this similarity graph using a community detection algorithm. We also
present experimental results obtained on synthetic data to showcase our
propositions.
| Mohamed Khalil El Mahrsi (LTCI), Fabrice Rossi (SAMM) | null | 1210.0762 | null | null |
Distributed High Dimensional Information Theoretical Image Registration
via Random Projections | cs.IT cs.LG math.IT stat.ML | Information theoretical measures, such as entropy, mutual information, and
various divergences, exhibit robust characteristics in image registration
applications. However, the estimation of these quantities is computationally
intensive in high dimensions. On the other hand, consistent estimation from
pairwise distances of the sample points is possible, which suits random
projection (RP) based low dimensional embeddings. We adapt the RP technique to
this task by means of a simple ensemble method. To the best of our knowledge,
this is the first distributed, RP based information theoretical image
registration approach. The efficiency of the method is demonstrated through
numerical examples.
| Zoltan Szabo, Andras Lorincz | 10.1016/j.dsp.2012.04.018 | 1210.0824 | null | null |
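The core trick in the abstract above, consistent estimation from pairwise distances after a random-projection embedding, rests on the Johnson-Lindenstrauss property. The sketch below shows only the generic Gaussian random-projection step (dimensions and seed are illustrative assumptions), not the authors' distributed registration pipeline.

```python
import numpy as np

def random_project(X, d, seed=0):
    """Embed rows of X into d dimensions with a Gaussian random-projection matrix."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], d)) / np.sqrt(d)  # scaling preserves norms in expectation
    return X @ R

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 5000))  # high-dimensional sample points
    Y = random_project(X, d=200)
    # Pairwise distances are approximately preserved after projection.
    print(np.linalg.norm(X[0] - X[1]), "vs", np.linalg.norm(Y[0] - Y[1]))
```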
Learning mixtures of structured distributions over discrete domains | cs.LG cs.DS math.ST stat.TH | Let $\mathfrak{C}$ be a class of probability distributions over the discrete
domain $[n] = \{1,...,n\}.$ We show that if $\mathfrak{C}$ satisfies a rather
general condition -- essentially, that each distribution in $\mathfrak{C}$ can
be well-approximated by a variable-width histogram with few bins -- then there
is a highly efficient (both in terms of running time and sample complexity)
algorithm that can learn any mixture of $k$ unknown distributions from
$\mathfrak{C}.$
We analyze several natural types of distributions over $[n]$, including
log-concave, monotone hazard rate and unimodal distributions, and show that
they have the required structural property of being well-approximated by a
histogram with few bins. Applying our general algorithm, we obtain
near-optimally efficient algorithms for all these mixture learning problems.
| Siu-on Chan, Ilias Diakonikolas, Rocco A. Servedio, Xiaorui Sun | null | 1210.0864 | null | null |
Learning from Collective Intelligence in Groups | cs.SI cs.LG | Collective intelligence, which aggregates the shared information from large
crowds, is often negatively impacted by unreliable information sources providing
low-quality data. This becomes a barrier to the effective use of collective
intelligence in a variety of applications. In order to address this issue, we
propose a probabilistic model to jointly assess the reliability of sources and
find the true data. We observe that different sources are often not independent
of each other. Instead, sources are prone to be mutually influenced, which
makes them dependent when sharing information with each other. High dependency
between sources makes collective intelligence vulnerable to the overuse of
redundant (and possibly incorrect) information from the dependent sources.
Thus, we reveal the latent group structure among dependent sources, and
aggregate the information at the group level rather than from individual
sources directly. This can prevent the collective intelligence from being
inappropriately dominated by dependent sources. We will also explicitly reveal
the reliability of groups, and minimize the negative impacts of unreliable
groups. Experimental results on real-world data sets show the effectiveness of
the proposed approach with respect to existing algorithms.
| Guo-Jun Qi, Charu Aggarwal, Pierre Moulin, Thomas Huang | null | 1210.0954 | null | null |
Sensory Anticipation of Optical Flow in Mobile Robotics | cs.RO cs.LG | In order to anticipate dangerous events, like a collision, an agent needs to
make long-term predictions. However, those are challenging due to uncertainties
in internal and external variables and environment dynamics. A sensorimotor
model is acquired online by the mobile robot using a state-of-the-art method
that learns the optical flow distribution in images, both in space and time.
The learnt model is used to anticipate the optical flow up to a given time
horizon and to predict an imminent collision by using reinforcement learning.
We demonstrate that multi-modal predictions reduce to simpler distributions
once actions are taken into account.
| Arturo Ribes, Jes\'us Cerquides, Yiannis Demiris and Ram\'on L\'opez
de M\'antaras | null | 1210.1104 | null | null |
Smooth Sparse Coding via Marginal Regression for Learning Sparse
Representations | stat.ML cs.LG | We propose and analyze a novel framework for learning sparse representations,
based on two statistical techniques: kernel smoothing and marginal regression.
The proposed approach provides a flexible framework for incorporating feature
similarity or temporal information present in data sets, via non-parametric
kernel smoothing. We provide generalization bounds for dictionary learning
using smooth sparse coding and show how the sample complexity depends on the L1
norm of the kernel function used. Furthermore, we propose using marginal regression
for obtaining sparse codes, which significantly improves the speed and allows
one to scale to large dictionary sizes easily. We demonstrate the advantages of
the proposed approach, both in terms of accuracy and speed by extensive
experimentation on several real data sets. In addition, we demonstrate how the
proposed approach could be used for improving semi-supervised sparse coding.
| Krishnakumar Balasubramanian, Kai Yu, Guy Lebanon | null | 1210.1121 | null | null |
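A minimal sketch of the marginal-regression step for obtaining sparse codes mentioned above: each code is simply a thresholded inner product with a dictionary atom, which is why it scales to large dictionaries. The kernel-smoothing component of the paper is omitted, and the dictionary and threshold below are illustrative assumptions.

```python
import numpy as np

def marginal_regression_codes(D, X, tau):
    """Codes A with A[k, i] = <d_k, x_i> if |<d_k, x_i>| > tau, else 0."""
    A = D.T @ X                  # all marginal regression coefficients at once
    A[np.abs(A) < tau] = 0.0     # hard-threshold to enforce sparsity
    return A

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((20, 50))
    D = D / np.linalg.norm(D, axis=0)  # unit-norm dictionary atoms
    X = rng.standard_normal((20, 5))   # five signals to encode
    A = marginal_regression_codes(D, X, tau=1.0)
    print(np.count_nonzero(A), "nonzero codes out of", A.size)
```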
Fast Conical Hull Algorithms for Near-separable Non-negative Matrix
Factorization | stat.ML cs.LG | The separability assumption (Donoho & Stodden, 2003; Arora et al., 2012)
turns non-negative matrix factorization (NMF) into a tractable problem.
Recently, a new class of provably-correct NMF algorithms has emerged under
this assumption. In this paper, we reformulate the separable NMF problem as
that of finding the extreme rays of the conical hull of a finite set of
vectors. From this geometric perspective, we derive new separable NMF
algorithms that are highly scalable and empirically noise robust, and have
several other favorable properties in relation to existing methods. A parallel
implementation of our algorithm demonstrates high scalability on shared- and
distributed-memory machines.
| Abhishek Kumar, Vikas Sindhwani, Prabhanjan Kambadur | null | 1210.1190 | null | null |
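Under the separability assumption discussed above, some columns of the data matrix are (up to scaling) the columns of the factor W, so NMF reduces to finding those extreme columns. The sketch below uses the classical Successive Projection Algorithm to find them; the paper's conical-hull algorithms are different and more scalable, so this is only a reference point, not the authors' method.

```python
import numpy as np

def spa(X, r):
    """Return indices of r approximately-extreme columns of X (SPA anchors)."""
    R = X.astype(float).copy()
    anchors = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))  # farthest remaining column
        anchors.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)                     # project out that direction
    return anchors

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = np.abs(rng.standard_normal((30, 4)))  # true nonnegative basis
    H = np.abs(rng.standard_normal((4, 50)))
    H[:, :4] = np.eye(4)                      # separability: W's columns appear in X
    H = H / H.sum(axis=0, keepdims=True)      # columns are convex combinations
    X = W @ H
    print("anchors:", sorted(spa(X, 4)))      # typically recovers columns 0..3
```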
Unfolding Latent Tree Structures using 4th Order Tensors | cs.LG stat.ML | Discovering the latent structure from many observed variables is an important
yet challenging learning task. Existing approaches for discovering latent
structures often require the unknown number of hidden states as an input. In
this paper, we propose a quartet based approach which is \emph{agnostic} to
this number. The key contribution is a novel rank characterization of the
tensor associated with the marginal distribution of a quartet. This
characterization allows us to design a \emph{nuclear norm} based test for
resolving quartet relations. We then use the quartet test as a subroutine in a
divide-and-conquer algorithm for recovering the latent tree structure. Under
mild conditions, the algorithm is consistent and its error probability decays
exponentially with increasing sample size. We demonstrate that the proposed
approach compares favorably to alternatives. In a real world stock dataset, it
also discovers meaningful groupings of variables, and produces a model that
fits the data better.
| Mariya Ishteva, Haesun Park, Le Song | null | 1210.1258 | null | null |
Learning Heterogeneous Similarity Measures for Hybrid-Recommendations in
Meta-Mining | cs.LG cs.AI | The notion of meta-mining has appeared recently and extends the traditional
meta-learning in two ways. First it does not learn meta-models that provide
support only for the learning algorithm selection task but ones that support
the whole data-mining process. In addition it abandons the so called black-box
approach to algorithm description followed in meta-learning: now, in addition
to the datasets, algorithms and workflows also have descriptors. For the
latter two, these descriptions are semantic, describing properties of the
algorithms. With the availability of descriptors both for datasets and data
mining workflows the traditional modelling techniques followed in
meta-learning, typically based on classification and regression algorithms, are
no longer appropriate. Instead we are faced with a problem the nature of which
is much more similar to the problems that appear in recommendation systems. The
most important meta-mining requirements are that suggestions should rely only
on dataset and workflow descriptors, and that the cold-start problem, e.g.
providing workflow suggestions for new datasets, should be handled.
In this paper we take a different view on the meta-mining modelling problem
and treat it as a recommender problem. In order to account for the meta-mining
specificities we derive a novel metric-based-learning recommender approach. Our
method learns two homogeneous metrics, one in the dataset and one in the
workflow space, and a heterogeneous one in the dataset-workflow space. All
learned metrics reflect similarities established from the dataset-workflow
preference matrix. We demonstrate our method on meta-mining over biological
(microarray datasets) problems. The application of our method is not limited to
the meta-mining problem; its formulation is general enough that it can be
applied on problems with similar requirements.
| Phong Nguyen, Jun Wang, Melanie Hilario and Alexandros Kalousis | null | 1210.1317 | null | null |
A Scalable CUR Matrix Decomposition Algorithm: Lower Time Complexity and
Tighter Bound | cs.LG cs.DM stat.ML | The CUR matrix decomposition is an important extension of Nystr\"{o}m
approximation to a general matrix. It approximates any data matrix in terms of
a small number of its columns and rows. In this paper we propose a novel
randomized CUR algorithm with an expected relative-error bound. The proposed
algorithm has the advantages over the existing relative-error CUR algorithms
that it possesses tighter theoretical bound and lower time complexity, and that
it can avoid maintaining the whole data matrix in main memory. Finally,
experiments on several real-world datasets demonstrate significant improvement
over the existing relative-error algorithms.
| Shusen Wang, Zhihua Zhang, Jian Li | null | 1210.1461 | null | null |
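A minimal sketch of a randomized CUR decomposition in the spirit of the abstract above: sample actual columns and rows of A by squared-norm probabilities and fit a pseudoinverse-based core U. The sampling scheme and core here are the simplest possible choices, not the paper's algorithm with its tighter relative-error bound.

```python
import numpy as np

def cur(A, c, r, seed=0):
    """Sample c columns and r rows of A by squared-norm probabilities; fit core U."""
    rng = np.random.default_rng(seed)
    pc = (A ** 2).sum(axis=0); pc = pc / pc.sum()  # column sampling probabilities
    pr = (A ** 2).sum(axis=1); pr = pr / pr.sum()  # row sampling probabilities
    cols = rng.choice(A.shape[1], size=c, replace=False, p=pc)
    rows = rng.choice(A.shape[0], size=r, replace=False, p=pr)
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)  # core minimizing ||A - C U R||_F
    return C, U, R

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((100, 10)) @ rng.standard_normal((10, 80))  # low-rank matrix
    C, U, R = cur(A, c=20, r=20)
    err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
    print("relative error:", err)  # small, since 20 >> rank(A) = 10
```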
Bayesian Inference with Posterior Regularization and applications to
Infinite Latent SVMs | cs.LG cs.AI stat.ME stat.ML | Existing Bayesian models, especially nonparametric Bayesian methods, rely on
specially conceived priors to incorporate domain knowledge for discovering
improved latent representations. While priors can affect posterior
distributions through Bayes' rule, imposing posterior regularization is
arguably more direct and in some cases more natural and general. In this paper,
we present regularized Bayesian inference (RegBayes), a novel computational
framework that performs posterior inference with a regularization term on the
desired post-data posterior distribution under an information theoretical
formulation. RegBayes is more flexible than the procedure that elicits expert
knowledge via priors, and it covers both directed Bayesian networks and
undirected Markov networks whose Bayesian formulation results in hybrid chain
graph models. When the regularization is induced from a linear operator on the
posterior distributions, such as the expectation operator, we present a general
convex-analysis theorem to characterize the solution of RegBayes. Furthermore,
we present two concrete examples of RegBayes, infinite latent support vector
machines (iLSVM) and multi-task infinite latent support vector machines
(MT-iLSVM), which explore the large-margin idea in combination with a
nonparametric Bayesian model for discovering predictive latent features for
classification and multi-task learning, respectively. We present efficient
inference methods and report empirical studies on several benchmark datasets,
which appear to demonstrate the merits inherited from both large-margin
learning and Bayesian nonparametrics. Such results were not available until
now, and contribute to pushing forward the interface between these two important
subfields, which have been largely treated as isolated in the community.
| Jun Zhu, Ning Chen, and Eric P. Xing | null | 1210.1766 | null | null |
Information fusion in multi-task Gaussian processes | stat.ML cs.AI cs.LG | This paper evaluates heterogeneous information fusion using multi-task
Gaussian processes in the context of geological resource modeling.
Specifically, it empirically demonstrates that information integration across
heterogeneous information sources leads to superior estimates of all the
quantities being modeled, compared to modeling them individually. Multi-task
Gaussian processes provide a powerful approach for simultaneous modeling of
multiple quantities of interest while taking correlations between these
quantities into consideration. Experiments are performed on large scale real
sensor data.
| Shrihari Vasudevan and Arman Melkumyan and Steven Scheding | null | 1210.1928 | null | null |
Feature Selection via L1-Penalized Squared-Loss Mutual Information | stat.ML cs.LG | Feature selection is a technique to screen out less important features. Many
existing supervised feature selection algorithms use redundancy and relevancy
as the main criteria to select features. However, feature interaction,
potentially a key characteristic in real-world problems, has not received much
attention. As an attempt to take feature interaction into account, we propose
L1-LSMI, an L1-regularization based algorithm that maximizes a squared-loss
variant of mutual information between selected features and outputs. Numerical
results show that L1-LSMI performs well in handling redundancy, detecting
non-linear dependency, and considering feature interaction.
| Wittawat Jitkrittum, Hirotaka Hachiya, Masashi Sugiyama | 10.1587/transinf.E96.D.1513 | 1210.1960 | null | null |
Anomalous Vacillatory Learning | math.LO cs.LG cs.LO | In 1986, Osherson, Stob and Weinstein asked whether two variants of anomalous
vacillatory learning, TxtFex^*_* and TxtFext^*_*, could be distinguished. In
both, a machine is permitted to vacillate between a finite number of hypotheses
and to make a finite number of errors. TxtFext^*_*-learning requires that
hypotheses output infinitely often must describe the same finite variant of the
correct set, while TxtFex^*_*-learning permits the learner to vacillate between
finitely many different finite variants of the correct set. In this paper we
show that TxtFex^*_* \neq TxtFext^*_*, thereby answering the question posed by
Osherson, \textit{et al}. We prove this in a strong way by exhibiting a family
in TxtFex^*_2 \setminus {TxtFext}^*_*.
| Achilles Beros | null | 1210.2051 | null | null |
Privacy Aware Learning | stat.ML cs.IT cs.LG math.IT | We study statistical risk minimization problems under a privacy model in
which the data is kept confidential even from the learner. In this local
privacy framework, we establish sharp upper and lower bounds on the convergence
rates of statistical estimation procedures. As a consequence, we exhibit a
precise tradeoff between the amount of privacy the data preserves and the
utility, as measured by convergence rate, of any statistical estimator or
learning procedure.
| John C. Duchi and Michael I. Jordan and Martin J. Wainwright | null | 1210.2085 | null | null |
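In the local privacy framework described above, each data holder randomizes its own record before the learner sees it. As a hedged, minimal illustration of the privacy/utility tradeoff (not the paper's estimators or bounds), the classical randomized-response mechanism below privatizes bits and the learner debiases the aggregate; the estimate gets noisier as epsilon shrinks.

```python
import numpy as np

def randomized_response(bits, eps, seed=0):
    """Each holder reports its true bit w.p. e^eps / (1 + e^eps), else the flip."""
    rng = np.random.default_rng(seed)
    p = np.exp(eps) / (1.0 + np.exp(eps))
    keep = rng.random(bits.shape) < p
    return np.where(keep, bits, 1 - bits)

def debiased_mean(reports, eps):
    """Unbiased estimate of the true mean from privatized reports."""
    p = np.exp(eps) / (1.0 + np.exp(eps))
    return (reports.mean() - (1.0 - p)) / (2.0 * p - 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    bits = (rng.random(100_000) < 0.3).astype(int)  # true mean is 0.3
    reports = randomized_response(bits, eps=1.0)
    print(debiased_mean(reports, eps=1.0))          # close to 0.3; noisier for small eps
```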
Semisupervised Classifier Evaluation and Recalibration | cs.LG cs.CV | How many labeled examples are needed to estimate a classifier's performance
on a new dataset? We study the case where data is plentiful, but labels are
expensive. We show that by making a few reasonable assumptions on the structure
of the data, it is possible to estimate performance curves, with confidence
bounds, using a small number of ground truth labels. Our approach, which we
call Semisupervised Performance Evaluation (SPE), is based on a generative
model for the classifier's confidence scores. In addition to estimating the
performance of classifiers on new datasets, SPE can be used to recalibrate a
classifier by re-estimating the class-conditional confidence distributions.
| Peter Welinder and Max Welling and Pietro Perona | null | 1210.2162 | null | null |
ET-LDA: Joint Topic Modeling For Aligning, Analyzing and Sensemaking of
Public Events and Their Twitter Feeds | cs.LG cs.AI cs.SI physics.soc-ph | Social media channels such as Twitter have emerged as popular platforms for
crowds to respond to public events such as speeches, sports and debates. While
this promises tremendous opportunities to understand and make sense of the
reception of an event from the social media, the promises come entwined with
significant technical challenges. In particular, given an event and an
associated large scale collection of tweets, we need approaches to effectively
align tweets and the parts of the event they refer to. This in turn raises
questions about how to segment the event into smaller yet meaningful parts, and
how to figure out whether a tweet is a general one about the entire event or
a specific one aimed at a particular segment of the event. In this work, we
present ET-LDA, an effective method for aligning an event and its tweets
through joint statistical modeling of topical influences from the events and
their associated tweets. The model enables the automatic segmentation of the
events and the characterization of tweets into two categories: (1) episodic
tweets that respond specifically to the content in the segments of the events,
and (2) steady tweets that respond generally about the events. We present an
efficient inference method for this model, and a comprehensive evaluation of
its effectiveness over existing methods. In particular, through a user study,
we demonstrate that users find the topics, the segments, the alignment, and the
episodic tweets discovered by ET-LDA to be of higher quality and more
interesting as compared to the state-of-the-art, with improvements in the range
of 18-41%.
| Yuheng Hu, Ajita John, Fei Wang, Doree Duncan Seligmann, Subbarao
Kambhampati | null | 1210.2164 | null | null |
Fast Online EM for Big Topic Modeling | cs.LG | The expectation-maximization (EM) algorithm can compute the
maximum-likelihood (ML) or maximum a posteriori (MAP) point estimate of the
mixture models or latent variable models such as latent Dirichlet allocation
(LDA), which has been one of the most popular probabilistic topic modeling
methods in the past decade. However, batch EM has high time and space
complexities to learn big LDA models from big data streams. In this paper, we
present a fast online EM (FOEM) algorithm that infers the topic distribution
from the previously unseen documents incrementally with constant memory
requirements. Within the stochastic approximation framework, we show that FOEM
can converge to the local stationary point of the LDA's likelihood function. By
dynamic scheduling for the fast speed and parameter streaming for the low
memory usage, FOEM is more efficient for some lifelong topic modeling tasks
than the state-of-the-art online LDA algorithms to handle both big data and big
models (aka, big topic modeling) on just a PC.
| Jia Zeng, Zhi-Qiang Liu and Xiao-Qin Cao | 10.1109/TKDE.2015.2492565 | 1210.2179 | null | null |
A Fast Distributed Proximal-Gradient Method | cs.DC cs.LG stat.ML | We present a distributed proximal-gradient method for optimizing the average
of convex functions, each of which is the private local objective of an agent
in a network with time-varying topology. The local objectives have distinct
differentiable components, but they share a common nondifferentiable component,
which has a favorable structure suitable for effective computation of the
proximal operator. In our method, each agent iteratively updates its estimate
of the global minimum by optimizing its local objective function, and
exchanging estimates with others via communication in the network. Using
Nesterov-type acceleration techniques and multiple communication steps per
iteration, we show that this method converges at the rate 1/k (where k is the
number of communication rounds between the agents), which is faster than the
convergence rate of the existing distributed methods for solving this problem.
The superior convergence rate of our method is also verified by numerical
experiments.
| Annie I. Chen and Asuman Ozdaglar | null | 1210.2289 | null | null |
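A minimal, centralized sketch of the proximal-gradient step underlying the method above: the smooth part contributes a gradient step and the shared nonsmooth part contributes a proximal step. Here the nonsmooth term is assumed to be the l1 norm, whose prox is soft-thresholding; the paper's multi-agent communication and Nesterov acceleration are not reproduced.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient_lasso(A, b, lam=0.1, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal-gradient (ISTA) steps."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                         # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)  # prox step for the nonsmooth part
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100); x_true[:5] = 3.0
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    x_hat = prox_gradient_lasso(A, b)
    print("recovered support:", np.nonzero(x_hat)[0][:10])  # mostly within {0,...,4}
```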
Blending Learning and Inference in Structured Prediction | cs.LG | In this paper we derive an efficient algorithm to learn the parameters of
structured predictors in general graphical models. This algorithm blends the
learning and inference tasks, which results in a significant speedup over
traditional approaches, such as conditional random fields and structured
support vector machines. For this purpose we utilize the structures of the
predictors to describe a low dimensional structured prediction task which
encourages local consistencies within the different structures while learning
the parameters of the model. Convexity of the learning task provides the means
to enforce the consistencies between the different parts. The
inference-learning blending algorithm that we propose is guaranteed to converge
to the optimum of the low dimensional primal and dual programs. Unlike many of
the existing approaches, the inference-learning blending allows us to learn
efficiently high-order graphical models, over regions of any size, and very
large number of parameters. We demonstrate the effectiveness of our approach,
while presenting state-of-the-art results in stereo estimation, semantic
segmentation, shape reconstruction, and indoor scene understanding.
| Tamir Hazan, Alexander Schwing, David McAllester and Raquel Urtasun | null | 1210.2346 | null | null |
The Power of Linear Reconstruction Attacks | cs.DS cs.CR cs.LG math.PR | We consider the power of linear reconstruction attacks in statistical data
privacy, showing that they can be applied to a much wider range of settings
than previously understood. Linear attacks have been studied before (Dinur and
Nissim PODS'03, Dwork, McSherry and Talwar STOC'07, Kasiviswanathan, Rudelson,
Smith and Ullman STOC'10, De TCC'12, Muthukrishnan and Nikolov STOC'12) but
have so far been applied only in settings with releases that are obviously
linear.
Consider a database curator who manages a database of sensitive information
but wants to release statistics about how a sensitive attribute (say, disease)
in the database relates to some nonsensitive attributes (e.g., postal code,
age, gender, etc). We show one can mount linear reconstruction attacks based on
any release that gives: a) the fraction of records that satisfy a given
non-degenerate boolean function. Such releases include contingency tables
(previously studied by Kasiviswanathan et al., STOC'10) as well as more complex
outputs like the error rate of classifiers such as decision trees; b) any one
of a large class of M-estimators (that is, the output of empirical risk
minimization algorithms), including the standard estimators for linear and
logistic regression.
We make two contributions: first, we show how these types of releases can be
transformed into a linear format, making them amenable to existing
polynomial-time reconstruction algorithms. This is already perhaps surprising,
since many of the above releases (like M-estimators) are obtained by solving
highly nonlinear formulations. Second, we show how to analyze the resulting
attacks under various distributional assumptions on the data. Specifically, we
consider a setting in which the same statistic (either a) or b) above) is
released about how the sensitive attribute relates to all subsets of size k
(out of a total of d) nonsensitive boolean attributes.
| Shiva Prasad Kasiviswanathan, Mark Rudelson, Adam Smith | null | 1210.2381 | null | null |
Measuring the Influence of Observations in HMMs through the
Kullback-Leibler Distance | cs.IT cs.LG math.IT math.PR | We measure the influence of individual observations on the sequence of the
hidden states of the Hidden Markov Model (HMM) by means of the Kullback-Leibler
distance (KLD). Namely, we consider the KLD between the conditional
distribution of the hidden states' chain given the complete sequence of
observations and the conditional distribution of the hidden chain given all the
observations but the one under consideration. We introduce a linear complexity
algorithm for computing the influence of all the observations. As an
illustration, we investigate the application of our algorithm to the problem of
detecting outliers in HMM data series.
| Vittorio Perduca, Gregory Nuel | 10.1109/LSP.2012.2235830 | 1210.2613 | null | null |
Multi-view constrained clustering with an incomplete mapping between
views | cs.LG cs.AI | Multi-view learning algorithms typically assume a complete bipartite mapping
between the different views in order to exchange information during the
learning process. However, many applications provide only a partial mapping
between the views, creating a challenge for current methods. To address this
problem, we propose a multi-view algorithm based on constrained clustering that
can operate with an incomplete mapping. Given a set of pairwise constraints in
each view, our approach propagates these constraints using a local similarity
measure to those instances that can be mapped to the other views, allowing the
propagated constraints to be transferred across views via the partial mapping.
It uses co-EM to iteratively estimate the propagation within each view based on
the current clustering model, transfer the constraints across views, and then
update the clustering model. By alternating the learning process between views,
this approach produces a unified clustering model that is consistent with all
views. We show that this approach significantly improves clustering performance
over several other methods for transferring constraints and allows multi-view
clustering to be reliably applied when given a limited mapping between the
views. Our evaluation reveals that the propagated constraints have high
precision with respect to the true clusters in the data, explaining their
benefit to clustering performance in both single- and multi-view learning
scenarios.
| Eric Eaton, Marie desJardins, Sara Jacob | 10.1007/s10115-012-0577-7 | 1210.2640 | null | null |
Cost-Sensitive Tree of Classifiers | stat.ML cs.LG | Recently, machine learning algorithms have successfully entered large-scale
real-world industrial applications (e.g. search engines and email spam
filters). Here, the CPU cost during test time must be budgeted and accounted
for. In this paper, we address the challenge of balancing the test-time cost
and the classifier accuracy in a principled fashion. The test-time cost of a
classifier is often dominated by the computation required for feature
extraction, which can vary drastically across features. We decrease this
extraction time by constructing a tree of classifiers, through which test
inputs traverse along individual paths. Each path extracts different features
and is optimized for a specific sub-partition of the input space. By only
computing features for inputs that benefit from them the most, our cost
sensitive tree of classifiers can match the high accuracies of the current
state-of-the-art at a small fraction of the computational cost.
| Zhixiang Xu, Matt J. Kusner, Kilian Q. Weinberger, Minmin Chen | null | 1210.2771 | null | null |
Learning Onto-Relational Rules with Inductive Logic Programming | cs.AI cs.DB cs.LG cs.LO | Rules complement and extend ontologies on the Semantic Web. We refer to these
rules as onto-relational since they combine DL-based ontology languages and
Knowledge Representation formalisms supporting the relational data model within
the tradition of Logic Programming and Deductive Databases. Rule authoring is a
very demanding Knowledge Engineering task which can be automated, though only
partially, by applying Machine Learning algorithms. In this chapter we show how
Inductive Logic Programming (ILP), born at the intersection of Machine Learning
and Logic Programming and considered as a major approach to Relational
Learning, can be adapted to Onto-Relational Learning. For the sake of
illustration, we provide details of a specific Onto-Relational Learning
solution to the problem of learning rule-based definitions of DL concepts and
roles with ILP.
| Francesca A. Lisi | null | 1210.2984 | null | null |
A Benchmark to Select Data Mining Based Classification Algorithms For
Business Intelligence And Decision Support Systems | cs.DB cs.LG | DSS serve the management, operations, and planning levels of an organization
and help to make decisions, which may be rapidly changing and not easily
specified in advance. Data mining has a vital role to extract important
information to help in decision making of a decision support system.
Integration of data mining and decision support systems (DSS) can lead to the
improved performance and can enable the tackling of new types of problems.
Artificial Intelligence methods are improving the quality of decision support,
and have become embedded in many applications, ranging from anti-lock
automobile brakes to today's interactive search engines. It provides various
machine learning techniques to support data mining. The classification is one
of the main and valuable tasks of data mining. Several types of classification
algorithms have been suggested, tested and compared to determine the future
trends based on unseen data. No single algorithm has been found to be
superior to all others on all data sets. The objective of this paper is to
compare various classification algorithms that have been frequently used in
data mining for decision support systems. Three decision trees based
algorithms, one artificial neural network, one statistical, one support vector
machines with and without ada boost and one clustering algorithm are tested and
compared on four data sets from different domains in terms of predictive
accuracy, error rate, classification index, comprehensibility and training
time. Experimental results demonstrate that Genetic Algorithm (GA) and support
vector machines based algorithms are better in terms of predictive accuracy.
SVM without AdaBoost should be the first choice where both speed and
predictive accuracy matter. AdaBoost improves the accuracy of SVM, but at the
cost of a long training time.
| Pardeep Kumar, Nitin, Vivek Kumar Sehgal and Durg Singh Chauhan | 10.5121/ijdkp.2012.2503 | 1210.3139 | null | null |
Unsupervised Detection and Tracking of Arbitrary Objects with Dependent
Dirichlet Process Mixtures | stat.ML cs.CV cs.LG | This paper proposes a technique for the unsupervised detection and tracking
of arbitrary objects in videos. It is intended to reduce the need for detection
and localization methods tailored to specific object types and serve as a
general framework applicable to videos with varied objects, backgrounds, and
image qualities. The technique uses a dependent Dirichlet process mixture
(DDPM) known as the Generalized Polya Urn (GPUDDPM) to model image pixel data
that can be easily and efficiently extracted from the regions in a video that
represent objects. This paper describes a specific implementation of the model
using spatial and color pixel data extracted via frame differencing and gives
two algorithms for performing inference in the model to accomplish detection
and tracking. This technique is demonstrated on multiple synthetic and
benchmark video datasets that illustrate its ability to, without modification,
detect and track objects with diverse physical characteristics moving over
non-uniform backgrounds and through occlusion.
| Willie Neiswanger, Frank Wood | null | 1210.3288 | null | null |
Inferring clonal evolution of tumors from single nucleotide somatic
mutations | cs.LG q-bio.PE q-bio.QM stat.ML | High-throughput sequencing allows the detection and quantification of
frequencies of somatic single nucleotide variants (SNV) in heterogeneous tumor
cell populations. In some cases, the evolutionary history and population
frequency of the subclonal lineages of tumor cells present in the sample can be
reconstructed from these SNV frequency measurements. However, automated methods
to do this reconstruction are not available and the conditions under which
reconstruction is possible have not been described.
We describe the conditions under which the evolutionary history can be
uniquely reconstructed from SNV frequencies from single or multiple samples
from the tumor population and we introduce a new statistical model, PhyloSub,
that infers the phylogeny and genotype of the major subclonal lineages
represented in the population of cancer cells. It uses a Bayesian nonparametric
prior over trees that groups SNVs into major subclonal lineages and
automatically estimates the number of lineages and their ancestry. We sample
from the joint posterior distribution over trees to identify evolutionary
histories and cell population frequencies that have the highest probability of
generating the observed SNV frequency data. When multiple phylogenies are
consistent with a given set of SNV frequencies, PhyloSub represents the
uncertainty in the tumor phylogeny using a partial order plot. Experiments on a
simulated dataset and two real datasets comprising tumor samples from acute
myeloid leukemia and chronic lymphocytic leukemia patients demonstrate that
PhyloSub can infer both linear (or chain) and branching lineages and its
inferences are in good agreement with ground truth, where it is available.
| Wei Jiao, Shankar Vembu, Amit G. Deshwar, Lincoln Stein, Quaid Morris | null | 1210.3384 | null | null |
Bayesian Analysis for miRNA and mRNA Interactions Using Expression Data | stat.AP cs.LG q-bio.GN q-bio.MN stat.ML | MicroRNAs (miRNAs) are small RNA molecules composed of 19-22 nt, which play
important regulatory roles in post-transcriptional gene regulation by
inhibiting the translation of the mRNA into proteins or otherwise cleaving the
target mRNA. Inferring miRNA targets provides useful information for
understanding the roles of miRNA in biological processes that are potentially
involved in complex diseases. Statistical methodologies for point estimation,
such as the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm,
have been proposed to identify the interactions of miRNA and mRNA based on
sequence and expression data. In this paper, we propose using the Bayesian
LASSO (BLASSO) and the non-negative Bayesian LASSO (nBLASSO) to analyse the
interactions between miRNA and mRNA using expression data. The proposed
Bayesian methods explore the posterior distributions for those parameters
required to model the miRNA-mRNA interactions. These approaches can be used to
observe the inferred effects of the miRNAs on the targets by plotting the
posterior distributions of those parameters. For comparison purposes, the Least
Squares Regression (LSR), Ridge Regression (RR), LASSO, non-negative LASSO
(nLASSO), and the proposed Bayesian approaches were applied to four public
datasets. We concluded that nLASSO and nBLASSO perform best in terms of
sensitivity and specificity. Compared to the point estimate algorithms, which
only provide single estimates for those parameters, the Bayesian methods are
more meaningful and provide credible intervals, which take into account the
uncertainty of the inferred interactions of the miRNA and mRNA. Furthermore,
Bayesian methods naturally provide statistical significance to select
convincing inferred interactions, while point estimate algorithms require a
manually chosen threshold, which is less meaningful, to choose the possible
interactions.
| Mingjun Zhong, Rong Liu, Bo Liu | null | 1210.3456 | null | null |
Learning Attitudes and Attributes from Multi-Aspect Reviews | cs.CL cs.IR cs.LG | The majority of online reviews consist of plain-text feedback together with a
single numeric score. However, there are multiple dimensions to products and
opinions, and understanding the `aspects' that contribute to users' ratings may
help us to better understand their individual preferences. For example, a
user's impression of an audiobook presumably depends on aspects such as the
story and the narrator, and knowing their opinions on these aspects may help us
to recommend better products. In this paper, we build models for rating systems
in which such dimensions are explicit, in the sense that users leave separate
ratings for each aspect of a product. By introducing new corpora consisting of
five million reviews, rated with between three and six aspects, we evaluate our
models on three prediction tasks: First, we use our model to uncover which
parts of a review discuss which of the rated aspects. Second, we use our model
to summarize reviews, which for us means finding the sentences that best
explain a user's rating. Finally, since aspect ratings are optional in many of
the datasets we consider, we use our model to recover those ratings that are
missing from a user's evaluation. Our model matches state-of-the-art approaches
on existing small-scale datasets, while scaling to the real-world datasets we
introduce. Moreover, our model is able to `disentangle' content and sentiment
words: we automatically learn content words that are indicative of a particular
aspect as well as the aspect-specific sentiment words that are indicative of a
particular rating.
| Julian McAuley, Jure Leskovec, Dan Jurafsky | null | 1210.3926 | null | null |
The Perturbed Variation | cs.LG stat.ML | We introduce a new discrepancy score between two distributions that gives an
indication on their similarity. While much research has been done to determine
if two samples come from exactly the same distribution, much less research
considered the problem of determining if two finite samples come from similar
distributions. The new score gives an intuitive interpretation of similarity;
it optimally perturbs the distributions so that they best fit each other. The
score is defined between distributions, and can be efficiently estimated from
samples. We provide convergence bounds of the estimated score, and develop
hypothesis testing procedures that test if two data sets come from similar
distributions. The statistical power of these procedures is demonstrated in
simulations. We also compare the score's capacity to detect similarity with
that of other known measures on real data.
| Maayan Harel and Shie Mannor | null | 1210.4006 | null | null |
Getting Feasible Variable Estimates From Infeasible Ones: MRF Local
Polytope Study | cs.NA cs.CV cs.DS cs.LG math.OC | This paper proposes a method for construction of approximate feasible primal
solutions from dual ones for large-scale optimization problems possessing
certain separability properties. Whereas infeasible primal estimates can
typically be produced from (sub-)gradients of the dual function, it is often
not easy to project them to the primal feasible set, since the projection
itself has a complexity comparable to the complexity of the initial problem. We
propose an alternative efficient method to obtain feasibility and show that its
properties influencing the convergence to the optimum are similar to the
properties of the Euclidean projection. We apply our method to the local
polytope relaxation of inference problems for Markov Random Fields and
demonstrate its superiority over existing methods.
| Bogdan Savchynskyy and Stefan Schmidt | null | 1210.4081 | null | null |
The Kernel Pitman-Yor Process | cs.LG cs.AI stat.ML | In this work, we propose the kernel Pitman-Yor process (KPYP) for
nonparametric clustering of data with general spatial or temporal
interdependencies. The KPYP is constructed by first introducing an infinite
sequence of random locations. Then, based on the stick-breaking construction of
the Pitman-Yor process, we define a predictor-dependent random probability
measure by considering that the discount hyperparameters of the
Beta-distributed random weights (stick variables) of the process are not
uniform among the weights, but controlled by a kernel function expressing the
proximity between the location assigned to each weight and the given
predictors.
| Sotirios P. Chatzis and Dimitrios Korkinof and Yiannis Demiris | null | 1210.4184 | null | null |
Semi-Supervised Classification Through the Bag-of-Paths Group
Betweenness | stat.ML cs.LG | This paper introduces a novel, well-founded betweenness measure, called the
Bag-of-Paths (BoP) betweenness, as well as its extension, the BoP group
betweenness, to tackle semisupervised classification problems on weighted
directed graphs. The objective of semi-supervised classification is to assign a
label to unlabeled nodes using the whole topology of the graph and the labeled
nodes at our disposal. The BoP betweenness relies on a bag-of-paths framework
assigning a Boltzmann distribution on the set of all possible paths through the
network such that long (high-cost) paths have a low probability of being picked
from the bag, while short (low-cost) paths have a high probability of being
picked. Within that context, the BoP betweenness of node j is defined as the
sum of the a posteriori probabilities that node j lies in-between two arbitrary
nodes i, k, when picking a path starting in i and ending in k. Intuitively, a
node typically receives a high betweenness if it has a large probability of
appearing on paths connecting two arbitrary nodes of the network. This quantity
can be computed in closed form by inverting a n x n matrix where n is the
number of nodes. For the group betweenness, the paths are constrained to start
and end in nodes within the same class, therefore defining a group betweenness
for each class. Unlabeled nodes are then classified according to the class
showing the highest group betweenness. Experiments on various real-world data
sets show that the BoP group betweenness outperforms all the tested
state-of-the-art methods. The benefit of the BoP betweenness is particularly
noticeable when only a few labeled nodes are available.
| Bertrand Lebichot, Ilkka Kivim\"aki, Kevin Fran\c{c}oisse and Marco
Saerens | null | 1210.4276 | null | null |
Hilbert Space Embedding for Dirichlet Process Mixtures | stat.ML cs.LG | This paper proposes a Hilbert space embedding for Dirichlet Process mixture
models via a stick-breaking construction of Sethuraman. Although Bayesian
nonparametrics offers a powerful approach to construct a prior that avoids the
need to specify the model size/complexity explicitly, an exact inference is
often intractable. On the other hand, frequentist approaches such as kernel
machines, which suffer from the model selection/comparison problems, often
benefit from efficient learning algorithms. This paper discusses the
possibility to combine the best of both worlds by using the Dirichlet Process
mixture model as a case study.
| Krikamol Muandet | null | 1210.4347 | null | null |
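A minimal sketch of Sethuraman's stick-breaking construction referenced above, truncated to finitely many sticks and with an assumed standard-normal base measure; it draws the random weights and atoms of a Dirichlet Process, while the paper's kernel (Hilbert space) embedding step is not shown.

```python
import numpy as np

def stick_breaking(alpha, trunc, seed=0):
    """Truncated draw of DP(alpha, H) weights and atoms, with H = N(0, 1)."""
    rng = np.random.default_rng(seed)
    betas = rng.beta(1.0, alpha, size=trunc)  # stick proportions beta_k ~ Beta(1, alpha)
    # pi_k = beta_k * prod_{j<k} (1 - beta_j): break off what remains of a unit stick.
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    weights = betas * remaining
    atoms = rng.standard_normal(trunc)        # theta_k drawn i.i.d. from the base measure
    return weights, atoms

if __name__ == "__main__":
    w, theta = stick_breaking(alpha=2.0, trunc=100)
    print(w[:5], "sum:", w.sum())  # weights decay quickly; sum approaches 1
```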
Fast SVM-based Feature Elimination Utilizing Data Radius, Hard-Margin,
Soft-Margin | stat.ML cs.LG | Margin maximization in the hard-margin sense, proposed as feature elimination
criterion by the MFE-LO method, is combined here with utilization of the data
radius, aiming to further lower generalization error, since several published
bounds and bound-related formulations for misclassification risk (or error)
involve the radius, e.g. the product of the squared radius and the squared norm
of the weight vector. Additionally, we propose novel feature elimination
criteria that, while instead being in the soft-margin sense, can also utilize
the data radius, building on previously published bound-related formulations of
the radius in the soft-margin sense, in which e.g. a focus was on the
principle stated therein as "finding a bound whose minima are in a region with
small leave-one-out values may be more important than its tightness". These
additional criteria we propose combine radius utilization with a novel,
computationally low-cost, light soft-margin classifier-retraining approach we
devise, named QP1; QP1 is the soft-margin alternative to the hard-margin LO. We
correct an error in the MFE-LO description, find MFE-LO achieves the highest
generalization accuracy among the previously published margin-based feature
elimination (MFE) methods, discuss some limitations of MFE-LO, and find our
novel methods herein outperform MFE-LO, attain lower test set classification
error rate. On several datasets that each both have a large number of features
and fall into the `large features few samples' dataset category, and on
datasets with lower (low-to-intermediate) number of features, our novel methods
give promising results. In particular, the tunable methods among ours, which do
not employ the (non-tunable) LO approach, can in future work be tuned more
aggressively than they were here, aiming to demonstrate even higher performance.
| Yaman Aksu | null | 1210.4460 | null | null |
Epitome for Automatic Image Colorization | cs.CV cs.LG cs.MM | Image colorization adds color to grayscale images. It not only increases the
visual appeal of grayscale images, but also enriches the information contained
in scientific images that lack color information. Most existing methods of
colorization require laborious user interaction for scribbles or image
segmentation. To eliminate the need for human labor, we develop an automatic
image colorization method using epitome. Built upon a generative graphical
model, epitome is a condensed image appearance and shape model which also
proves to be an effective summary of color information for the colorization
task. We train the epitome from the reference images and perform inference in
the epitome to colorize grayscale images, rendering better colorization results
than previous methods in our experiments.
| Yingzhen Yang, Xinqi Chu, Tian-Tsong Ng, Alex Yong-Sang Chia,
Shuicheng Yan, Thomas S. Huang | null | 1210.4481 | null | null |
A Direct Approach to Multi-class Boosting and Extensions | cs.LG | Boosting methods combine a set of moderately accurate weak learners to form a
highly accurate predictor. Despite the practical importance of multi-class
boosting, it has received far less attention than its binary counterpart. In
this work, we propose a fully-corrective multi-class boosting formulation which
directly solves the multi-class problem without dividing it into multiple
binary classification problems. In contrast, most previous multi-class boosting
algorithms decompose a multi-boost problem into multiple binary boosting
problems. By explicitly deriving the Lagrange dual of the primal optimization
problem, we are able to construct a column generation-based fully-corrective
approach to boosting which directly optimizes multi-class classification
performance. The new approach not only updates all weak learners' coefficients
at every iteration, but does so in a manner flexible enough to accommodate
various loss functions and regularizations. For example, it enables us to
introduce structural sparsity through mixed-norm regularization to promote
group sparsity and feature sharing. Boosting with shared features is
particularly beneficial in complex prediction problems where features can be
expensive to compute. Our experiments on various data sets demonstrate that our
direct multi-class boosting generalizes as well as, or better than, a range of
competing multi-class boosting methods. The end result is a highly effective
and compact ensemble classifier which can be trained in a distributed fashion.
| Chunhua Shen, Sakrapee Paisitkriangkrai, Anton van den Hengel | null | 1210.4601 | null | null |
Mean-Field Learning: a Survey | cs.LG cs.GT cs.MA math.DS stat.ML | In this paper we study iterative procedures for stationary equilibria in
games with a large number of players. Most learning algorithms for games with
continuous action spaces are limited to strict contraction best reply maps in
which the Banach-Picard iteration converges with geometrical convergence rate.
When the best reply map is not a contraction, Ishikawa-based learning is
proposed. The algorithm is shown to behave well for Lipschitz continuous and
pseudo-contractive maps. However, the convergence rate is still unsatisfactory.
Several acceleration techniques are presented. We explain how cognitive users
can improve the convergence rate based on only a small number of measurements. The
methodology provides nice properties in mean field games where the payoff
function depends only on own-action and the mean of the mean-field (first
moment mean-field games). A learning framework that exploits the structure of
such games, called, mean-field learning, is proposed. The proposed mean-field
learning framework is suitable not only for games but also for non-convex
global optimization problems. Then, we introduce mean-field learning without
feedback and examine the convergence to equilibria in beauty contest games,
which have interesting applications in financial markets. Finally, we provide a
fully distributed mean-field learning and its speedup versions for satisfactory
solution in wireless networks. We illustrate the convergence rate improvement
with numerical examples.
| Hamidou Tembine, Raul Tempone and Pedro Vilanova | null | 1210.4657 | null | null |
Regulating the information in spikes: a useful bias | q-bio.NC cs.IT cs.LG math.IT | The bias/variance tradeoff is fundamental to learning: increasing a model's
complexity can improve its fit on training data, but potentially worsens
performance on future samples. Remarkably, however, the human brain
effortlessly handles a wide range of complex pattern recognition tasks. On the
basis of these conflicting observations, it has been argued that useful biases
in the form of "generic mechanisms for representation" must be hardwired into
cortex (Geman et al).
This note describes a useful bias that encourages cooperative learning which
is both biologically plausible and rigorously justified.
| David Balduzzi | null | 1210.4695 | null | null |
Scalable Matrix-valued Kernel Learning for High-dimensional Nonlinear
Multivariate Regression and Granger Causality | stat.ML cs.LG | We propose a general matrix-valued multiple kernel learning framework for
high-dimensional nonlinear multivariate regression problems. This framework
allows a broad class of mixed norm regularizers, including those that induce
sparsity, to be imposed on a dictionary of vector-valued Reproducing Kernel
Hilbert Spaces. We develop a highly scalable and eigendecomposition-free
algorithm that orchestrates two inexact solvers for simultaneously learning
both the input and output components of separable matrix-valued kernels. As a
key application enabled by our framework, we show how high-dimensional causal
inference tasks can be naturally cast as sparse function estimation problems,
leading to novel nonlinear extensions of a class of Graphical Granger Causality
techniques. Our algorithmic developments and extensive empirical studies are
complemented by theoretical analyses in terms of Rademacher generalization
bounds.
| Vikas Sindhwani and Minh Ha Quang and Aurelie C. Lozano | null | 1210.4792 | null | null |
Leveraging Side Observations in Stochastic Bandits | cs.LG stat.ML | This paper considers stochastic bandits with side observations, a model that
accounts for both the exploration/exploitation dilemma and relationships
between arms. In this setting, after pulling an arm i, the decision maker also
observes the rewards for some other actions related to i. We will see that this
model is suited to content recommendation in social networks, where users'
reactions may be endorsed or not by their friends. We provide efficient
algorithms based on upper confidence bounds (UCBs) to leverage this additional
information and derive new bounds improving on standard regret guarantees. We
also evaluate these policies in the context of movie recommendation in social
networks: experiments on real datasets show substantial learning rate speedups
ranging from 2.2x to 14x on dense networks.
| Stephane Caron, Branislav Kveton, Marc Lelarge, Smriti Bhagat | null | 1210.4839 | null | null |
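A minimal sketch of the mechanism, assuming Bernoulli rewards and a fixed neighbor graph: a UCB learner that, after pulling an arm, also updates the statistics of the arm's graph neighbors from the side observations. This illustrates the idea rather than the exact policies or bounds of the paper; the means, graph, and exploration constant below are made-up.

import numpy as np

def ucb_with_side_observations(means, neighbors, T, seed=0):
    # After pulling arm i, rewards of i's neighbors are observed too,
    # so their statistics tighten "for free".
    rng = np.random.default_rng(seed)
    K = len(means)
    counts, sums, regret = np.zeros(K), np.zeros(K), 0.0
    for t in range(1, T + 1):
        if np.any(counts == 0):
            arm = int(np.argmin(counts))        # try every arm once
        else:
            ucb = sums / counts + np.sqrt(2.0 * np.log(t) / counts)
            arm = int(np.argmax(ucb))
        regret += means.max() - means[arm]
        for j in [arm] + list(neighbors[arm]):  # side observations
            sums[j] += rng.binomial(1, means[j])
            counts[j] += 1
    return regret

means = np.array([0.9, 0.8, 0.7, 0.6])
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(ucb_with_side_observations(means, neighbors, T=5000))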
An Efficient Message-Passing Algorithm for the M-Best MAP Problem | cs.AI cs.LG stat.ML | Much effort has been directed at algorithms for obtaining the highest
probability configuration in a probabilistic random field model known as the
maximum a posteriori (MAP) inference problem. In many situations, one could
benefit from having not just a single solution, but the top M most probable
solutions, known as the M-Best MAP problem. In this paper, we propose an
efficient message-passing based algorithm for solving the M-Best MAP problem.
Specifically, our algorithm solves the recently proposed Linear Programming
(LP) formulation of M-Best MAP [7], while being orders of magnitude faster than
a generic LP-solver. Our approach relies on studying a particular partial
Lagrangian relaxation of the M-Best MAP LP which exposes a natural
combinatorial structure of the problem that we exploit.
| Dhruv Batra | null | 1210.4841 | null | null |
Deterministic MDPs with Adversarial Rewards and Bandit Feedback | cs.GT cs.LG | We consider a Markov decision process with deterministic state transition
dynamics, adversarially generated rewards that change arbitrarily from round to
round, and a bandit feedback model in which the decision maker only observes
the rewards it receives. In this setting, we present a novel and efficient
online decision making algorithm named MarcoPolo. Under mild assumptions on the
structure of the transition dynamics, we prove that MarcoPolo enjoys a regret
of O(T^(3/4)sqrt(log(T))) against the best deterministic policy in hindsight.
Notably, our analysis does not rely on the stringent unichain assumption,
which dominates much of the previous work on this topic.
| Raman Arora, Ofer Dekel, Ambuj Tewari | null | 1210.4843 | null | null |
Variational Dual-Tree Framework for Large-Scale Transition Matrix
Approximation | cs.LG stat.ML | In recent years, non-parametric methods utilizing random walks on graphs have
been used to solve a wide range of machine learning problems, but in their
simplest form they do not scale well due to the quadratic complexity. In this
paper, a new dual-tree based variational approach for approximating the
transition matrix and efficiently performing the random walk is proposed. The
approach exploits a connection between kernel density estimation, mixture
modeling, and random walk on graphs in an optimization of the transition matrix
for the data graph that ties together edge transitions probabilities that are
similar. Compared to the de facto standard approximation method based on
k-nearest neighbors, we demonstrate orders of magnitude speedup without
sacrificing accuracy for Label Propagation tasks on benchmark data sets in
semi-supervised learning.
| Saeed Amizadeh, Bo Thiesson, Milos Hauskrecht | null | 1210.4846 | null | null |
Markov Determinantal Point Processes | cs.LG cs.IR stat.ML | A determinantal point process (DPP) is a random process useful for modeling
the combinatorial problem of subset selection. In particular, DPPs encourage a
random subset Y to contain a diverse set of items selected from a base set $\mathcal{Y}$.
For example, we might use a DPP to display a set of news headlines that are
relevant to a user's interests while covering a variety of topics. Suppose,
however, that we are asked to sequentially select multiple diverse sets of
items, for example, displaying new headlines day-by-day. We might want these
sets to be diverse not just individually but also through time, offering
headlines today that are unlike the ones shown yesterday. In this paper, we
construct a Markov DPP (M-DPP) that models a sequence of random sets $\{Y_t\}$. The
proposed M-DPP defines a stationary process that maintains DPP margins.
Crucially, the induced union process $Z_t = Y_t \cup Y_{t-1}$ is also marginally
DPP-distributed. Jointly, these properties imply that the sequence of random
sets are encouraged to be diverse both at a given time step as well as across
time steps. We describe an exact, efficient sampling procedure, and a method
for incrementally learning a quality measure over items in the base set $\mathcal{Y}$ based
on external preferences. We apply the M-DPP to the task of sequentially
displaying diverse and relevant news articles to a user with topic preferences.
| Raja Hafiz Affandi, Alex Kulesza, Emily B. Fox | null | 1210.4850 | null | null |
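For background, a sketch of exact sampling from a plain (non-Markov) DPP using the standard spectral algorithm of Hough et al., the primitive an M-DPP sampler chains over time; the paper's exact sequential sampler is more involved. The Gaussian similarity kernel in the toy usage is an illustrative assumption.

import numpy as np

def sample_dpp(L, rng):
    # Phase 1: pick eigenvectors; phase 2: pick items and project the
    # remaining basis orthogonal to each chosen coordinate.
    vals, vecs = np.linalg.eigh(L)
    vals = np.clip(vals, 0.0, None)
    V = vecs[:, rng.random(len(vals)) < vals / (1.0 + vals)]
    Y = []
    while V.shape[1] > 0:
        p = np.clip((V ** 2).sum(axis=1), 0.0, None)
        p /= p.sum()
        i = int(rng.choice(len(p), p=p))
        Y.append(i)
        j = int(np.argmax(np.abs(V[i])))            # column with V[i, j] != 0
        V = V - np.outer(V[:, j], V[i] / V[i, j])   # zero out row i
        V = np.delete(V, j, axis=1)
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)                  # re-orthonormalize
    return sorted(Y)

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 2))
L = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))  # similarity kernel
print(sample_dpp(L, rng))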
Learning to Rank With Bregman Divergences and Monotone Retargeting | cs.LG stat.ML | This paper introduces a novel approach for learning to rank (LETOR) based on
the notion of monotone retargeting. It involves minimizing a divergence between
all monotonic increasing transformations of the training scores and a
parameterized prediction function. The minimization is both over the
transformations as well as over the parameters. It is applied to Bregman
divergences, a large class of "distance like" functions that were recently
shown to be the unique class that is statistically consistent with the
normalized discounted cumulative gain (NDCG) criterion [19]. The algorithm uses
alternating projection style updates, in which one set of simultaneous
projections can be computed independent of the Bregman divergence and the other
reduces to parameter estimation of a generalized linear model. This results in
an easily implemented, efficiently parallelizable algorithm for the LETOR task
that enjoys global optimum guarantees under mild conditions. We present
empirical results on benchmark datasets showing that this approach can
outperform the state of the art NDCG consistent techniques.
| Sreangsu Acharyya, Oluwasanmi Koyejo, Joydeep Ghosh | null | 1210.4851 | null | null |
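A minimal sketch of the alternating scheme for the special case of the squared loss (the simplest Bregman divergence): fit a linear scorer to the current targets, then retarget by projecting the predictions onto the scores that are monotone in the relevance order (an isotonic regression). The paper handles general Bregman divergences and adds constraints ruling out degenerate constant targets; the synthetic relevance grades below are assumptions.

import numpy as np
from sklearn.isotonic import IsotonicRegression

def monotone_retarget(X, y, iters=20):
    # (a) least-squares fit of a linear scorer to the targets z;
    # (b) replace z by the isotonic projection of the predictions,
    #     monotone in the order of the relevance labels y.
    order = np.argsort(y)
    z = y.astype(float).copy()
    iso = IsotonicRegression(increasing=True)
    for _ in range(iters):
        w, *_ = np.linalg.lstsq(X, z, rcond=None)
        pred = X @ w
        z[order] = iso.fit_transform(np.arange(len(y)), pred[order])
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = np.digitize(X @ np.array([1.0, 0.5, 0, 0, 0]), [-1, 0, 1])  # relevance grades
print(monotone_retarget(X, y))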
A Slice Sampler for Restricted Hierarchical Beta Process with
Applications to Shared Subspace Learning | cs.LG cs.CV stat.ML | Hierarchical beta process has found interesting applications in recent years.
In this paper we present a modified hierarchical beta process prior with
applications to hierarchical modeling of multiple data sources. The novel use
of the prior over a hierarchical factor model allows factors to be shared
across different sources. We derive a slice sampler for this model, enabling
tractable inference even when the likelihood and the prior over parameters are
non-conjugate. This allows the application of the model in much wider contexts
without restrictions. We present two different data generative models: a linear
Gaussian-Gaussian model for real-valued data and a linear Poisson-gamma model
for count data. Encouraging transfer learning results are shown for two
real-world applications: text modeling and content-based image retrieval.
| Sunil Kumar Gupta, Dinh Q. Phung, Svetha Venkatesh | null | 1210.4855 | null | null |
Exploiting compositionality to explore a large space of model structures | cs.LG stat.ML | The recent proliferation of richly structured probabilistic models raises the
question of how to automatically determine an appropriate model for a dataset.
We investigate this question for a space of matrix decomposition models which
can express a variety of widely used models from unsupervised learning. To
enable model selection, we organize these models into a context-free grammar
which generates a wide variety of structures through the compositional
application of a few simple rules. We use our grammar to generically and
efficiently infer latent components and estimate predictive likelihood for
nearly 2500 structures using a small toolbox of reusable algorithms. Using a
greedy search over our grammar, we automatically choose the decomposition
structure from raw data by evaluating only a small fraction of all models. The
proposed method typically finds the correct structure for synthetic data and
backs off gracefully to simpler models under heavy noise. It learns sensible
structures for datasets as diverse as image patches, motion capture, 20
Questions, and U.S. Senate votes, all using exactly the same code.
| Roger Grosse, Ruslan R Salakhutdinov, William T. Freeman, Joshua B.
Tenenbaum | null | 1210.4856 | null | null |
Mechanism Design for Cost Optimal PAC Learning in the Presence of
Strategic Noisy Annotators | cs.LG cs.GT stat.ML | We consider the problem of Probably Approximately Correct (PAC) learning of a
binary classifier from noisy labeled examples acquired from multiple annotators
(each characterized by a respective classification noise rate). First, we
consider the complete information scenario, where the learner knows the noise
rates of all the annotators. For this scenario, we derive a sample complexity
bound for the Minimum Disagreement Algorithm (MDA) on the number of labeled
examples to be obtained from each annotator. Next, we consider the incomplete
information scenario, where each annotator is strategic and holds the
respective noise rate as a private information. For this scenario, we design a
cost optimal procurement auction mechanism along the lines of Myerson's optimal
auction design framework in a non-trivial manner. This mechanism satisfies
incentive compatibility property, thereby facilitating the learner to elicit
true noise rates of all the annotators.
| Dinesh Garg, Sourangshu Bhattacharya, S. Sundararajan, Shirish Shevade | null | 1210.4859 | null | null |
Spectral Estimation of Conditional Random Graph Models for Large-Scale
Network Data | cs.SI cs.LG physics.soc-ph stat.ML | Generative models for graphs have typically been committed to strong prior
assumptions concerning the form of the modeled distributions. Moreover, the
vast majority of currently available models are either only suitable for
characterizing some particular network properties (such as degree distribution
or clustering coefficient), or they are aimed at estimating joint probability
distributions, which is often intractable in large-scale networks. In this
paper, we first propose a novel network statistic, based on the Laplacian
spectrum of graphs, which allows us to dispense with any parametric assumption
concerning the modeled network properties. Second, we use the defined statistic
to develop the Fiedler random graph model, switching the focus from the
estimation of joint probability distributions to a more tractable conditional
estimation setting. After analyzing the dependence structure characterizing
Fiedler random graphs, we evaluate them experimentally in edge prediction over
several real-world networks, showing that they achieve a much higher
prediction accuracy than various alternative statistical models.
| Antonino Freno, Mikaela Keller, Gemma C. Garriga, Marc Tommasi | null | 1210.4860 | null | null |
Sample-efficient Nonstationary Policy Evaluation for Contextual Bandits | cs.LG stat.ML | We present, and prove properties of, a new offline policy evaluator for
exploration learning settings; it is superior to previous evaluators. In
particular, it simultaneously and correctly incorporates techniques from
importance weighting, doubly robust evaluation, and nonstationary policy
evaluation approaches. In addition, our approach allows generating longer
histories by careful control of a bias-variance tradeoff, and further decreases
variance by incorporating information about randomness of the target policy.
Empirical evidence from synthetic and real-world exploration learning problems
shows the new evaluator successfully unifies previous approaches and uses
information an order of magnitude more efficiently.
| Miroslav Dudik, Dumitru Erhan, John Langford, Lihong Li | null | 1210.4862 | null | null |
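The doubly robust building block that the evaluator incorporates can be written in a few lines; this sketch assumes a deterministic target policy and logged propensities, and omits the nonstationary and history-generation machinery that is the paper's contribution. target_policy and reward_model are hypothetical callables.

import numpy as np

def doubly_robust_value(logged, propensities, target_policy, reward_model):
    # logged: list of (context, action, reward) from the logging policy.
    # Model prediction for the target action, plus an importance-weighted
    # correction whenever the logged action matches the target's choice.
    V = 0.0
    for (x, a, r), p in zip(logged, propensities):
        pi_a = target_policy(x)
        V += reward_model(x, pi_a)
        if pi_a == a:
            V += (r - reward_model(x, a)) / p
    return V / len(logged)

# toy check: two actions, contexts in {0, 1}
logged = [(0, 0, 1.0), (0, 1, 0.0), (1, 1, 1.0), (1, 0, 0.0)]
props = [0.5] * 4                       # uniform logging policy
policy = lambda x: x                    # hypothetical target policy
model = lambda x, a: 0.9 if a == x else 0.1
print(doubly_robust_value(logged, props, policy, model))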
Lifted Relational Variational Inference | cs.LG stat.ML | Hybrid continuous-discrete models naturally represent many real-world
applications in robotics, finance, and environmental engineering. Inference
with large-scale models is challenging because relational structures
deteriorate rapidly during inference with observations. The main contribution
of this paper is an efficient relational variational inference algorithm that
factors large-scale probability models into simpler variational models, composed
of mixtures of iid (Bernoulli) random variables. The algorithm takes
probabilistic relational models of large-scale hybrid systems and converts them
to close-to-optimal variational models. Then, it efficiently calculates marginal
probabilities on the variational models using latent (or lifted) variable
elimination or lifted stochastic sampling. This inference is unique because
it maintains the relational structure upon individual observations and during
inference steps.
| Jaesik Choi, Eyal Amir | null | 1210.4867 | null | null |
Response Aware Model-Based Collaborative Filtering | cs.LG cs.IR stat.ML | Previous work on recommender systems mainly focuses on fitting the ratings
provided by users. However, the response patterns, i.e., some items are rated
while others not, are generally ignored. We argue that failing to observe such
response patterns can lead to biased parameter estimation and sub-optimal model
performance. Although several pieces of work have tried to model users'
response patterns, they miss the effectiveness and interpretability of the
successful matrix factorization collaborative filtering approaches. To bridge
the gap, in this paper, we unify explicit response models and probabilistic matrix factorization (PMF) to establish
the Response Aware Probabilistic Matrix Factorization (RAPMF) framework. We
show that RAPMF subsumes PMF as a special case. Empirically we demonstrate the
merits of RAPMF from various aspects.
| Guang Ling, Haiqin Yang, Michael R. Lyu, Irwin King | null | 1210.4869 | null | null |
Crowdsourcing Control: Moving Beyond Multiple Choice | cs.AI cs.LG | To ensure quality results from crowdsourced tasks, requesters often aggregate
worker responses and use one of a plethora of strategies to infer the correct
answer from the set of noisy responses. However, all current models assume
prior knowledge of all possible outcomes of the task. While not an unreasonable
assumption for tasks that can be posited as multiple-choice questions (e.g.
n-ary classification), we observe that many tasks do not naturally fit this
paradigm, but instead demand a free-response formulation where the outcome
space is of infinite size (e.g. audio transcription). We model such tasks with
a novel probabilistic graphical model, and design and implement LazySusan, a
decision-theoretic controller that dynamically requests responses as necessary
in order to infer answers to these tasks. We also design an EM algorithm to
jointly learn the parameters of our model while inferring the correct answers
to multiple tasks at a time. Live experiments on Amazon Mechanical Turk
demonstrate the superiority of LazySusan at solving SAT Math questions,
eliminating 83.2% of the error and achieving greater net utility compared to
the state-of-the-art strategy, majority-voting. We also show in live experiments
that our EM algorithm outperforms majority-voting on a visualization task that
we design.
| Christopher H. Lin, Mausam, Daniel Weld | null | 1210.4870 | null | null |
Learning Mixtures of Submodular Shells with Application to Document
Summarization | cs.LG cs.CL cs.IR stat.ML | We introduce a method to learn a mixture of submodular "shells" in a
large-margin setting. A submodular shell is an abstract submodular function
that can be instantiated with a ground set and a set of parameters to produce a
submodular function. A mixture of such shells can then also be so instantiated
to produce a more complex submodular function. What our algorithm learns are
the mixture weights over such shells. We provide a risk bound guarantee when
learning in a large-margin structured-prediction setting using a projected
subgradient method when only approximate submodular optimization is possible
(such as with submodular function maximization). We apply this method to the
problem of multi-document summarization and produce the best results reported
so far on the widely used NIST DUC-05 through DUC-07 document summarization
corpora.
| Hui Lin, Jeff A. Bilmes | null | 1210.4871 | null | null |
Nested Dictionary Learning for Hierarchical Organization of Imagery and
Text | cs.LG cs.CV stat.ML | A tree-based dictionary learning model is developed for joint analysis of
imagery and associated text. The dictionary learning may be applied directly to
image patches, or to general feature vectors extracted from patches
or superpixels (using any existing method for image feature extraction). Each
image is associated with a path through the tree (from root to a leaf), and
each of the multiple patches in a given image is associated with one node in
that path. Nodes near the tree root are shared between multiple paths,
representing image characteristics that are common among different types of
images. Moving toward the leaves, nodes become specialized, representing
details in image classes. If available, words (text) are also jointly modeled,
with a path-dependent probability over words. The tree structure is inferred
via a nested Dirichlet process, and a retrospective stick-breaking sampler is
used to infer the tree depth and width.
| Lingbo Li, XianXing Zhang, Mingyuan Zhou, Lawrence Carin | null | 1210.4872 | null | null |
Active Imitation Learning via Reduction to I.I.D. Active Learning | cs.LG stat.ML | In standard passive imitation learning, the goal is to learn a target policy
by passively observing full execution trajectories of it. Unfortunately,
generating such trajectories can require substantial expert effort and be
impractical in some cases. In this paper, we consider active imitation learning
with the goal of reducing this effort by querying the expert about the desired
action at individual states, which are selected based on answers to past
queries and the learner's interactions with an environment simulator. We
introduce a new approach based on reducing active imitation learning to i.i.d.
active learning, which can leverage progress in the i.i.d. setting. Our first
contribution is to analyze reductions for both non-stationary and stationary
policies, showing that the label complexity (number of queries) of active
imitation learning can be substantially less than passive learning. Our second
contribution is to introduce a practical algorithm inspired by the reductions,
which is shown to be highly effective in four test domains compared to a number
of alternatives.
| Kshitij Judah, Alan Fern, Thomas G. Dietterich | null | 1210.4876 | null | null |
Inferring Strategies from Limited Reconnaissance in Real-time Strategy
Games | cs.AI cs.GT cs.LG | In typical real-time strategy (RTS) games, enemy units are visible only when
they are within sight range of a friendly unit. Knowledge of an opponent's
disposition is limited to what can be observed through scouting. Information is
costly, since units dedicated to scouting are unavailable for other purposes,
and the enemy will resist scouting attempts. It is important to infer as much
as possible about the opponent's current and future strategy from the available
observations. We present a dynamic Bayes net model of strategies in the RTS
game Starcraft that combines a generative model of how strategies relate to
observable quantities with a principled framework for incorporating evidence
gained via scouting. We demonstrate the model's ability to infer unobserved
aspects of the game from realistic observations.
| Jesse Hostetler, Ethan W. Dereszynski, Thomas G. Dietterich, Alan Fern | null | 1210.4880 | null | null |
Tightening Fractional Covering Upper Bounds on the Partition Function
for High-Order Region Graphs | cs.LG stat.ML | In this paper we present a new approach for tightening upper bounds on the
partition function. Our upper bounds are based on fractional covering bounds on
the entropy function, and result in a concave program to compute these bounds
and a convex program to tighten them. To solve these programs effectively for
general region graphs we utilize the entropy barrier method, thus decomposing
the original programs by their dual programs and solve them with dual block
optimization scheme. The entropy barrier method provides an elegant framework
to generalize the message-passing scheme to high-order region graphs, as well as
to solve the block dual steps in closed form. This is key to computational
tractability for large problems with thousands of regions.
| Tamir Hazan, Jian Peng, Amnon Shashua | null | 1210.4881 | null | null |
A Model-Based Approach to Rounding in Spectral Clustering | cs.LG cs.NA stat.ML | In spectral clustering, one defines a similarity matrix for a collection of
data points, transforms the matrix to get the Laplacian matrix, finds the
eigenvectors of the Laplacian matrix, and obtains a partition of the data using
the leading eigenvectors. The last step is sometimes referred to as rounding,
where one needs to decide how many leading eigenvectors to use, to determine
the number of clusters, and to partition the data points. In this paper, we
propose a novel method for rounding. The method differs from previous methods
in three ways. First, we relax the assumption that the number of clusters
equals the number of eigenvectors used. Second, when deciding the number of
leading eigenvectors to use, we not only rely on information contained in the
leading eigenvectors themselves, but also use subsequent eigenvectors. Third,
our method is model-based and solves all the three subproblems of rounding
using a class of graphical models called latent tree models. We evaluate our
method on both synthetic and real-world data. The results show that our method
works correctly in the ideal case where between-cluster similarity is 0, and
degrades gracefully as one moves away from the ideal case.
| Leonard K. M. Poon, April H. Liu, Tengfei Liu, Nevin Lianwen Zhang | null | 1210.4883 | null | null |
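For reference, a sketch of the standard pipeline whose final "rounding" step (k-means on row-normalized leading eigenvectors) the paper replaces with latent tree models; note it hard-codes the assumption, relaxed by the paper, that the number of clusters equals the number of eigenvectors used. The toy similarity kernel is an illustrative assumption.

import numpy as np
from sklearn.cluster import KMeans

def spectral_cluster(S, k):
    # Normalized Laplacian, k leading (smallest-eigenvalue) eigenvectors,
    # row normalization, then k-means as the rounding step.
    d = S.sum(axis=1)
    Dn = np.diag(1.0 / np.sqrt(d))
    Lap = np.eye(len(S)) - Dn @ S @ Dn
    _, vecs = np.linalg.eigh(Lap)          # eigenvalues in ascending order
    U = vecs[:, :k]
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(20, 2)) for c in (0, 3, 6)])
S = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))
print(spectral_cluster(S, 3))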
A Spectral Algorithm for Latent Junction Trees | cs.LG stat.ML | Latent variable models are an elegant framework for capturing rich
probabilistic dependencies in many applications. However, current approaches
typically parametrize these models using conditional probability tables, and
learning relies predominantly on local search heuristics such as Expectation
Maximization. Using tensor algebra, we propose an alternative parameterization
of latent variable models (where the model structures are junction trees) that
still allows for computation of marginals among observed variables. While this
novel representation leads to a moderate increase in the number of parameters
for junction trees of low treewidth, it lets us design a local-minimum-free
algorithm for learning this parameterization. The main computation of the
algorithm involves only tensor operations and SVDs which can be orders of
magnitude faster than EM algorithms for large datasets. To our knowledge, this
is the first provably consistent parameter learning technique for a large class
of low-treewidth latent graphical models beyond trees. We demonstrate the
advantages of our method on synthetic and real datasets.
| Ankur P. Parikh, Le Song, Mariya Ishteva, Gabi Teodoru, Eric P. Xing | null | 1210.4884 | null | null |
Hilbert Space Embeddings of POMDPs | cs.LG cs.AI stat.ML | A nonparametric approach for policy learning for POMDPs is proposed. The
approach represents distributions over the states, observations, and actions as
embeddings in feature spaces, which are reproducing kernel Hilbert spaces.
Distributions over states given the observations are obtained by applying the
kernel Bayes' rule to these distribution embeddings. Policies and value
functions are defined on the feature space over states, which leads to a
feature space expression for the Bellman equation. Value iteration may then be
used to estimate the optimal value function and associated policy. Experimental
results confirm that the correct policy is learned using the feature space
representation.
| Yu Nishiyama, Abdeslam Boularias, Arthur Gretton, Kenji Fukumizu | null | 1210.4887 | null | null |
Local Structure Discovery in Bayesian Networks | cs.LG cs.AI stat.ML | Learning a Bayesian network structure from data is an NP-hard problem and
thus exact algorithms are feasible only for small data sets. Therefore, network
structures for larger networks are usually learned with various heuristics.
Another approach to scaling up the structure learning is local learning. In
local learning, the modeler has one or more target variables that are of
special interest; he wants to learn the structure near the target variables and
is not interested in the rest of the variables. In this paper, we present a
score-based local learning algorithm called SLL. We conjecture that our
algorithm is theoretically sound in the sense that it is optimal in the limit
of large sample size. Empirical results suggest that SLL is competitive when
compared to the constraint-based HITON algorithm. We also study the prospects
of constructing the network structure for the whole node set based on local
results by presenting two algorithms and comparing them to several heuristics.
| Teppo Niinimaki, Pekka Parviainen | null | 1210.4888 | null | null |
Learning STRIPS Operators from Noisy and Incomplete Observations | cs.LG cs.AI stat.ML | Agents learning to act autonomously in real-world domains must acquire a
model of the dynamics of the domain in which they operate. Learning domain
dynamics can be challenging, especially where an agent only has partial access
to the world state, and/or noisy external sensors. Even in standard STRIPS
domains, existing approaches cannot learn from noisy, incomplete observations
typical of real-world domains. We propose a method which learns STRIPS action
models in such domains, by decomposing the problem into first learning a
transition function between states in the form of a set of classifiers, and
then deriving explicit STRIPS rules from the classifiers' parameters. We
evaluate our approach on simulated standard planning domains from the
International Planning Competition, and show that it learns useful domain
descriptions from noisy, incomplete observations.
| Kira Mourao, Luke S. Zettlemoyer, Ronald P. A. Petrick, Mark Steedman | null | 1210.4889 | null | null |
Unsupervised Joint Alignment and Clustering using Bayesian
Nonparametrics | cs.LG stat.ML | Joint alignment of a collection of functions is the process of independently
transforming the functions so that they appear more similar to each other.
Typically, such unsupervised alignment algorithms fail when presented with
complex data sets arising from multiple modalities or make restrictive
assumptions about the form of the functions or transformations, limiting their
generality. We present a transformed Bayesian infinite mixture model that can
simultaneously align and cluster a data set. Our model and associated learning
scheme offer two key advantages: the optimal number of clusters is determined
in a data-driven fashion through the use of a Dirichlet process prior, and it
can accommodate any transformation function parameterized by a continuous
parameter vector. As a result, it is applicable to a wide range of data types,
and transformation functions. We present positive results on synthetic
two-dimensional data, on a set of one-dimensional curves, and on various image
data sets, showing large improvements over previous work. We discuss several
variations of the model and conclude with directions for future work.
| Marwan A. Mattar, Allen R. Hanson, Erik G. Learned-Miller | null | 1210.4892 | null | null |
Sparse Q-learning with Mirror Descent | cs.LG stat.ML | This paper explores a new framework for reinforcement learning based on
online convex optimization, in particular mirror descent and related
algorithms. Mirror descent can be viewed as an enhanced gradient method,
particularly suited to minimization of convex functions in high-dimensional
spaces. Unlike traditional gradient methods, mirror descent undertakes gradient
updates of weights in both the dual space and primal space, which are linked
together using a Legendre transform. Mirror descent can be viewed as a proximal
algorithm where the distance generating function used is a Bregman divergence.
A new class of proximal-gradient based temporal-difference (TD) methods is
presented based on different Bregman divergences, which are more powerful than
regular TD learning. Examples of Bregman divergences that are studied include
p-norm functions, and Mahalanobis distance based on the covariance of sample
gradients. A new family of sparse mirror-descent reinforcement learning methods
is proposed, which is able to find sparse fixed points of an l1-regularized
Bellman equation at significantly less computational cost than previous methods
based on second-order matrix methods. An experimental study of mirror-descent
reinforcement learning is presented using discrete and continuous Markov
decision processes.
| Sridhar Mahadevan, Bo Liu | null | 1210.4893 | null | null |
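A minimal sketch of one instance: TD(0) where the update is taken in the dual space given by the p-norm link function, with l1 soft-thresholding applied in the dual to induce sparsity, and the conjugate q-norm link mapping back. The step size, regularization level, and the choice p = 2 ln d are illustrative assumptions rather than the paper's settings.

import numpy as np

def pnorm_link(w, p):
    # Gradient of (1/2)||w||_p^2: maps between primal and dual coordinates.
    norm = np.linalg.norm(w, p)
    if norm == 0:
        return np.zeros_like(w)
    return np.sign(w) * np.abs(w) ** (p - 1) / norm ** (p - 2)

def sparse_td_mirror_descent(phi, phi_next, rewards,
                             alpha=0.05, lam=0.01, gamma=0.95, p=None):
    # TD(0) step and soft-thresholding in the dual space; the q-norm
    # link (q conjugate to p) maps back to the primal weights.
    d = phi.shape[1]
    p = p if p is not None else 2.0 * np.log(d)
    q = p / (p - 1.0)
    w = np.zeros(d)
    for f, fn, r in zip(phi, phi_next, rewards):
        delta = r + gamma * (fn @ w) - f @ w       # TD error
        theta = pnorm_link(w, p) + alpha * delta * f
        theta = np.sign(theta) * np.maximum(np.abs(theta) - lam, 0.0)
        w = pnorm_link(theta, q)
    return w

rng = np.random.default_rng(0)
phi = rng.standard_normal((2000, 50))
phi_next = rng.standard_normal((2000, 50))
rewards = phi[:, 0]                     # reward tied to one sparse feature
print(np.round(sparse_td_mirror_descent(phi, phi_next, rewards), 3))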
Closed-Form Learning of Markov Networks from Dependency Networks | cs.LG cs.AI stat.ML | Markov networks (MNs) are a powerful way to compactly represent a joint
probability distribution, but most MN structure learning methods are very slow,
due to the high cost of evaluating candidate structures. Dependency networks
(DNs) represent a probability distribution as a set of conditional probability
distributions. DNs are very fast to learn, but the conditional distributions
may be inconsistent with each other and few inference algorithms support DNs.
In this paper, we present a closed-form method for converting a DN into an MN,
allowing us to enjoy both the efficiency of DN learning and the convenience of
the MN representation. When the DN is consistent, this conversion is exact. For
inconsistent DNs, we present averaging methods that significantly improve the
approximation. In experiments on 12 standard datasets, our methods are orders
of magnitude faster than and often more accurate than combining conditional
distributions using weight learning.
| Daniel Lowd | null | 1210.4896 | null | null |
Value Function Approximation in Noisy Environments Using Locally
Smoothed Regularized Approximate Linear Programs | cs.LG stat.ML | Recently, Petrik et al. demonstrated that L1Regularized Approximate Linear
Programming (RALP) could produce value functions and policies which compared
favorably to established linear value function approximation techniques like
LSPI. RALP's success primarily stems from the ability to solve the feature
selection and value function approximation steps simultaneously. RALP's
performance guarantees become looser if sampled next states are used. For very
noisy domains, RALP requires an accurate model rather than samples, which can
be unrealistic in some practical scenarios. In this paper, we demonstrate this
weakness, and then introduce Locally Smoothed L1-Regularized Approximate Linear
Programming (LS-RALP). We demonstrate that LS-RALP mitigates inaccuracies
stemming from noise even without an accurate model. We show that, given some
smoothness assumptions, as the number of samples increases, error from noise
approaches zero, and provide experimental examples of LS-RALP's success on
common reinforcement learning benchmark problems.
| Gavin Taylor, Ron Parr | null | 1210.4898 | null | null |
Fast Exact Inference for Recursive Cardinality Models | cs.LG stat.ML | Cardinality potentials are a generally useful class of high order potential
that affect probabilities based on how many of D binary variables are active.
Maximum a posteriori (MAP) inference for cardinality potential models is
well-understood, with efficient computations taking O(D log D) time. Yet
efficient marginalization and sampling have not been addressed as thoroughly in
the machine learning community. We show that there exists a simple algorithm
for computing marginal probabilities and drawing exact joint samples that runs
in O(D log^2 D) time, and we show how to frame the algorithm as efficient belief
propagation in a low order tree-structured model that includes additional
auxiliary variables. We then develop a new, more general class of models,
termed Recursive Cardinality models, which take advantage of this efficiency.
Finally, we show how to do efficient exact inference in models composed of a
tree structure and a cardinality potential. We explore the expressive power of
Recursive Cardinality models and empirically demonstrate their utility.
| Daniel Tarlow, Kevin Swersky, Richard S. Zemel, Ryan Prescott Adams,
Brendan J. Frey | null | 1210.4899 | null | null |
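The well-understood MAP computation referred to above fits in a few lines: with unary scores theta and a potential f on the number of active variables, sort the unaries once and scan over all counts, O(D log D). This sketch covers MAP only; the marginalization, sampling, and recursive constructions are the paper's contribution, and the quadratic count prior below is an illustrative assumption.

import numpy as np

def cardinality_map(theta, f):
    # maximize sum_i theta_i x_i + f(sum_i x_i) over x in {0,1}^D:
    # sort the unaries once, then scan every count k.
    order = np.argsort(-theta)
    prefix = np.concatenate([[0.0], np.cumsum(theta[order])])
    scores = prefix + np.array([f(k) for k in range(len(theta) + 1)])
    k_star = int(np.argmax(scores))
    x = np.zeros(len(theta), dtype=int)
    x[order[:k_star]] = 1
    return x, float(scores[k_star])

theta = np.array([1.2, -0.3, 0.7, -1.0, 0.1])
print(cardinality_map(theta, lambda k: -0.5 * (k - 2) ** 2))  # prefers ~2 active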
Efficiently Searching for Frustrated Cycles in MAP Inference | cs.DS cs.LG stat.ML | Dual decomposition provides a tractable framework for designing algorithms
for finding the most probable (MAP) configuration in graphical models. However,
for many real-world inference problems, the typical decomposition has a large
integrality gap, due to frustrated cycles. One way to tighten the relaxation is
to introduce additional constraints that explicitly enforce cycle consistency.
Earlier work showed that cluster-pursuit algorithms, which iteratively
introduce cycle and other higher-order consistency constraints, allow one to
exactly solve many hard inference problems. However, these algorithms
explicitly enumerate a candidate set of clusters, limiting them to triplets or
other short cycles. We solve the search problem for cycle constraints, giving a
nearly linear time algorithm for finding the most frustrated cycle of arbitrary
length. We show how to use this search algorithm together with the dual
decomposition framework and cluster-pursuit. The new algorithm exactly solves
MAP inference problems arising from relational classification and stereo
vision.
| David Sontag, Do Kook Choe, Yitao Li | null | 1210.4902 | null | null |
Latent Composite Likelihood Learning for the Structured Canonical
Correlation Model | stat.ML cs.LG | Latent variable models are used to estimate variables of interest: quantities
which are observable only up to some measurement error. In many studies, such
variables are known but not precisely quantifiable (such as "job satisfaction"
in social sciences and marketing, "analytical ability" in educational testing,
or "inflation" in economics). This leads to the development of measurement
instruments to record noisy indirect evidence for such unobserved variables
such as surveys, tests and price indexes. In such problems, there are
postulated latent variables and a given measurement model. At the same time,
other unanticipated latent variables can add further unmeasured confounding to
the observed variables. The problem is how to deal with unanticipated latent
variables. In this paper, we provide a method loosely inspired by canonical
correlation that makes use of background information concerning the "known"
latent variables. Given a partially specified structure, it provides a
structure learning approach to detect "unknown unknowns," the confounding
effect of potentially infinitely many other latent variables. This is done
without explicitly modeling such extra latent factors. Because of the special
structure of the problem, we are able to exploit a new variation of composite
likelihood fitting to efficiently learn this structure. Validation is provided
with experiments in synthetic data and the analysis of a large survey done with
a sample of over 100,000 staff members of the National Health Service of the
United Kingdom.
| Ricardo Silva | null | 1210.4905 | null | null |
Active Learning with Distributional Estimates | cs.LG stat.ML | Active Learning (AL) is increasingly important in a broad range of
applications. Two main AL principles to obtain accurate classification with few
labeled data are refinement of the current decision boundary and exploration of
poorly sampled regions. In this paper we derive a novel AL scheme that balances
these two principles in a natural way. In contrast to many AL strategies, which
are based on an estimated class conditional probability $\hat{p}(y|x)$, a key
component of our approach is to view this quantity as a random variable, hence
explicitly considering the uncertainty in its estimated value. Our main
contribution is a novel mathematical framework for uncertainty-based AL, and a
corresponding AL scheme, where the uncertainty in $\hat{p}(y|x)$ is modeled by a
second-order distribution. On the practical side, we show how to approximate
such second-order distributions for kernel density classification. Finally, we
find that over a large number of UCI, USPS and Caltech4 datasets, our AL scheme
achieves significantly better learning curves than popular AL methods such as
uncertainty sampling and error reduction sampling, when all use the same kernel
density classifier.
| Jens Roeder, Boaz Nadler, Kevin Kunzmann, Fred A. Hamprecht | null | 1210.4909 | null | null |
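A loose illustration of the second-order idea for binary labels, under simplifying assumptions (Gaussian kernel weights, a Beta(1,1) prior, variance as the query criterion): kernel-weighted label counts induce a Beta distribution over p(y=1|x) at each pool point, and the learner queries where that distribution, not just its point estimate, is most uncertain.

import numpy as np

def select_query(X_lab, y_lab, X_pool, h=1.0):
    # Kernel-weighted label counts -> Beta(a, b) over p(y=1|x) at each
    # pool point; query where the second-order distribution has the
    # largest variance, not where the point estimate is nearest 1/2.
    d2 = ((X_pool[:, None, :] - X_lab[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * h ** 2))
    a = 1.0 + K @ (y_lab == 1)            # Beta(1, 1) prior pseudo-counts
    b = 1.0 + K @ (y_lab == 0)
    var = a * b / ((a + b) ** 2 * (a + b + 1.0))
    return int(np.argmax(var))

rng = np.random.default_rng(0)
X_lab = rng.standard_normal((10, 2)); y_lab = rng.integers(0, 2, 10)
X_pool = rng.standard_normal((200, 2))
print(select_query(X_lab, y_lab, X_pool))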
New Advances and Theoretical Insights into EDML | cs.AI cs.LG stat.ML | EDML is a recently proposed algorithm for learning MAP parameters in Bayesian
networks. In this paper, we present a number of new advances and insights on
the EDML algorithm. First, we provide the multivalued extension of EDML,
originally proposed for Bayesian networks over binary variables. Next, we
identify a simplified characterization of EDML that further implies a simple
fixed-point algorithm for the convex optimization problem that underlies it.
This characterization further reveals a connection between EDML and EM: a fixed
point of EDML is a fixed point of EM, and vice versa. We thus also identify a
new characterization of EM fixed points, but in the semantics of EDML. Finally,
we propose a hybrid EDML/EM algorithm that takes advantage of the improved
empirical convergence behavior of EDML, while maintaining the monotonic
improvement property of EM.
| Khaled S. Refaat, Arthur Choi, Adnan Darwiche | null | 1210.4910 | null | null |
An Improved Admissible Heuristic for Learning Optimal Bayesian Networks | cs.AI cs.LG stat.ML | Recently two search algorithms, A* and breadth-first branch and bound
(BFBnB), were developed based on a simple admissible heuristic for learning
Bayesian network structures that optimize a scoring function. The heuristic
represents a relaxation of the learning problem such that each variable chooses
optimal parents independently. As a result, the heuristic may contain many
directed cycles and result in a loose bound. This paper introduces an improved
admissible heuristic that tries to avoid directed cycles within small groups of
variables. A sparse representation is also introduced to store only the unique
optimal parent choices. Empirical results show that the new techniques
significantly improved the efficiency and scalability of A* and BFBnB on most
of the datasets tested in this paper.
| Changhe Yuan, Brandon Malone | null | 1210.4913 | null | null |
Latent Structured Ranking | cs.LG cs.IR stat.ML | Many latent (factorized) models have been proposed for recommendation tasks
like collaborative filtering and for ranking tasks like document or image
retrieval and annotation. Common to all those methods is that during inference
the items are scored independently by their similarity to the query in the
latent embedding space. The structure of the ranked list (i.e. considering the
set of items returned as a whole) is not taken into account. This can be a
problem because the set of top predictions can either be too diverse (containing
results that contradict each other) or not diverse enough. In this paper we
introduce a method for learning latent structured rankings that improves over
existing methods by providing the right blend of predictions at the top of the
ranked list. Particular emphasis is put on making this method scalable.
Empirical results on large scale image annotation and music recommendation
tasks show improvements over existing approaches.
| Jason Weston, John Blitzer | null | 1210.4914 | null | null |
Fast Graph Construction Using Auction Algorithm | cs.LG stat.ML | In practical machine learning systems, graph based data representation has
been widely used in various learning paradigms, ranging from unsupervised
clustering to supervised classification. Besides those applications with
natural graph or network structure data, such as social network analysis and
relational learning, many other applications often involve a critical step in
converting data vectors to an adjacency graph. In particular, a sparse subgraph
extracted from the original graph is often required due to both theoretic and
practical needs. Previous studies clearly show that the performance of different
learning algorithms, e.g., clustering and classification, benefits from such
sparse subgraphs with balanced node connectivity. However, the existing graph
construction methods are either computationally expensive or yield
unsatisfactory performance. In this paper, we utilize a scalable method, the
auction algorithm, and its parallel extension to recover a sparse yet nearly
balanced subgraph with significantly reduced computational cost. Empirical
study and comparison with the state-of-the-art approaches clearly demonstrate the
superiority of the proposed method in both efficiency and accuracy.
| Jun Wang, Yinglong Xia | null | 1210.4917 | null | null |
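For readers unfamiliar with the primitive, a sketch of Bertsekas' auction algorithm for the basic assignment problem, the building block the paper scales up (with a parallel extension) for balanced sparse graph construction; with eps small enough relative to the benefit gaps, the assignment is optimal. The random benefit matrix is an illustrative assumption.

import numpy as np

def auction_assignment(benefit, eps=0.01):
    # benefit[i, j]: value of assigning person i to object j. Each
    # unassigned person bids its best object up by (best - second + eps).
    n = benefit.shape[0]
    prices = np.zeros(n)
    owner = -np.ones(n, dtype=int)          # object -> person
    assigned = -np.ones(n, dtype=int)       # person -> object
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        values = benefit[i] - prices
        j = int(np.argmax(values))
        second = np.partition(values, -2)[-2]
        prices[j] += values[j] - second + eps
        if owner[j] >= 0:                   # evict the previous owner
            assigned[owner[j]] = -1
            unassigned.append(owner[j])
        owner[j] = i
        assigned[i] = j
    return assigned

rng = np.random.default_rng(0)
print(auction_assignment(rng.random((5, 5)), eps=0.01))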
Dynamic Teaching in Sequential Decision Making Environments | cs.LG cs.AI stat.ML | We describe theoretical bounds and a practical algorithm for teaching a model
by demonstration in a sequential decision making environment. Unlike previous
efforts that have optimized learners that watch a teacher demonstrate a static
policy, we focus on the teacher as a decision maker who can dynamically choose
different policies to teach different parts of the environment. We develop
several teaching frameworks based on previously defined supervised protocols,
such as Teaching Dimension, extending them to handle noise and sequences of
inputs encountered in an MDP. We provide theoretical bounds on the learnability
of several important model classes in this setting and suggest a practical
algorithm for dynamic teaching.
| Thomas J. Walsh, Sergiu Goschin | null | 1210.4918 | null | null |
Latent Dirichlet Allocation Uncovers Spectral Characteristics of Drought
Stressed Plants | cs.LG cs.CE stat.ML | Understanding the adaptation process of plants to drought stress is essential
in improving management practices, breeding strategies as well as engineering
viable crops for a sustainable agriculture in the coming decades.
Hyper-spectral imaging provides a particularly promising approach to gain such
understanding since it allows one to discover, non-destructively, spectral
characteristics of plants governed primarily by scattering and absorption
characteristics of the leaf internal structure and biochemical constituents.
Several drought stress indices have been derived using hyper-spectral imaging.
However, they are typically based on few hyper-spectral images only, rely on
interpretations of experts, and consider few wavelengths only. In this study,
we present the first data-driven approach to discovering spectral drought
stress indices, treating it as an unsupervised labeling problem at massive
scale. To make use of short range dependencies of spectral wavelengths, we
develop an online variational Bayes algorithm for latent Dirichlet allocation
with a convolved Dirichlet regularizer. This approach scales to massive datasets
and, hence, provides a more objective complement to plant physiological
practices. The spectral topics found conform to plant physiological knowledge
and can be computed in a fraction of the time compared to existing LDA
approaches.
| Mirwaes Wahabzada, Kristian Kersting, Christian Bauckhage, Christoph
Roemer, Agim Ballvora, Francisco Pinto, Uwe Rascher, Jens Leon, Lutz Ploemer | null | 1210.4919 | null | null |
Factorized Multi-Modal Topic Model | cs.LG cs.IR stat.ML | Multi-modal data collections, such as corpora of paired images and text
snippets, require analysis methods beyond single-view component and topic
models. For continuous observations the current dominant approach is based on
extensions of canonical correlation analysis, factorizing the variation into
components shared by the different modalities and those private to each of
them. For count data, multiple variants of topic models attempting to tie the
modalities together have been presented. All of these, however, lack the
ability to learn components private to one modality, and consequently will try
to force dependencies even between minimally correlating modalities. In this
work we combine the two approaches by presenting a novel HDP-based topic model
that automatically learns both shared and private topics. The model is shown to
be especially useful for querying the contents of one domain given samples of
the other.
| Seppo Virtanen, Yangqing Jia, Arto Klami, Trevor Darrell | null | 1210.4920 | null | null |
Optimal Computational Trade-Off of Inexact Proximal Methods | cs.LG cs.CV cs.NA | In this paper, we investigate the trade-off between convergence rate and
computational cost when minimizing a composite functional with
proximal-gradient methods, which are popular optimisation tools in machine
learning. We consider the case when the proximity operator is computed via an
iterative procedure, which provides an approximation of the exact proximity
operator. In that case, we obtain algorithms with two nested loops. We show
that the strategy that minimizes the computational cost to reach a solution
with a desired accuracy in finite time is to set the number of inner iterations
to a constant, which differs from the strategy indicated by a convergence rate
analysis. In the process, we also present a new procedure called SIP (Speedy
Inexact Proximal-gradient algorithm) that is both computationally
efficient and easy to implement. Our numerical experiments confirm the
theoretical findings and suggest that SIP can be a very competitive alternative
to the standard procedure.
| Pierre Machart (LIF, LSIS), Sandrine Anthoine (LATP), Luca Baldassarre
(EPFL) | null | 1210.5034 | null | null |
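The two-nested-loop structure under study can be captured abstractly: an outer proximal-gradient loop whose prox step is computed by an inner solver run for a constant number of iterations, the strategy the analysis recommends. approx_prox is a hypothetical callable; the lasso usage employs an exact soft-threshold (ignoring the inner budget) purely to show the interface.

import numpy as np

def inexact_proximal_gradient(grad_f, approx_prox, x0, step,
                              outer_iters=200, inner_iters=5):
    # Outer loop: gradient step on the smooth part f; inexact prox step
    # on the nonsmooth part g, computed by an inner solver given a
    # fixed iteration budget.
    x = x0.copy()
    for _ in range(outer_iters):
        z = x - step * grad_f(x)
        x = approx_prox(z, step, inner_iters)
    return x

# Toy usage on the lasso, where the prox happens to be exact
# soft-thresholding (so the inner budget k is unused).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20)); b = rng.standard_normal(50)
lam = 0.1
grad_f = lambda x: A.T @ (A @ x - b)
soft = lambda z, t, k: np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)
x = inexact_proximal_gradient(grad_f, soft, np.zeros(20),
                              step=1.0 / np.linalg.norm(A, 2) ** 2)
print(np.round(x, 3))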
A Novel Learning Algorithm for Bayesian Network and Its Efficient
Implementation on GPU | cs.DC cs.LG | Computational inference of causal relationships underlying complex networks,
such as gene-regulatory pathways, is NP-complete due to its combinatorial
nature when permuting all possible interactions. Markov chain Monte Carlo
(MCMC) has been introduced to sample only part of the combinations while still
guaranteeing convergence and traversability, and has therefore become widely
used. However, MCMC cannot perform efficiently enough for networks that
have more than 15-20 nodes because of the computational complexity. In this
paper, we use general purpose processor (GPP) and general purpose graphics
processing unit (GPGPU) to implement and accelerate a novel Bayesian network
learning algorithm. With a hash-table-based memory-saving strategy and a novel
task assigning strategy, we achieve a 10-fold acceleration per iteration
compared with a serial GPP. Specifically, we use a greedy method to search for the best
graph from a given order. We incorporate a prior component in the current
scoring function, which further facilitates the searching. Overall, we are able
to apply this system to networks with more than 60 nodes, allowing inferences
and modeling of bigger and more complex networks than current methods.
| Yu Wang, Weikang Qian, Shuchang Zhang and Bo Yuan | null | 1210.5128 | null | null |
LSBN: A Large-Scale Bayesian Structure Learning Framework for Model
Averaging | cs.LG stat.ML | The motivation for this paper is to apply Bayesian structure learning using
Model Averaging in large-scale networks. Currently, Bayesian model averaging
algorithm is applicable to networks with only tens of variables, constrained by
its super-exponential complexity. We present a novel framework, called
LSBN(Large-Scale Bayesian Network), making it possible to handle networks with
infinite size by following the principle of divide-and-conquer. The method of
LSBN comprises three steps. In general, LSBN first performs the partition by
using a second-order partition strategy, which achieves more robust results.
LSBN conducts sampling and structure learning within each overlapping community
after the community is isolated from other variables by Markov Blanket. Finally
LSBN employs an efficient algorithm to merge structures of overlapping
communities into a whole. In comparison with four other state-of-the-art
large-scale network structure learning algorithms such as ARACNE, PC, Greedy
Search and MMHC, LSBN shows comparable results in five common benchmark
datasets, evaluated by precision, recall and f-score. What's more, LSBN makes
it possible to learn large-scale Bayesian structure by model averaging, which
used to be intractable. In summary, LSBN provides a scalable and parallel
framework for the reconstruction of network structures. Besides, the complete
information of overlapping communities serves as the byproduct, which could be
used to mine meaningful clusters in biological networks, such as
protein-protein-interaction network or gene regulatory network, as well as in
social network.
| Yang Lu, Mengying Wang, Menglu Li, Qili Zhu, Bo Yuan | null | 1210.5135 | null | null |
Matrix reconstruction with the local max norm | stat.ML cs.LG | We introduce a new family of matrix norms, the "local max" norms,
generalizing existing methods such as the max norm, the trace norm (nuclear
norm), and the weighted or smoothed weighted trace norms, which have been
extensively used in the literature as regularizers for matrix reconstruction
problems. We show that this new family can be used to interpolate between the
(weighted or unweighted) trace norm and the more conservative max norm. We test
this interpolation on simulated data and on the large-scale Netflix and
MovieLens ratings data, and find improved accuracy relative to the existing
matrix norms. We also provide theoretical results showing learning guarantees
for some of the new norms.
| Rina Foygel, Nathan Srebro, Ruslan Salakhutdinov | null | 1210.5196 | null | null |
The performance of orthogonal multi-matching pursuit under RIP | cs.IT cs.LG math.IT math.NA | The orthogonal multi-matching pursuit (OMMP) is a natural extension of
orthogonal matching pursuit (OMP). We denote the OMMP with the parameter $M$ as
OMMP(M) where $M\geq 1$ is an integer. The main difference between OMP and
OMMP(M) is that OMMP(M) selects $M$ atoms per iteration, while OMP only adds
one atom to the optimal atom set. In this paper, we study the performance of
orthogonal multi-matching pursuit (OMMP) under RIP. In particular, we show
that, when the measurement matrix A satisfies $(9s, 1/10)$-RIP, there exists an
absolute constant $M_0\leq 8$ such that OMMP(M_0) can recover any $s$-sparse signal
within $s$ iterations. We furthermore prove that, for slowly-decaying
$s$-sparse signals, OMMP(M) can recover them within $O(\frac{s}{M})$
iterations for a large class of $M$. In particular, for $M=s^a$ with $a\in
[0,1/2]$, OMMP(M) can recover slowly-decaying $s$-sparse signals within
$O(s^{1-a})$ iterations. The result implies that OMMP can substantially reduce the
computational complexity.
| Zhiqiang Xu | null | 1210.5323 | null | null |
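A short sketch of OMMP(M) itself: each iteration adds the M atoms most correlated with the current residual, then refits by least squares on the enlarged support; M=1 recovers plain OMP. The unit-norm Gaussian dictionary in the toy usage is an illustrative assumption.

import numpy as np

def ommp(A, y, s, M=1, iters=None):
    # Select M atoms per iteration by correlation with the residual,
    # then recompute the least-squares fit on the whole support.
    n = A.shape[1]
    support = []
    residual = y.copy()
    iters = iters if iters is not None else s
    for _ in range(iters):
        corr = np.abs(A.T @ residual)
        corr[support] = -np.inf                    # don't reselect atoms
        support.extend(np.argsort(-corr)[:M])
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
        if np.linalg.norm(residual) < 1e-10:
            break
    x = np.zeros(n)
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)); A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(256); x_true[[3, 40, 100]] = [1.0, -2.0, 1.5]
x_hat = ommp(A, A @ x_true, s=3, M=2)
print(np.flatnonzero(np.round(x_hat, 6)))          # recovered support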
Pairwise MRF Calibration by Perturbation of the Bethe Reference Point | cond-mat.dis-nn cond-mat.stat-mech cs.LG stat.ML | We investigate different ways of generating approximate solutions to the
pairwise Markov random field (MRF) selection problem. We focus mainly on the
inverse Ising problem, but discuss also the somewhat related inverse Gaussian
problem because both types of MRF are suitable for inference tasks with the
belief propagation algorithm (BP) under certain conditions. Our approach
consists in taking a Bethe mean-field solution obtained with a maximum
spanning tree (MST) of pairwise mutual information, referred to as the
\emph{Bethe reference point}, as the starting point for further perturbation
procedures. We consider three different ways of following this idea: in the first one, we select and
calibrate iteratively the optimal links to be added starting from the Bethe
reference point; the second one is based on the observation that the natural
gradient can be computed analytically at the Bethe point; in the third one,
assuming no local field and using low temperature expansion we develop a dual
loop joint model based on a well chosen fundamental cycle basis. We indeed
identify a subclass of planar models, which we refer to as \emph{Bethe-dual
graph models}, having possibly many loops, but characterized by a singly
connected dual factor graph, for which the partition function and the linear
response can be computed exactly in $O(N)$ and $O(N^2)$ operations, respectively,
thanks to a dual weight propagation (DWP) message passing procedure that we set
up. When restricted to this subclass of models, the inverse Ising problem,
being convex, becomes tractable at any temperature. Experimental tests on various
datasets with refined $L_0$ or $L_1$ regularization procedures indicate that
these approaches may be competitive and useful alternatives to existing ones.
| Cyril Furtlehner, Yufei Han, Jean-Marc Lasgouttes and Victorin Martin | null | 1210.5338 | null | null |
Bayesian Estimation for Continuous-Time Sparse Stochastic Processes | cs.LG | We consider continuous-time sparse stochastic processes from which we have
only a finite number of noisy/noiseless samples. Our goal is to estimate the
noiseless samples (denoising) and the signal in-between (the interpolation
problem).
By relying on tools from the theory of splines, we derive the joint a priori
distribution of the samples and show how this probability density function can
be factorized. The factorization enables us to tractably implement the maximum
a posteriori and minimum mean-square error (MMSE) criteria as two statistical
approaches for estimating the unknowns. We compare the derived statistical
methods with well-known techniques for the recovery of sparse signals, such as
the $\ell_1$ norm and Log ($\ell_1$-$\ell_0$ relaxation) regularization
methods. The simulation results show that, under certain conditions, the
performance of the regularization techniques can be very close to that of the
MMSE estimator.
| Arash Amini, Ulugbek S. Kamilov, Emrah Bostan and Michael Unser | 10.1109/TSP.2012.2226446 | 1210.5394 | null | null |
Disentangling Factors of Variation via Generative Entangling | stat.ML cs.LG cs.NE | Here we propose a novel model family with the objective of learning to
disentangle the factors of variation in data. Our approach is based on the
spike-and-slab restricted Boltzmann machine which we generalize to include
higher-order interactions among multiple latent variables. Seen from a
generative perspective, the multiplicative interactions emulate the entangling
of factors of variation. Inference in the model can be seen as disentangling
these generative factors. Unlike previous attempts at disentangling latent
factors, the proposed model is trained using no supervised information
regarding the latent factors. We apply our model to the task of facial
expression classification.
| Guillaume Desjardins and Aaron Courville and Yoshua Bengio | null | 1210.5474 | null | null |
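To make "higher-order interactions among multiple latent variables" concrete, here is a toy three-way energy function in which two groups of latent units multiplicatively gate the visibles. This is only a schematic of the interaction term, not the paper's spike-and-slab RBM.

```python
import numpy as np

def three_way_energy(v, h, g, W):
    """E(v, h, g) = -sum_{ijk} W[i,j,k] v[i] h[j] g[k]: the two latent groups
    h and g jointly (multiplicatively) gate the visibles v, which is the
    'entangling' that inference must undo."""
    return -np.einsum('ijk,i,j,k->', W, v, h, g)

rng = np.random.default_rng(2)
W = 0.1 * rng.standard_normal((4, 3, 2))
v = rng.integers(0, 2, 4).astype(float)
h = rng.integers(0, 2, 3).astype(float)
g = rng.integers(0, 2, 2).astype(float)
print("E(v, h, g) =", round(float(three_way_energy(v, h, g, W)), 4))
```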
Online Learning in Decentralized Multiuser Resource Sharing Problems | cs.LG | In this paper, we consider the general scenario of resource sharing in a
decentralized system when the resource rewards/qualities are time-varying and
unknown to the users, and using the same resource by multiple users leads to
reduced quality due to resource sharing. Firstly, we consider a
user-independent reward model with no communication between the users, where a
user gets feedback about the congestion level in the resource it uses.
Secondly, we consider user-specific rewards and allow costly communication
between the users. The users have a cooperative goal of achieving the highest
system utility. There are multiple obstacles to achieving this goal, such as the
decentralized nature of the system, unknown resource qualities, communication,
computation and switching costs. We propose distributed learning algorithms
with logarithmic regret with respect to the optimal allocation. Our logarithmic
regret result holds under both i.i.d. and Markovian reward models, as well as
under communication, computation and switching costs.
| Cem Tekin, Mingyan Liu | null | 1210.5544 | null | null |
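The logarithmic-regret guarantees above build on index policies of the UCB type. As a rough single-user sketch (classical UCB1 on i.i.d. Bernoulli rewards, assuming numpy; the decentralized, congestion-aware algorithms in the paper are substantially more involved):

```python
import numpy as np

def ucb1(means, horizon, rng):
    """Classical UCB1 on i.i.d. Bernoulli rewards; returns expected regret."""
    k = len(means)
    counts, sums, regret = np.zeros(k), np.zeros(k), 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                              # play each arm once
        else:
            index = sums / counts + np.sqrt(2.0 * np.log(t) / counts)
            arm = int(np.argmax(index))
        sums[arm] += float(rng.random() < means[arm])
        counts[arm] += 1
        regret += max(means) - means[arm]
    return regret

rng = np.random.default_rng(3)
print("regret after 10000 rounds:", round(ucb1([0.9, 0.8, 0.5], 10000, rng), 1))
```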
Content-boosted Matrix Factorization Techniques for Recommender Systems | stat.ML cs.LG | Many businesses are using recommender systems for marketing outreach.
Recommendation algorithms can be either based on content or driven by
collaborative filtering. We study different ways to incorporate content
information directly into the matrix factorization approach of collaborative
filtering. These content-boosted matrix factorization algorithms not only
improve recommendation accuracy, but also provide useful insights about the
contents, as well as make recommendations more easily interpretable.
| Jennifer Nguyen, Mu Zhu | 10.1002/sam.11184 | 1210.5631 | null | null |
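One simple way to incorporate content information directly into matrix factorization, in the spirit of the abstract, is to write each item factor as a linear map of its content features plus a free residual. A hypothetical SGD sketch in numpy (the data, shapes, and hyperparameters are illustrative assumptions, not the authors' exact models):

```python
import numpy as np

rng = np.random.default_rng(4)
n_users, n_items, n_feat, k = 50, 40, 5, 3

X = rng.standard_normal((n_items, n_feat))      # item content features
R = rng.integers(1, 6, (n_users, n_items)).astype(float)
mask = rng.random((n_users, n_items)) < 0.2     # which ratings are observed

P = 0.1 * rng.standard_normal((n_users, k))     # user factors
B = 0.1 * rng.standard_normal((n_feat, k))      # content -> factor map
D = 0.1 * rng.standard_normal((n_items, k))     # item residual factors

lr, reg = 0.01, 0.1
users, items = np.nonzero(mask)
for epoch in range(30):
    for u, i in zip(users, items):
        q = X[i] @ B + D[i]                     # content-boosted item factor
        e = R[u, i] - P[u] @ q                  # prediction error
        P[u] += lr * (e * q - reg * P[u])
        D[i] += lr * (e * P[u] - reg * D[i])
        B += lr * (e * np.outer(X[i], P[u]) - reg * B)

pred = P @ (X @ B + D).T
print("train RMSE:", round(float(np.sqrt(np.mean((R - pred)[mask] ** 2))), 3))
```

Inspecting B after training shows how each content feature maps into the latent space, which is the kind of interpretability the abstract alludes to.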
Efficient Inference in Fully Connected CRFs with Gaussian Edge
Potentials | cs.CV cs.AI cs.LG | Most state-of-the-art techniques for multi-class image segmentation and
labeling use conditional random fields defined over pixels or image regions.
While region-level models often feature dense pairwise connectivity,
pixel-level models are considerably larger and have only permitted sparse graph
structures. In this paper, we consider fully connected CRF models defined on
the complete set of pixels in an image. The resulting graphs have billions of
edges, making traditional inference algorithms impractical. Our main
contribution is a highly efficient approximate inference algorithm for fully
connected CRF models in which the pairwise edge potentials are defined by a
linear combination of Gaussian kernels. Our experiments demonstrate that dense
connectivity at the pixel level substantially improves segmentation and
labeling accuracy.
| Philipp Kr\"ahenb\"uhl and Vladlen Koltun | null | 1210.5644 | null | null |
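A naive mean-field update for such a fully connected CRF makes the cost of dense connectivity explicit: with a Gaussian kernel on pixel positions and Potts compatibility, each iteration is an $O(N^2)$ matrix product. The sketch below (a hypothetical toy problem, numpy only) is the computation that the paper's high-dimensional filtering accelerates, not the paper's implementation.

```python
import numpy as np

def meanfield_dense_crf(unary, pos, sigma, w, iters=5):
    """Mean-field for a fully connected Potts CRF with a single Gaussian
    kernel on positions. Deliberately naive: the K @ q message-passing step
    is O(N^2), which is what fast high-dimensional filtering replaces."""
    n, L = unary.shape
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(K, 0.0)                         # no self-interaction
    q = np.exp(-unary)
    q /= q.sum(1, keepdims=True)
    for _ in range(iters):
        msg = K @ q                                  # dense Gaussian messages
        # Potts model: pay w * sum_j k_ij * (1 - Q_j(l)) for taking label l.
        q = np.exp(-unary - w * (msg.sum(1, keepdims=True) - msg))
        q /= q.sum(1, keepdims=True)
    return q.argmax(1)

rng = np.random.default_rng(5)
pos = np.stack(np.meshgrid(np.arange(8), np.arange(8)), -1).reshape(-1, 2).astype(float)
unary = rng.standard_normal((64, 2))                 # noisy 2-label unaries, 8x8 grid
print(meanfield_dense_crf(unary, pos, sigma=2.0, w=1.0).reshape(8, 8))
```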
Choice of V for V-Fold Cross-Validation in Least-Squares Density
Estimation | math.ST cs.LG stat.TH | This paper studies V-fold cross-validation for model selection in
least-squares density estimation. The goal is to provide theoretical grounds
for choosing V in order to minimize the least-squares loss of the selected
estimator. We first prove a non-asymptotic oracle inequality for V-fold
cross-validation and its bias-corrected version (V-fold penalization). In
particular, this result implies that V-fold penalization is asymptotically
optimal in the nonparametric case. Then, we compute the variance of V-fold
cross-validation and related criteria, as well as the variance of key
quantities for model selection performance. We show that these variances depend
on V like $1+4/(V-1)$, at least in some particular cases, suggesting that the
performance improves considerably from V=2 to V=5 or 10, and is almost constant thereafter.
Overall, this can explain the common advice to take V=5, at least in our
setting and when computational power is limited, as supported by some
simulation experiments. An oracle inequality and exact formulas for the
variance are also proved for Monte-Carlo cross-validation, also known as
repeated cross-validation, where the parameter V is replaced by the number B of
random splits of the data.
| Sylvain Arlot (SIERRA, DI-ENS), Matthieu Lerasle (JAD) | null | 1210.5830 | null | null |
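The setting above can be made concrete with histogram estimators, for which the least-squares loss $\|f\|^2 - 2\,\mathbb{E}[f(X)]$ of a candidate density is computable in closed form. A small sketch (numpy; the data and grid of bin counts are hypothetical, and this is plain V-fold cross-validation rather than the paper's bias-corrected penalization):

```python
import numpy as np

def ls_risk(train, test, bins, lo=0.0, hi=1.0):
    """Least-squares risk ||f||^2 - 2*mean_test f(x) of a histogram estimate;
    the squared-norm term is exact for a piecewise-constant density."""
    h = (hi - lo) / bins
    counts, _ = np.histogram(train, bins=bins, range=(lo, hi))
    dens = counts / (len(train) * h)
    idx = np.clip(((test - lo) / h).astype(int), 0, bins - 1)
    return h * np.sum(dens ** 2) - 2.0 * np.mean(dens[idx])

def vfold_select(data, V, bin_grid, rng):
    folds = np.array_split(rng.permutation(data), V)
    scores = []
    for b in bin_grid:
        crit = 0.0
        for v in range(V):
            train = np.concatenate([f for u, f in enumerate(folds) if u != v])
            crit += ls_risk(train, folds[v], b)
        scores.append(crit / V)
    return bin_grid[int(np.argmin(scores))]

rng = np.random.default_rng(6)
x = rng.beta(2, 5, size=500)                    # density supported on [0, 1]
for V in (2, 5, 10):
    print(f"V={V}: selected number of bins =", vfold_select(x, V, range(2, 40), rng))
```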
Supervised Learning with Similarity Functions | cs.LG stat.ML | We address the problem of general supervised learning when data can only be
accessed through an (indefinite) similarity function between data points.
Existing work on learning with indefinite kernels has concentrated solely on
binary/multi-class classification problems. We propose a model that is generic
enough to handle any supervised learning task and also subsumes the model
previously proposed for classification. We give a "goodness" criterion for
similarity functions w.r.t. a given supervised learning task and then adapt a
well-known landmarking technique to provide efficient algorithms for supervised
learning using "good" similarity functions. We demonstrate the effectiveness of
our model on three important supervised learning problems: a) real-valued
regression, b) ordinal regression, and c) ranking, where we show that our method
guarantees bounded generalization error. Furthermore, for the case of
real-valued regression, we give a natural goodness definition that, when used
in conjunction with a recent result in sparse vector recovery, guarantees a
sparse predictor with bounded generalization error. Finally, we report results
of our learning algorithms on regression and ordinal regression tasks using
non-PSD similarity functions and demonstrate the effectiveness of our
algorithms, especially that of the sparse landmark selection algorithm that
achieves significantly higher accuracies than the baseline methods while
offering reduced computational costs.
| Purushottam Kar and Prateek Jain | null | 1210.5840 | null | null |
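The landmarking technique referred to above can be sketched in a few lines: represent each point by its similarities to a small set of landmarks, then fit any linear predictor in that space. A hypothetical numpy example with an indefinite (sigmoid-style) similarity and ridge regression, illustrating the idea rather than the paper's algorithm or its "goodness" criterion:

```python
import numpy as np

def landmark_features(X, landmarks, sim):
    """Represent each point by its similarity to each landmark."""
    return np.array([[sim(x, l) for l in landmarks] for x in X])

sim = lambda a, b: np.tanh(0.5 * (a @ b) + 0.1)       # indefinite, non-PSD in general

rng = np.random.default_rng(7)
X = rng.standard_normal((300, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(300)  # real-valued target

L = X[rng.choice(len(X), size=20, replace=False)]     # random landmarks
Phi = landmark_features(X, L, sim)
lam = 1e-2                                            # ridge penalty
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
print("train MSE:", round(float(np.mean((Phi @ w - y) ** 2)), 4))
```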
Initialization of Self-Organizing Maps: Principal Components Versus
Random Initialization. A Case Study | stat.ML cs.LG | The performance of the Self-Organizing Map (SOM) algorithm is dependent on
the initial weights of the map. The different initialization methods can
broadly be classified into random and data-analysis-based approaches. In this
paper, the performance of the random initialization (RI) approach
is compared to that of principal component initialization (PCI) in which the
initial map weights are chosen from the space of the principal component.
Performance is evaluated by the fraction of variance unexplained (FVU).
Datasets were classified into quasi-linear and non-linear, and it was observed
that RI performed better for non-linear datasets; however, the performance of
the PCI approach remains inconclusive for quasi-linear datasets.
| A. A. Akinduko and E. M. Mirkes | null | 1210.5873 | null | null |
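A minimal version of principal component initialization (PCI), together with the fraction of variance unexplained (FVU) used for evaluation, might look as follows. This is a numpy sketch; the grid-scaling choices are illustrative assumptions, not the study's exact protocol.

```python
import numpy as np

def pca_init_som(data, rows, cols):
    """PCI: lay the initial codebook on a regular grid spanned by the first
    two principal components, scaled by the data's spread along them."""
    mean = data.mean(0)
    U, S, Vt = np.linalg.svd(data - mean, full_matrices=False)
    s = S / np.sqrt(len(data))                       # std along each component
    c1 = np.linspace(-1, 1, rows)[:, None, None] * s[0] * Vt[0]
    c2 = np.linspace(-1, 1, cols)[None, :, None] * s[1] * Vt[1]
    return mean + c1 + c2                            # (rows, cols, dim)

rng = np.random.default_rng(8)
X = rng.standard_normal((200, 3)) @ np.diag([3.0, 1.0, 0.2])
W = pca_init_som(X, rows=10, cols=10)

# Fraction of variance unexplained by the initial map (nearest-codebook residual).
d2 = ((X[:, None, :] - W.reshape(-1, X.shape[1])[None, :, :]) ** 2).sum(-1)
fvu = d2.min(1).sum() / ((X - X.mean(0)) ** 2).sum()
print("codebook shape:", W.shape, " initial FVU:", round(float(fvu), 3))
```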
Reducing statistical time-series problems to binary classification | cs.LG stat.ML | We show how binary classification methods developed to work on i.i.d. data
can be used for solving statistical problems that are seemingly unrelated to
classification and concern highly-dependent time series. Specifically, the
problems of time-series clustering, homogeneity testing and the three-sample
problem are addressed. The algorithms that we construct for solving these
problems are based on a new metric between time-series distributions, which can
be evaluated using binary classification methods. Universal consistency of the
proposed algorithms is proven under the most general assumptions. The theoretical
results are illustrated with experiments on synthetic and real-world data.
| Daniil Ryabko, J\'er\'emie Mary | null | 1210.6001 | null | null |
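The reduction can be illustrated with a toy "classifier two-sample" distance between time series: label windows from the two series and measure how well a classifier separates them, with accuracy near chance suggesting the same distribution. This 1-NN sketch is only an analogy to the paper's metric; the data and window width are hypothetical.

```python
import numpy as np

def classification_distance(x, y, width=5, rng=None):
    """Separate windows of x (label 0) from windows of y (label 1) with a
    1-NN classifier; held-out accuracy above chance acts as a 'distance'."""
    if rng is None:
        rng = np.random.default_rng(0)
    win = lambda s: np.lib.stride_tricks.sliding_window_view(s, width)
    Xw = np.vstack([win(x), win(y)])
    lab = np.r_[np.zeros(len(win(x))), np.ones(len(win(y)))]
    idx = rng.permutation(len(Xw))
    tr, te = idx[: len(Xw) // 2], idx[len(Xw) // 2:]
    d = ((Xw[te][:, None, :] - Xw[tr][None, :, :]) ** 2).sum(-1)
    acc = np.mean(lab[tr][d.argmin(axis=1)] == lab[te])
    return max(acc - 0.5, 0.0)

rng = np.random.default_rng(9)
a = rng.standard_normal(300)                   # white noise
b = rng.standard_normal(300)                   # same distribution as a
c = np.zeros(300)                              # AR(1): strongly dependent
for t in range(1, 300):
    c[t] = 0.9 * c[t - 1] + rng.standard_normal()
print("d(a, b) ~", round(classification_distance(a, b, rng=rng), 3))  # near 0
print("d(a, c) ~", round(classification_distance(a, c, rng=rng), 3))  # clearly > 0
```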
Fast Exact Max-Kernel Search | cs.DS cs.IR cs.LG | The wide applicability of kernels makes the problem of max-kernel search
ubiquitous and more general than the usual similarity search in metric spaces.
We focus on solving this problem efficiently. We begin by characterizing the
inherent hardness of the max-kernel search problem with a novel notion of
directional concentration. Following that, we present a method to use an
$O(n \log n)$ algorithm to index any set of objects (points in $\mathbb{R}^d$ or
abstract objects) directly in the Hilbert space without any explicit feature
representations of the objects in this space. We present the first provably
$O(\log n)$ algorithm for exact max-kernel search using this index. Empirical
results for a variety of data sets as well as abstract objects demonstrate up
to 4 orders of magnitude speedup in some cases. Extensions for approximate
max-kernel search are also presented.
| Ryan R. Curtin, Parikshit Ram, Alexander G. Gray | null | 1210.6287 | null | null |
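The baseline that the proposed index improves on is the obvious linear scan, which works for any kernel, metric or not. A hypothetical numpy sketch with a (non-metric) polynomial kernel:

```python
import numpy as np

def max_kernel_brute(queries, refs, kernel):
    """Exact max-kernel search by linear scan: O(|Q| * |R|) kernel
    evaluations, versus the paper's O(log n) per query after indexing."""
    return [int(np.argmax([kernel(q, r) for r in refs])) for q in queries]

poly = lambda a, b: (a @ b + 1.0) ** 3              # non-metric kernel: the
                                                    # maximizer need not be the
                                                    # nearest neighbour
rng = np.random.default_rng(10)
refs = rng.standard_normal((1000, 4))
queries = rng.standard_normal((5, 4))
print("argmax-kernel matches:", max_kernel_brute(queries, refs, poly))
```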