categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence)
---|---|---|---|---|---|---|---|---|---|---|
cs.LG cs.AI | null | 1302.6617 | null | null | http://arxiv.org/pdf/1302.6617v1 | 2013-02-26T22:36:46Z | 2013-02-26T22:36:46Z | Arriving on time: estimating travel time distributions on large-scale
road networks | Most optimal routing problems focus on minimizing travel time or distance
traveled. Oftentimes, a more useful objective is to maximize the probability of
on-time arrival, which requires statistical distributions of travel times,
rather than just mean values. We propose a method to estimate travel time
distributions on large-scale road networks, using probe vehicle data collected
from GPS. We present a framework that handles large volumes of input data and
scales linearly with the size of the network. Leveraging the planar topology of
the graph, the method efficiently computes the time correlations between
neighboring streets. First, raw probe vehicle traces are compressed into pairs
of travel times and number of stops for each traversed road segment using a
`stop-and-go' algorithm developed for this work. The compressed data is then
used as input for training a path travel time model, which couples a Markov
model along with a Gaussian Markov random field. Finally, scalable inference
algorithms are developed for obtaining path travel time distributions from the
composite MM-GMRF model. We illustrate the accuracy and scalability of our
model on a 505,000 road link network spanning the San Francisco Bay Area.
| [
"Timothy Hunter, Aude Hofleitner, Jack Reilly, Walid Krichene, Jerome\n Thai, Anastasios Kouvelas, Pieter Abbeel, Alexandre Bayen",
"['Timothy Hunter' 'Aude Hofleitner' 'Jack Reilly' 'Walid Krichene'\n 'Jerome Thai' 'Anastasios Kouvelas' 'Pieter Abbeel' 'Alexandre Bayen']"
] |
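The row above mentions compressing raw GPS traces into pairs of travel times and stop counts via a `stop-and-go' algorithm. Below is a minimal sketch of that data-reduction step on a toy trace; the speed threshold and 1 Hz sampling are hypothetical choices, and the paper's actual algorithm is more elaborate.

```python
import numpy as np

def compress_trace(times, speeds, stop_thresh=0.5):
    """Toy 'stop-and-go' compression of one probe-vehicle trace on a single
    road segment: reduce it to (travel time, number of stops).

    stop_thresh (m/s) is a hypothetical parameter; this only illustrates
    the data reduction, not the paper's full algorithm."""
    travel_time = times[-1] - times[0]
    stopped = speeds < stop_thresh
    # Count transitions into the stopped state as distinct stops.
    n_stops = int(np.sum(stopped[1:] & ~stopped[:-1]) + stopped[0])
    return travel_time, n_stops

t = np.arange(0, 60, 1.0)                     # one minute of 1 Hz GPS samples
v = np.clip(10 + 5 * np.sin(t / 5), 0, None)  # speed profile in m/s
v[20:25] = 0.0                                # one stop, e.g. at a light
print(compress_trace(t, v))                   # -> (59.0, 1)
```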
cs.LG cs.AI stat.ML | null | 1302.6677 | null | null | http://arxiv.org/pdf/1302.6677v1 | 2013-02-27T06:45:28Z | 2013-02-27T06:45:28Z | Taming the Curse of Dimensionality: Discrete Integration by Hashing and
Optimization | Integration is affected by the curse of dimensionality and quickly becomes
intractable as the dimensionality of the problem grows. We propose a randomized
algorithm that, with high probability, gives a constant-factor approximation of
a general discrete integral defined over an exponentially large set. This
algorithm relies on solving only a small number of instances of a discrete
combinatorial optimization problem subject to randomly generated parity
constraints used as a hash function. As an application, we demonstrate that
with a small number of MAP queries we can efficiently approximate the partition
function of discrete graphical models, which can in turn be used, for instance,
for marginal computation or model selection.
| [
"['Stefano Ermon' 'Carla P. Gomes' 'Ashish Sabharwal' 'Bart Selman']",
"Stefano Ermon, Carla P. Gomes, Ashish Sabharwal, Bart Selman"
] |
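The parity-constraint idea in the abstract above can be illustrated in a few lines: repeatedly solve MAP problems under random XOR constraints and combine the medians (the authors' WISH scheme). The sketch below brute-forces the MAP oracle for a toy 8-variable model; a real implementation would call an ILP or MaxSAT solver, and the repetition counts and constants of the paper's analysis are simplified here.

```python
import numpy as np

rng = np.random.default_rng(0)

def map_with_parities(weights, n, A, b):
    """MAP oracle: max of weights[x] over x in {0,1}^n with A x = b (mod 2).
    Brute force here; the paper would call a combinatorial solver instead."""
    best = 0.0
    for idx in range(2 ** n):
        x = np.array([(idx >> k) & 1 for k in range(n)])
        if A.shape[0] == 0 or np.all((A @ x) % 2 == b):
            best = max(best, weights[idx])
    return best

def estimate_partition(weights, n, reps=7):
    """WISH-style estimate: Z ~ M_0 + sum_i M_i * 2^(i-1), where M_i is the
    median MAP value under i random parity (XOR) constraints."""
    Z = 0.0
    for i in range(n + 1):
        vals = [map_with_parities(weights, n,
                                  rng.integers(0, 2, size=(i, n)),
                                  rng.integers(0, 2, size=i))
                for _ in range(reps)]
        M = float(np.median(vals))
        Z += M if i == 0 else M * 2 ** (i - 1)
    return Z

n = 8
w = rng.random(2 ** n)          # unnormalized weights of a toy discrete model
print("estimated log Z:", np.log(estimate_partition(w, n)))
print("exact log Z    :", np.log(w.sum()))
```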
cs.SE cs.LG cs.SI nlin.AO physics.soc-ph | null | 1302.6764 | null | null | http://arxiv.org/pdf/1302.6764v2 | 2013-02-28T22:26:41Z | 2013-02-27T13:32:15Z | Categorizing Bugs with Social Networks: A Case Study on Four Open Source
Software Communities | Efficient bug triaging procedures are an important precondition for
successful collaborative software engineering projects. Triaging bugs can
become a laborious task particularly in open source software (OSS) projects
with a large base of comparably inexperienced part-time contributors. In this
paper, we propose an efficient and practical method to identify valid bug
reports which a) refer to an actual software bug, b) are not duplicates and c)
contain enough information to be processed right away. Our classification is
based on nine measures to quantify the social embeddedness of bug reporters in
the collaboration network. We demonstrate its applicability in a case study,
using a comprehensive data set of more than 700,000 bug reports obtained from
the Bugzilla installation of four major OSS communities, for a period of more
than ten years. For those projects that exhibit the lowest fraction of valid
bug reports, we find that the bug reporters' position in the collaboration
network is a strong indicator for the quality of bug reports. Based on this
finding, we develop an automated classification scheme that can easily be
integrated into bug tracking platforms and analyze its performance in the
considered OSS communities. A support vector machine (SVM) to identify valid
bug reports based on the nine measures yields a precision of up to 90.3% with
an associated recall of 38.9%. With this, we significantly improve the results
obtained in previous case studies for an automated early identification of bugs
that are eventually fixed. Furthermore, our study highlights the potential of
using quantitative measures of social organization in collaborative software
engineering. It also opens a broad perspective for the integration of social
awareness in the design of support infrastructures.
| [
"['Marcelo Serrano Zanetti' 'Ingo Scholtes' 'Claudio Juan Tessone'\n 'Frank Schweitzer']",
"Marcelo Serrano Zanetti, Ingo Scholtes, Claudio Juan Tessone and Frank\n Schweitzer"
] |
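To make the classification step above concrete, here is a hedged sketch of training an SVM on nine per-report features. The feature values and labels are synthetic placeholders, not the Bugzilla-derived social-network measures from the study; class weights illustrate how one can trade recall for precision, mirroring the high-precision/low-recall operating point reported above.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Placeholder data: nine reporter-centrality measures per bug report and a
# valid/invalid label (the study derives these from collaboration networks).
X = rng.normal(size=(2000, 9))
y = (X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Penalizing false positives more than false negatives favors precision.
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", class_weight={0: 2.0, 1: 1.0}))
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("precision:", precision_score(y_te, pred))
print("recall   :", recall_score(y_te, pred))
```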
math.NA cs.LG stat.ML | null | 1302.6768 | null | null | http://arxiv.org/pdf/1302.6768v2 | 2014-06-29T09:25:20Z | 2013-02-27T13:47:45Z | Missing Entries Matrix Approximation and Completion | We describe several algorithms for matrix completion and matrix approximation
when only some of their entries are known. The approximation constraint can be
any for which an approximate solution is known for the full matrix. For low-rank
approximations, similar algorithms have appeared recently in the literature under
different names. In this work, we introduce new theorems for matrix
approximation and show that these algorithms can be extended to handle
different constraints such as nuclear norm, spectral norm, orthogonality
constraints, and others beyond low-rank approximation. As the
algorithms can be viewed from an optimization point of view, we discuss their
convergence to the global solution in the convex case. We also discuss the optimal
step size and show that it is fixed in each iteration. In addition, the derived
matrix completion flow is robust and does not require any parameters. This
matrix completion flow is applicable to different spectral minimizations and
can be applied to physics, mathematics and electrical engineering problems such
as data reconstruction of images and data coming from PDEs such as Helmholtz
equation used for electromagnetic waves.
| [
"['Gil Shabat' 'Yaniv Shmueli' 'Amir Averbuch']",
"Gil Shabat, Yaniv Shmueli and Amir Averbuch"
] |
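A minimal sketch of the generic scheme the abstract above alludes to: alternate between imputing the missing entries and solving the approximation problem on the resulting full matrix. Here the full-matrix step is a rank-r truncated SVD; the paper extends the same template to nuclear-norm, spectral-norm, and orthogonality constraints.

```python
import numpy as np

def low_rank_completion(M, mask, rank, iters=200):
    """Impute-and-project completion: alternate between filling missing
    entries with the current estimate and solving the approximation problem
    on the full matrix (here: truncated SVD, i.e. a rank-r constraint)."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0                  # project onto rank-r matrices
        X_proj = (U * s) @ Vt
        X = np.where(mask, M, X_proj)   # keep the observed entries fixed
    return X_proj

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 40))  # true rank-3 matrix
mask = rng.random(A.shape) < 0.5                         # ~50% of entries seen
A_hat = low_rank_completion(A, mask, rank=3)
print("relative error:", np.linalg.norm(A_hat - A) / np.linalg.norm(A))
```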
cs.AI cs.LG stat.ML | null | 1302.6808 | null | null | null | null | null | Learning Gaussian Networks | We describe algorithms for learning Bayesian networks from a combination of
user knowledge and statistical data. The algorithms have two components: a
scoring metric and a search procedure. The scoring metric takes a network
structure, statistical data, and a user's prior knowledge, and returns a score
proportional to the posterior probability of the network structure given the
data. The search procedure generates networks for evaluation by the scoring
metric. Previous work has concentrated on metrics for domains containing only
discrete variables, under the assumption that data represents a multinomial
sample. In this paper, we extend this work, developing scoring metrics for
domains containing all continuous variables or a mixture of discrete and
continuous variables, under the assumption that continuous data is sampled from
a multivariate normal distribution. Our work extends traditional statistical
approaches for identifying vanishing regression coefficients in that we
identify two important assumptions, called event equivalence and parameter
modularity, that when combined allow the construction of prior distributions
for multivariate normal parameters from a single prior Bayesian network
specified by a user.
| [
"Dan Geiger and David Heckerman"
] |
cs.LG stat.ML | null | 1302.6828 | null | null | http://arxiv.org/pdf/1302.6828v1 | 2013-02-27T14:18:05Z | 2013-02-27T14:18:05Z | Induction of Selective Bayesian Classifiers | In this paper, we examine previous work on the naive Bayesian classifier and
review its limitations, which include a sensitivity to correlated features. We
respond to this problem by embedding the naive Bayesian induction scheme within
an algorithm that carries out a greedy search through the space of features.
We hypothesize that this approach will improve asymptotic accuracy in domains
that involve correlated features without reducing the rate of learning in ones
that do not. We report experimental results on six natural domains, including
comparisons with decision-tree induction, that support these hypotheses. In
closing, we discuss other approaches to extending naive Bayesian classifiers
and outline some directions for future research.
| [
"Pat Langley, Stephanie Sage",
"['Pat Langley' 'Stephanie Sage']"
] |
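A small sketch of the selective-naive-Bayes idea described above: a greedy forward search over features, scored by cross-validated naive Bayes accuracy. The dataset and stopping rule below are illustrative choices, not the paper's exact protocol.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)

selected, remaining, best_score = [], list(range(X.shape[1])), 0.0
while remaining:
    # Try adding each remaining feature; keep the one that helps most.
    scores = {f: cross_val_score(GaussianNB(), X[:, selected + [f]], y,
                                 cv=5).mean()
              for f in remaining}
    f_best = max(scores, key=scores.get)
    if scores[f_best] <= best_score:   # stop when no feature improves accuracy
        break
    best_score = scores[f_best]
    selected.append(f_best)
    remaining.remove(f_best)

print("chosen features:", selected, "cv accuracy:", round(best_score, 3))
```

Because correlated features add little to the cross-validated score once one of them is selected, this wrapper-style search tends to skip them, which is exactly the failure mode of plain naive Bayes that the paper targets.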
cs.LG | null | 1302.6927 | null | null | http://arxiv.org/pdf/1302.6927v1 | 2013-02-27T17:14:14Z | 2013-02-27T17:14:14Z | Online Learning for Time Series Prediction | In this paper we address the problem of predicting a time series using the
ARMA (autoregressive moving average) model, under minimal assumptions on the
noise terms. Using regret minimization techniques, we develop effective online
learning algorithms for the prediction problem, without assuming that the noise
terms are Gaussian, identically distributed or even independent. Furthermore,
we show that our algorithm's performance asymptotically approaches the
performance of the best ARMA model in hindsight.
| [
"Oren Anava, Elad Hazan, Shie Mannor, Ohad Shamir",
"['Oren Anava' 'Elad Hazan' 'Shie Mannor' 'Ohad Shamir']"
] |
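The abstract above concerns online ARMA prediction via regret minimization. A rough sketch of the underlying trick is to approximate the ARMA process with a long AR model and update its coefficients online with gradient steps; below this is run on synthetic non-Gaussian data. The step size, lag count, and noise model are arbitrary demo choices, not the paper's tuned algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# An ARMA(1,1)-like series driven by non-Gaussian (uniform) noise.
T, p = 2000, 10                 # p lags: the AR approximation of the ARMA model
eps = rng.uniform(-1, 1, size=T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.6 * x[t - 1] + eps[t] + 0.3 * eps[t - 1]

w = np.zeros(p)                 # online AR(p) coefficients
losses = []
for t in range(p, T):
    hist = x[t - p:t][::-1]
    err = w @ hist - x[t]
    losses.append(err ** 2)
    w -= 0.05 * err * hist / (1 + hist @ hist)   # normalized gradient step

print("MSE, first vs last 500 rounds:",
      np.mean(losses[:500]), np.mean(losses[-500:]))
```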
cs.LG | null | 1302.6937 | null | null | http://arxiv.org/pdf/1302.6937v2 | 2014-06-10T07:41:36Z | 2013-02-27T17:46:43Z | Online Convex Optimization Against Adversaries with Memory and
Application to Statistical Arbitrage | The framework of online learning with memory naturally captures learning
problems with temporal constraints, and was previously studied for the experts
setting. In this work we extend the notion of learning with memory to the
general Online Convex Optimization (OCO) framework, and present two algorithms
that attain low regret. The first algorithm applies to Lipschitz continuous
loss functions, obtaining optimal regret bounds for both convex and strongly
convex losses. The second algorithm attains the optimal regret bounds and
applies more broadly to convex losses without requiring Lipschitz continuity,
yet is more complicated to implement. We complement our theoretical results with
an application to statistical arbitrage in finance: we devise algorithms for
constructing mean-reverting portfolios.
| [
"Oren Anava, Elad Hazan, Shie Mannor",
"['Oren Anava' 'Elad Hazan' 'Shie Mannor']"
] |
cs.LG cs.NI math.OC | null | 1302.6974 | null | null | http://arxiv.org/pdf/1302.6974v4 | 2015-02-17T11:30:13Z | 2013-02-27T20:01:24Z | Spectrum Bandit Optimization | We consider the problem of allocating radio channels to links in a wireless
network. Links interact through interference, modelled as a conflict graph
(i.e., two interfering links cannot be simultaneously active on the same
channel). We aim at identifying the channel allocation maximizing the total
network throughput over a finite time horizon. Should we know the average radio
conditions on each channel and on each link, an optimal allocation would be
obtained by solving an Integer Linear Program (ILP). When radio conditions are
unknown a priori, we look for a sequential channel allocation policy that
converges to the optimal allocation while minimizing on the way the throughput
loss or {\it regret} due to the need for exploring sub-optimal allocations. We
formulate this problem as a generic linear bandit problem, and analyze it first
in a stochastic setting where radio conditions are driven by a stationary
stochastic process, and then in an adversarial setting where radio conditions
can evolve arbitrarily. We provide new algorithms in both settings and derive
upper bounds on their regrets.
| [
"Marc Lelarge and Alexandre Proutiere and M. Sadegh Talebi",
"['Marc Lelarge' 'Alexandre Proutiere' 'M. Sadegh Talebi']"
] |
stat.ML cs.LG | null | 1302.7043 | null | null | http://arxiv.org/pdf/1302.7043v1 | 2013-02-28T00:37:29Z | 2013-02-28T00:37:29Z | Scoup-SMT: Scalable Coupled Sparse Matrix-Tensor Factorization | How can we correlate neural activity in the human brain as it responds to
words, with behavioral data expressed as answers to questions about these same
words? In short, we want to find latent variables that explain both the brain
activity and the behavioral responses. We show that this is an instance
of the Coupled Matrix-Tensor Factorization (CMTF) problem. We propose
Scoup-SMT, a novel, fast, and parallel algorithm that solves the CMTF problem
and produces a sparse latent low-rank subspace of the data. In our experiments,
we find that Scoup-SMT is 50-100 times faster than a state-of-the-art algorithm
for CMTF, along with a 5-fold increase in sparsity. Moreover, we extend
Scoup-SMT to handle missing data without degradation of performance. We apply
Scoup-SMT to BrainQ, a dataset consisting of a (nouns, brain voxels, human
subjects) tensor and a (nouns, properties) matrix, with coupling along the
nouns dimension. Scoup-SMT is able to find meaningful latent variables, as well
as to predict brain activity with competitive accuracy. Finally, we demonstrate
the generality of Scoup-SMT, by applying it on a Facebook dataset (users,
friends, wall-postings); there, Scoup-SMT spots spammer-like anomalies.
| [
"['Evangelos E. Papalexakis' 'Tom M. Mitchell' 'Nicholas D. Sidiropoulos'\n 'Christos Faloutsos' 'Partha Pratim Talukdar' 'Brian Murphy']",
"Evangelos E. Papalexakis, Tom M. Mitchell, Nicholas D. Sidiropoulos,\n Christos Faloutsos, Partha Pratim Talukdar, Brian Murphy"
] |
math.LO cs.LG cs.LO | null | 1302.7069 | null | null | http://arxiv.org/pdf/1302.7069v1 | 2013-02-28T03:35:18Z | 2013-02-28T03:35:18Z | Learning Theory in the Arithmetic Hierarchy | We consider the arithmetic complexity of index sets of uniformly computably
enumerable families learnable under different learning criteria. We determine
the exact complexity of these sets for the standard notions of finite learning,
learning in the limit, behaviorally correct learning and anomalous learning in
the limit. In proving the $\Sigma_5^0$-completeness result for behaviorally
correct learning we prove a result of independent interest: if a uniformly
computably enumerable family is not learnable, then for any computable learner
there is a $\Delta_2^0$ enumeration witnessing failure.
| [
"['Achilles Beros']",
"Achilles Beros"
] |
stat.ML cs.AI cs.LG stat.ME | null | 1302.7175 | null | null | http://arxiv.org/pdf/1302.7175v2 | 2013-03-01T15:04:48Z | 2013-02-28T12:48:32Z | Estimating the Maximum Expected Value: An Analysis of (Nested) Cross
Validation and the Maximum Sample Average | We investigate the accuracy of the two most common estimators for the maximum
expected value of a general set of random variables: a generalization of the
maximum sample average, and cross validation. No unbiased estimator exists and
we show that it is non-trivial to select a good estimator without knowledge
about the distributions of the random variables. We investigate and bound the
bias and variance of the aforementioned estimators and prove consistency. The
variance of cross validation can be significantly reduced, but not without
risking a large bias. The bias and variance of different variants of cross
validation are shown to be very problem-dependent, and a wrong choice can lead
to very inaccurate estimates.
| [
"['Hado van Hasselt']",
"Hado van Hasselt"
] |
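The bias trade-off described above is easy to reproduce numerically. This sketch compares the maximum sample average (positively biased by maximization) with a two-fold cross-validation estimator (positive bias removed, at the cost of extra variance and a possible negative bias) on a toy three-variable problem; the means and sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.0, 0.1, 0.2])   # the maximum expected value is 0.2
n, trials = 50, 20000

ests_max, ests_cv = [], []
for _ in range(trials):
    X = rng.normal(loc=true_means, scale=1.0, size=(n, 3))
    # Maximum sample average: positively biased (maximization bias).
    ests_max.append(X.mean(axis=0).max())
    # Two-fold cross validation: each half picks a variable, the other
    # half evaluates it.
    half = n // 2
    arm1 = X[:half].mean(axis=0).argmax()
    arm2 = X[half:].mean(axis=0).argmax()
    ests_cv.append(0.5 * (X[half:, arm1].mean() + X[:half, arm2].mean()))

print("max sample average:", np.mean(ests_max))
print("cross validation  :", np.mean(ests_cv))
print("true maximum      :", true_means.max())
```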
cs.LG | null | 1302.7263 | null | null | http://arxiv.org/pdf/1302.7263v3 | 2013-03-15T12:52:33Z | 2013-02-28T17:15:55Z | Online Similarity Prediction of Networked Data from Known and Unknown
Graphs | We consider online similarity prediction problems over networked data. We
begin by relating this task to the more standard class prediction problem,
showing that, given an arbitrary algorithm for class prediction, we can
construct an algorithm for similarity prediction with "nearly" the same mistake
bound, and vice versa. After noticing that this general construction is
computationally infeasible, we target our study to {\em feasible} similarity
prediction algorithms on networked data. We initially assume that the network
structure is {\em known} to the learner. Here we observe that Matrix Winnow
\cite{w07} has a near-optimal mistake guarantee, at the price of cubic
prediction time per round. This motivates our effort for an efficient
implementation of a Perceptron algorithm with a weaker mistake guarantee but
with only poly-logarithmic prediction time. Our focus then turns to the
challenging case of networks whose structure is initially {\em unknown} to the
learner. In this novel setting, where the network structure is only
incrementally revealed, we obtain a mistake-bounded algorithm with a quadratic
prediction time per round.
| [
"Claudio Gentile, Mark Herbster, Stephen Pasteris",
"['Claudio Gentile' 'Mark Herbster' 'Stephen Pasteris']"
] |
stat.ML cs.LG | 10.1093/bioinformatics/btt425 | 1302.7280 | null | null | http://arxiv.org/abs/1302.7280v1 | 2013-02-28T18:40:14Z | 2013-02-28T18:40:14Z | Bayesian Consensus Clustering | The task of clustering a set of objects based on multiple sources of data
arises in several modern applications. We propose an integrative statistical
model that permits a separate clustering of the objects for each data source.
These separate clusterings adhere loosely to an overall consensus clustering,
and hence they are not independent. We describe a computationally scalable
Bayesian framework for simultaneous estimation of both the consensus clustering
and the source-specific clusterings. We demonstrate that this flexible approach
is more robust than joint clustering of all data sources, and is more powerful
than clustering each data source separately. This work is motivated by the
integrated analysis of heterogeneous biomedical data, and we present an
application to subtype identification of breast cancer tumor samples using
publicly available data from The Cancer Genome Atlas. Software is available at
http://people.duke.edu/~el113/software.html.
| [
"['Eric F. Lock' 'David B. Dunson']",
"Eric F. Lock and David B. Dunson"
] |
cs.LG cs.NA | null | 1302.7283 | null | null | http://arxiv.org/pdf/1302.7283v1 | 2013-02-28T18:56:56Z | 2013-02-28T18:56:56Z | Source Separation using Regularized NMF with MMSE Estimates under GMM
Priors with Online Learning for The Uncertainties | We propose a new method to enforce priors on the solution of the nonnegative
matrix factorization (NMF). The proposed algorithm can be used for denoising or
single-channel source separation (SCSS) applications. The NMF solution is
guided to follow the Minimum Mean Square Error (MMSE) estimates under Gaussian
mixture prior models (GMM) for the source signal. In SCSS applications, the
spectra of the observed mixed signal are decomposed as a weighted linear
combination of trained basis vectors for each source using NMF. In this work,
the NMF decomposition weight matrices are treated as an image distorted by a
distortion operator, which is learned directly from the observed signals. The
MMSE estimate of the weights matrix under GMM prior and log-normal distribution
for the distortion is then found to improve the NMF decomposition results. The
MMSE estimate is embedded within the optimization objective to form a novel
regularized NMF cost function. The corresponding update rules for the new
objectives are derived in this paper. Experimental results show that the
proposed regularized NMF algorithm improves the source separation performance
compared with using NMF without prior or with other prior models.
| [
"Emad M. Grais, Hakan Erdogan",
"['Emad M. Grais' 'Hakan Erdogan']"
] |
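As a rough illustration of the regularized-NMF template above, the sketch below keeps a fixed basis and pulls the weight matrix toward a target G with multiplicative updates. In the paper, G would be the MMSE estimate under the GMM prior, recomputed as learning proceeds; here G is simply supplied, so this is a stand-in for the structure of the cost function, not the authors' full method.

```python
import numpy as np

def regularized_nmf(V, B, G, iters=300, lam=0.1, eps=1e-9):
    """Find nonnegative W minimizing ||V - B W||_F^2 + lam ||W - G||_F^2
    with the basis B fixed. G is a given prior-based target for the weights
    (the paper uses an MMSE estimate under a GMM prior instead)."""
    W = np.abs(np.random.default_rng(0).normal(size=(B.shape[1], V.shape[1])))
    for _ in range(iters):
        num = B.T @ V + lam * G
        den = B.T @ B @ W + lam * W + eps
        W *= num / den                  # multiplicative update keeps W >= 0
    return W

rng = np.random.default_rng(2)
B = np.abs(rng.normal(size=(64, 8)))            # trained spectral basis
W_true = np.abs(rng.normal(size=(8, 100)))
V = B @ W_true + 0.01 * np.abs(rng.normal(size=(64, 100)))
W = regularized_nmf(V, B, G=W_true)             # G known here only for the demo
print("reconstruction error:", np.linalg.norm(V - B @ W) / np.linalg.norm(V))
```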
q-fin.ST cs.IR cs.LG stat.ML | null | 1303.0073 | null | null | http://arxiv.org/pdf/1303.0073v2 | 2013-03-19T21:20:08Z | 2013-03-01T03:38:35Z | A Method for Comparing Hedge Funds | The paper presents new machine learning methods: signal composition, which
classifies time-series regardless of length, type, and quantity; and
self-labeling, a supervised-learning enhancement. The paper describes further
the implementation of the methods on a financial search engine system to
identify behavioral similarities among time-series representing monthly returns
of 11,312 hedge funds operated during approximately one decade (2000 - 2010).
The presented approach of cross-category and cross-location classification
assists the investor to identify alternative investments.
| [
"Uri Kartoun",
"['Uri Kartoun']"
] |
stat.AP cs.LG stat.ML | null | 1303.0076 | null | null | http://arxiv.org/pdf/1303.0076v2 | 2013-03-19T21:19:06Z | 2013-03-01T03:49:11Z | Bio-Signals-based Situation Comparison Approach to Predict Pain | This paper describes a time-series-based classification approach to identify
similarities between bio-medical-based situations. The proposed approach allows
classifying collections of time-series representing bio-medical measurements,
i.e., situations, regardless of the type, the length and the quantity of the
time-series a situation is comprised of.
| [
"Uri Kartoun",
"['Uri Kartoun']"
] |
cs.SI cs.LG | 10.1007/978-3-642-16567-2_7 | 1303.0095 | null | null | http://arxiv.org/abs/1303.0095v1 | 2013-03-01T06:31:02Z | 2013-03-01T06:31:02Z | Label-dependent Feature Extraction in Social Networks for Node
Classification | A new method of feature extraction in the social network for within-network
classification is proposed in the paper. The method provides new features
calculated by combining both network structure information and the class
labels assigned to nodes. The influence of various features on classification
performance has also been studied. The experiments on real-world data have
shown that features created owing to the proposed method can lead to
significant improvement of classification accuracy.
| [
"['Tomasz Kajdanowicz' 'Przemyslaw Kazienko' 'Piotr Doskocz']",
"Tomasz Kajdanowicz, Przemyslaw Kazienko, Piotr Doskocz"
] |
cs.LG stat.ML | null | 1303.0140 | null | null | http://arxiv.org/pdf/1303.0140v1 | 2013-03-01T10:50:46Z | 2013-03-01T10:50:46Z | Second-Order Non-Stationary Online Learning for Regression | The goal of a learner, in standard online learning, is to have the cumulative
loss not much larger than that of the best-performing function from some fixed
class. Numerous algorithms were shown to have this gap arbitrarily close to
zero, compared with the best function that is chosen off-line. Nevertheless,
many real-world applications, such as adaptive filtering, are non-stationary in
nature, and the best prediction function may drift over time. We introduce two
novel algorithms for online regression, designed to work well in non-stationary
environments. Our first algorithm performs adaptive resets to forget the
history, while the second is last-step min-max optimal in the context of drift.
We analyze both algorithms in the worst-case regret framework and show that
they maintain an average loss close to that of the best slowly changing
sequence of linear functions, as long as the cumulative drift is sublinear. In
addition, in the stationary case, when no drift occurs, our algorithms suffer
logarithmic regret, as for previous algorithms. Our bounds improve over the
existing ones, and simulations demonstrate the usefulness of these algorithms
compared with other state-of-the-art approaches.
| [
"['Nina Vaits' 'Edward Moroshko' 'Koby Crammer']",
"Nina Vaits, Edward Moroshko, Koby Crammer"
] |
cs.CE cs.LG q-bio.QM | null | 1303.0156 | null | null | http://arxiv.org/pdf/1303.0156v1 | 2013-03-01T12:46:06Z | 2013-03-01T12:46:06Z | Exploiting the Accumulated Evidence for Gene Selection in Microarray
Gene Expression Data | Machine Learning methods have of late made significant progress in solving
multidisciplinary problems in the field of cancer classification using
microarray gene expression data. Feature subset selection methods can play an
important role in the modeling process, since these tasks are characterized by
a large number of features and a few observations, making the modeling a
non-trivial undertaking. In this particular scenario, it is extremely important
to select genes by taking into account the possible interactions with other
gene subsets. This paper shows that, by accumulating the evidence in favour (or
against) each gene along the search process, the obtained gene subsets may
constitute better solutions, either in terms of predictive accuracy or gene
size, or in both. The proposed technique is extremely simple and applicable at
a negligible overhead in cost.
| [
"G. Prat and Ll. Belanche",
"['G. Prat' 'Ll. Belanche']"
] |
cs.LG cs.IR q-fin.ST stat.ML | null | 1303.0283 | null | null | http://arxiv.org/pdf/1303.0283v2 | 2013-03-19T21:17:56Z | 2013-03-01T03:45:42Z | Inverse Signal Classification for Financial Instruments | The paper presents new machine learning methods: signal composition, which
classifies time-series regardless of length, type, and quantity; and
self-labeling, a supervised-learning enhancement. The paper describes further
the implementation of the methods on a financial search engine system using a
collection of 7,881 financial instruments traded during 2011 to identify
inverse behavior among the time-series.
| [
"Uri Kartoun",
"['Uri Kartoun']"
] |
stat.ML cs.LG | null | 1303.0309 | null | null | http://arxiv.org/pdf/1303.0309v2 | 2013-06-01T13:42:46Z | 2013-03-01T21:50:09Z | One-Class Support Measure Machines for Group Anomaly Detection | We propose one-class support measure machines (OCSMMs) for group anomaly
detection which aims at recognizing anomalous aggregate behaviors of data
points. The OCSMMs generalize well-known one-class support vector machines
(OCSVMs) to a space of probability measures. By formulating the problem as
quantile estimation on distributions, we can establish an interesting
connection to the OCSVMs and variable kernel density estimators (VKDEs) over
the input space on which the distributions are defined, bridging the gap
between large-margin methods and kernel density estimators. In particular, we
show that various types of VKDEs can be considered as solutions to a class of
regularization problems studied in this paper. Experiments on the Sloan Digital Sky
Survey dataset and a High Energy Particle Physics dataset demonstrate the
benefits of the proposed framework in real-world applications.
| [
"['Krikamol Muandet' 'Bernhard Schölkopf']",
"Krikamol Muandet and Bernhard Sch\\\"olkopf"
] |
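One way to see what an OCSMM does in practice: represent each group of points by an (approximate) kernel mean embedding and run a standard one-class SVM on those embeddings. The sketch below approximates an RBF-kernel embedding with random Fourier features; it is a crude stand-in assembled from standard components, not the authors' algorithm or experimental setup.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Groups of points: anomalous groups differ in aggregate shape even though
# each individual point looks unremarkable.
normal_groups = [rng.normal(0, 1, size=(50, 2)) for _ in range(40)]
odd_groups = [rng.normal(0, 1, size=(50, 2)) @ np.array([[3.0, 0.0],
                                                         [0.0, 0.2]])
              for _ in range(5)]

# Approximate each group's kernel mean embedding with random Fourier
# features (an RBF-kernel approximation), then apply an ordinary OCSVM.
D = 200
W = rng.normal(size=(2, D))
b = rng.uniform(0, 2 * np.pi, size=D)
def mean_embedding(G):
    return np.sqrt(2.0 / D) * np.cos(G @ W + b).mean(axis=0)

Z = np.array([mean_embedding(G) for G in normal_groups])
clf = OneClassSVM(nu=0.1, kernel="rbf").fit(Z)
Z_test = np.array([mean_embedding(G) for G in normal_groups[:5] + odd_groups])
print(clf.predict(Z_test))   # +1 = looks like a normal group, -1 = flagged
```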
cs.LG | null | 1303.0339 | null | null | http://arxiv.org/pdf/1303.0339v1 | 2013-03-02T03:01:46Z | 2013-03-02T03:01:46Z | Learning Hash Functions Using Column Generation | Fast nearest neighbor searching is becoming an increasingly important tool in
solving many large-scale problems. Recently a number of approaches to learning
data-dependent hash functions have been developed. In this work, we propose a
column generation based method for learning data-dependent hash functions on
the basis of proximity comparison information. Given a set of triplets that
encode the pairwise proximity comparison information, our method learns hash
functions that preserve the relative comparison relationships in the data as
well as possible within the large-margin learning framework. The learning
procedure is implemented using column generation and hence is named CGHash. At
each iteration of the column generation procedure, the best hash function is
selected. Unlike most other hashing methods, our method generalizes to new data
points naturally, and has a training objective which is convex, thus ensuring
that the global optimum can be identified. Experiments demonstrate that the
proposed method learns compact binary codes and that its retrieval performance
compares favorably with state-of-the-art methods when tested on a few benchmark
datasets.
| [
"['Xi Li' 'Guosheng Lin' 'Chunhua Shen' 'Anton van den Hengel'\n 'Anthony Dick']",
"Xi Li and Guosheng Lin and Chunhua Shen and Anton van den Hengel and\n Anthony Dick"
] |
cs.LG cs.IT math.IT stat.ML | 10.1214/16-EJS1147 | 1303.0341 | null | null | http://arxiv.org/abs/1303.0341v3 | 2017-04-28T02:03:30Z | 2013-03-02T03:22:37Z | Matrix Completion via Max-Norm Constrained Optimization | Matrix completion has been well studied under the uniform sampling model and
the trace-norm regularized methods perform well both theoretically and
numerically in such a setting. However, the uniform sampling model is
unrealistic for a range of applications and the standard trace-norm relaxation
can behave very poorly when the underlying sampling scheme is non-uniform.
In this paper we propose and analyze a max-norm constrained empirical risk
minimization method for noisy matrix completion under a general sampling model.
The optimal rate of convergence is established under the Frobenius norm loss in
the context of approximately low-rank matrix reconstruction. It is shown that
the max-norm constrained method is minimax rate-optimal and yields a unified
and robust approximate recovery guarantee, with respect to the sampling
distributions. The computational effectiveness of this method is also
discussed, based on first-order algorithms for solving convex optimizations
involving max-norm regularization.
| [
"T. Tony Cai, Wen-Xin Zhou",
"['T. Tony Cai' 'Wen-Xin Zhou']"
] |
null | null | 1303.0362 | null | null | http://arxiv.org/abs/1303.0362v1 | 2013-03-02T07:47:21Z | 2013-03-02T07:47:21Z | Inductive Sparse Subspace Clustering | Sparse Subspace Clustering (SSC) has achieved state-of-the-art clustering quality by performing spectral clustering over an $\ell^{1}$-norm based similarity graph. However, SSC is a transductive method which does not handle data not used to construct the graph (out-of-sample data). For each new datum, SSC requires solving $n$ optimization problems in O(n) variables for performing the algorithm over the whole data set, where $n$ is the number of data points. Therefore, it is inefficient to apply SSC in fast online clustering and scalable graphing. In this letter, we propose an inductive spectral clustering algorithm, called inductive Sparse Subspace Clustering (iSSC), which makes SSC feasible to cluster out-of-sample data. iSSC adopts the assumption that high-dimensional data actually lie on the low-dimensional manifold such that out-of-sample data could be grouped in the embedding space learned from in-sample data. Experimental results show that iSSC is promising in clustering out-of-sample data. | [
"['Xi Peng' 'Lei Zhang' 'Zhang Yi']"
] |
stat.ML cs.IT cs.LG math.IT | null | 1303.0551 | null | null | http://arxiv.org/pdf/1303.0551v2 | 2014-05-08T00:30:12Z | 2013-03-03T19:08:55Z | Sparse PCA through Low-rank Approximations | We introduce a novel algorithm that computes the $k$-sparse principal
component of a positive semidefinite matrix $A$. Our algorithm is combinatorial
and operates by examining a discrete set of special vectors lying in a
low-dimensional eigen-subspace of $A$. We obtain provable approximation
guarantees that depend on the spectral decay profile of the matrix: the faster
the eigenvalue decay, the better the quality of our approximation. For example,
if the eigenvalues of $A$ follow a power-law decay, we obtain a polynomial-time
approximation algorithm for any desired accuracy.
A key algorithmic component of our scheme is a combinatorial feature
elimination step that is provably safe and in practice significantly reduces
the running complexity of our algorithm. We implement our algorithm and test it
on multiple artificial and real data sets. Due to the feature elimination step,
it is possible to perform sparse PCA on data sets consisting of millions of
entries in a few minutes. Our experimental evaluation shows that our scheme is
nearly optimal while finding very sparse vectors. We compare to the prior state
of the art and show that our scheme matches or outperforms previous algorithms
in all tested data sets.
| [
"Dimitris S. Papailiopoulos, Alexandros G. Dimakis, and Stavros\n Korokythakis",
"['Dimitris S. Papailiopoulos' 'Alexandros G. Dimakis'\n 'Stavros Korokythakis']"
] |
stat.ML cs.LG | null | 1303.0561 | null | null | http://arxiv.org/pdf/1303.0561v2 | 2013-08-22T23:10:00Z | 2013-03-03T20:36:44Z | Top-down particle filtering for Bayesian decision trees | Decision tree learning is a popular approach for classification and
regression in machine learning and statistics, and Bayesian
formulations---which introduce a prior distribution over decision trees, and
formulate learning as posterior inference given data---have been shown to
produce competitive performance. Unlike classic decision tree learning
algorithms like ID3, C4.5 and CART, which work in a top-down manner, existing
Bayesian algorithms produce an approximation to the posterior distribution by
evolving a complete tree (or collection thereof) iteratively via local Monte
Carlo modifications to the structure of the tree, e.g., using Markov chain
Monte Carlo (MCMC). We present a sequential Monte Carlo (SMC) algorithm that
instead works in a top-down manner, mimicking the behavior and speed of classic
algorithms. We demonstrate empirically that our approach delivers accuracy
comparable to the most popular MCMC method, but operates more than an order of
magnitude faster, and thus represents a better computation-accuracy tradeoff.
| [
"Balaji Lakshminarayanan, Daniel M. Roy and Yee Whye Teh",
"['Balaji Lakshminarayanan' 'Daniel M. Roy' 'Yee Whye Teh']"
] |
stat.ML cs.LG | null | 1303.0642 | null | null | http://arxiv.org/pdf/1303.0642v2 | 2013-03-22T20:15:31Z | 2013-03-04T08:39:34Z | Bayesian Compressed Regression | As an alternative to variable selection or shrinkage in high dimensional
regression, we propose to randomly compress the predictors prior to analysis.
This dramatically reduces storage and computational bottlenecks, performing
well when the predictors can be projected to a low dimensional linear subspace
with minimal loss of information about the response. As opposed to existing
Bayesian dimensionality reduction approaches, the exact posterior distribution
conditional on the compressed data is available analytically, speeding up
computation by many orders of magnitude while also bypassing robustness issues
due to convergence and mixing problems with MCMC. Model averaging is used to
reduce sensitivity to the random projection matrix, while accommodating
uncertainty in the subspace dimension. Strong theoretical support is provided
for the approach by showing near parametric convergence rates for the
predictive density in the large p small n asymptotic paradigm. Practical
performance relative to competitors is illustrated in simulations and real data
applications.
| [
"Rajarshi Guhaniyogi and David B. Dunson",
"['Rajarshi Guhaniyogi' 'David B. Dunson']"
] |
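The compressed-regression recipe above admits a very short sketch: project the p predictors down to m random dimensions, use the analytic conjugate posterior on the compressed coefficients, and average predictions over several projections. The hyperparameters (prior variance tau2, noise variance sigma2, dimensions) are arbitrary demo values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 60, 500, 15            # large p, small n; compress p down to m

beta = np.zeros(p); beta[:5] = 2.0
X = rng.normal(size=(n, p))
y = X @ beta + rng.normal(scale=0.5, size=n)
X_new = rng.normal(size=(10, p))

def compressed_posterior_mean(X, y, X_new, m, tau2=1.0, sigma2=0.25):
    """Conjugate Bayesian linear regression on randomly compressed
    predictors; the posterior over the compressed coefficients is exact."""
    Phi = rng.normal(size=(X.shape[1], m)) / np.sqrt(m)   # random projection
    Z, Z_new = X @ Phi, X_new @ Phi
    S = np.linalg.inv(Z.T @ Z / sigma2 + np.eye(m) / tau2)
    mu = S @ Z.T @ y / sigma2                             # posterior mean
    return Z_new @ mu

# Average predictions over several projections to reduce sensitivity to Phi.
preds = np.mean([compressed_posterior_mean(X, y, X_new, m)
                 for _ in range(20)], axis=0)
print("prediction RMSE:", np.sqrt(np.mean((preds - X_new @ beta) ** 2)))
```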
cs.LG cs.SD stat.ML | 10.1109/ICASSP.2013.6637769 | 1303.0663 | null | null | http://arxiv.org/abs/1303.0663v1 | 2013-03-04T10:17:49Z | 2013-03-04T10:17:49Z | Denoising Deep Neural Networks Based Voice Activity Detection | Recently, the deep-belief-networks (DBN) based voice activity detection (VAD)
has been proposed. It is powerful in fusing the advantages of multiple
features, and achieves state-of-the-art performance. However, the deep
layers of the DBN-based VAD do not show an apparent superiority to the
shallower layers. In this paper, we propose a denoising-deep-neural-network
(DDNN) based VAD to address the aforementioned problem. Specifically, we
pre-train a deep neural network in a special unsupervised denoising greedy
layer-wise mode, and then fine-tune the whole network in a supervised way by
the common back-propagation algorithm. In the pre-training phase, we take the
noisy speech signals as the visible layer and try to extract a new feature that
minimizes the reconstruction cross-entropy loss between the noisy speech
signals and its corresponding clean speech signals. Experimental results show
that the proposed DDNN-based VAD not only outperforms the DBN-based VAD but
also shows an apparent performance improvement of the deep layers over
shallower layers.
| [
"['Xiao-Lei Zhang' 'Ji Wu']",
"Xiao-Lei Zhang and Ji Wu"
] |
cs.IR cs.LG stat.ML | 10.1145/2507157.2507166 | 1303.0665 | null | null | http://arxiv.org/abs/1303.0665v2 | 2014-11-03T16:19:43Z | 2013-03-04T10:34:13Z | Personalized News Recommendation with Context Trees | The profusion of online news articles makes it difficult to find interesting
articles, a problem that can be assuaged by using a recommender system to bring
the most relevant news stories to readers. However, news recommendation is
challenging because the most relevant articles are often new content seen by
few users. In addition, they are subject to trends and preference changes over
time, and in many cases we do not have sufficient information to profile the
reader.
In this paper, we introduce a class of news recommendation systems based on
context trees. They can provide high-quality news recommendation to anonymous
visitors based on present browsing behaviour. We show that context-tree
recommender systems provide good prediction accuracy and recommendation
novelty, and they are sufficiently flexible to capture the unique properties of
news articles.
| [
"Florent Garcin, Christos Dimitrakakis and Boi Faltings",
"['Florent Garcin' 'Christos Dimitrakakis' 'Boi Faltings']"
] |
stat.ML cs.AI cs.LG | null | 1303.0691 | null | null | http://arxiv.org/pdf/1303.0691v3 | 2014-01-17T13:07:42Z | 2013-03-04T13:02:49Z | Learning AMP Chain Graphs and some Marginal Models Thereof under
Faithfulness: Extended Version | This paper deals with chain graphs under the Andersson-Madigan-Perlman (AMP)
interpretation. In particular, we present a constraint based algorithm for
learning an AMP chain graph that a given probability distribution is faithful to.
Moreover, we show that the extension of Meek's conjecture to AMP chain graphs
does not hold, which compromises the development of efficient and correct
score+search learning algorithms under assumptions weaker than faithfulness.
We also introduce a new family of graphical models that consists of
undirected and bidirected edges. We name this new family maximal
covariance-concentration graphs (MCCGs) because it includes both covariance and
concentration graphs as subfamilies. However, every MCCG can be seen as the
result of marginalizing out some nodes in an AMP CG. We describe global, local
and pairwise Markov properties for MCCGs and prove their equivalence. We
characterize when two MCCGs are Markov equivalent, and show that every Markov
equivalence class of MCCGs has a distinguished member. We present a constraint
based algorithm for learning an MCCG that a given probability distribution is
faithful to.
Finally, we present a graphical criterion for reading dependencies from an
MCCG of a probability distribution that satisfies the graphoid properties, weak
transitivity and composition. We prove that the criterion is sound and complete
in a certain sense.
| [
"['Jose M. Peña']",
"Jose M. Pe\\~na"
] |
cs.LG q-bio.NC stat.ML | null | 1303.0742 | null | null | http://arxiv.org/pdf/1303.0742v1 | 2013-03-04T15:58:24Z | 2013-03-04T15:58:24Z | Multivariate Temporal Dictionary Learning for EEG | This article addresses the issue of representing electroencephalographic
(EEG) signals in an efficient way. While classical approaches use a fixed Gabor
dictionary to analyze EEG signals, this article proposes a data-driven method
to obtain an adapted dictionary. To reach an efficient dictionary learning,
appropriate spatial and temporal modeling is required. Inter-channel links are
taken into account in the spatial multivariate model, and shift-invariance is
used for the temporal model. Multivariate learned kernels are informative (a
few atoms code plentiful energy) and interpretable (the atoms can have a
physiological meaning). Using real EEG data, the proposed method is shown to
outperform the classical multichannel matching pursuit used with a Gabor
dictionary, as measured by the representative power of the learned dictionary
and its spatial flexibility. Moreover, dictionary learning can capture
interpretable patterns: this ability is illustrated on real data, learning a
P300 evoked potential.
| [
"Quentin Barth\\'elemy, C\\'edric Gouy-Pailler, Yoann Isaac, Antoine\n Souloumiac, Anthony Larue, J\\'er\\^ome I. Mars",
"['Quentin Barthélemy' 'Cédric Gouy-Pailler' 'Yoann Isaac'\n 'Antoine Souloumiac' 'Anthony Larue' 'Jérôme I. Mars']"
] |
cs.NE cs.IT cs.LG math.DG math.IT | null | 1303.0818 | null | null | http://arxiv.org/pdf/1303.0818v5 | 2015-02-03T18:24:30Z | 2013-03-04T20:41:09Z | Riemannian metrics for neural networks I: feedforward networks | We describe four algorithms for neural network training, each adapted to
different scalability constraints. These algorithms are mathematically
principled and invariant under a number of transformations in data and network
representation, making performance independent of such choices. These algorithms
are obtained from the setting of differential geometry, and are based on either
the natural gradient using the Fisher information matrix, or on Hessian
methods, scaled down in a specific way to allow for scalability while keeping
some of their key mathematical properties.
| [
"Yann Ollivier",
"['Yann Ollivier']"
] |
cs.LG cs.AI cs.MS | null | 1303.0934 | null | null | http://arxiv.org/pdf/1303.0934v1 | 2013-03-05T05:55:59Z | 2013-03-05T05:55:59Z | GURLS: a Least Squares Library for Supervised Learning | We present GURLS, a least squares, modular, easy-to-extend software library
for efficient supervised learning. GURLS is targeted to machine learning
practitioners, as well as non-specialists. It offers a number of state-of-the-art
training strategies for medium and large-scale learning, and routines for
efficient model selection. The library is particularly well suited for
multi-output problems (multi-category/multi-label). GURLS is currently
available in two independent implementations: Matlab and C++. It takes
advantage of the favorable properties of the regularized least squares algorithm to
exploit advanced tools in linear algebra. Routines to handle computations with
very large matrices by means of memory-mapped storage and distributed task
execution are available. The package is distributed under the BSD licence and
is available for download at https://github.com/CBCL/GURLS.
| [
"Andrea Tacchetti, Pavan K Mallapragada, Matteo Santoro, Lorenzo\n Rosasco",
"['Andrea Tacchetti' 'Pavan K Mallapragada' 'Matteo Santoro'\n 'Lorenzo Rosasco']"
] |
cs.LG stat.ML | null | 1303.1152 | null | null | http://arxiv.org/pdf/1303.1152v2 | 2014-04-25T12:03:24Z | 2013-03-05T19:59:13Z | An Equivalence between the Lasso and Support Vector Machines | We investigate the relation of two fundamental tools in machine learning and
signal processing, that is the support vector machine (SVM) for classification,
and the Lasso technique used in regression. We show that the resulting
optimization problems are equivalent, in the following sense. Given any
instance of an $\ell_2$-loss soft-margin (or hard-margin) SVM, we construct a
Lasso instance having the same optimal solutions, and vice versa.
As a consequence, many existing optimization algorithms for both SVMs and
Lasso can also be applied to the respective other problem instances. Also, the
equivalence allows for many known theoretical insights for SVM and Lasso to be
translated between the two settings. One such implication gives a simple
kernelized version of the Lasso, analogous to the kernels used in the SVM
setting. Another consequence is that the sparsity of a Lasso solution is equal
to the number of support vectors for the corresponding SVM instance, and that
one can use screening rules to prune the set of support vectors. Furthermore,
we can relate sublinear time algorithms for the two problems, and give a new
such algorithm variant for the Lasso. We also study the regularization paths
for both methods.
| [
"['Martin Jaggi']",
"Martin Jaggi"
] |
stat.ML cs.LG | null | 1303.1208 | null | null | http://arxiv.org/pdf/1303.1208v3 | 2016-08-05T15:38:43Z | 2013-03-05T22:23:14Z | Classification with Asymmetric Label Noise: Consistency and Maximal
Denoising | In many real-world classification problems, the labels of training examples
are randomly corrupted. Most previous theoretical work on classification with
label noise assumes that the two classes are separable, that the label noise is
independent of the true class label, or that the noise proportions for each
class are known. In this work, we give conditions that are necessary and
sufficient for the true class-conditional distributions to be identifiable.
These conditions are weaker than those analyzed previously, and allow for the
classes to be nonseparable and the noise levels to be asymmetric and unknown.
The conditions essentially state that a majority of the observed labels are
correct and that the true class-conditional distributions are "mutually
irreducible," a concept we introduce that limits the similarity of the two
distributions. For any label noise problem, there is a unique pair of true
class-conditional distributions satisfying the proposed conditions, and we
argue that this pair corresponds in a certain sense to maximal denoising of the
observed distributions.
Our results are facilitated by a connection to "mixture proportion
estimation," which is the problem of estimating the maximal proportion of one
distribution that is present in another. We establish a novel rate of
convergence result for mixture proportion estimation, and apply this to obtain
consistency of a discrimination rule based on surrogate loss minimization.
Experimental results on benchmark data and a nuclear particle classification
problem demonstrate the efficacy of our approach.
| [
"Gilles Blanchard, Marek Flaska, Gregory Handy, Sara Pozzi, Clayton\n Scott",
"['Gilles Blanchard' 'Marek Flaska' 'Gregory Handy' 'Sara Pozzi'\n 'Clayton Scott']"
] |
cs.LG cs.NA | null | 1303.1264 | null | null | http://arxiv.org/pdf/1303.1264v1 | 2013-03-06T07:58:14Z | 2013-03-06T07:58:14Z | Discovery of factors in matrices with grades | We present an approach to decomposition and factor analysis of matrices with
ordinal data. The matrix entries are grades to which objects represented by
rows satisfy attributes represented by columns, e.g. grades to which an image
is red, a product has a given feature, or a person performs well in a test. We
assume that the grades form a bounded scale equipped with certain aggregation
operators and conform to the structure of a complete residuated lattice. We
present a greedy approximation algorithm for the problem of decomposing
such a matrix into a product of two matrices with grades under the restriction that
the number of factors be small. Our algorithm is based on a geometric insight
provided by a theorem identifying particular rectangular-shaped submatrices as
optimal factors for the decompositions. These factors correspond to formal
concepts of the input data and allow an easy interpretation of the
decomposition. We present illustrative examples and experimental evaluation.
| [
"['Radim Belohlavek' 'Vilem Vychodil']",
"Radim Belohlavek and Vilem Vychodil"
] |
cs.LG | null | 1303.1271 | null | null | http://arxiv.org/pdf/1303.1271v5 | 2013-08-22T04:29:26Z | 2013-03-06T08:20:33Z | Convex and Scalable Weakly Labeled SVMs | In this paper, we study the problem of learning from weakly labeled data,
where labels of the training examples are incomplete. This includes, for
example, (i) semi-supervised learning where labels are partially known; (ii)
multi-instance learning where labels are implicitly known; and (iii) clustering
where labels are completely unknown. Unlike supervised learning, learning with
weak labels involves a difficult Mixed-Integer Programming (MIP) problem.
Therefore, it can suffer from poor scalability and may also get stuck in a local
minimum. In this paper, we focus on SVMs and propose the WellSVM via a novel
label generation strategy. This leads to a convex relaxation of the original
MIP, which is at least as tight as existing convex Semi-Definite Programming
(SDP) relaxations. Moreover, the WellSVM can be solved via a sequence of SVM
subproblems that are much more scalable than previous convex SDP relaxations.
Experiments on three weakly labeled learning tasks, namely, (i) semi-supervised
learning; (ii) multi-instance learning for locating regions of interest in
content-based information retrieval; and (iii) clustering, clearly demonstrate
improved performance, and WellSVM is also readily applicable on large data
sets.
| [
"Yu-Feng Li, Ivor W. Tsang, James T. Kwok and Zhi-Hua Zhou",
"['Yu-Feng Li' 'Ivor W. Tsang' 'James T. Kwok' 'Zhi-Hua Zhou']"
] |
cs.LG stat.ML | null | 1303.1280 | null | null | http://arxiv.org/pdf/1303.1280v1 | 2013-03-06T09:23:45Z | 2013-03-06T09:23:45Z | Large-Margin Metric Learning for Partitioning Problems | In this paper, we consider unsupervised partitioning problems, such as
clustering, image segmentation, video segmentation and other change-point
detection problems. We focus on partitioning problems based explicitly or
implicitly on the minimization of Euclidean distortions, which include
mean-based change-point detection, K-means, spectral clustering and normalized
cuts. Our main goal is to learn a Mahalanobis metric for these unsupervised
problems, leading to feature weighting and/or selection. This is done in a
supervised way by assuming the availability of several potentially partially
labelled datasets that share the same metric. We cast the metric learning
problem as a large-margin structured prediction problem, with proper definition
of regularizers and losses, leading to a convex optimization problem which can
be solved efficiently with iterative techniques. We provide experiments where
we show how learning the metric may significantly improve the partitioning
performance in synthetic examples, bioinformatics, video segmentation and image
segmentation problems.
| [
"['Rémi Lajugie' 'Sylvain Arlot' 'Francis Bach']",
"R\\'emi Lajugie (LIENS), Sylvain Arlot (LIENS), Francis Bach (LIENS)"
] |
cs.LG | null | 1303.1733 | null | null | http://arxiv.org/pdf/1303.1733v2 | 2013-05-31T21:09:20Z | 2013-03-07T16:10:44Z | Multi-relational Learning Using Weighted Tensor Decomposition with
Modular Loss | We propose a modular framework for multi-relational learning via tensor
decomposition. In our learning setting, the training data contains multiple
types of relationships among a set of objects, which we represent by a sparse
three-mode tensor. The goal is to predict the values of the missing entries. To
do so, we model each relationship as a function of a linear combination of
latent factors. We learn this latent representation by computing a low-rank
tensor decomposition, using quasi-Newton optimization of a weighted objective
function. Sparsity in the observed data is captured by the weighted objective,
leading to improved accuracy when training data is limited. Exploiting sparsity
also improves efficiency, potentially up to an order of magnitude over
unweighted approaches. In addition, our framework accommodates arbitrary
combinations of smooth, task-specific loss functions, making it better suited
for learning different types of relations. For the typical cases of real-valued
functions and binary relations, we propose several loss functions and derive
the associated parameter gradients. We evaluate our method on synthetic and
real data, showing significant improvements in both accuracy and scalability
over related factorization techniques.
| [
"Ben London, Theodoros Rekatsinas, Bert Huang, and Lise Getoor",
"['Ben London' 'Theodoros Rekatsinas' 'Bert Huang' 'Lise Getoor']"
] |
cs.LG cs.DS cs.NA | null | 1303.1849 | null | null | http://arxiv.org/pdf/1303.1849v2 | 2013-06-03T20:07:19Z | 2013-03-07T23:16:16Z | Revisiting the Nystrom Method for Improved Large-Scale Machine Learning | We reconsider randomized algorithms for the low-rank approximation of
symmetric positive semi-definite (SPSD) matrices such as Laplacian and kernel
matrices that arise in data analysis and machine learning applications. Our
main results consist of an empirical evaluation of the performance quality and
running time of sampling and projection methods on a diverse suite of SPSD
matrices. Our results highlight complementary aspects of sampling versus
projection methods; they characterize the effects of common data preprocessing
steps on the performance of these algorithms; and they point to important
differences between uniform sampling and nonuniform sampling methods based on
leverage scores. In addition, our empirical results illustrate that existing
theory is so weak that it does not provide even a qualitative guide to
practice. Thus, we complement our empirical results with a suite of worst-case
theoretical bounds for both random sampling and random projection methods.
These bounds are qualitatively superior to existing bounds---e.g. improved
additive-error bounds for spectral and Frobenius norm error and relative-error
bounds for trace norm error---and they point to future directions to make these
algorithms useful in even larger-scale machine learning applications.
| [
"['Alex Gittens' 'Michael W. Mahoney']",
"Alex Gittens and Michael W. Mahoney"
] |
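For reference, the basic Nyström approximation that the abstract above revisits takes only a few lines: sample c columns of the SPSD matrix and reconstruct it from the sampled block. The sketch below uses uniform sampling on a small synthetic RBF kernel matrix; part of the paper's point is characterizing when nonuniform (leverage-score) sampling does better.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
K = rbf_kernel(X, X)               # full SPSD kernel matrix, for reference only

# Nystrom: sample c columns (uniformly here; leverage-score sampling would
# pick columns proportionally to their statistical leverage instead).
c = 100
idx = rng.choice(len(X), size=c, replace=False)
C = rbf_kernel(X, X[idx])          # n x c block of sampled columns
W = C[idx]                         # c x c intersection block
K_hat = C @ np.linalg.pinv(W) @ C.T

print("relative Frobenius error:",
      np.linalg.norm(K - K_hat) / np.linalg.norm(K))
```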
cs.CE cs.LG | 10.1016/j.is.2017.05.006 | 1303.2054 | null | null | http://arxiv.org/abs/1303.2054v1 | 2013-03-08T16:57:18Z | 2013-03-08T16:57:18Z | Mining Representative Unsubstituted Graph Patterns Using Prior
Similarity Matrix | One of the most powerful techniques to study protein structures is to look
for recurrent fragments (also called substructures or spatial motifs), then use
them as patterns to characterize the proteins under study. An emergent trend
consists in parsing proteins' three-dimensional (3D) structures into graphs of
amino acids. Hence, the search of recurrent spatial motifs is formulated as a
process of frequent subgraph discovery where each subgraph represents a spatial
motif. In this scope, several efficient approaches for frequent subgraph
discovery have been proposed in the literature. However, the set of discovered
frequent subgraphs is too large to be efficiently analyzed and explored in any
further process. In this paper, we propose a novel pattern selection approach
that shrinks the large number of discovered frequent subgraphs by selecting the
representative ones. Existing pattern selection approaches do not exploit the
domain knowledge. Yet, in our approach we incorporate the evolutionary
information of amino acids defined in the substitution matrices in order to
select the representative subgraphs. We show the effectiveness of our approach
on a number of real datasets. The results of our experiments show that
our approach is able to considerably decrease the number of motifs while
enhancing their interestingness.
| [
"Wajdi Dhifli, Rabie Saidi, Engelbert Mephu Nguifo",
"['Wajdi Dhifli' 'Rabie Saidi' 'Engelbert Mephu Nguifo']"
] |
cs.LG | null | 1303.2104 | null | null | http://arxiv.org/pdf/1303.2104v1 | 2013-03-08T20:46:27Z | 2013-03-08T20:46:27Z | Transfer Learning for Voice Activity Detection: A Denoising Deep Neural
Network Perspective | The mismatch between the source and target noisy corpora severely
hinders the practical use of machine-learning-based voice activity detection
(VAD). In this paper, we try to address this problem from the transfer learning
perspective. Transfer learning tries to find a common learning machine or a
common feature subspace that is shared by both the source corpus and the target
corpus. The denoising deep neural network is used as the learning machine.
Three transfer techniques, which aim to learn common feature representations,
are used for analysis. Experimental results demonstrate the effectiveness of
the transfer learning schemes on the mismatch problem.
| [
"['Xiao-Lei Zhang' 'Ji Wu']",
"Xiao-Lei Zhang, Ji Wu"
] |
cs.LG | null | 1303.2130 | null | null | http://arxiv.org/pdf/1303.2130v2 | 2013-10-21T15:06:36Z | 2013-03-08T21:32:52Z | Convex Discriminative Multitask Clustering | Multitask clustering tries to improve the clustering performance of multiple
tasks simultaneously by taking their relationship into account. Most existing
multitask clustering algorithms fall into the type of generative clustering,
and none are formulated as convex optimization problems. In this paper, we
propose two convex Discriminative Multitask Clustering (DMTC) algorithms to
address the problems. Specifically, we first propose a Bayesian DMTC framework.
Then, we propose two convex DMTC objectives within the framework. The first
one, which can be seen as a technical combination of the convex multitask
feature learning and the convex Multiclass Maximum Margin Clustering (M3C),
aims to learn a shared feature representation. The second one, which can be
seen as a combination of the convex multitask relationship learning and M3C,
aims to learn the task relationship. The two objectives are solved in a unified
procedure by the efficient cutting-plane algorithm. Experimental results on a
toy problem and two benchmark datasets demonstrate the effectiveness of the
proposed algorithms.
| [
"Xiao-Lei Zhang",
"['Xiao-Lei Zhang']"
] |
cs.LG | null | 1303.2132 | null | null | http://arxiv.org/pdf/1303.2132v2 | 2014-04-23T00:59:58Z | 2013-03-08T21:40:42Z | Heuristic Ternary Error-Correcting Output Codes Via Weight Optimization
and Layered Clustering-Based Approach | One important classifier ensemble for multiclass classification problems is
Error-Correcting Output Codes (ECOCs). It bridges multiclass problems and
binary-class classifiers by decomposing a multiclass problem into a series of
binary-class problems. In this paper, we present a heuristic ternary code,
named Weight Optimization and Layered Clustering-based ECOC (WOLC-ECOC). It
starts with an arbitrary valid ECOC and iterates the following two steps until
the training risk converges. The first step, named Layered Clustering based
ECOC (LC-ECOC), constructs multiple strong classifiers on the most confusing
binary-class problem. The second step adds the new classifiers to ECOC by a
novel Optimized Weighted (OW) decoding algorithm, where the optimization
problem of the decoding is solved by the cutting plane algorithm. Technically,
LC-ECOC ensures that the heuristic training process is not blocked by difficult
binary-class problems. OW decoding guarantees that the training risk does not
increase, which keeps the code length small. Results on 14 UCI datasets and a music
genre classification problem demonstrate the effectiveness of WOLC-ECOC.
| [
"Xiao-Lei Zhang",
"['Xiao-Lei Zhang']"
] |
cs.LG stat.ML | 10.1109/TNNLS.2014.2336679 | 1303.2184 | null | null | http://arxiv.org/abs/1303.2184v3 | 2014-07-15T09:42:04Z | 2013-03-09T09:09:54Z | Complex Support Vector Machines for Regression and Quaternary
Classification | The paper presents a new framework for complex Support Vector Regression as
well as Support Vector Machines for quaternary classification. The method
exploits the notion of widely linear estimation to model the input-output relation
for complex-valued data and considers two cases: a) the complex data are split
into their real and imaginary parts and a typical real kernel is employed to
map the complex data to a complexified feature space and b) a pure complex
kernel is used to directly map the data to the induced complex feature space.
The recently developed Wirtinger's calculus on complex reproducing kernel
Hilbert spaces (RKHS) is employed in order to compute the Lagrangian and derive
the dual optimization problem. As one of our major results, we prove that any
complex SVM/SVR task is equivalent to solving two real SVM/SVR tasks
exploiting a specific real kernel which is generated by the chosen complex
kernel. In particular, the case of pure complex kernels leads to the generation
of new kernels, which have not been considered before. In the classification
case, the proposed framework inherently splits the complex space into four
parts. This leads naturally to solving the four-class task (quaternary
classification), instead of the typical two classes of the real SVM. In turn,
this rationale can be used in a multiclass problem as a split-class scenario
based on four classes, as opposed to the one-versus-all method; this can lead
to significant computational savings. Experiments demonstrate the effectiveness
of the proposed framework for regression and classification tasks that involve
complex data.
| [
"Pantelis Bouboulis, Sergios Theodoridis, Charalampos Mavroforakis,\n Leoni Dalla",
"['Pantelis Bouboulis' 'Sergios Theodoridis' 'Charalampos Mavroforakis'\n 'Leoni Dalla']"
] |
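The equivalence result in the record above suggests a simple baseline: split complex inputs and targets into real and imaginary parts and train two real-valued regressors. Below is a hedged sketch with scikit-learn's `SVR`; the two independent regressors and default kernel are illustrative assumptions, not the paper's induced real-kernel construction.

```python
import numpy as np
from sklearn.svm import SVR

def complex_svr_split(X_complex, y_complex, **svr_kwargs):
    """Case (a) baseline: stack real and imaginary parts as real
    features and fit two real SVRs, one per part of the complex
    target. Hypothetical helper, not the paper's exact method."""
    Xr = np.hstack([X_complex.real, X_complex.imag])
    f_re = SVR(**svr_kwargs).fit(Xr, y_complex.real)
    f_im = SVR(**svr_kwargs).fit(Xr, y_complex.imag)

    def predict(Xc):
        Z = np.hstack([Xc.real, Xc.imag])
        return f_re.predict(Z) + 1j * f_im.predict(Z)
    return predict
```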
cs.LG cs.CV cs.SI stat.ML | 10.1109/TSP.2013.2295553 | 1303.2221 | null | null | http://arxiv.org/abs/1303.2221v1 | 2013-03-09T15:31:48Z | 2013-03-09T15:31:48Z | Clustering on Multi-Layer Graphs via Subspace Analysis on Grassmann
Manifolds | Relationships between entities in datasets are often of a multiple nature: for
example, geographical distance, social relationships, or common interests among
people in a social network. This information can naturally be modeled by
a set of weighted and undirected graphs that form a global multilayer graph,
where the common vertex set represents the entities and the edges on different
layers capture the similarities of the entities in terms of the different
modalities. In this paper, we address the problem of analyzing multi-layer
graphs and propose methods for clustering the vertices by efficiently merging
the information provided by the multiple modalities. To this end, we propose to
combine the characteristics of individual graph layers using tools from
subspace analysis on a Grassmann manifold. The resulting combination can then
be viewed as a low dimensional representation of the original data which
preserves the most important information from diverse relationships between
entities. We use this information in new clustering methods and test our
algorithm on several synthetic and real world datasets where we demonstrate
superior or competitive performance compared to baseline and state-of-the-art
techniques. Our generic framework further extends to numerous analysis and
learning problems that involve different types of information on graphs.
| [
"Xiaowen Dong, Pascal Frossard, Pierre Vandergheynst, Nikolai Nefedov",
"['Xiaowen Dong' 'Pascal Frossard' 'Pierre Vandergheynst' 'Nikolai Nefedov']"
] |
math.OC cs.GT cs.LG | null | 1303.2270 | null | null | http://arxiv.org/pdf/1303.2270v2 | 2014-04-06T20:59:47Z | 2013-03-09T21:49:25Z | Penalty-regulated dynamics and robust learning procedures in games | Starting from a heuristic learning scheme for N-person games, we derive a new
class of continuous-time learning dynamics consisting of a replicator-like
drift adjusted by a penalty term that renders the boundary of the game's
strategy space repelling. These penalty-regulated dynamics are equivalent to
players keeping an exponentially discounted aggregate of their on-going payoffs
and then using a smooth best response to pick an action based on these
performance scores. Owing to this inherent duality, the proposed dynamics
satisfy a variant of the folk theorem of evolutionary game theory and they
converge to (arbitrarily precise) approximations of Nash equilibria in
potential games. Motivated by applications to traffic engineering, we exploit
this duality further to design a discrete-time, payoff-based learning algorithm
which retains these convergence properties and only requires players to observe
their in-game payoffs: moreover, the algorithm remains robust in the presence
of stochastic perturbations and observation errors, and it does not require any
synchronization between players.
| [
"Pierre Coucheney, Bruno Gaujal, Panayotis Mertikopoulos",
"['Pierre Coucheney' 'Bruno Gaujal' 'Panayotis Mertikopoulos']"
] |
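The duality described in the record above (discounted payoff scores feeding a smooth best response) can be sketched as a single-player loop. This is a loose sketch only: `payoff_fn` stands in for the opponents, and the importance-weighted score update is a generic bandit-style estimator, not the paper's exact construction.

```python
import numpy as np

def penalty_regulated_play(payoff_fn, n_actions, rounds=5000,
                           discount=0.99, temp=0.1, seed=0):
    """Payoff-based learning sketch: keep an exponentially discounted
    score per action and sample from a logit (smooth) best response."""
    rng = np.random.default_rng(seed)
    scores = np.zeros(n_actions)
    probs = np.full(n_actions, 1.0 / n_actions)
    for _ in range(rounds):
        z = scores / temp
        probs = np.exp(z - z.max()); probs /= probs.sum()  # logit response
        a = rng.choice(n_actions, p=probs)
        scores *= discount                                 # discounted aggregate
        scores[a] += payoff_fn(a) / probs[a]               # crude payoff estimate
    return probs
```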
cs.LG math.OC | null | 1303.2314 | null | null | http://arxiv.org/pdf/1303.2314v1 | 2013-03-10T12:00:59Z | 2013-03-10T12:00:59Z | Mini-Batch Primal and Dual Methods for SVMs | We address the issue of using mini-batches in stochastic optimization of
SVMs. We show that the same quantity, the spectral norm of the data, controls
the parallelization speedup obtained for both primal stochastic subgradient
descent (SGD) and stochastic dual coordinate ascent (SDCA) methods and use it
to derive novel variants of mini-batched SDCA. Our guarantees for both methods
are expressed in terms of the original nonsmooth primal problem based on the
hinge-loss.
| [
"Martin Tak\\'a\\v{c} and Avleen Bijral and Peter Richt\\'arik and Nathan\n Srebro",
"['Martin Takáč' 'Avleen Bijral' 'Peter Richtárik' 'Nathan Srebro']"
] |
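For concreteness, a minimal sketch of the primal side discussed in the record above: mini-batched stochastic subgradient descent on the L2-regularized hinge loss, Pegasos-style. The step-size schedule and batch handling are standard textbook choices, not necessarily the paper's.

```python
import numpy as np

def minibatch_svm_sgd(X, y, lam=0.01, batch_size=16, epochs=10, seed=0):
    """Primal mini-batch stochastic subgradient descent for the
    L2-regularized hinge loss (Pegasos-style). y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, t = np.zeros(d), 0
    for _ in range(epochs):
        perm = rng.permutation(n)
        for start in range(0, n - batch_size + 1, batch_size):
            t += 1
            eta = 1.0 / (lam * t)                 # standard Pegasos step size
            idx = perm[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            viol = yb * (Xb @ w) < 1              # margin violators in the batch
            grad = lam * w - (yb[viol][:, None] * Xb[viol]).sum(axis=0) / batch_size
            w -= eta * grad
    return w
```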
math.DS cs.IT cs.LG math.IT math.PR stat.ML | null | 1303.2395 | null | null | http://arxiv.org/pdf/1303.2395v1 | 2013-03-10T23:20:12Z | 2013-03-10T23:20:12Z | State estimation under non-Gaussian Levy noise: A modified Kalman
filtering method | The Kalman filter is extensively used for state estimation for linear systems
under Gaussian noise. When non-Gaussian L\'evy noise is present, the
conventional Kalman filter may fail to be effective due to the fact that the
non-Gaussian L\'evy noise may have infinite variance. A modified Kalman filter
for linear systems with non-Gaussian L\'evy noise is devised. It works
effectively with reasonable computational cost. Simulation results are
presented to illustrate this non-Gaussian filtering method.
| [
"['Xu Sun' 'Jinqiao Duan' 'Xiaofan Li' 'Xiangjun Wang']",
"Xu Sun, Jinqiao Duan, Xiaofan Li, Xiangjun Wang"
] |
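As a reference point for the modification described in the record above, here is one predict/update step of the conventional Kalman filter; the Levy-noise modification itself is not reproduced, this is only the Gaussian baseline.

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One predict/update step of the standard Kalman filter for
    x' = Ax + w, z = Cx + v with process/measurement noise Q, R."""
    x_pred = A @ x                        # state prediction
    P_pred = A @ P @ A.T + Q              # covariance prediction
    S = C @ P_pred @ C.T + R              # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```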
cs.LG stat.ML | null | 1303.2417 | null | null | http://arxiv.org/pdf/1303.2417v1 | 2013-03-11T03:29:35Z | 2013-03-11T03:29:35Z | Linear NDCG and Pair-wise Loss | Linear NDCG is used for measuring the performance of the Web content quality
assessment in ECML/PKDD Discovery Challenge 2010. In this paper, we will prove
that the DCG error equals a new pair-wise loss.
| [
"Xiao-Bo Jin and Guang-Gang Geng",
"['Xiao-Bo Jin' 'Guang-Gang Geng']"
] |
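A small sketch of the quantity in question: DCG with a linear (rather than exponential) gain, plus its normalized version. The exact discount used in the ECML/PKDD 2010 challenge is an assumption here.

```python
import numpy as np

def linear_dcg(relevances):
    """DCG with a linear gain: sum_i rel_i / log2(i + 1)."""
    rel = np.asarray(relevances, dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    return float(rel @ discounts)

def linear_ndcg(relevances):
    """Normalize by the DCG of the ideal (sorted) ranking."""
    ideal = linear_dcg(sorted(relevances, reverse=True))
    return linear_dcg(relevances) / ideal if ideal > 0 else 0.0
```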
cs.LG stat.ML | 10.1109/CDC.2013.6761048 | 1303.2506 | null | null | http://arxiv.org/abs/1303.2506v1 | 2013-03-11T13:06:49Z | 2013-03-11T13:06:49Z | Monte-Carlo utility estimates for Bayesian reinforcement learning | This paper introduces a set of algorithms for Monte-Carlo Bayesian
reinforcement learning. Firstly, Monte-Carlo estimation of upper bounds on the
Bayes-optimal value function is employed to construct an optimistic policy.
Secondly, gradient-based algorithms for approximate upper and lower bounds are
introduced. Finally, we introduce a new class of gradient algorithms for
Bayesian Bellman error minimisation. We theoretically show that the gradient
methods are sound. Experimentally, we demonstrate the superiority of the upper
bound method in terms of reward obtained. However, we also show that the
Bayesian Bellman error method is a close second, despite its significant
computational simplicity.
| [
"['Christos Dimitrakakis']",
"Christos Dimitrakakis"
] |
cs.LG cs.GT | null | 1303.2643 | null | null | http://arxiv.org/pdf/1303.2643v1 | 2013-03-11T19:52:48Z | 2013-03-11T19:52:48Z | Revealing Cluster Structure of Graph by Path Following Replicator
Dynamic | In this paper, we propose a path following replicator dynamic, and
investigate its potentials in uncovering the underlying cluster structure of a
graph. The proposed dynamic is a generalization of the discrete replicator
dynamic. The replicator dynamic has been successfully used to extract dense
clusters of graphs; however, it is often sensitive to the degree distribution
of a graph, and usually biased by vertices with large degrees, and thus may fail
to detect the densest cluster. To overcome this problem, we introduce a dynamic
parameter, called path parameter, into the evolution process. The path
parameter can be interpreted as the maximal possible probability of a current
cluster containing a vertex, and it monotonically increases as evolution
process proceeds. By limiting the maximal probability, the phenomenon of some
vertices dominating the early stage of evolution process is suppressed, thus
making evolution process more robust. To solve the optimization problem with a
fixed path parameter, we propose an efficient fixed point algorithm. The time
complexity of the path following replicator dynamic is only linear in the
number of edges of a graph, thus it can analyze graphs with millions of
vertices and tens of millions of edges on a common PC in a few minutes.
Besides, it can be naturally generalized to hypergraphs and graphs with edges of
different orders. We apply it to four important problems: maximum clique
problem, densest k-subgraph problem, structure fitting, and discovery of
high-density regions. The extensive experimental results clearly demonstrate
its advantages in terms of robustness, scalability and flexibility.
| [
"Hairong Liu, Longin Jan Latecki, Shuicheng Yan",
"['Hairong Liu' 'Longin Jan Latecki' 'Shuicheng Yan']"
] |
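For reference, the discrete replicator dynamic that the record above generalizes fits in a few lines; the path parameter itself is not modeled in this sketch.

```python
import numpy as np

def replicator(A, iters=1000, tol=1e-10):
    """Discrete replicator dynamic x_i <- x_i (Ax)_i / (x'Ax) on a
    symmetric (weighted) adjacency matrix A; the support of the fixed
    point indicates a dense cluster."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)               # start at the barycenter
    for _ in range(iters):
        Ax = A @ x
        denom = x @ Ax
        if denom <= 0:
            break
        x_new = x * Ax / denom
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x
```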
cs.LG cs.IR | null | 1303.2651 | null | null | http://arxiv.org/pdf/1303.2651v2 | 2014-03-30T08:26:31Z | 2013-03-10T12:51:03Z | Hybrid Q-Learning Applied to Ubiquitous recommender system | Ubiquitous information access becomes more and more important nowadays and
research is aimed at making it adapted to users. Our work consists in applying
machine learning techniques in order to bring a solution to some of the
problems concerning the acceptance of the system by users. To achieve this, we
propose a fundamental shift in how we model the learning of a recommender
system: inspired by models of human reasoning developed in robotics, we combine
reinforcement learning and case-based reasoning to define a
recommendation process that uses these two approaches for generating
recommendations on different context dimensions (social, temporal, geographic).
We describe an implementation of the recommender system based on this
framework. We also present preliminary results from experiments with the system
and show how our approach increases the recommendation quality.
| [
"['Djallel Bouneffouf']",
"Djallel Bouneffouf"
] |
cs.SI cs.LG physics.soc-ph stat.ML | 10.1103/PhysRevE.88.042813 | 1303.2663 | null | null | http://arxiv.org/abs/1303.2663v2 | 2013-10-04T05:12:35Z | 2013-03-11T20:00:32Z | Spectral Clustering with Epidemic Diffusion | Spectral clustering is widely used to partition graphs into distinct modules
or communities. Existing methods for spectral clustering use the eigenvalues
and eigenvectors of the graph Laplacian, an operator that is closely associated
with random walks on graphs. We propose a new spectral partitioning method that
exploits the properties of epidemic diffusion. An epidemic is a dynamic process
that, unlike the random walk, simultaneously transitions to all the neighbors
of a given node. We show that the replicator, an operator describing epidemic
diffusion, is equivalent to the symmetric normalized Laplacian of a reweighted
graph with edges reweighted by the eigenvector centralities of their incident
nodes. Thus, more weight is given to edges connecting more central nodes. We
describe a method that partitions the nodes based on the componentwise ratio of
the replicator's second eigenvector to the first, and compare its performance
to traditional spectral clustering techniques on synthetic graphs with known
community structure. We demonstrate that the replicator gives preference to
dense, clique-like structures, enabling it to more effectively discover
communities that may be obscured by dense intercommunity linking.
| [
"Laura M. Smith, Kristina Lerman, Cristina Garcia-Cardona, Allon G.\n Percus, Rumi Ghosh",
"['Laura M. Smith' 'Kristina Lerman' 'Cristina Garcia-Cardona'\n 'Allon G. Percus' 'Rumi Ghosh']"
] |
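A hedged sketch of the partitioning recipe in the record above: reweight edges by the eigenvector centralities of their endpoints, form the symmetric normalized Laplacian of the reweighted graph, and split nodes by the componentwise ratio of the second eigenvector to the first. The sign-based split is a simplification of whatever sweep the paper may use.

```python
import numpy as np

def epidemic_partition(A):
    """Two-way split via the replicator/reweighted-Laplacian
    equivalence. A must be a symmetric (weighted) adjacency matrix."""
    vals, vecs = np.linalg.eigh(A)
    c = np.abs(vecs[:, -1])                  # eigenvector centralities
    W = A * np.outer(c, c)                   # reweight edges by endpoint centralities
    d = W.sum(axis=1)
    Dm = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(d)) - Dm @ W @ Dm         # symmetric normalized Laplacian
    _, U = np.linalg.eigh(L)                 # eigenvectors, ascending eigenvalues
    ratio = U[:, 1] / np.where(np.abs(U[:, 0]) > 1e-12, U[:, 0], 1e-12)
    return ratio > 0                         # componentwise second/first ratio
```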
cs.LG stat.AP | null | 1303.2739 | null | null | http://arxiv.org/pdf/1303.2739v1 | 2013-03-12T01:13:44Z | 2013-03-12T01:13:44Z | Machine Learning for Bioclimatic Modelling | Many machine learning (ML) approaches are widely used to generate bioclimatic
models for predicting the geographic range of organisms as a function of
climate. Applications such as predicting range shifts in organisms and the
range of invasive species influenced by climate change are important in
understanding the impact of climate change. However, the success of machine
learning-based approaches depends on a number of factors. While it can safely
be said that no particular ML technique is effective in all applications, and
the success of a technique depends predominantly on the application or the type
of problem, it is useful to understand their behavior to ensure an informed
choice of
techniques. This paper presents a comprehensive review of machine
learning-based bioclimatic model generation and analyses the factors
influencing success of such models. Considering the wide use of statistical
techniques, in our discussion we also include conventional statistical
techniques used in bioclimatic modelling.
| [
"Maumita Bhattacharya",
"['Maumita Bhattacharya']"
] |
cs.MA cs.LG | null | 1303.2789 | null | null | http://arxiv.org/pdf/1303.2789v1 | 2013-03-12T07:00:04Z | 2013-03-12T07:00:04Z | A Cooperative Q-learning Approach for Real-time Power Allocation in
Femtocell Networks | In this paper, we address the problem of distributed interference management
of cognitive femtocells that share the same frequency range with macrocells
(primary user) using distributed multi-agent Q-learning. We formulate and solve
three problems representing three different Q-learning algorithms: namely,
centralized, distributed and partially distributed power control using
Q-learning (CPC-Q, DPC-Q and PDPC-Q). CPC-Q, although not of practical interest,
characterizes the global optimum. Each of DPC-Q and PDPC-Q works in two
different learning paradigms: Independent (IL) and Cooperative (CL). The former
is considered the simplest form of applying Q-learning in multi-agent
scenarios, where all the femtocells learn independently. The latter is the
proposed scheme in which femtocells share partial information during the
learning process in order to strike a balance between practical relevance and
performance. In terms of performance, the simulation results showed that the CL
paradigm outperforms the IL paradigm and achieves an aggregate femtocells
capacity that is very close to the optimal one. For the practical relevance
issue, we evaluate the robustness and scalability of DPC-Q, in real time, by
deploying new femtocells in the system during the learning process, where we
showed that DPC-Q in the CL paradigm is scalable to a large number of femtocells
and more robust to network dynamics compared to the IL paradigm.
| [
"['Hussein Saad' 'Amr Mohamed' 'Tamer ElBatt']",
"Hussein Saad, Amr Mohamed and Tamer ElBatt"
] |
cs.LG cs.IT math.IT stat.ML | 10.1109/MSP.2013.2250352 | 1303.2823 | null | null | http://arxiv.org/abs/1303.2823v2 | 2013-09-27T11:07:52Z | 2013-03-12T10:16:29Z | Gaussian Processes for Nonlinear Signal Processing | Gaussian processes (GPs) are versatile tools that have been successfully
employed to solve nonlinear estimation problems in machine learning, but that
are rarely used in signal processing. In this tutorial, we present GPs for
regression as a natural nonlinear extension to optimal Wiener filtering. After
establishing their basic formulation, we discuss several important aspects and
extensions, including recursive and adaptive algorithms for dealing with
non-stationarity, low-complexity solutions, non-Gaussian noise models and
classification scenarios. Furthermore, we provide a selection of relevant
applications to wireless digital communications.
| [
"['Fernando Pérez-Cruz' 'Steven Van Vaerenbergh'\n 'Juan José Murillo-Fuentes' 'Miguel Lázaro-Gredilla' 'Ignacio Santamaria']",
"Fernando P\\'erez-Cruz, Steven Van Vaerenbergh, Juan Jos\\'e\n Murillo-Fuentes, Miguel L\\'azaro-Gredilla and Ignacio Santamaria"
] |
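A minimal sketch of the basic building block of the tutorial above: GP regression with an RBF kernel, returning posterior mean and variance at test inputs. Fixed hyperparameters are an assumption; the tutorial's recursive and adaptive variants are not shown.

```python
import numpy as np

def gp_regress(X, y, Xs, ell=1.0, sf=1.0, noise=0.1):
    """Plain GP regression posterior with a squared-exponential kernel.
    X: (n, d) training inputs, y: (n,) targets, Xs: (m, d) test inputs."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sf**2 * np.exp(-0.5 * d2 / ell**2)
    K = k(X, X) + noise**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)                       # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(Xs, X)
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = k(Xs, Xs).diagonal() + noise**2 - (v**2).sum(0)
    return mean, var
```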
cs.LG stat.ML | null | 1303.3055 | null | null | http://arxiv.org/pdf/1303.3055v1 | 2013-03-12T23:25:37Z | 2013-03-12T23:25:37Z | Online Learning in Markov Decision Processes with Adversarially Chosen
Transition Probability Distributions | We study the problem of learning Markov decision processes with finite state
and action spaces when the transition probability distributions and loss
functions are chosen adversarially and are allowed to change with time. We
introduce an algorithm whose regret with respect to any policy in a comparison
class grows as the square root of the number of rounds of the game, provided
the transition probabilities satisfy a uniform mixing condition. Our approach
is efficient as long as the comparison class is polynomial and we can compute
expectations over sample paths for each policy. Designing an efficient
algorithm with small regret for the general case remains an open problem.
| [
"['Yasin Abbasi-Yadkori' 'Peter L. Bartlett' 'Csaba Szepesvari']",
"Yasin Abbasi-Yadkori and Peter L. Bartlett and Csaba Szepesvari"
] |
cs.AI cs.LG stat.ML | null | 1303.3163 | null | null | http://arxiv.org/pdf/1303.3163v3 | 2013-06-13T01:04:03Z | 2013-03-13T14:06:21Z | A Greedy Approximation of Bayesian Reinforcement Learning with Probably
Optimistic Transition Model | Bayesian Reinforcement Learning (RL) is capable of not only incorporating
domain knowledge, but also solving the exploration-exploitation dilemma in a
natural way. As Bayesian RL is intractable except for special cases, previous
work has proposed several approximation methods. However, these methods are
usually too sensitive to parameter values, and finding an acceptable parameter
setting is practically impossible in many applications. In this paper, we
propose a new algorithm that greedily approximates Bayesian RL to achieve
robustness in parameter space. We show that for a desired learning behavior,
our proposed algorithm has a polynomial sample complexity that is lower than
those of existing algorithms. We also demonstrate that the proposed algorithm
naturally outperforms other existing algorithms when the prior distributions
are not significantly misleading. On the other hand, the proposed algorithm
cannot handle greatly misspecified priors as well as the other algorithms can.
This is a natural consequence of the fact that the proposed algorithm is
greedier than the other algorithms. Accordingly, we discuss a way to select an
appropriate algorithm for different tasks based on the algorithms' greediness.
We also introduce a new way of simplifying Bayesian planning, based on which
future work would be able to derive new algorithms.
| [
"Kenji Kawaguchi and Mauricio Araya",
"['Kenji Kawaguchi' 'Mauricio Araya']"
] |
cs.SY cs.CE cs.LG q-bio.MN | null | 1303.3183 | null | null | http://arxiv.org/pdf/1303.3183v2 | 2015-02-25T13:06:00Z | 2013-03-12T15:34:41Z | Toggling a Genetic Switch Using Reinforcement Learning | In this paper, we consider the problem of optimal exogenous control of gene
regulatory networks. Our approach consists in adapting an established
reinforcement learning algorithm called the fitted Q iteration. This algorithm
infers the control law directly from the measurements of the system's response
to external control inputs without the use of a mathematical model of the
system. The measurement data set can either be collected from wet-lab
experiments or artificially created by computer simulations of dynamical models
of the system. The algorithm is applicable to a wide range of biological
systems due to its ability to deal with nonlinear and stochastic system
dynamics. To illustrate the application of the algorithm to a gene regulatory
network, the regulation of the toggle switch system is considered. The control
objective of this problem is to drive the concentrations of two specific
proteins to a target region in the state space.
| [
"Aivar Sootla, Natalja Strelkowa, Damien Ernst, Mauricio Barahona,\n Guy-Bart Stan",
"['Aivar Sootla' 'Natalja Strelkowa' 'Damien Ernst' 'Mauricio Barahona'\n 'Guy-Bart Stan']"
] |
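A compact sketch of fitted Q iteration, the algorithm the record above adapts, run on a batch of one-step transitions; the tree-ensemble regressor and the discrete action set are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(S, A, R, S2, actions, gamma=0.9, iters=50):
    """Fitted Q iteration on transitions (s, a, r, s'). S, S2: (n, ds)
    states; A: (n, 1) actions taken; R: (n,) rewards; `actions` is the
    discrete action set to maximize over."""
    model = None
    for _ in range(iters):
        if model is None:
            target = R                    # first iteration: Q ~ immediate reward
        else:
            qs = np.stack([model.predict(np.hstack([S2, np.full((len(S2), 1), a)]))
                           for a in actions], axis=1)
            target = R + gamma * qs.max(axis=1)   # Bellman backup
        model = ExtraTreesRegressor(n_estimators=50).fit(np.hstack([S, A]), target)
    return model                          # Q(s, a) ~ model.predict([[*s, a]])
```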
cs.LG cs.IT math.IT stat.ML | null | 1303.3207 | null | null | http://arxiv.org/pdf/1303.3207v4 | 2015-03-04T14:30:21Z | 2013-03-13T16:22:03Z | Group-Sparse Model Selection: Hardness and Relaxations | Group-based sparsity models have proven instrumental in linear regression
problems for recovering signals from far fewer measurements than standard
compressive sensing. The main promise of these models is the recovery of
"interpretable" signals through the identification of their constituent groups.
In this paper, we establish a combinatorial framework for group-model selection
problems and highlight the underlying tractability issues. In particular, we
show that the group-model selection problem is equivalent to the well-known
NP-hard weighted maximum coverage problem (WMC). Leveraging a graph-based
understanding of group models, we describe group structures which enable
correct model selection in polynomial time via dynamic programming.
Furthermore, group structures that lead to totally unimodular constraints have
tractable discrete as well as convex relaxations. We also present a
generalization of the group-model that allows for within group sparsity, which
can be used to model hierarchical sparsity. Finally, we study the Pareto
frontier of group-sparse approximations for two tractable models, including the
tree sparsity model, and illustrate selection and computation trade-offs
between our framework and the existing convex relaxations.
| [
"Luca Baldassarre and Nirav Bhan and Volkan Cevher and Anastasios\n Kyrillidis and Siddhartha Satpathi",
"['Luca Baldassarre' 'Nirav Bhan' 'Volkan Cevher' 'Anastasios Kyrillidis'\n 'Siddhartha Satpathi']"
] |
cs.LG cs.CV stat.ML | null | 1303.3240 | null | null | http://arxiv.org/pdf/1303.3240v2 | 2014-11-14T15:33:29Z | 2013-03-13T18:18:14Z | A Unified Framework for Probabilistic Component Analysis | We present a unifying framework which reduces the construction of
probabilistic component analysis techniques to a mere selection of the latent
neighbourhood, thus providing an elegant and principled framework for creating
novel component analysis models as well as constructing probabilistic
equivalents of deterministic component analysis methods. Under our framework,
we unify many very popular and well-studied component analysis algorithms, such
as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA),
Locality Preserving Projections (LPP) and Slow Feature Analysis (SFA), some of
which have no probabilistic equivalents in the literature thus far. We first
define the Markov Random Fields (MRFs) which encapsulate the latent
connectivity of the aforementioned component analysis techniques; subsequently,
we show that the projection directions produced by all PCA, LDA, LPP and SFA
are also produced by the Maximum Likelihood (ML) solution of a single joint
probability density function, composed by selecting one of the defined MRF
priors while utilising a simple observation model. Furthermore, we propose
novel Expectation Maximization (EM) algorithms, exploiting the proposed joint
PDF, while we generalize the proposed methodologies to arbitrary connectivities
via parameterizable MRF products. Theoretical analysis and experiments on both
simulated and real world data show the usefulness of the proposed framework, by
deriving methods which well outperform state-of-the-art equivalents.
| [
"Mihalis A. Nicolaou, Stefanos Zafeiriou and Maja Pantic",
"['Mihalis A. Nicolaou' 'Stefanos Zafeiriou' 'Maja Pantic']"
] |
stat.ML cs.LG | 10.1073/pnas.1219097111 | 1303.3257 | null | null | http://arxiv.org/abs/1303.3257v3 | 2013-11-24T17:22:41Z | 2013-03-13T19:45:03Z | Ranking and combining multiple predictors without labeled data | In a broad range of classification and decision making problems, one is given
the advice or predictions of several classifiers, of unknown reliability, over
multiple questions or queries. This scenario is different from the standard
supervised setting, where each classifier accuracy can be assessed using
available labeled data, and raises two questions: given only the predictions of
several classifiers over a large set of unlabeled test data, is it possible to
a) reliably rank them; and b) construct a meta-classifier more accurate than
most classifiers in the ensemble? Here we present a novel spectral approach to
address these questions. First, assuming conditional independence between
classifiers, we show that the off-diagonal entries of their covariance matrix
correspond to a rank-one matrix. Moreover, the classifiers can be ranked using
the leading eigenvector of this covariance matrix, as its entries are
proportional to their balanced accuracies. Second, via a linear approximation
to the maximum likelihood estimator, we derive the Spectral Meta-Learner (SML),
a novel ensemble classifier whose weights are equal to this eigenvector's
entries. On both simulated and real data, SML typically achieves a higher
accuracy than most classifiers in the ensemble and can provide a better
starting point than majority voting, for estimating the maximum likelihood
solution. Furthermore, SML is robust to the presence of small malicious groups
of classifiers designed to veer the ensemble prediction away from the (unknown)
ground truth.
| [
"['Fabio Parisi' 'Francesco Strino' 'Boaz Nadler' 'Yuval Kluger']",
"Fabio Parisi, Francesco Strino, Boaz Nadler and Yuval Kluger"
] |
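The two-step recipe in the record above translates almost directly into code: rank classifiers by the leading eigenvector of the sample covariance of their predictions and use its entries as ensemble weights. Using the full covariance eigenvector, rather than the rank-one fit to the off-diagonal entries, is a simplification in this sketch.

```python
import numpy as np

def spectral_meta_learner(preds):
    """preds: (m, n) matrix of m classifiers' +/-1 predictions on n
    unlabeled items. Returns a ranking of the classifiers and the
    eigenvector-weighted ensemble prediction (SML)."""
    Q = np.cov(preds)                    # (m, m) sample covariance
    vals, vecs = np.linalg.eigh(Q)
    v = vecs[:, -1]                      # leading eigenvector
    v = v * np.sign(v.sum())             # sign fix: assume most beat chance
    ranking = np.argsort(-v)             # larger entry ~ higher balanced accuracy
    sml_pred = np.sign(v @ preds)        # weighted vote over classifiers
    return ranking, sml_pred
```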
cs.DC cs.DB cs.LG | null | 1303.3517 | null | null | http://arxiv.org/pdf/1303.3517v1 | 2013-03-13T04:24:12Z | 2013-03-13T04:24:12Z | Iterative MapReduce for Large Scale Machine Learning | Large datasets ("Big Data") are becoming ubiquitous because the potential
value in deriving insights from data, across a wide range of business and
scientific applications, is increasingly recognized. In particular, machine
learning - one of the foundational disciplines for data analysis, summarization
and inference - on Big Data has become routine at most organizations that
operate large clouds, usually based on systems such as Hadoop that support the
MapReduce programming paradigm. It is now widely recognized that while
MapReduce is highly scalable, it suffers from a critical weakness for machine
learning: it does not support iteration. Consequently, one has to program
around this limitation, leading to fragile, inefficient code. Further, reliance
on the programmer is inherently flawed in a multi-tenanted cloud environment,
since the programmer does not have visibility into the state of the system when
his or her program executes. Prior work has sought to address this problem by
either developing specialized systems aimed at stylized applications, or by
augmenting MapReduce with ad hoc support for saving state across iterations
(driven by an external loop). In this paper, we advocate support for looping as
a first-class construct, and propose an extension of the MapReduce programming
paradigm called {\em Iterative MapReduce}. We then develop an optimizer for a
class of Iterative MapReduce programs that cover most machine learning
techniques, provide theoretical justifications for the key optimization steps,
and empirically demonstrate that system-optimized programs for significant
machine learning tasks are competitive with state-of-the-art specialized
solutions.
| [
"['Joshua Rosen' 'Neoklis Polyzotis' 'Vinayak Borkar' 'Yingyi Bu'\n 'Michael J. Carey' 'Markus Weimer' 'Tyson Condie' 'Raghu Ramakrishnan']",
"Joshua Rosen, Neoklis Polyzotis, Vinayak Borkar, Yingyi Bu, Michael J.\n Carey, Markus Weimer, Tyson Condie, Raghu Ramakrishnan"
] |
cs.RO cs.CV cs.LG | null | 1303.3605 | null | null | http://arxiv.org/pdf/1303.3605v1 | 2013-03-14T20:51:29Z | 2013-03-14T20:51:29Z | A survey on sensing methods and feature extraction algorithms for SLAM
problem | This paper is a survey conducted for a larger project on designing a Visual SLAM
robot to generate a dense 3D map of an unknown, unstructured environment. Many
factors have to be considered while designing a SLAM robot. The sensing method of
the SLAM robot should be determined by considering the kind of environment to
be modeled. Similarly the type of environment determines the suitable feature
extraction method. This paper goes through the sensing methods used in some
recently published papers. The main objective of this survey is to conduct a
comparative study of current sensing methods and feature extraction algorithms
and to select the best ones for our work.
| [
"Adheen Ajay and D. Venkataraman",
"['Adheen Ajay' 'D. Venkataraman']"
] |
cs.DC cs.LG cs.PF | null | 1303.3632 | null | null | http://arxiv.org/pdf/1303.3632v1 | 2013-03-14T22:40:32Z | 2013-03-14T22:40:32Z | Statistical Regression to Predict Total Cumulative CPU Usage of
MapReduce Jobs | Recently, businesses have started using MapReduce as a popular computation
framework for processing large amounts of data, for applications such as spam
detection and various data mining tasks, in both public and private clouds. Two of the
challenging questions in such environments are (1) choosing suitable values for
MapReduce configuration parameters e.g., number of mappers, number of reducers,
and DFS block size, and (2) predicting the amount of resources that a user
should lease from the service provider. Currently, the tasks of both choosing
configuration parameters and estimating required resources are solely the
user's responsibility. In this paper, we present an approach to provision the
total CPU usage in clock cycles of jobs in a MapReduce environment. For a MapReduce
job, a profile of total CPU usage in clock cycles is built from the job's past
executions with different values of two configuration parameters, e.g., number
of mappers, and number of reducers. Then, a polynomial regression is used to
model the relation between these configuration parameters and total CPU usage
in clock cycles of the job. We also briefly study the influence of input data
scaling on measured total CPU usage in clock cycles. This derived model along
with the scaling result can then be used to provision the total CPU usage in
clock cycles of the same jobs with different input data size. We validate the
accuracy of our models using three realistic applications (WordCount, Exim
MainLog parsing, and TeraSort). Results show that the predicted total CPU usage
in clock cycles of the generated resource provisioning options deviates by less
than 8% from the measured total CPU usage in clock cycles in our 20-node
virtual Hadoop cluster.
| [
"Nikzad Babaii Rizvandi, Javid Taheri, Reza Moraveji, Albert Y. Zomaya",
"['Nikzad Babaii Rizvandi' 'Javid Taheri' 'Reza Moraveji'\n 'Albert Y. Zomaya']"
] |
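A minimal sketch of the modeling step described in the record above: an ordinary least-squares polynomial fit of total CPU clock cycles against the two configuration parameters. The degree and feature design are assumptions, not the paper's exact setup.

```python
import numpy as np

def fit_cpu_model(n_map, n_red, cpu_cycles, degree=2):
    """Fit total CPU clock cycles as a bivariate polynomial in
    (number of mappers, number of reducers); returns a predictor."""
    def design(m, r):
        m = np.atleast_1d(np.asarray(m, float))
        r = np.atleast_1d(np.asarray(r, float))
        return np.stack([m**i * r**j for i in range(degree + 1)
                         for j in range(degree + 1 - i)], axis=1)
    Phi = design(n_map, n_red)
    coef, *_ = np.linalg.lstsq(Phi, np.asarray(cpu_cycles, float), rcond=None)
    return lambda m, r: design(m, r) @ coef   # predict CPU for new configs
```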
stat.ML cs.LG | null | 1303.3664 | null | null | http://arxiv.org/pdf/1303.3664v2 | 2013-03-18T13:11:02Z | 2013-03-15T02:37:19Z | Topic Discovery through Data Dependent and Random Projections | We present algorithms for topic modeling based on the geometry of
cross-document word-frequency patterns. This perspective gains significance
under the so-called separability condition. This is a condition on the existence
of novel words that are unique to each topic. We present a suite of highly
efficient algorithms based on data-dependent and random projections of
word-frequency patterns to identify novel words and associated topics. We will
also discuss the statistical guarantees of the data-dependent projections
method based on two mild assumptions on the prior density of the topic-document
matrix. Our key insight here is that the maximum and minimum values of
cross-document frequency patterns projected along any direction are associated
with novel words. While our sample complexity bounds for topic recovery are
similar to the state-of-the-art, the computational complexity of our random
projection scheme scales linearly with the number of documents and the number
of words per document. We present several experiments on synthetic and
real-world datasets to demonstrate qualitative and quantitative merits of our
scheme.
| [
"Weicong Ding, Mohammad H. Rohban, Prakash Ishwar, Venkatesh Saligrama",
"['Weicong Ding' 'Mohammad H. Rohban' 'Prakash Ishwar'\n 'Venkatesh Saligrama']"
] |
cs.IT cs.LG math.IT math.ST stat.ML stat.TH | null | 1303.3716 | null | null | http://arxiv.org/pdf/1303.3716v1 | 2013-03-15T09:52:54Z | 2013-03-15T09:52:54Z | Subspace Clustering via Thresholding and Spectral Clustering | We consider the problem of clustering a set of high-dimensional data points
into sets of low-dimensional linear subspaces. The number of subspaces, their
dimensions, and their orientations are unknown. We propose a simple and
low-complexity clustering algorithm based on thresholding the correlations
between the data points followed by spectral clustering. A probabilistic
performance analysis shows that this algorithm succeeds even when the subspaces
intersect, and when the dimensions of the subspaces scale (up to a log-factor)
linearly in the ambient dimension. Moreover, we prove that the algorithm also
succeeds for data points that are subject to erasures with the number of
erasures scaling (up to a log-factor) linearly in the ambient dimension.
Finally, we propose a simple scheme that provably detects outliers.
| [
"['Reinhard Heckel' 'Helmut Bölcskei']",
"Reinhard Heckel and Helmut B\\\"olcskei"
] |
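A hedged sketch of the two-stage pipeline described in the record above: threshold the pairwise correlations to build a sparse affinity graph, then apply standard spectral clustering. The top-q neighbor rule is one simple thresholding choice, not necessarily the paper's exact one.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.cluster import KMeans

def threshold_spectral_cluster(X, n_clusters, q=5):
    """X: (n, d) data points as rows. Keep the q largest absolute
    correlations per point as edges, then spectrally cluster."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    C = np.abs(Xn @ Xn.T)                       # absolute correlations
    np.fill_diagonal(C, 0)
    A = np.zeros_like(C)
    for i in range(len(C)):                     # threshold: top-q neighbors
        nbrs = np.argsort(C[i])[-q:]
        A[i, nbrs] = C[i, nbrs]
    A = np.maximum(A, A.T)                      # symmetrize the affinity
    L = laplacian(A, normed=True)
    _, vecs = np.linalg.eigh(L)
    emb = vecs[:, :n_clusters]                  # spectral embedding
    return KMeans(n_clusters, n_init=10).fit_predict(emb)
```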
cs.LG | null | 1303.3754 | null | null | http://arxiv.org/pdf/1303.3754v1 | 2013-03-15T12:20:53Z | 2013-03-15T12:20:53Z | A Last-Step Regression Algorithm for Non-Stationary Online Learning | The goal of a learner in standard online learning is to maintain an average
loss close to the loss of the best-performing single function in some class. In
many real-world problems, such as rating or ranking items, there is no single
best target function during the runtime of the algorithm; instead, the best
(local) target function drifts over time. We develop a novel last-step
minmax optimal algorithm in the context of drift. We analyze the algorithm in the
worst-case regret framework and show that it maintains an average loss close to
that of the best slowly changing sequence of linear functions, as long as the
total amount of drift is sublinear. In some situations, our bound improves over
existing bounds, and additionally the algorithm suffers logarithmic regret when
there is no drift. We also build on the H_infinity filter and its bound, and
develop and analyze a second algorithm for the drifting setting. Synthetic
simulations demonstrate the advantages of our algorithms in a worst-case
constant drift setting.
| [
"Edward Moroshko, Koby Crammer",
"['Edward Moroshko' 'Koby Crammer']"
] |
cs.LG | null | 1303.3934 | null | null | http://arxiv.org/pdf/1303.3934v2 | 2015-10-06T21:12:52Z | 2013-03-16T00:49:56Z | A Quorum Sensing Inspired Algorithm for Dynamic Clustering | Quorum sensing is a decentralized biological process, through which a
community of cells with no global awareness coordinate their functional
behaviors based solely on cell-medium interactions and local decisions. This
paper draws inspiration from quorum sensing and colony competition to derive a
new algorithm for data clustering. The algorithm treats each data point as a
single cell, and uses knowledge of local connectivity to cluster cells into
multiple colonies simultaneously. It simulates auto-inducer secretion in quorum sensing
to tune the influence radius for each cell. At the same time, sparsely
distributed core cells spread their influences to form colonies, and
interactions between colonies eventually determine each cell's identity. The
algorithm has the flexibility to analyze not only static but also time-varying
data, which surpasses the capacity of many existing algorithms. Its stability
and convergence properties are established. The algorithm is tested on several
applications, including synthetic and real benchmark data sets, allele
clustering, community detection, and image segmentation. In particular, the
algorithm's distinctive capability to deal with time-varying data allows us to
test it on novel applications such as robotic swarm grouping and
switching model identification. We believe that the algorithm's promising
performance would stimulate many more exciting applications.
| [
"Feng Tan and Jean-Jacques Slotine",
"['Feng Tan' 'Jean-Jacques Slotine']"
] |
null | null | 1303.3987 | null | null | http://arxiv.org/pdf/1303.3987v1 | 2013-03-16T15:06:12Z | 2013-03-16T15:06:12Z | $l_{2,p}$ Matrix Norm and Its Application in Feature Selection | Recently, the $l_{2,1}$ matrix norm has been widely applied in many areas such
as computer vision, pattern recognition, and biological studies. As an
extension of the $l_1$ vector norm, the mixed $l_{2,1}$ matrix norm is often
used to find jointly sparse solutions. Moreover, an efficient iterative
algorithm has been designed to solve minimizations involving the
$l_{2,1}$-norm. Computational studies have shown that $l_p$-regularization
($0<p<1$) is sparser than $l_1$-regularization, but the extension to matrix
norms has seldom been considered. This paper presents a definition of the mixed
$l_{2,p}$ ($p \in (0, 1]$) matrix pseudo norm, which can be thought of as a
generalization of both the $l_p$ vector norm to matrices and the
$l_{2,1}$-norm to the nonconvex case ($0<p<1$). An efficient unified algorithm
is proposed to solve the induced $l_{2,p}$-norm ($p \in (0, 1]$) optimization
problems, and its convergence is demonstrated uniformly for all
$p \in (0, 1]$. Typical values of $p \in (0, 1]$ are applied to select features
in computational biology, and the experimental results show that some choices
of $0<p<1$ improve the sparsity pattern obtained with $p=1$. | [
"['Liping Wang' 'Songcan Chen']"
] |
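The pseudo norm defined in the record above is easy to compute directly; a one-function sketch:

```python
import numpy as np

def l2p_norm(W, p=0.5):
    """Mixed l_{2,p} matrix pseudo norm for 0 < p <= 1:
    ||W||_{2,p} = (sum_i ||W_i||_2^p)^(1/p), where W_i are rows.
    p = 1 recovers the familiar l_{2,1} norm for joint row sparsity."""
    row_norms = np.linalg.norm(W, axis=1)
    return float((row_norms ** p).sum() ** (1.0 / p))
```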
cs.LG | null | 1303.4015 | null | null | http://arxiv.org/pdf/1303.4015v2 | 2013-11-03T10:25:48Z | 2013-03-16T20:09:16Z | On multi-class learning through the minimization of the confusion matrix
norm | In imbalanced multi-class classification problems, the misclassification rate
as an error measure may not be a relevant choice. Several methods have been
developed where the performance measure retains richer information than the
mere misclassification rate: misclassification costs, ROC-based information,
etc. Following this idea of dealing with alternate measures of performance, we
propose to address imbalanced classification problems by using a new measure to
be optimized: the norm of the confusion matrix. Indeed, recent results show
that using the norm of the confusion matrix as an error measure can be quite
interesting due to the fine-grained information contained in the matrix,
especially in the case of imbalanced classes. Our first contribution then
consists in showing that optimizing a criterion based on the confusion matrix
gives rise to a common background for cost-sensitive methods aimed at dealing
with imbalanced-class learning problems. As our second contribution, we
propose an extension of a recent multi-class boosting method --- namely
AdaBoost.MM --- to the imbalanced class problem, by greedily minimizing the
empirical norm of the confusion matrix. A theoretical analysis of the
properties of the proposed method is presented, while experimental results
illustrate the behavior of the algorithm and show the relevancy of the approach
compared to other methods.
| [
"['Sokol Koço' 'Cécile Capponi']",
"Sokol Ko\\c{c}o (LIF), C\\'ecile Capponi (LIF)"
] |
cs.LG | null | 1303.4169 | null | null | http://arxiv.org/pdf/1303.4169v1 | 2013-03-18T07:14:15Z | 2013-03-18T07:14:15Z | Markov Chain Monte Carlo for Arrangement of Hyperplanes in
Locality-Sensitive Hashing | Since Hamming distances can be calculated by bitwise computations, they can
be calculated with less computational load than L2 distances. Similarity
searches can therefore be performed faster in Hamming distance space. The
elements of Hamming distance space are bit strings. On the other hand, the
arrangement of hyperplanes induces the transformation from the feature vectors
into feature bit strings. This transformation method is a type of
locality-sensitive hashing that has been attracting attention as a way of
performing approximate similarity searches at high speed. Supervised learning
of hyperplane arrangements allows us to obtain a method that transforms feature
vectors into feature bit strings reflecting the information of labels attached
to the higher-dimensional feature vectors. In this paper, we propose a supervised
learning method for hyperplane arrangements in feature space that uses a Markov
chain Monte Carlo (MCMC) method. We consider the probability density functions
used during learning, and evaluate their performance. We also consider the
sampling method for learning data pairs needed in learning, and we evaluate its
performance. We confirm that the accuracy of this learning method when using a
suitable probability density function and sampling method is greater than the
accuracy of existing learning methods.
| [
"Yui Noma, Makiko Konoshima",
"['Yui Noma' 'Makiko Konoshima']"
] |
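For context, the unlearned baseline that the record above improves on: random hyperplane hashing that maps feature vectors to bit strings, with Hamming distance computed bitwise. The MCMC learning of the arrangement is not shown here.

```python
import numpy as np

def hyperplane_hash(X, n_bits=64, seed=0):
    """Map feature vectors (rows of X) to bit strings via a random
    hyperplane arrangement: one bit per hyperplane, set by the side
    of the hyperplane the vector falls on."""
    rng = np.random.default_rng(seed)
    H = rng.standard_normal((X.shape[1], n_bits))   # hyperplane normals
    return (X @ H > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two bit strings of equal length."""
    return int(np.count_nonzero(a != b))
```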
cs.LG stat.ML | null | 1303.4172 | null | null | http://arxiv.org/pdf/1303.4172v1 | 2013-03-18T07:33:29Z | 2013-03-18T07:33:29Z | Margins, Shrinkage, and Boosting | This manuscript shows that AdaBoost and its immediate variants can produce
approximate maximum margin classifiers simply by scaling step size choices with
a fixed small constant. In this way, when the unscaled step size is an optimal
choice, these results provide guarantees for Friedman's empirically successful
"shrinkage" procedure for gradient boosting (Friedman, 2000). Guarantees are
also provided for a variety of other step sizes, affirming the intuition that
increasingly regularized line searches provide improved margin guarantees. The
results hold for the exponential loss and similar losses, most notably the
logistic loss.
| [
"['Matus Telgarsky']",
"Matus Telgarsky"
] |
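The modification analyzed in the record above is a one-line change to AdaBoost: multiply each step size by a fixed shrinkage constant. A sketch with decision stumps as weak learners (an illustrative choice); labels are assumed to be in {-1, +1}.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_shrunk(X, y, rounds=200, nu=0.1):
    """AdaBoost with every step size scaled by shrinkage constant nu."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    learners, alphas = [], []
    for _ in range(rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = w[pred != y].sum()
        if err >= 0.5 or err <= 0:
            break
        alpha = nu * 0.5 * np.log((1 - err) / err)   # shrunken step size
        w *= np.exp(-alpha * y * pred)               # reweight examples
        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, np.array(alphas)
```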
cs.LG cs.NA | null | 1303.4207 | null | null | http://arxiv.org/pdf/1303.4207v7 | 2013-10-01T06:31:11Z | 2013-03-18T11:17:55Z | Improving CUR Matrix Decomposition and the Nystr\"{o}m Approximation via
Adaptive Sampling | The CUR matrix decomposition and the Nystr\"{o}m approximation are two
important low-rank matrix approximation techniques. The Nystr\"{o}m method
approximates a symmetric positive semidefinite matrix in terms of a small
number of its columns, while CUR approximates an arbitrary data matrix by a
small number of its columns and rows. Thus, CUR decomposition can be regarded
as an extension of the Nystr\"{o}m approximation.
In this paper we establish a more general error bound for the adaptive
column/row sampling algorithm, based on which we propose more accurate CUR and
Nystr\"{o}m algorithms with expected relative-error bounds. The proposed CUR
and Nystr\"{o}m algorithms also have low time complexity and can avoid
maintaining the whole data matrix in RAM. In addition, we give theoretical
analysis for the lower error bounds of the standard Nystr\"{o}m method and the
ensemble Nystr\"{o}m method. The main theoretical results established in this
paper are novel, and our analysis makes no special assumption on the data
matrices.
| [
"['Shusen Wang' 'Zhihua Zhang']",
"Shusen Wang, Zhihua Zhang"
] |
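For reference, the standard Nystrom approximation that the paper's adaptive sampling feeds into; how the landmark columns are chosen is precisely the paper's topic and is left outside this sketch.

```python
import numpy as np

def nystrom_approx(K, idx):
    """Standard Nystrom approximation of an SPSD matrix K from the
    columns indexed by idx: K ~ C W^+ C^T with C = K[:, idx] and
    W = K[idx, idx]. In practice only C and W are ever formed,
    not the full K."""
    C = K[:, idx]
    W = K[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T
```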
cs.LG cs.NA stat.CO stat.ML | null | 1303.4434 | null | null | http://arxiv.org/pdf/1303.4434v1 | 2013-03-18T21:41:53Z | 2013-03-18T21:41:53Z | A General Iterative Shrinkage and Thresholding Algorithm for Non-convex
Regularized Optimization Problems | Non-convex sparsity-inducing penalties have recently received considerable
attention in sparse learning. Recent theoretical investigations have
demonstrated their superiority over the convex counterparts in several sparse
learning settings. However, solving the non-convex optimization problems
associated with non-convex penalties remains a big challenge. A commonly used
approach is the Multi-Stage (MS) convex relaxation (or DC programming), which
relaxes the original non-convex problem to a sequence of convex problems. This
approach is usually not very practical for large-scale problems because its
computational cost is a multiple of solving a single convex problem. In this
paper, we propose a General Iterative Shrinkage and Thresholding (GIST)
algorithm to solve the nonconvex optimization problem for a large class of
non-convex penalties. The GIST algorithm iteratively solves a proximal operator
problem, which in turn has a closed-form solution for many commonly used
penalties. At each outer iteration of the algorithm, we use a line search
initialized by the Barzilai-Borwein (BB) rule that allows finding an
appropriate step size quickly. The paper also presents a detailed convergence
analysis of the GIST algorithm. The efficiency of the proposed algorithm is
demonstrated by extensive experiments on large-scale data sets.
| [
"['Pinghua Gong' 'Changshui Zhang' 'Zhaosong Lu' 'Jianhua Huang'\n 'Jieping Ye']",
"Pinghua Gong, Changshui Zhang, Zhaosong Lu, Jianhua Huang, Jieping Ye"
] |
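A hedged sketch of a GIST-style iteration specialized to the l1 penalty, whose proximal operator is soft-thresholding; a crude backtracking line search stands in for the Barzilai-Borwein step initialization the paper uses.

```python
import numpy as np

def gist_l1(f, grad_f, x0, lam=0.1, iters=200):
    """Proximal iteration min f(x) + lam*||x||_1 with step size 1/t."""
    x, t = x0.astype(float).copy(), 1.0
    for _ in range(iters):
        g = grad_f(x)
        while True:                         # grow t until the quadratic bound holds
            z = x - g / t
            x_new = np.sign(z) * np.maximum(np.abs(z) - lam / t, 0.0)  # prox step
            dx = x_new - x
            if f(x_new) <= f(x) + g @ dx + 0.5 * t * (dx @ dx):
                break
            t *= 2.0
        x = x_new
        t = max(t / 2.0, 1e-8)              # let the step size grow back
    return x
```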
cs.LG cs.GT | null | 1303.4638 | null | null | http://arxiv.org/pdf/1303.4638v1 | 2013-03-13T10:22:09Z | 2013-03-13T10:22:09Z | On Improving Energy Efficiency within Green Femtocell Networks: A
Hierarchical Reinforcement Learning Approach | One of the efficient solutions of improving coverage and increasing capacity
in cellular networks is the deployment of femtocells. As the cellular networks
are becoming more complex, energy consumption of whole network infrastructure
is becoming important in terms of both operational costs and environmental
impacts. This paper investigates energy efficiency of two-tier femtocell
networks through combining game theory and stochastic learning. With the
Stackelberg game formulation, a hierarchical reinforcement learning framework
is applied for studying the joint expected utility maximization of macrocells
and femtocells subject to the minimum signal-to-interference-plus-noise-ratio
requirements. In the learning procedure, the macrocells act as leaders and the
femtocells are followers. At each time step, the leaders commit to dynamic
strategies based on the best responses of the followers, while the followers
compete against each other with no further information but the leaders'
transmission parameters. In this paper, we propose two reinforcement learning
based intelligent algorithms to schedule each cell's stochastic power levels.
Numerical experiments are presented to validate the investigations. The results
show that the two learning algorithms substantially improve the energy
efficiency of the femtocell networks.
| [
"Xianfu Chen, Honggang Zhang, Tao Chen, Mika Lasanen, and Jacques\n Palicot",
"['Xianfu Chen' 'Honggang Zhang' 'Tao Chen' 'Mika Lasanen'\n 'Jacques Palicot']"
] |
cs.LG | null | 1303.4664 | null | null | http://arxiv.org/pdf/1303.4664v1 | 2013-03-19T17:00:22Z | 2013-03-19T17:00:22Z | Large-Scale Learning with Less RAM via Randomization | We reduce the memory footprint of popular large-scale online learning methods
by projecting our weight vector onto a coarse discrete set using randomized
rounding. Compared to standard 32-bit float encodings, this reduces RAM usage
by more than 50% during training and by up to 95% when making predictions from
a fixed model, with almost no loss in accuracy. We also show that randomized
counting can be used to implement per-coordinate learning rates, improving
model quality with little additional RAM. We prove these memory-saving methods
achieve regret guarantees similar to their exact variants. Empirical evaluation
confirms excellent performance, dominating standard approaches across memory
versus accuracy tradeoffs.
| [
"Daniel Golovin, D. Sculley, H. Brendan McMahan, Michael Young",
"['Daniel Golovin' 'D. Sculley' 'H. Brendan McMahan' 'Michael Young']"
] |
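The core trick above, projecting weights onto a coarse grid with unbiased randomized rounding, fits in a few lines; the grid resolution is an illustrative assumption.

```python
import numpy as np

def randomized_round(w, step=1.0 / 256, seed=0):
    """Unbiased randomized rounding of weights onto a grid of spacing
    `step`: round up with probability equal to the fractional position
    between the two neighboring grid points, so E[output] == w."""
    rng = np.random.default_rng(seed)
    scaled = np.asarray(w, float) / step
    lo = np.floor(scaled)
    round_up = rng.random(scaled.shape) < (scaled - lo)
    return (lo + round_up) * step
```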
math.NA cs.LG stat.ML | null | 1303.4694 | null | null | http://arxiv.org/pdf/1303.4694v2 | 2013-09-20T20:22:33Z | 2013-03-12T04:33:14Z | Recovering Non-negative and Combined Sparse Representations | The non-negative solution to an underdetermined linear system can be uniquely
recovered in some cases, even without imposing any additional sparsity constraints.
In this paper, we derive conditions under which a unique non-negative solution
for such a system can exist, based on the theory of polytopes. Furthermore, we
develop the paradigm of combined sparse representations, where only a part of
the coefficient vector is constrained to be non-negative, and the rest is
unconstrained (general). We analyze the recovery of the unique, sparsest
solution, for combined representations, under three different cases of
coefficient support knowledge: (a) the non-zero supports of non-negative and
general coefficients are known, (b) the non-zero support of general
coefficients alone is known, and (c) both the non-zero supports are unknown.
For case (c), we propose the combined orthogonal matching pursuit algorithm for
coefficient recovery and derive the deterministic sparsity threshold under
which recovery of the unique, sparsest coefficient vector is possible. We
quantify the order complexity of the algorithms, and examine their performance
in exact and approximate recovery of coefficients under various conditions of
noise. Furthermore, we also obtain their empirical phase transition
characteristics. We show that the basis pursuit algorithm, with partial
non-negative constraints, and the proposed greedy algorithm perform better in
recovering the unique sparse representation when compared to their
unconstrained counterparts. Finally, we demonstrate the utility of the proposed
methods in recovering images corrupted by saturation noise.
| [
"['Karthikeyan Natesan Ramamurthy' 'Jayaraman J. Thiagarajan'\n 'Andreas Spanias']",
"Karthikeyan Natesan Ramamurthy, Jayaraman J. Thiagarajan and Andreas\n Spanias"
] |
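For reference, plain orthogonal matching pursuit, the greedy routine the paper's combined variant builds on; the partial non-negativity constraint of combined OMP is omitted in this sketch.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: greedily pick the atom (column of
    D) most correlated with the residual, then refit all selected
    coefficients by least squares. Returns a k-sparse coefficient
    vector approximating x ~ D @ coef."""
    residual, support = x.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef          # orthogonalized residual
    out = np.zeros(D.shape[1])
    out[support] = coef
    return out
```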
stat.ML cs.LG | 10.1109/TSP.2014.2350956 | 1303.4756 | null | null | http://arxiv.org/abs/1303.4756v6 | 2014-08-13T16:16:16Z | 2013-03-19T20:34:47Z | Marginal Likelihoods for Distributed Parameter Estimation of Gaussian
Graphical Models | We consider distributed estimation of the inverse covariance matrix, also
called the concentration or precision matrix, in Gaussian graphical models.
Traditional centralized estimation often requires global inference of the
covariance matrix, which can be computationally intensive in large dimensions.
Approximate inference based on message-passing algorithms, on the other hand,
can lead to unstable and biased estimation in loopy graphical models. In this
paper, we propose a general framework for distributed estimation based on a
maximum marginal likelihood (MML) approach. This approach computes local
parameter estimates by maximizing marginal likelihoods defined with respect to
data collected from local neighborhoods. Due to the non-convexity of the MML
problem, we introduce and solve a convex relaxation. The local estimates are
then combined into a global estimate without the need for iterative
message-passing between neighborhoods. The proposed algorithm is naturally
parallelizable and computationally efficient, thereby making it suitable for
high-dimensional problems. In the classical regime where the number of
variables $p$ is fixed and the number of samples $T$ increases to infinity, the
proposed estimator is shown to be asymptotically consistent and to improve
monotonically as the local neighborhood size increases. In the high-dimensional
scaling regime where both $p$ and $T$ increase to infinity, the convergence
rate to the true parameters is derived and is seen to be comparable to
centralized maximum likelihood estimation. Extensive numerical experiments
demonstrate the improved performance of the two-hop version of the proposed
estimator, which suffices to almost close the gap to the centralized maximum
likelihood estimator at a reduced computational cost.
| [
"Zhaoshi Meng, Dennis Wei, Ami Wiesel, Alfred O. Hero III",
"['Zhaoshi Meng' 'Dennis Wei' 'Ami Wiesel' 'Alfred O. Hero III']"
] |
cs.LG math.NA stat.ML | null | 1303.4778 | null | null | http://arxiv.org/pdf/1303.4778v2 | 2013-07-03T19:07:34Z | 2013-03-19T22:17:20Z | Greedy Feature Selection for Subspace Clustering | Unions of subspaces provide a powerful generalization to linear subspace
models for collections of high-dimensional data. To learn a union of subspaces
from a collection of data, sets of signals in the collection that belong to the
same subspace must be identified in order to obtain accurate estimates of the
subspace structures present in the data. Recently, sparse recovery methods have
been shown to provide a provable and robust strategy for exact feature
selection (EFS)--recovering subsets of points from the ensemble that live in
the same subspace. In parallel with recent studies of EFS with L1-minimization,
in this paper, we develop sufficient conditions for EFS with a greedy method
for sparse signal recovery known as orthogonal matching pursuit (OMP).
Following our analysis, we provide an empirical study of feature selection
strategies for signals living on unions of subspaces and characterize the gap
between sparse recovery methods and nearest neighbor (NN)-based approaches. In
particular, we demonstrate that sparse recovery methods provide significant
advantages over NN methods and the gap between the two approaches is
particularly pronounced when the sampling of subspaces in the dataset is
sparse. Our results suggest that OMP may be employed to reliably recover exact
feature sets in a number of regimes where NN approaches fail to reveal the
subspace membership of points in the ensemble.
| [
"Eva L. Dyer, Aswin C. Sankaranarayanan, Richard G. Baraniuk",
"['Eva L. Dyer' 'Aswin C. Sankaranarayanan' 'Richard G. Baraniuk']"
] |
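A toy numpy sketch of the greedy selection step analyzed above (not the authors' code): each point is sparsely coded by OMP over the remaining points, and the recovered support serves as the exact-feature-selection candidate set.

```python
import numpy as np

def omp_neighbors(X, i, k):
    """Select k candidate same-subspace neighbors of column X[:, i] via OMP."""
    y = X[:, i]
    D = np.delete(X, i, axis=1)                  # dictionary: all other points
    D = D / np.linalg.norm(D, axis=0)            # unit-norm atoms
    resid, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ resid)))  # most correlated atom
        support.append(j)
        A = D[:, support]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef                     # re-fit, update residual
    idx = np.delete(np.arange(X.shape[1]), i)
    return idx[support]                          # map back to original indices
```

Running this for every point and symmetrizing the resulting affinity gives the input for spectral clustering, the usual final step in sparse subspace clustering pipelines.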
stat.ML cs.LG math.OC | null | 1303.5145 | null | null | http://arxiv.org/pdf/1303.5145v4 | 2014-01-22T21:30:33Z | 2013-03-21T02:10:10Z | Node-Based Learning of Multiple Gaussian Graphical Models | We consider the problem of estimating high-dimensional Gaussian graphical
models corresponding to a single set of variables under several distinct
conditions. This problem is motivated by the task of recovering transcriptional
regulatory networks on the basis of gene expression data containing
heterogeneous samples, such as different disease states, multiple species, or
different developmental stages. We assume that most aspects of the conditional
dependence networks are shared, but that there are some structured differences
between them. Rather than assuming that similarities and differences between
networks are driven by individual edges, we take a node-based approach, which
in many cases provides a more intuitive interpretation of the network
differences. We consider estimation under two distinct assumptions: (1)
differences between the K networks are due to individual nodes that are
perturbed across conditions, or (2) similarities among the K networks are due
to the presence of common hub nodes that are shared across all K networks.
Using a row-column overlap norm penalty function, we formulate two convex
optimization problems that correspond to these two assumptions. We solve these
problems using an alternating direction method of multipliers algorithm, and we
derive a set of necessary and sufficient conditions that allows us to decompose
the problem into independent subproblems so that our algorithm can be scaled to
high-dimensional settings. Our proposal is illustrated on synthetic data, a
webpage data set, and a brain cancer gene expression data set.
| [
"Karthik Mohan, Palma London, Maryam Fazel, Daniela Witten, Su-In Lee",
"['Karthik Mohan' 'Palma London' 'Maryam Fazel' 'Daniela Witten'\n 'Su-In Lee']"
] |
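One plausible form of the row-column overlap norm described above (a hedged reconstruction from the abstract, not necessarily the paper's exact definition): for a $p \times p$ matrix argument $A$,

$$\Omega(A) \;=\; \min_{V \,:\, V + V^{T} = A} \;\sum_{j=1}^{p} \big\lVert V_{j} \big\rVert_{2},$$

where $V_j$ is the $j$-th column of $V$. Applied to $\Theta_1 - \Theta_2$ (perturbed-node assumption) or to each $\Theta_k$ (hub assumption), the group penalty drives most columns of $V$ to zero, so the surviving nonzero columns single out the perturbed or hub nodes.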
cs.CL cs.LG | null | 1303.5148 | null | null | http://arxiv.org/pdf/1303.5148v1 | 2013-03-21T02:56:43Z | 2013-03-21T02:56:43Z | Estimating Confusions in the ASR Channel for Improved Topic-based
Language Model Adaptation | Human language is a combination of elemental languages/domains/styles that
change across and sometimes within discourses. Language models, which play a
crucial role in speech recognizers and machine translation systems, are
particularly sensitive to such changes, unless some form of adaptation takes
place. One approach to speech language model adaptation is self-training, in
which a language model's parameters are tuned based on automatically
transcribed audio. However, transcription errors can misguide self-training,
particularly in challenging settings such as conversational speech. In this
work, we propose a model that considers the confusions (errors) of the ASR
channel. By modeling the likely confusions in the ASR output instead of using
just the 1-best, we improve self-training efficacy by obtaining a more reliable
reference transcription estimate. We demonstrate improved topic-based language
modeling adaptation results over both 1-best and lattice self-training using
our ASR channel confusion estimates on telephone conversations.
| [
"['Damianos Karakos' 'Mark Dredze' 'Sanjeev Khudanpur']",
"Damianos Karakos and Mark Dredze and Sanjeev Khudanpur"
] |
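A tiny sketch of the soft-count idea underlying confusion-aware self-training, assuming ASR output in confusion-network form with word posteriors (a simplification; the paper additionally models the ASR channel's confusions rather than taking the posteriors at face value):

```python
from collections import Counter

def expected_unigram_counts(confusion_net):
    """Fractional unigram counts from a confusion network.

    confusion_net: list of slots, each a list of (word, posterior) pairs,
    e.g. [[("the", 0.9), ("a", 0.1)], [("cat", 0.7), ("hat", 0.3)]].
    Unlike 1-best self-training, every alternative contributes its
    posterior mass, yielding a softer reference-transcription estimate.
    """
    counts = Counter()
    for slot in confusion_net:
        for word, post in slot:
            counts[word] += post
    return counts
```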
cs.CV cs.LG stat.ML | null | 1303.5244 | null | null | http://arxiv.org/pdf/1303.5244v1 | 2013-03-21T12:40:05Z | 2013-03-21T12:40:05Z | Separable Dictionary Learning | Many techniques in computer vision, machine learning, and statistics rely on
the fact that a signal of interest admits a sparse representation over some
dictionary. Dictionaries are either available analytically, or can be learned
from a suitable training set. While analytic dictionaries make it possible to
capture the global structure of a signal and allow a fast implementation,
learned dictionaries often perform better in applications as they are more
adapted to the considered class of signals. In imagery, unfortunately, the
numerical burden of (i) learning a dictionary and (ii) employing the dictionary
for reconstruction tasks restricts them to relatively small image patches that
capture only local image information. The approach presented in this paper aims
at overcoming these drawbacks by enforcing a separable structure on the
dictionary throughout the learning process. On the one hand, this permits
larger patch sizes for the learning phase; on the other hand, the dictionary
can be applied efficiently in reconstruction tasks. The learning procedure is
based on optimizing over a product of spheres, which updates the dictionary as
a whole and thus enforces basic dictionary properties such as mutual coherence
explicitly during the learning procedure. In the special case where no
separable structure
during the learning procedure. In the special case where no separable structure
is enforced, our method competes with state-of-the-art dictionary learning
methods like K-SVD.
| [
"['Simon Hawe' 'Matthias Seibert' 'Martin Kleinsteuber']",
"Simon Hawe, Matthias Seibert, and Martin Kleinsteuber"
] |
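The separable structure can be made concrete with a small numpy sketch: a patch $X$ is coded as $X \approx A S B^T$ with sparse $S$, which is algebraically identical to coding $\mathrm{vec}(X)$ over the Kronecker dictionary $B \otimes A$ without ever forming it. The sizes below are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
h, w, ka, kb = 16, 16, 32, 32
A = rng.standard_normal((h, ka)); A /= np.linalg.norm(A, axis=0)  # row factor
B = rng.standard_normal((w, kb)); B /= np.linalg.norm(B, axis=0)  # column factor

S = np.zeros((ka, kb))                        # sparse coefficient matrix
S[rng.integers(0, ka, 5), rng.integers(0, kb, 5)] = rng.standard_normal(5)
X = A @ S @ B.T                               # synthesize a patch

# vec(A S B^T) == (B kron A) vec(S), so the small factors stand in for
# one large unstructured dictionary of size (h*w) x (ka*kb).
D_big = np.kron(B, A)
assert np.allclose(D_big @ S.flatten(order="F"), X.flatten(order="F"))
```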
cs.LG cs.AI cs.CV | null | 1303.5403 | null | null | http://arxiv.org/pdf/1303.5403v1 | 2013-03-13T12:52:37Z | 2013-03-13T12:52:37Z | An Entropy-based Learning Algorithm of Bayesian Conditional Trees | This article offers a modification of Chow and Liu's learning algorithm in
the context of handwritten digit recognition. The modified algorithm directs
the user to group digits into several classes consisting of digits that are
hard to distinguish, and then constructs an optimal conditional tree
representation for each class of digits instead of for each single digit as
done by Chow and Liu (1968). Advantages and extensions of the new method are
discussed. Related works of Wong and Wang (1977) and Wong and Poon (1989) which
offer a different entropy-based learning algorithm are shown to rest on
inappropriate assumptions.
| [
"['Dan Geiger']",
"Dan Geiger"
] |
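For reference, the Chow-Liu construction that the article modifies reduces to pairwise mutual information plus a maximum-weight spanning tree; a compact sketch for binary variables (digit-image pixels, say):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def chow_liu_tree(X):
    """Chow-Liu dependence tree for binary data X of shape (n_samples, n_vars)."""
    n, d = X.shape
    mi = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            for xi in (0, 1):
                for xj in (0, 1):
                    pij = np.mean((X[:, i] == xi) & (X[:, j] == xj))
                    pi, pj = np.mean(X[:, i] == xi), np.mean(X[:, j] == xj)
                    if pij > 0:
                        mi[i, j] += pij * np.log(pij / (pi * pj))
    # max-weight spanning tree == min spanning tree on negated weights
    tree = minimum_spanning_tree(-mi)
    return list(zip(*tree.nonzero()))   # edge list of the optimal tree
```

The modification described above would run this once per class of hard-to-distinguish digits rather than once per digit.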
cs.CV cs.LG stat.ML | null | 1303.5508 | null | null | http://arxiv.org/pdf/1303.5508v2 | 2013-03-28T19:21:33Z | 2013-03-22T03:24:10Z | Sparse Projections of Medical Images onto Manifolds | Manifold learning has been successfully applied to a variety of medical
imaging problems. Its use in real-time applications requires fast projection
onto the low-dimensional space. To this end, out-of-sample extensions are
applied by constructing an interpolation function that maps from the input
space to the low-dimensional manifold. Commonly used approaches such as the
Nystr\"{o}m extension and kernel ridge regression require using all training
points. We propose an interpolation function that only depends on a small
subset of the input training data. Consequently, in the testing phase each new
point only needs to be compared against a small number of input training data
in order to project the point onto the low-dimensional space. We interpret our
method as an out-of-sample extension that approximates kernel ridge regression.
Our method involves solving a simple convex optimization problem and has the
attractive property of guaranteeing an upper bound on the approximation error,
which is crucial for medical applications. Tuning this error bound controls the
sparsity of the resulting interpolation function. We illustrate our method in
two clinical applications that require fast mapping of input images onto a
low-dimensional space.
| [
"George H. Chen, Christian Wachinger, Polina Golland",
"['George H. Chen' 'Christian Wachinger' 'Polina Golland']"
] |
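A hedged sketch of the sparsity idea, with lasso standing in for the paper's error-bounded convex program (the actual method controls the approximation error to kernel ridge regression directly; here the trade-off is set by the hypothetical penalty `lam`):

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_oos_weights(K, Y, lam):
    """Sparse interpolation weights approximating kernel ridge regression.

    K: (n, n) kernel matrix over training images; Y: (n, d) low-dimensional
    training embedding. The L1 penalty zeroes most weights, so projecting a
    new image needs kernel evaluations against only a few training points.
    """
    W = np.column_stack([
        Lasso(alpha=lam, fit_intercept=False, max_iter=5000).fit(K, Y[:, c]).coef_
        for c in range(Y.shape[1])
    ])
    return W   # embed new x as k(x, training_points) @ W
```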
cs.SI cs.LG math.ST physics.soc-ph stat.ML stat.TH | 10.1109/TSP.2014.2336613 | 1303.5613 | null | null | http://arxiv.org/abs/1303.5613v1 | 2013-03-22T13:34:28Z | 2013-03-22T13:34:28Z | Network Detection Theory and Performance | Network detection is an important capability in many areas of applied
research in which data can be represented as a graph of entities and
relationships. Oftentimes the object of interest is a relatively small subgraph
in an enormous, potentially uninteresting background. This aspect characterizes
network detection as a "big data" problem. Graph partitioning and network
discovery have been major research areas over the last ten years, driven by
interest in internet search, cyber security, social networks, and criminal or
terrorist activities. The specific problem of network discovery is addressed as
a special case of graph partitioning in which membership in a small subgraph of
interest must be determined. Algebraic graph theory is used as the basis to
analyze and compare different network detection methods. A new Bayesian network
detection framework is introduced that partitions the graph based on prior
information and direct observations. The new approach, called space-time threat
propagation, is proved to maximize the probability of detection and is
therefore optimum in the Neyman-Pearson sense. This optimality criterion is
compared to spectral community detection approaches which divide the global
graph into subsets or communities with optimal connectivity properties. We also
explore a new generative stochastic model for covert networks and analyze the
detection performance of both classes of optimal detection techniques using
receiver operating characteristics.
| [
"['Steven T. Smith' 'Kenneth D. Senne' 'Scott Philips' 'Edward K. Kao'\n 'Garrett Bernstein']",
"Steven T. Smith, Kenneth D. Senne, Scott Philips, Edward K. Kao, and\n Garrett Bernstein"
] |
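For contrast with the threat-propagation detector, the spectral side discussed above can be illustrated in a few lines: partition by the sign of the Fiedler vector of the graph Laplacian (a standard spectral bisection, not the paper's algorithm):

```python
import numpy as np

def spectral_bisection(A):
    """Two-way split of a graph from its symmetric adjacency matrix A."""
    L = np.diag(A.sum(axis=1)) - A      # combinatorial Laplacian
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]                # eigenvector of 2nd-smallest eigenvalue
    return fiedler >= 0                 # sign pattern gives the two communities
```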
stat.ML cs.LG math.OC stat.AP | null | 1303.5685 | null | null | http://arxiv.org/pdf/1303.5685v2 | 2013-07-19T20:33:18Z | 2013-03-22T18:44:56Z | Sparse Factor Analysis for Learning and Content Analytics | We develop a new model and algorithms for machine learning-based learning
analytics, which estimate a learner's knowledge of the concepts underlying a
domain, and content analytics, which estimate the relationships among a
collection of questions and those concepts. Our model represents the
probability that a learner provides the correct response to a question in terms
of three factors: their understanding of a set of underlying concepts, the
concepts involved in each question, and each question's intrinsic difficulty.
We estimate these factors given the graded responses to a collection of
questions. The underlying estimation problem is ill-posed in general,
especially when only a subset of the questions are answered. The key
observation that enables a well-posed solution is the fact that typical
educational domains of interest involve only a small number of key concepts.
Leveraging this observation, we develop both a bi-convex maximum-likelihood and
a Bayesian solution to the resulting SPARse Factor Analysis (SPARFA) problem.
We also incorporate user-defined tags on questions to facilitate the
interpretability of the estimated factors. Experiments with synthetic and
real-world data demonstrate the efficacy of our approach. Finally, we make a
connection between SPARFA and noisy, binary-valued (1-bit) dictionary learning
that is of independent interest.
| [
"Andrew S. Lan, Andrew E. Waters, Christoph Studer and Richard G.\n Baraniuk",
"['Andrew S. Lan' 'Andrew E. Waters' 'Christoph Studer'\n 'Richard G. Baraniuk']"
] |
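The three factors in the abstract suggest a likelihood of roughly this form (a hedged reconstruction; the symbols are ours, not necessarily the paper's):

$$\Pr\big(Y_{i,j}=1\big) \;=\; \Phi\big(\mathbf{w}_i^{T}\mathbf{c}_j + \mu_i\big),$$

where $Y_{i,j}$ is learner $j$'s graded response to question $i$, $\mathbf{w}_i \geq 0$ is a sparse vector linking question $i$ to the few underlying concepts, $\mathbf{c}_j$ is learner $j$'s concept knowledge, $\mu_i$ captures the question's intrinsic difficulty (sign convention ours), and $\Phi$ is an inverse link such as the probit or logit function. Sparsity and nonnegativity of the $\mathbf{w}_i$ are what make the factorization well posed.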
cs.CV cs.LG cs.RO stat.ML | null | 1303.5913 | null | null | http://arxiv.org/pdf/1303.5913v1 | 2013-03-24T04:55:40Z | 2013-03-24T04:55:40Z | A Diffusion Process on Riemannian Manifold for Visual Tracking | Robust visual tracking for long video sequences is a research area that has
many important applications. The main challenges include how the target image
can be modeled and how this model can be updated. In this paper, we model the
target using a covariance descriptor, as this descriptor is robust to problems
such as pixel-pixel misalignment, pose and illumination changes, that commonly
occur in visual tracking. We model the changes in the template using a
generative process. We introduce a new dynamical model for the template update
using a random walk on the Riemannian manifold in which the covariance
descriptors lie. This is done in the log-transformed space of the manifold to
remove the constraints imposed inherently by positive semidefinite matrices.
Modeling template variations and pose kinematics together in the state space
enables us to jointly quantify the uncertainties relating to the kinematic
states and the template in a principled way. Finally, the sequential inference
of the posterior distribution of the kinematic states and the template is done
using a particle filter. Our results show that this principled approach is
robust to changes in illumination, pose and spatial affine transformations. In
the experiments, our method outperformed the current state-of-the-art
algorithm - the incremental Principal Component Analysis method, particularly
when a target underwent fast pose changes, and also maintained comparable
performance in stable target tracking cases.
| [
"['Marcus Chen' 'Cham Tat Jen' 'Pang Sze Kim' 'Alvina Goh']",
"Marcus Chen, Cham Tat Jen, Pang Sze Kim, Alvina Goh"
] |
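A minimal sketch of the log-space random-walk proposal on SPD matrices (assuming the standard matrix exp/log maps; the paper's full state-space model also couples this template dynamic with the kinematic states):

```python
import numpy as np
from scipy.linalg import expm, logm

def spd_random_walk(C, step, rng):
    """One random-walk step for a covariance template C.

    Perturbing log(C) with symmetric Gaussian noise and mapping back with
    expm keeps the proposal positive definite, which is exactly the
    constraint the log-transformed space removes.
    """
    d = C.shape[0]
    E = rng.standard_normal((d, d)) * step
    E = 0.5 * (E + E.T)                 # symmetric tangent-space noise
    return expm(logm(C) + E)

rng = np.random.default_rng(0)
C_next = spd_random_walk(np.eye(3), 0.1, rng)
assert np.all(np.linalg.eigvalsh(C_next) > 0)   # still positive definite
```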
stat.ML cs.LG | null | 1303.5976 | null | null | http://arxiv.org/pdf/1303.5976v1 | 2013-03-24T18:32:38Z | 2013-03-24T18:32:38Z | On Learnability, Complexity and Stability | We consider the fundamental question of learnability of a hypotheses class in
the supervised learning setting and in the general learning setting introduced
by Vladimir Vapnik. We survey classic results characterizing learnability in
terms of suitable notions of complexity, as well as more recent results that
establish the connection between learnability and stability of a learning
algorithm.
| [
"Silvia Villa, Lorenzo Rosasco and Tomaso Poggio",
"['Silvia Villa' 'Lorenzo Rosasco' 'Tomaso Poggio']"
] |
stat.ML cs.LG math.OC | null | 1303.5984 | null | null | http://arxiv.org/pdf/1303.5984v1 | 2013-03-24T19:56:49Z | 2013-03-24T19:56:49Z | Efficient Reinforcement Learning for High Dimensional Linear Quadratic
Systems | We study the problem of adaptive control of a high dimensional linear
quadratic (LQ) system. Previous work established the asymptotic convergence to
an optimal controller for various adaptive control schemes. More recently, for
the average cost LQ problem, a regret bound of ${O}(\sqrt{T})$ was shown, apart
form logarithmic factors. However, this bound scales exponentially with $p$,
the dimension of the state space. In this work we consider the case where the
matrices describing the dynamic of the LQ system are sparse and their
dimensions are large. We present an adaptive control scheme that achieves a
regret bound of ${O}(p \sqrt{T})$, apart from logarithmic factors. In
particular, our algorithm has an average cost of $(1+\eps)$ times the optimum
cost after $T = \polylog(p) O(1/\eps^2)$. This is in comparison to previous
work on the dense dynamics where the algorithm requires time that scales
exponentially with dimension in order to achieve regret of $\eps$ times the
optimal cost.
We believe that our result has prominent applications in the emerging area of
computational advertising, in particular targeted online advertising and
advertising in social networks.
| [
"Morteza Ibrahimi and Adel Javanmard and Benjamin Van Roy",
"['Morteza Ibrahimi' 'Adel Javanmard' 'Benjamin Van Roy']"
] |
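A high-level sketch of a certainty-equivalent variant of this recipe, under loud assumptions: the dynamics $x_{t+1} = A x_t + B u_t + w_t$ are fit by the Lasso to exploit the sparsity of $[A\ B]$, and the Riccati equation is solved for the estimate. This is only the skeleton; the paper's algorithm and its regret analysis involve considerably more care.

```python
import numpy as np
from scipy.linalg import solve_discrete_are
from sklearn.linear_model import Lasso

def sparse_ce_lqr(states, inputs, Q, R, lam):
    """states: (T+1, p) visited states; inputs: (T, m) applied controls."""
    Z = np.hstack([states[:-1], inputs])             # regressors [x_t, u_t]
    Theta = np.column_stack([
        Lasso(alpha=lam, fit_intercept=False).fit(Z, states[1:, k]).coef_
        for k in range(states.shape[1])
    ]).T                                             # estimated rows of [A B]
    p = states.shape[1]
    A_hat, B_hat = Theta[:, :p], Theta[:, p:]
    P = solve_discrete_are(A_hat, B_hat, Q, R)       # Riccati solution
    K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)
    return A_hat, B_hat, K                           # play u_t = -K @ x_t
```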
cs.LG cs.CV stat.ML | null | 1303.6001 | null | null | http://arxiv.org/pdf/1303.6001v1 | 2013-03-24T22:33:15Z | 2013-03-24T22:33:15Z | Generalizing k-means for an arbitrary distance matrix | The original k-means clustering method works only if the exact vectors
representing the data points are known. Calculating distances from the
centroids therefore requires vector operations, since the average of abstract
data points is undefined. Existing algorithms can be extended to those cases when
the sole input is the distance matrix, and the exact representing vectors are
unknown. This extension may be named relational k-means after a notation for a
similar algorithm invented for fuzzy clustering. A method is then proposed for
generalizing k-means for scenarios when the data points have absolutely no
connection with a Euclidean space.
| [
"['Balázs Szalkai']",
"Bal\\'azs Szalkai"
] |
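A compact sketch of relational k-means as described above, using the standard identity that recovers point-to-centroid distances from a squared-distance matrix alone (exact when the matrix comes from squared Euclidean distances; for arbitrary dissimilarities it is the natural generalization):

```python
import numpy as np

def relational_kmeans(D2, k, n_iter=100, seed=0):
    """Cluster from a squared-distance matrix D2 with no coordinates.

    Uses: d^2(x, centroid(C)) = mean_{y in C} D2[x, y]
                                - 0.5 * mean_{y, z in C} D2[y, z].
    """
    n = D2.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, n)
    for _ in range(n_iter):
        dist = np.empty((n, k))
        for c in range(k):
            idx = np.where(labels == c)[0]
            if len(idx) == 0:
                dist[:, c] = np.inf          # empty cluster never wins
                continue
            within = D2[np.ix_(idx, idx)].mean()
            dist[:, c] = D2[:, idx].mean(axis=1) - 0.5 * within
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):
            break                            # converged
        labels = new
    return labels
```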
cs.LG stat.ML | null | 1303.6086 | null | null | http://arxiv.org/pdf/1303.6086v1 | 2013-03-25T11:09:08Z | 2013-03-25T11:09:08Z | On Sparsity Inducing Regularization Methods for Machine Learning | During the past years there has been an explosion of interest in learning
methods based on sparsity regularization. In this paper, we discuss a general
class of such methods, in which the regularizer can be expressed as the
composition of a convex function $\omega$ with a linear function. This setting
includes several methods such as the group Lasso, the Fused Lasso, multi-task
learning and many more. We present a general approach for solving
regularization problems of this kind, under the assumption that the proximity
operator of the function $\omega$ is available. Furthermore, we comment on the
application of this approach to support vector machines, a technique pioneered
by the groundbreaking work of Vladimir Vapnik.
| [
"Andreas Argyriou, Luca Baldassarre, Charles A. Micchelli, Massimiliano\n Pontil",
"['Andreas Argyriou' 'Luca Baldassarre' 'Charles A. Micchelli'\n 'Massimiliano Pontil']"
] |
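The general recipe, assuming only the proximity operator of $\omega$ as the abstract states, is forward-backward splitting; a minimal sketch with the $\ell_1$ norm as the running example:

```python
import numpy as np

def proximal_gradient(grad_f, prox_omega, x0, step, lam, n_iter=500):
    """Solve min_x f(x) + lam * omega(x) given grad f and prox of omega."""
    x = x0.copy()
    for _ in range(n_iter):
        x = prox_omega(x - step * grad_f(x), lam * step)  # gradient step, then prox
    return x

# Example: Lasso, f(x) = 0.5 * ||Ax - b||^2, omega = l1 norm,
# whose prox is soft-thresholding.
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, L = Lipschitz const of grad f
x = proximal_gradient(lambda x: A.T @ (A @ x - b), soft,
                      np.zeros(20), step, lam=0.5)
```

Swapping in the prox of the group Lasso or Fused Lasso penalty recovers the other instances mentioned above.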
math.ST cs.LG math.OC stat.TH | null | 1303.6149 | null | null | http://arxiv.org/pdf/1303.6149v3 | 2014-03-16T06:25:08Z | 2013-03-25T14:53:33Z | Adaptivity of averaged stochastic gradient descent to local strong
convexity for logistic regression | In this paper, we consider supervised learning problems such as logistic
regression and study the stochastic gradient method with averaging, in the
usual stochastic approximation setting where observations are used only once.
We show that after $N$ iterations, with a constant step-size proportional to
$1/(R^2 \sqrt{N})$ where $N$ is the number of observations and $R$ is the maximum
norm of the observations, the convergence rate is always of order
$O(1/\sqrt{N})$, and improves to $O(R^2 / (\mu N))$ where $\mu$ is the lowest
eigenvalue of the Hessian at the global optimum (when this eigenvalue is
greater than $R^2/\sqrt{N}$). Since $\mu$ does not need to be known in advance,
this shows that averaged stochastic gradient is adaptive to \emph{unknown
local} strong convexity of the objective function. Our proof relies on the
generalized self-concordance properties of the logistic loss and thus extends
to all generalized linear models with uniformly bounded features.
| [
"Francis Bach (INRIA Paris - Rocquencourt, LIENS)",
"['Francis Bach']"
] |
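A single-pass sketch of the scheme (the proportionality constant in the step size is an arbitrary illustrative choice, not the paper's):

```python
import numpy as np

def averaged_sgd_logistic(X, y, R):
    """Constant-step averaged SGD for logistic regression; y in {-1, +1}."""
    N, d = X.shape
    gamma = 1.0 / (R ** 2 * np.sqrt(N))          # step ~ 1/(R^2 sqrt(N))
    w, w_bar = np.zeros(d), np.zeros(d)
    for n in range(N):                           # each observation used once
        margin = y[n] * (X[n] @ w)
        w += gamma * y[n] * X[n] / (1.0 + np.exp(margin))  # SGD step
        w_bar += (w - w_bar) / (n + 1)           # Polyak-Ruppert average
    return w_bar                                 # averaged iterate is the estimator
```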
cs.CV cs.LG | 10.1371/journal.pone.0071715 | 1303.6163 | null | null | http://arxiv.org/abs/1303.6163v3 | 2013-07-23T11:15:25Z | 2013-03-25T15:20:09Z | Machine learning of hierarchical clustering to segment 2D and 3D images | We aim to improve segmentation through the use of machine learning tools
during region agglomeration. We propose an active learning approach for
performing hierarchical agglomerative segmentation from superpixels. Our method
combines multiple features at all scales of the agglomerative process, works
for data with an arbitrary number of dimensions, and scales to very large
datasets. We advocate the use of variation of information to measure
segmentation accuracy, particularly in 3D electron microscopy (EM) images of
neural tissue, and using this metric demonstrate an improvement over competing
algorithms in EM and natural images.
| [
"Juan Nunez-Iglesias, Ryan Kennedy, Toufiq Parag, Jianbo Shi, Dmitri B.\n Chklovskii",
"['Juan Nunez-Iglesias' 'Ryan Kennedy' 'Toufiq Parag' 'Jianbo Shi'\n 'Dmitri B. Chklovskii']"
] |
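The agglomerative loop itself is simple once a merge classifier is available; a hedged sketch (the `score` interface is hypothetical, and the actual method also recomputes region features after every merge, which is omitted here):

```python
import heapq

def agglomerate(edges, score, threshold):
    """Greedily merge superpixel pairs the classifier deems least boundary-like.

    edges: iterable of (u, v) superpixel adjacencies; score(u, v): predicted
    probability that u and v lie in different segments. Returns a find()
    function mapping each superpixel to its final region representative.
    """
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]       # path halving
            x = parent[x]
        return x
    heap = [(score(u, v), u, v) for u, v in edges]
    heapq.heapify(heap)
    while heap:
        s, u, v = heapq.heappop(heap)
        if s > threshold:
            break                   # remaining pairs look like real boundaries
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv         # merge the two regions
    return find
```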
stat.ML cs.LG cs.NA | null | 1303.6370 | null | null | http://arxiv.org/pdf/1303.6370v1 | 2013-03-26T02:36:49Z | 2013-03-26T02:36:49Z | Convex Tensor Decomposition via Structured Schatten Norm Regularization | We discuss structured Schatten norms for tensor decomposition that includes
two recently proposed norms ("overlapped" and "latent") for
convex-optimization-based tensor decomposition, and connect tensor
decomposition with wider literature on structured sparsity. Based on the
properties of the structured Schatten norms, we mathematically analyze the
performance of "latent" approach for tensor decomposition, which was
empirically found to perform better than the "overlapped" approach in some
settings. We show theoretically that this is indeed the case. In particular,
when the unknown true tensor is low-rank in a specific mode, this approach
performs as well as knowing the mode with the smallest rank. Along the way, we
show a novel duality result for structured Schatten norms, establish the
consistency, and discuss the identifiability of this approach. We confirm
through numerical simulations that our theoretical prediction can precisely
predict the scaling behavior of the mean squared error.
| [
"Ryota Tomioka, Taiji Suzuki",
"['Ryota Tomioka' 'Taiji Suzuki']"
] |
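For a $K$-way tensor $\mathcal{W}$ with mode-$k$ unfoldings $W_{(k)}$, the two norms can be written (a hedged reconstruction consistent with the abstract) as

$$\lVert \mathcal{W} \rVert_{\mathrm{overlap}} \;=\; \sum_{k=1}^{K} \big\lVert W_{(k)} \big\rVert_{S_1}, \qquad \lVert \mathcal{W} \rVert_{\mathrm{latent}} \;=\; \inf_{\mathcal{W}^{(1)} + \cdots + \mathcal{W}^{(K)} = \mathcal{W}} \; \sum_{k=1}^{K} \big\lVert W^{(k)}_{(k)} \big\rVert_{S_1},$$

where $\lVert \cdot \rVert_{S_1}$ denotes the Schatten 1-norm (trace norm). The latent norm lets each summand be low-rank in its own mode only, which is the source of the mode-adaptivity established above.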
cs.LG | null | 1303.6390 | null | null | http://arxiv.org/pdf/1303.6390v2 | 2013-03-27T16:23:48Z | 2013-03-26T06:01:34Z | A Note on k-support Norm Regularized Risk Minimization | The k-support norm has been recently introduced to perform correlated
sparsity regularization. Although Argyriou et al. only reported experiments
using squared loss, here we apply it to several other commonly used settings,
resulting in novel machine learning algorithms with interesting and familiar
limit cases. Source code for the algorithms described here is available.
| [
"['Matthew Blaschko']",
"Matthew Blaschko (INRIA Saclay - Ile de France, CVN)"
] |
stat.ML cs.LG | null | 1303.6746 | null | null | http://arxiv.org/pdf/1303.6746v4 | 2013-11-11T10:52:24Z | 2013-03-27T06:17:09Z | Exploiting correlation and budget constraints in Bayesian multi-armed
bandit optimization | We address the problem of finding the maximizer of a nonlinear smooth
function, that can only be evaluated point-wise, subject to constraints on the
number of permitted function evaluations. This problem is also known as
fixed-budget best arm identification in the multi-armed bandit literature. We
introduce a Bayesian approach for this problem and show that it empirically
outperforms both the existing frequentist counterpart and other Bayesian
optimization methods. The Bayesian approach places emphasis on detailed
modelling, including the modelling of correlations among the arms. As a result,
it can perform well in situations where the number of arms is much larger than
the number of allowed function evaluations, whereas the frequentist counterpart
is inapplicable. This feature enables us to develop and deploy practical
applications, such as automatic machine learning toolboxes. The paper presents
comprehensive comparisons of the proposed approach, Thompson sampling,
classical Bayesian optimization techniques, more recent Bayesian bandit
approaches, and state-of-the-art best arm identification methods. This is the
first comparison of many of these methods in the literature and allows us to
examine the relative merits of their different features.
| [
"Matthew W. Hoffman, Bobak Shahriari, Nando de Freitas",
"['Matthew W. Hoffman' 'Bobak Shahriari' 'Nando de Freitas']"
] |
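One of the Bayesian baselines compared above, Thompson sampling, extends naturally to correlated arms; a sketch under a conjugate Gaussian model (our simplification, not the paper's exact setup), where the prior covariance `Sigma0` carries the correlation information between arms:

```python
import numpy as np

def thompson_correlated(mu0, Sigma0, pull, noise_var, budget, rng):
    """Budget-constrained best-arm search with a correlated Gaussian prior.

    mu0, Sigma0: prior mean/covariance over all arm means; pull(i) returns
    a noisy reward for arm i. Because arms are correlated, each pull also
    sharpens the posterior of every other arm.
    """
    mu, Sigma = mu0.copy(), Sigma0.copy()
    for _ in range(budget):
        i = int(np.argmax(rng.multivariate_normal(mu, Sigma)))  # posterior sample
        r = pull(i)
        s = Sigma[:, i]                       # rank-one conjugate update
        kgain = s / (Sigma[i, i] + noise_var)
        mu = mu + kgain * (r - mu[i])
        Sigma = Sigma - np.outer(kgain, s)
    return int(np.argmax(mu))                 # recommendation after the budget
```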
stat.ML cs.LG | 10.1117/12.2017754 | 1303.6750 | null | null | http://arxiv.org/abs/1303.6750v1 | 2013-03-27T06:53:26Z | 2013-03-27T06:53:26Z | Sequential testing over multiple stages and performance analysis of data
fusion | We describe a methodology for modeling the performance of decision-level data
fusion between different sensor configurations, implemented as part of the
JIEDDO Analytic Decision Engine (JADE). We first discuss a Bayesian network
formulation of classical probabilistic data fusion, which allows elementary
fusion structures to be stacked and analyzed efficiently. We then present an
extension of the Wald sequential test for combining the outputs of the Bayesian
network over time. We discuss an algorithm to compute its performance
statistics and illustrate the approach on some examples. This variant of the
sequential test involves multiple, distinct stages, where the evidence
accumulated from each stage is carried over into the next one, and is motivated
by a need to keep certain sensors in the network inactive unless triggered by
other sensors.
| [
"['Gaurav Thakur']",
"Gaurav Thakur"
] |
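The classical single-stage building block is compact enough to sketch; the `z0` argument carries accumulated evidence from one stage into the next, mirroring the multi-stage variant described above (thresholds are Wald's standard approximations):

```python
import numpy as np

def sprt(llr_stream, alpha, beta, z0=0.0):
    """Wald sequential probability ratio test with evidence carry-over.

    llr_stream: per-observation log-likelihood ratios; alpha, beta: target
    false-alarm and miss probabilities. Returns (decision, final_llr) with
    decision 1 (accept H1), 0 (accept H0), or None if the data ran out.
    """
    upper = np.log((1 - beta) / alpha)       # Wald's upper threshold
    lower = np.log(beta / (1 - alpha))       # Wald's lower threshold
    z = z0
    for llr in llr_stream:
        z += llr
        if z >= upper:
            return 1, z
        if z <= lower:
            return 0, z
    return None, z    # undecided: pass z as z0 to the next stage
```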