categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list) |
---|---|---|---|---|---|---|---|---|---|---|
stat.ML cs.LG | null | 1402.2447 | null | null | http://arxiv.org/pdf/1402.2447v2 | 2014-04-09T10:49:48Z | 2014-02-11T11:13:51Z | A comparison of linear and non-linear calibrations for speaker
recognition | In recent work on both generative and discriminative score to
log-likelihood-ratio calibration, it was shown that linear transforms give good
accuracy only for a limited range of operating points. Moreover, these methods
required tailoring of the calibration training objective functions in order to
target the desired region of best accuracy. Here, we generalize the linear
recipes to non-linear ones. We experiment with a non-linear, non-parametric,
discriminative PAV solution, as well as parametric, generative,
maximum-likelihood solutions that use Gaussian, Student's T and
normal-inverse-Gaussian score distributions. Experiments on NIST SRE'12 scores
suggest that the non-linear methods provide wider ranges of optimal accuracy
and can be trained without having to resort to objective function tailoring.
| [
"['Niko Brümmer' 'Albert Swart' 'David van Leeuwen']",
"Niko Br\\\"ummer, Albert Swart and David van Leeuwen"
]
|
stat.ML cs.LG math.ST stat.TH | null | 1402.2594 | null | null | http://arxiv.org/pdf/1402.2594v1 | 2014-02-11T18:36:11Z | 2014-02-11T18:36:11Z | Online Nonparametric Regression | We establish optimal rates for online regression for arbitrary classes of
regression functions in terms of the sequential entropy introduced in (Rakhlin,
Sridharan, Tewari, 2010). The optimal rates are shown to exhibit a phase
transition analogous to the i.i.d./statistical learning case, studied in
(Rakhlin, Sridharan, Tsybakov 2013). In the frequently encountered situation
when sequential entropy and i.i.d. empirical entropy match, our results point
to the interesting phenomenon that the rates for statistical learning with
squared loss and online nonparametric regression are the same.
In addition to a non-algorithmic study of minimax regret, we exhibit a
generic forecaster that enjoys the established optimal rates. We also provide a
recipe for designing online regression algorithms that can be computationally
efficient. We illustrate the techniques by deriving existing and new
forecasters for the case of finite experts and for online linear regression.
| [
"['Alexander Rakhlin' 'Karthik Sridharan']",
"Alexander Rakhlin, Karthik Sridharan"
]
|
cs.LG stat.ML | null | 1402.2667 | null | null | http://arxiv.org/pdf/1402.2667v1 | 2014-02-11T21:18:11Z | 2014-02-11T21:18:11Z | On Zeroth-Order Stochastic Convex Optimization via Random Walks | We propose a method for zeroth order stochastic convex optimization that
attains the suboptimality rate of $\tilde{\mathcal{O}}(n^{7}T^{-1/2})$ after
$T$ queries for a convex bounded function $f:{\mathbb R}^n\to{\mathbb R}$. The
method is based on a random walk (the \emph{Ball Walk}) on the epigraph of the
function. The randomized approach circumvents the problem of gradient
estimation, and appears to be less sensitive to noisy function evaluations
compared to noiseless zeroth order methods.
| [
"Tengyuan Liang, Hariharan Narayanan and Alexander Rakhlin",
"['Tengyuan Liang' 'Hariharan Narayanan' 'Alexander Rakhlin']"
]
|
stat.ML cs.DC cs.LG stat.CO | null | 1402.2676 | null | null | http://arxiv.org/pdf/1402.2676v4 | 2014-08-21T06:00:32Z | 2014-02-11T21:39:54Z | Ranking via Robust Binary Classification and Parallel Parameter
Estimation in Large-Scale Data | We propose RoBiRank, a ranking algorithm that is motivated by observing a
close connection between evaluation metrics for learning to rank and loss
functions for robust classification. The algorithm shows a very competitive
performance on standard benchmark datasets against other representative
algorithms in the literature. On the other hand, in large scale problems where
explicit feature vectors and scores are not given, our algorithm can be
efficiently parallelized across a large number of machines; for a task that
requires 386,133 x 49,824,519 pairwise interactions between items to be ranked,
our algorithm finds solutions that are of dramatically higher quality than
can be found by a state-of-the-art competitor algorithm, given the same amount
of wall-clock time for computation.
| [
"Hyokun Yun, Parameswaran Raman, S.V.N. Vishwanathan",
"['Hyokun Yun' 'Parameswaran Raman' 'S. V. N. Vishwanathan']"
]
|
stat.ML cs.LG | null | 1402.3032 | null | null | http://arxiv.org/pdf/1402.3032v1 | 2014-02-13T05:06:53Z | 2014-02-13T05:06:53Z | Regularization for Multiple Kernel Learning via Sum-Product Networks | In this paper, we are interested in constructing general graph-based
regularizers for multiple kernel learning (MKL) given a structure which is used
to describe the way of combining basis kernels. Such structures are represented
by sum-product networks (SPNs) in our method. Accordingly we propose a new
convex regularization method for MKL based on a path-dependent kernel weighting
function which encodes the entire SPN structure in our method. Under certain
conditions and from the view of probability, this function can be considered to
follow multinomial distributions over the weights associated with product nodes
in SPNs. We also analyze the convexity of our regularizer and the complexity of
our induced classifiers, and further propose an efficient wrapper algorithm to
optimize our formulation. In our experiments, we apply our method to ......
| [
"['Ziming Zhang']",
"Ziming Zhang"
]
|
cs.IR cs.LG stat.ML | null | 1402.3070 | null | null | http://arxiv.org/pdf/1402.3070v1 | 2014-02-13T09:54:01Z | 2014-02-13T09:54:01Z | Squeezing bottlenecks: exploring the limits of autoencoder semantic
representation capabilities | We present a comprehensive study on the use of autoencoders for modelling
text data, in which (differently from previous studies) we focus our attention
on the following issues: i) we explore the suitability of two different models,
bDA and rsDA, for constructing deep autoencoders for text data at the sentence
level; ii) we propose and evaluate two novel metrics for better assessing the
text-reconstruction capabilities of autoencoders; and iii) we propose an
automatic method to find the critical bottleneck dimensionality for text
language representations (below which structural information is lost).
| [
"['Parth Gupta' 'Rafael E. Banchs' 'Paolo Rosso']",
"Parth Gupta, Rafael E. Banchs and Paolo Rosso"
]
|
stat.ML cs.LG | 10.1016/j.neucom.2014.10.081 | 1402.3144 | null | null | http://arxiv.org/abs/1402.3144v2 | 2014-10-21T12:29:58Z | 2014-02-13T14:18:17Z | A Robust Ensemble Approach to Learn From Positive and Unlabeled Data
Using SVM Base Models | We present a novel approach to learn binary classifiers when only positive
and unlabeled instances are available (PU learning). This problem is routinely
cast as a supervised task with label noise in the negative set. We use an
ensemble of SVM models trained on bootstrap resamples of the training data for
increased robustness against label noise. The approach can be considered in a
bagging framework which provides an intuitive explanation for its mechanics in
a semi-supervised setting. We compared our method to state-of-the-art
approaches in simulations using multiple public benchmark data sets. The
included benchmark comprises three settings with increasing label noise: (i)
fully supervised, (ii) PU learning and (iii) PU learning with false positives.
Our approach shows a marginal improvement over existing methods in the second
setting and a significant improvement in the third.
| [
"['Marc Claesen' 'Frank De Smet' 'Johan A. K. Suykens' 'Bart De Moor']",
"Marc Claesen, Frank De Smet, Johan A. K. Suykens, Bart De Moor"
]
|
stat.ML cs.CV cs.LG cs.NE | null | 1402.3337 | null | null | http://arxiv.org/pdf/1402.3337v5 | 2015-04-08T14:51:11Z | 2014-02-13T23:37:39Z | Zero-bias autoencoders and the benefits of co-adapting features | Regularized training of an autoencoder typically results in hidden unit
biases that take on large negative values. We show that negative biases are a
natural result of using a hidden layer whose responsibility is to both
represent the input data and act as a selection mechanism that ensures sparsity
of the representation. We then show that negative biases impede the learning of
data distributions whose intrinsic dimensionality is high. We also propose a
new activation function that decouples the two roles of the hidden layer and
that allows us to learn representations on data with very high intrinsic
dimensionality, where standard autoencoders typically fail. Since the decoupled
activation function acts like an implicit regularizer, the model can be trained
by minimizing the reconstruction error of training data, without requiring any
additional regularization.
| [
"['Kishore Konda' 'Roland Memisevic' 'David Krueger']",
"Kishore Konda, Roland Memisevic, David Krueger"
]
|
cs.NE cs.LG stat.ML | null | 1402.3346 | null | null | http://arxiv.org/pdf/1402.3346v3 | 2015-03-12T15:20:04Z | 2014-02-14T02:15:09Z | Geometry and Expressive Power of Conditional Restricted Boltzmann
Machines | Conditional restricted Boltzmann machines are undirected stochastic neural
networks with a layer of input and output units connected bipartitely to a
layer of hidden units. These networks define models of conditional probability
distributions on the states of the output units given the states of the input
units, parametrized by interaction weights and biases. We address the
representational power of these models, proving results on their ability to
represent conditional Markov random fields and conditional distributions with
restricted supports, the minimal size of universal approximators, the maximal
model approximation errors, and on the dimension of the set of representable
conditional distributions. We contribute new tools for investigating
conditional probability models, which allow us to improve the results that can
be derived from existing work on restricted Boltzmann machine probability
models.
| [
"Guido Montufar, Nihat Ay, Keyan Ghazi-Zahedi",
"['Guido Montufar' 'Nihat Ay' 'Keyan Ghazi-Zahedi']"
]
|
cs.LG | null | 1402.3427 | null | null | http://arxiv.org/pdf/1402.3427v7 | 2018-03-31T13:38:19Z | 2014-02-14T10:44:48Z | Indian Buffet Process Deep Generative Models for Semi-Supervised
Classification | Deep generative models (DGMs) have brought about a major breakthrough, as
well as renewed interest, in generative latent variable models. However, DGMs
do not allow for performing data-driven inference of the number of latent
features needed to represent the observed data. Traditional linear formulations
address this issue by resorting to tools from the field of nonparametric
statistics. Indeed, linear latent variable models with an imposed Indian Buffet
Process (IBP) prior have been extensively studied by the machine learning
community; inference for such models can be performed either via exact
sampling or via approximate variational techniques. Based on this inspiration,
in this paper we examine whether similar ideas from the field of Bayesian
nonparametrics can be utilized in the context of modern DGMs in order to
address the latent variable dimensionality inference problem. To this end, we
propose a novel DGM formulation, based on the imposition of an IBP prior. We
devise an efficient Black-Box Variational inference algorithm for our model,
and exhibit its efficacy in a number of semi-supervised classification
experiments. In all cases, we use popular benchmark datasets, and compare to
state-of-the-art DGMs.
| [
"['Sotirios P. Chatzis']",
"Sotirios P. Chatzis"
]
|
cs.NE cs.LG | null | 1402.3511 | null | null | http://arxiv.org/pdf/1402.3511v1 | 2014-02-14T16:05:12Z | 2014-02-14T16:05:12Z | A Clockwork RNN | Sequence prediction and classification are ubiquitous and challenging
problems in machine learning that can require identifying complex dependencies
between temporally distant inputs. Recurrent Neural Networks (RNNs) have the
ability, in theory, to cope with these temporal dependencies by virtue of the
short-term memory implemented by their recurrent (feedback) connections.
However, in practice they are difficult to train successfully when
long-term memory is required. This paper introduces a simple, yet powerful
modification to the standard RNN architecture, the Clockwork RNN (CW-RNN), in
which the hidden layer is partitioned into separate modules, each processing
inputs at its own temporal granularity, making computations only at its
prescribed clock rate. Rather than making the standard RNN models more complex,
CW-RNN reduces the number of RNN parameters, improves the performance
significantly in the tasks tested, and speeds up the network evaluation. The
network is demonstrated in preliminary experiments involving two tasks: audio
signal generation and TIMIT spoken word classification, where it outperforms
both RNN and LSTM networks.
| [
"Jan Koutn\\'ik, Klaus Greff, Faustino Gomez, J\\\"urgen Schmidhuber",
"['Jan Koutník' 'Klaus Greff' 'Faustino Gomez' 'Jürgen Schmidhuber']"
]
|
cs.AI cs.DL cs.LG cs.LO | null | 1402.3578 | null | null | http://arxiv.org/pdf/1402.3578v1 | 2014-02-11T03:08:02Z | 2014-02-11T03:08:02Z | Learning-assisted Theorem Proving with Millions of Lemmas | Large formal mathematical libraries consist of millions of atomic inference
steps that give rise to a corresponding number of proved statements (lemmas).
Analogously to the informal mathematical practice, only a tiny fraction of such
statements is named and re-used in later proofs by formal mathematicians. In
this work, we suggest and implement criteria defining the estimated usefulness
of the HOL Light lemmas for proving further theorems. We use these criteria to
mine the large inference graph of the lemmas in the HOL Light and Flyspeck
libraries, adding up to millions of the best lemmas to the pool of statements
that can be re-used in later proofs. We show that in combination with
learning-based relevance filtering, such methods significantly strengthen
automated theorem proving of new conjectures over large formal mathematical
libraries such as Flyspeck.
| [
"['Cezary Kaliszyk' 'Josef Urban']",
"Cezary Kaliszyk and Josef Urban"
]
|
cs.DS cs.CR cs.LG | 10.1007/978-3-662-43948-7_51 | 1402.3631 | null | null | http://arxiv.org/abs/1402.3631v2 | 2014-05-08T19:52:34Z | 2014-02-15T00:55:46Z | Privately Solving Linear Programs | In this paper, we initiate the systematic study of solving linear programs
under differential privacy. The first step is simply to define the problem: to
this end, we introduce several natural classes of private linear programs that
capture different ways sensitive data can be incorporated into a linear
program. For each class of linear programs we give an efficient, differentially
private solver based on the multiplicative weights framework, or we give an
impossibility result.
| [
"Justin Hsu and Aaron Roth and Tim Roughgarden and Jonathan Ullman",
"['Justin Hsu' 'Aaron Roth' 'Tim Roughgarden' 'Jonathan Ullman']"
]
|
cs.CL cs.LG stat.ML | null | 1402.3722 | null | null | http://arxiv.org/pdf/1402.3722v1 | 2014-02-15T21:03:02Z | 2014-02-15T21:03:02Z | word2vec Explained: deriving Mikolov et al.'s negative-sampling
word-embedding method | The word2vec software of Tomas Mikolov and colleagues
(https://code.google.com/p/word2vec/ ) has gained a lot of traction lately, and
provides state-of-the-art word embeddings. The learning models behind the
software are described in two research papers. We found the description of the
models in these papers to be somewhat cryptic and hard to follow. While the
motivations and presentation may be obvious to the neural-networks
language-modeling crowd, we had to struggle quite a bit to figure out the
rationale behind the equations.
This note is an attempt to explain equation (4) (negative sampling) in
"Distributed Representations of Words and Phrases and their Compositionality"
by Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado and Jeffrey Dean.
| [
"['Yoav Goldberg' 'Omer Levy']",
"Yoav Goldberg and Omer Levy"
]
|
cs.CV cs.DS cs.LG | null | 1402.3849 | null | null | http://arxiv.org/pdf/1402.3849v1 | 2014-02-16T22:19:40Z | 2014-02-16T22:19:40Z | Scalable Kernel Clustering: Approximate Kernel k-means | Kernel-based clustering algorithms have the ability to capture the non-linear
structure in real world data. Among various kernel-based clustering algorithms,
kernel k-means has gained popularity due to its simple iterative nature and
ease of implementation. However, its run-time complexity and memory footprint
increase quadratically in terms of the size of the data set, and hence, large
data sets cannot be clustered efficiently. In this paper, we propose an
approximation scheme based on randomization, called the Approximate Kernel
k-means. We approximate the cluster centers using the kernel similarity between
a few sampled points and all the points in the data set. We show that the
proposed method achieves better clustering performance than the traditional low
rank kernel approximation based clustering schemes. We also demonstrate that
its running time and memory requirements are significantly lower than those of
kernel k-means, with only a small reduction in the clustering quality on
several public domain large data sets. We then employ ensemble clustering
techniques to further enhance the performance of our algorithm.
| [
"['Radha Chitta' 'Rong Jin' 'Timothy C. Havens' 'Anil K. Jain']",
"Radha Chitta, Rong Jin, Timothy C. Havens, Anil K. Jain"
]
|
cs.LG cs.CL cs.IR | null | 1402.3891 | null | null | http://arxiv.org/pdf/1402.3891v1 | 2014-02-17T05:24:42Z | 2014-02-17T05:24:42Z | Performance Evaluation of Machine Learning Classifiers in Sentiment
Mining | In recent years, the use of machine learning classifiers is of great value in
solving a variety of problems in text classification. Sentiment mining is a
kind of text classification in which messages are classified according to
sentiment orientation such as positive or negative. This paper extends the idea
of evaluating the performance of various classifiers to show their
effectiveness in sentiment mining of online product reviews. The product
reviews are collected from Amazon reviews. To evaluate the performance of
classifiers various evaluation methods like random sampling, linear sampling
and bootstrap sampling are used. Our results show that the support vector machine
with the bootstrap sampling method outperforms other classifiers and sampling
methods in terms of misclassification rate.
| [
"Vinodhini G Chandrasekaran RM",
"['Vinodhini G Chandrasekaran RM']"
]
|
cs.LG | null | 1402.3902 | null | null | http://arxiv.org/pdf/1402.3902v4 | 2014-11-07T03:00:28Z | 2014-02-17T06:00:16Z | Sparse Polynomial Learning and Graph Sketching | Let $f:\{-1,1\}^n\to\mathbb{R}$ be a polynomial with at most $s$ non-zero real
coefficients. We give an algorithm for exactly reconstructing $f$ given random
examples from the uniform distribution on $\{-1,1\}^n$ that runs in time
polynomial in $n$ and $2s$ and succeeds if the function satisfies the unique
sign property: there is one output value which corresponds to a unique set of
values of the participating parities. This sufficient condition is satisfied
when every coefficient of $f$ is perturbed by a small random noise, or satisfied
with high probability when $s$ parity functions are chosen randomly or when all
the coefficients are positive. Learning sparse polynomials over the Boolean
domain in time polynomial in $n$ and $2s$ is considered notoriously hard in the
worst-case. Our result shows that the problem is tractable for almost all
sparse polynomials. Then, we show an application of this result to hypergraph
sketching which is the problem of learning a sparse (both in the number of
hyperedges and the size of the hyperedges) hypergraph from uniformly drawn
random cuts. We also provide experimental results on a real world dataset.
| [
"['Murat Kocaoglu' 'Karthikeyan Shanmugam' 'Alexandros G. Dimakis'\n 'Adam Klivans']",
"Murat Kocaoglu, Karthikeyan Shanmugam, Alexandros G. Dimakis and Adam\n Klivans"
]
|
cs.LG | null | 1402.4084 | null | null | http://arxiv.org/pdf/1402.4084v1 | 2014-02-17T17:53:57Z | 2014-02-17T17:53:57Z | Selective Sampling with Drift | Recently there has been much work on selective sampling, an online active
learning setting, in which algorithms work in rounds. On each round an
algorithm receives an input and makes a prediction. Then, it can decide whether
to query a label, and if so to update its model, otherwise the input is
discarded. Most of this work is focused on the stationary case, where it is
assumed that there is a fixed target model, and the performance of the
algorithm is compared to a fixed model. However, in many real-world
applications, such as spam prediction, the best target function may drift over
time, or have shifts from time to time. We develop a novel selective sampling
algorithm for the drifting setting, analyze it under no assumptions on the
mechanism generating the sequence of instances, and derive new mistake bounds
that depend on the amount of drift in the problem. Simulations on synthetic and
real-world datasets demonstrate the superiority of our algorithm as a
selective sampling algorithm in the drifting setting.
| [
"Edward Moroshko, Koby Crammer",
"['Edward Moroshko' 'Koby Crammer']"
]
|
stat.ME cs.LG stat.ML | null | 1402.4102 | null | null | http://arxiv.org/pdf/1402.4102v2 | 2014-05-12T06:38:21Z | 2014-02-17T19:57:59Z | Stochastic Gradient Hamiltonian Monte Carlo | Hamiltonian Monte Carlo (HMC) sampling methods provide a mechanism for
defining distant proposals with high acceptance probabilities in a
Metropolis-Hastings framework, enabling more efficient exploration of the state
space than standard random-walk proposals. The popularity of such methods has
grown significantly in recent years. However, a limitation of HMC methods is
the required gradient computation for simulation of the Hamiltonian dynamical
system; such computation is infeasible in problems involving a large sample size
or streaming data. Instead, we must rely on a noisy gradient estimate computed
from a subset of the data. In this paper, we explore the properties of such a
stochastic gradient HMC approach. Surprisingly, the natural implementation of
the stochastic approximation can be arbitrarily bad. To address this problem we
introduce a variant that uses second-order Langevin dynamics with a friction
term that counteracts the effects of the noisy gradient, maintaining the
desired target distribution as the invariant distribution. Results on simulated
data validate our theory. We also provide an application of our methods to a
classification task using neural networks and to online Bayesian matrix
factorization.
| [
"Tianqi Chen, Emily B. Fox, Carlos Guestrin",
"['Tianqi Chen' 'Emily B. Fox' 'Carlos Guestrin']"
]
|
cs.LG stat.ME stat.ML | null | 1402.4279 | null | null | http://arxiv.org/pdf/1402.4279v2 | 2015-03-06T10:22:12Z | 2014-02-18T10:34:41Z | A Bayesian Model of node interaction in networks | We are concerned with modeling the strength of links in networks by taking
into account how often those links are used. Link usage is a strong indicator
of how closely two nodes are related, but existing network models in Bayesian
Statistics and Machine Learning are able to predict only whether a link exists
at all. As priors for latent attributes of network nodes we explore the Chinese
Restaurant Process (CRP) and a multivariate Gaussian with fixed dimensionality.
The model is applied to a social network dataset and a word co-occurrence
dataset.
| [
"Ingmar Schuster",
"['Ingmar Schuster']"
]
|
cs.DB cs.LG | null | 1402.4283 | null | null | http://arxiv.org/pdf/1402.4283v1 | 2014-02-18T10:44:01Z | 2014-02-18T10:44:01Z | Discretization of Temporal Data: A Survey | In the real world, huge amounts of temporal data must be processed in many
application areas such as science, finance, network monitoring and sensor data
analysis. Data mining techniques are primarily oriented toward handling discrete
features. In the case of temporal data, time plays an important role in the
characteristics of the data. To account for this effect, data discretization
techniques have to consider time while processing, resolving the issue by
finding intervals of data that are more concise and precise with respect
to time. Here, this research reviews different data discretization
techniques used in temporal data applications according to the inclusion or
exclusion of: the class label, the temporal order of the data, and the handling
of stream data, in order to open research directions in temporal data
discretization and improve the performance of data mining techniques.
| [
"P. Chaudhari, D. P. Rana, R. G. Mehta, N. J. Mistry, M. M. Raghuwanshi",
"['P. Chaudhari' 'D. P. Rana' 'R. G. Mehta' 'N. J. Mistry'\n 'M. M. Raghuwanshi']"
]
|
stat.ML cs.LG | null | 1402.4293 | null | null | http://arxiv.org/pdf/1402.4293v1 | 2014-02-18T11:13:45Z | 2014-02-18T11:13:45Z | The Random Forest Kernel and other kernels for big data from random
partitions | We present Random Partition Kernels, a new class of kernels derived by
demonstrating a natural connection between random partitions of objects and
kernels between those objects. We show how the construction can be used to
create kernels from methods that would not normally be viewed as random
partitions, such as Random Forest. To demonstrate the potential of this method,
we propose two new kernels, the Random Forest Kernel and the Fast Cluster
Kernel, and show that these kernels consistently outperform standard kernels on
problems involving real-world datasets. Finally, we show how the form of these
kernels lends itself to a natural approximation that is appropriate for
certain big data problems, allowing $O(N)$ inference in methods such as
Gaussian Processes, Support Vector Machines and Kernel PCA.
| [
"Alex Davies, Zoubin Ghahramani",
"['Alex Davies' 'Zoubin Ghahramani']"
]
|
stat.ML cs.LG | null | 1402.4304 | null | null | http://arxiv.org/pdf/1402.4304v3 | 2014-04-24T11:44:13Z | 2014-02-18T11:38:11Z | Automatic Construction and Natural-Language Description of Nonparametric
Regression Models | This paper presents the beginnings of an automatic statistician, focusing on
regression problems. Our system explores an open-ended space of statistical
models to discover a good explanation of a data set, and then produces a
detailed report with figures and natural-language text. Our approach treats
unknown regression functions nonparametrically using Gaussian processes, which
has two important consequences. First, Gaussian processes can model functions
in terms of high-level properties (e.g. smoothness, trends, periodicity,
changepoints). Taken together with the compositional structure of our language
of models, this allows us to automatically describe functions in simple terms.
Second, the use of flexible nonparametric models and a rich language for
composing them in an open-ended manner also results in state-of-the-art
extrapolation performance evaluated over 13 real time series data sets from
various domains.
| [
"['James Robert Lloyd' 'David Duvenaud' 'Roger Grosse'\n 'Joshua B. Tenenbaum' 'Zoubin Ghahramani']",
"James Robert Lloyd, David Duvenaud, Roger Grosse, Joshua B. Tenenbaum,\n Zoubin Ghahramani"
]
|
stat.ML cs.AI cs.LG stat.ME | null | 1402.4306 | null | null | http://arxiv.org/pdf/1402.4306v2 | 2014-02-19T10:49:16Z | 2014-02-18T11:47:38Z | Student-t Processes as Alternatives to Gaussian Processes | We investigate the Student-t process as an alternative to the Gaussian
process as a nonparametric prior over functions. We derive closed form
expressions for the marginal likelihood and predictive distribution of a
Student-t process, by integrating away an inverse Wishart process prior over
the covariance kernel of a Gaussian process model. We show surprising
equivalences between different hierarchical Gaussian process models leading to
Student-t processes, and derive a new sampling scheme for the inverse Wishart
process, which helps elucidate these equivalences. Overall, we show that a
Student-t process can retain the attractive properties of a Gaussian process --
a nonparametric representation, analytic marginal and predictive distributions,
and easy model selection through covariance kernels -- but has enhanced
flexibility, and predictive covariances that, unlike a Gaussian process,
explicitly depend on the values of training observations. We verify empirically
that a Student-t process is especially useful in situations where there are
changes in covariance structure, or in applications like Bayesian optimization,
where accurate predictive covariances are critical for good performance. These
advantages come at no additional computational cost over Gaussian processes.
| [
"Amar Shah, Andrew Gordon Wilson and Zoubin Ghahramani",
"['Amar Shah' 'Andrew Gordon Wilson' 'Zoubin Ghahramani']"
]
|
cs.LG | null | 1402.4322 | null | null | http://arxiv.org/pdf/1402.4322v1 | 2014-02-18T13:08:47Z | 2014-02-18T13:08:47Z | On the properties of $\alpha$-unchaining single linkage hierarchical
clustering | In the selection of a hierarchical clustering method, theoretical properties may
give some insight to determine which method is the most suitable to treat a
clustering problem. Herein, we study some basic properties of two hierarchical
clustering methods: $\alpha$-unchaining single linkage or $SL(\alpha)$ and a
modified version of this one, $SL^*(\alpha)$. We compare the results with the
properties satisfied by the classical linkage-based hierarchical clustering
methods.
| [
"A. Mart\\'inez-P\\'erez",
"['A. Martínez-Pérez']"
]
|
cs.LG stat.ML | null | 1402.4354 | null | null | http://arxiv.org/pdf/1402.4354v1 | 2014-02-18T14:35:30Z | 2014-02-18T14:35:30Z | Hybrid SRL with Optimization Modulo Theories | Generally speaking, the goal of constructive learning could be seen as, given
an example set of structured objects, to generate novel objects with similar
properties. From a statistical-relational learning (SRL) viewpoint, the task
can be interpreted as a constraint satisfaction problem, i.e. the generated
objects must obey a set of soft constraints, whose weights are estimated from
the data. Traditional SRL approaches rely on (finite) First-Order Logic (FOL)
as a description language, and on MAX-SAT solvers to perform inference. Alas,
FOL is unsuited for constructive problems where the objects contain a mixture
of Boolean and numerical variables. It is in fact difficult to implement, e.g.,
linear arithmetic constraints within the language of FOL. In this paper we
propose a novel class of hybrid SRL methods that rely on Satisfiability Modulo
Theories, an alternative class of formal languages that allow one to describe,
and reason over, mixed Boolean-numerical objects and constraints. The resulting
methods, which we call Learning Modulo Theories, are formulated within the
structured output SVM framework, and employ a weighted SMT solver as an
optimization oracle to perform efficient inference and discriminative max
margin weight learning. We also present a few examples of constructive learning
applications enabled by our method.
| [
"Stefano Teso and Roberto Sebastiani and Andrea Passerini",
"['Stefano Teso' 'Roberto Sebastiani' 'Andrea Passerini']"
]
|
math.OC cs.LG stat.ML | null | 1402.4371 | null | null | http://arxiv.org/pdf/1402.4371v1 | 2014-02-18T15:16:45Z | 2014-02-18T15:16:45Z | A convergence proof of the split Bregman method for regularized
least-squares problems | The split Bregman (SB) method [T. Goldstein and S. Osher, SIAM J. Imaging
Sci., 2 (2009), pp. 323-43] is a fast splitting-based algorithm that solves
image reconstruction problems with general $\ell_1$, e.g., total-variation (TV) and
compressed sensing (CS), regularizations by introducing a single variable split
to decouple the data-fitting term and the regularization term, yielding simple
subproblems that are separable (or partially separable) and easy to minimize.
Several convergence proofs have been proposed, and these proofs either impose a
"full column rank" assumption to the split or assume exact updates in all
subproblems. However, these assumptions are impractical in many applications
such as the X-ray computed tomography (CT) image reconstructions, where the
inner least-squares problem usually cannot be solved efficiently due to the
highly shift-variant Hessian. In this paper, we show that when the data-fitting
term is quadratic, the SB method is a convergent alternating direction method
of multipliers (ADMM), and a straightforward convergence proof with inexact
updates is given using [J. Eckstein and D. P. Bertsekas, Mathematical
Programming, 55 (1992), pp. 293-318, Theorem 8]. Furthermore, since the SB
method is just a special case of an ADMM algorithm, it seems likely that the
ADMM algorithm will be faster than the SB method if the augmented Lagrangian
(AL) penalty parameters are selected appropriately. To have a concrete example,
we conduct a convergence rate analysis of the ADMM algorithm using two splits
for image restoration problems with quadratic data-fitting term and
regularization term. According to our analysis, we can show that the two-split
ADMM algorithm can be faster than the SB method if the AL penalty parameter of
the SB method is suboptimal. Numerical experiments were conducted to verify our
analysis.
| [
"['Hung Nien' 'Jeffrey A. Fessler']",
"Hung Nien and Jeffrey A. Fessler"
]
|
math.OC cs.LG stat.ML | 10.1109/TMI.2014.2358499 | 1402.4381 | null | null | http://arxiv.org/abs/1402.4381v1 | 2014-02-18T16:02:36Z | 2014-02-18T16:02:36Z | Fast X-ray CT image reconstruction using the linearized augmented
Lagrangian method with ordered subsets | The augmented Lagrangian (AL) method that solves convex optimization problems
with linear constraints has drawn more attention recently in imaging
applications due to its decomposable structure for composite cost functions and
empirical fast convergence rate under weak conditions. However, for problems
such as X-ray computed tomography (CT) image reconstruction and large-scale
sparse regression with "big data", where there is no efficient way to solve the
inner least-squares problem, the AL method can be slow due to the inevitable
iterative inner updates. In this paper, we focus on solving regularized
(weighted) least-squares problems using a linearized variant of the AL method
that replaces the quadratic AL penalty term in the scaled augmented Lagrangian
with its separable quadratic surrogate (SQS) function, thus leading to a much
simpler ordered-subsets (OS) accelerable splitting-based algorithm, OS-LALM,
for X-ray CT image reconstruction. To further accelerate the proposed
algorithm, we use a second-order recursive system analysis to design a
deterministic downward continuation approach that avoids tedious parameter
tuning and provides fast convergence. Experimental results show that the
proposed algorithm significantly accelerates the "convergence" of X-ray CT
image reconstruction with negligible overhead and greatly reduces the OS
artifacts in the reconstructed image when using many subsets for OS
acceleration.
| [
"['Hung Nien' 'Jeffrey A. Fessler']",
"Hung Nien and Jeffrey A. Fessler"
]
|
math.OC cs.LG stat.ML | null | 1402.4419 | null | null | http://arxiv.org/pdf/1402.4419v3 | 2015-02-01T07:20:36Z | 2014-02-18T17:50:30Z | Incremental Majorization-Minimization Optimization with Application to
Large-Scale Machine Learning | Majorization-minimization algorithms consist of successively minimizing a
sequence of upper bounds of the objective function. These upper bounds are
tight at the current estimate, and each iteration monotonically drives the
objective function downhill. Such a simple principle is widely applicable and
has been very popular in various scientific fields, especially in signal
processing and statistics. In this paper, we propose an incremental
majorization-minimization scheme for minimizing a large sum of continuous
functions, a problem of utmost importance in machine learning. We present
convergence guarantees for non-convex and convex optimization when the upper
bounds approximate the objective up to a smooth error; we call such upper
bounds "first-order surrogate functions". More precisely, we study asymptotic
stationary point guarantees for non-convex problems, and for convex ones, we
provide convergence rates for the expected objective function value. We apply
our scheme to composite optimization and obtain a new incremental proximal
gradient algorithm with linear convergence rate for strongly convex functions.
In our experiments, we show that our method is competitive with the state of
the art for solving machine learning problems such as logistic regression when
the number of training samples is large enough, and we demonstrate its
usefulness for sparse estimation with non-convex penalties.
| [
"Julien Mairal (INRIA Grenoble Rh\\^one-Alpes / LJK Laboratoire Jean\n Kuntzmann)",
"['Julien Mairal']"
]
|
cs.LG | null | 1402.4437 | null | null | http://arxiv.org/pdf/1402.4437v2 | 2014-05-25T12:21:34Z | 2014-02-18T18:47:41Z | Learning the Irreducible Representations of Commutative Lie Groups | We present a new probabilistic model of compact commutative Lie groups that
produces invariant-equivariant and disentangled representations of data. To
define the notion of disentangling, we borrow a fundamental principle from
physics that is used to derive the elementary particles of a system from its
symmetries. Our model employs a newfound Bayesian conjugacy relation that
enables fully tractable probabilistic inference over compact commutative Lie
groups -- a class that includes the groups that describe the rotation and
cyclic translation of images. We train the model on pairs of transformed image
patches, and show that the learned invariant representation is highly effective
for classification.
| [
"['Taco Cohen' 'Max Welling']",
"Taco Cohen, Max Welling"
]
|
cs.LG stat.ML | null | 1402.4512 | null | null | http://arxiv.org/pdf/1402.4512v2 | 2014-09-04T16:53:19Z | 2014-02-18T22:08:50Z | Classification with Sparse Overlapping Groups | Classification with a sparsity constraint on the solution plays a central
role in many high dimensional machine learning applications. In some cases, the
features can be grouped together so that entire subsets of features can be
selected or not selected. In many applications, however, this can be too
restrictive. In this paper, we are interested in a less restrictive form of
structured sparse feature selection: we assume that while features can be
grouped according to some notion of similarity, not all features in a group
need be selected for the task at hand. When the groups are comprised of
disjoint sets of features, this is sometimes referred to as the "sparse group"
lasso, and it allows for working with a richer class of models than traditional
group lasso methods. Our framework generalizes conventional sparse group lasso
further by allowing for overlapping groups, an additional flexibility needed in
many applications and one that presents further challenges. The main
contribution of this paper is a new procedure called Sparse Overlapping Group
(SOG) lasso, a convex optimization program that automatically selects similar
features for classification in high dimensions. We establish model selection
error bounds for SOGlasso classification problems under a fairly general
setting. In particular, the error bounds are the first such results for
classification using the sparse group lasso. Furthermore, the general SOGlasso
bound specializes to results for the lasso and the group lasso, some known and
some new. The SOGlasso is motivated by multi-subject fMRI studies in which
functional activity is classified using brain voxels as features, source
localization problems in Magnetoencephalography (MEG), and analyzing gene
activation patterns in microarray data analysis. Experiments with real and
synthetic data demonstrate the advantages of SOGlasso compared to the lasso and
group lasso.
| [
"Nikhil Rao, Robert Nowak, Christopher Cox and Timothy Rogers",
"['Nikhil Rao' 'Robert Nowak' 'Christopher Cox' 'Timothy Rogers']"
]
|
cs.LG cs.AI stat.ML | null | 1402.4542 | null | null | http://arxiv.org/pdf/1402.4542v1 | 2014-02-19T01:29:14Z | 2014-02-19T01:29:14Z | Unsupervised Ranking of Multi-Attribute Objects Based on Principal
Curves | Unsupervised ranking faces one critical challenge in evaluation applications,
that is, no ground truth is available. While PageRank and its variants provide a
good solution in related subjects, they are applicable only for ranking from
link-structure data. In this work, we focus on unsupervised ranking from
multi-attribute data which is also common in evaluation tasks. To overcome the
challenge, we propose five essential meta-rules for the design and assessment
of unsupervised ranking approaches: scale and translation invariance, strict
monotonicity, linear/nonlinear capacities, smoothness, and explicitness of
parameter size. These meta-rules are regarded as high level knowledge for
unsupervised ranking tasks. Inspired by the works in [8] and [14], we propose a
ranking principal curve (RPC) model, which learns a one-dimensional manifold
function to perform unsupervised ranking tasks on multi-attribute observations.
Furthermore, the RPC is modeled to be a cubic B\'ezier curve with control
points restricted in the interior of a hypercube, thereby complying with all
the five meta-rules to infer a reasonable ranking list. With control points as
the model parameters, one is able to understand the learned manifold and to
interpret the ranking list semantically. Numerical experiments of the presented
RPC model are conducted on two open datasets of different ranking applications.
In comparison with the state-of-the-art approaches, the new model is able to
show more reasonable ranking lists.
| [
"Chun-Guo Li, Xing Mei, Bao-Gang Hu",
"['Chun-Guo Li' 'Xing Mei' 'Bao-Gang Hu']"
]
|
cs.CV cs.LG stat.ML | null | 1402.4566 | null | null | http://arxiv.org/pdf/1402.4566v2 | 2014-03-18T02:05:34Z | 2014-02-19T06:41:12Z | Transduction on Directed Graphs via Absorbing Random Walks | In this paper we consider the problem of graph-based transductive
classification, and we are particularly interested in the directed graph
scenario which is a natural form for many real world applications. Different
from existing research efforts that either only deal with undirected graphs or
circumvent directionality by means of symmetrization, we propose a novel random
walk approach on directed graphs using absorbing Markov chains, which can be
regarded as maximizing the accumulated expected number of visits from the
unlabeled transient states. Our algorithm is simple, easy to implement, and
works with large-scale graphs. In particular, it is capable of preserving the
graph structure even when the input graph is sparse and changes over time, as
well as retaining weak signals presented in the directed edges. We present its
intimate connections to a number of existing methods, including graph kernels,
graph Laplacian based methods, and interestingly, spanning forest of graphs.
Its computational complexity and the generalization error are also studied.
Empirically our algorithm is systematically evaluated on a wide range of
applications, where it has shown to perform competitively comparing to a suite
of state-of-the-art methods.
| [
"['Jaydeep De' 'Xiaowei Zhang' 'Li Cheng']",
"Jaydeep De and Xiaowei Zhang and Li Cheng"
]
|
cs.LG | 10.14445/22312803/IJCTT-V8P105 | 1402.4645 | null | null | http://arxiv.org/abs/1402.4645v1 | 2014-02-19T12:40:31Z | 2014-02-19T12:40:31Z | A Survey on Semi-Supervised Learning Techniques | Semisupervised learning is a learning paradigm which deals with the study of
how computers and natural systems such as human beings acquire knowledge in the
presence of both labeled and unlabeled data. Semisupervised learning based
methods are preferred over supervised and unsupervised learning
because of the improved performance shown by the semisupervised approaches in
the presence of large volumes of data. Labels are very hard to obtain while
unlabeled data are plentiful; therefore, semisupervised learning is a sensible
way to reduce human labor and improve accuracy. There has been a large
spectrum of ideas on semisupervised learning. In this paper we bring out some
of the key approaches for semisupervised learning.
| [
"['V. Jothi Prakash' 'Dr. L. M. Nithya']",
"V. Jothi Prakash, Dr. L.M. Nithya"
]
|
stat.ML cs.IR cs.LG | null | 1402.4653 | null | null | http://arxiv.org/pdf/1402.4653v1 | 2014-02-19T13:21:40Z | 2014-02-19T13:21:40Z | Retrieval of Experiments by Efficient Estimation of Marginal Likelihood | We study the task of retrieving relevant experiments given a query
experiment. By experiment, we mean a collection of measurements from a set of
`covariates' and the associated `outcomes'. While similar experiments can be
retrieved by comparing available `annotations', this approach ignores the
valuable information available in the measurements themselves. To incorporate
this information in the retrieval task, we suggest employing a retrieval metric
that utilizes probabilistic models learned from the measurements. We argue that
such a metric is a sensible measure of similarity between two experiments since
it permits inclusion of experiment-specific prior knowledge. However, accurate
models are often not analytical, and one must resort to storing posterior
samples which demands considerable resources. Therefore, we study strategies to
select informative posterior samples to reduce the computational load while
maintaining the retrieval performance. We demonstrate the efficacy of our
approach on simulated data with simple linear regression as the models, and
real world datasets.
| [
"['Sohan Seth' 'John Shawe-Taylor' 'Samuel Kaski']",
"Sohan Seth, John Shawe-Taylor, Samuel Kaski"
]
|
stat.ML cs.LG stat.AP | null | 1402.4732 | null | null | http://arxiv.org/pdf/1402.4732v1 | 2014-02-19T17:09:14Z | 2014-02-19T17:09:14Z | Efficient Inference of Gaussian Process Modulated Renewal Processes with
Application to Medical Event Data | The episodic, irregular and asynchronous nature of medical data render them
difficult substrates for standard machine learning algorithms. We would like to
abstract away this difficulty for the class of time-stamped categorical
variables (or events) by modeling them as a renewal process and inferring a
probability density over continuous, longitudinal, nonparametric intensity
functions modulating that process. Several methods exist for inferring such a
density over intensity functions, but either their constraints and assumptions
prevent their use with our potentially bursty event streams, or their time
complexity renders their use intractable on our long-duration observations of
high-resolution events, or both. In this paper we present a new and efficient
method for inferring a distribution over intensity functions that uses direct
numeric integration and smooth interpolation over Gaussian processes. We
demonstrate that our direct method is up to twice as accurate and two orders of
magnitude more efficient than the best existing method (thinning). Importantly,
the direct method can infer intensity functions over the full range of bursty
to memoryless to regular events, which thinning and many other methods cannot.
Finally, we apply the method to clinical event data and demonstrate the
face-validity of the abstraction, which is now amenable to standard learning
algorithms.
| [
"['Thomas A. Lasko']",
"Thomas A. Lasko"
]
|
cs.LG cs.DS cs.IT math.IT stat.ML | null | 1402.4746 | null | null | http://arxiv.org/pdf/1402.4746v1 | 2014-02-19T17:59:55Z | 2014-02-19T17:59:55Z | Near-optimal-sample estimators for spherical Gaussian mixtures | Statistical and machine-learning algorithms are frequently applied to
high-dimensional data. In many of these applications data is scarce, and often
much more costly than computation time. We provide the first sample-efficient
polynomial-time estimator for high-dimensional spherical Gaussian mixtures.
For mixtures of any $k$ $d$-dimensional spherical Gaussians, we derive an
intuitive spectral-estimator that uses
$\mathcal{O}_k\bigl(\frac{d\log^2d}{\epsilon^4}\bigr)$ samples and runs in time
$\mathcal{O}_{k,\epsilon}(d^3\log^5 d)$, both significantly lower than
previously known. The constant factor $\mathcal{O}_k$ is polynomial for sample
complexity and is exponential for the time complexity, again much smaller than
what was previously known. We also show that
$\Omega_k\bigl(\frac{d}{\epsilon^2}\bigr)$ samples are needed for any
algorithm. Hence the sample complexity is near-optimal in the number of
dimensions.
We also derive a simple estimator for one-dimensional mixtures that uses
$\mathcal{O}\bigl(\frac{k \log \frac{k}{\epsilon} }{\epsilon^2} \bigr)$ samples
and runs in time
$\widetilde{\mathcal{O}}\left(\bigl(\frac{k}{\epsilon}\bigr)^{3k+1}\right)$.
Our other technical contributions include a faster algorithm for choosing a
density estimate from a set of distributions, that minimizes the $\ell_1$
distance to an unknown underlying distribution.
| [
"['Jayadev Acharya' 'Ashkan Jafarpour' 'Alon Orlitsky'\n 'Ananda Theertha Suresh']",
"Jayadev Acharya, Ashkan Jafarpour, Alon Orlitsky, Ananda Theertha\n Suresh"
]
|
cs.LG stat.ML | null | 1402.4844 | null | null | http://arxiv.org/pdf/1402.4844v2 | 2016-05-26T14:06:50Z | 2014-02-19T22:57:03Z | Subspace Learning with Partial Information | The goal of subspace learning is to find a $k$-dimensional subspace of
$\mathbb{R}^d$, such that the expected squared distance between instance
vectors and the subspace is as small as possible. In this paper we study
subspace learning in a partial information setting, in which the learner can
only observe $r \le d$ attributes from each instance vector. We propose several
efficient algorithms for this task, and analyze their sample complexity.
| [
"['Alon Gonen' 'Dan Rosenbaum' 'Yonina Eldar' 'Shai Shalev-Shwartz']",
"Alon Gonen, Dan Rosenbaum, Yonina Eldar, Shai Shalev-Shwartz"
]
|
cs.LG cs.MA | null | 1402.4845 | null | null | http://arxiv.org/pdf/1402.4845v1 | 2014-02-19T22:59:14Z | 2014-02-19T22:59:14Z | Diffusion Least Mean Square: Simulations | In this technical report we analyse the performance of diffusion strategies
applied to the Least-Mean-Square adaptive filter. We configure a network of
cooperative agents running adaptive filters and discuss their behaviour when
compared with a non-cooperative agent which represents the average of the
network. The analysis provides conditions under which diversity in the filter
parameters is beneficial in terms of convergence and stability. Simulations
drive and support the analysis.
| [
"Jonathan Gelati and Sithan Kanna",
"['Jonathan Gelati' 'Sithan Kanna']"
]
|
cs.LG | null | 1402.4861 | null | null | http://arxiv.org/pdf/1402.4861v1 | 2014-02-20T01:44:33Z | 2014-02-20T01:44:33Z | A Quasi-Newton Method for Large Scale Support Vector Machines | This paper adapts a recently developed regularized stochastic version of the
Broyden, Fletcher, Goldfarb, and Shanno (BFGS) quasi-Newton method for the
solution of support vector machine classification problems. The proposed method
is shown to converge almost surely to the optimal classifier at a rate that is
linear in expectation. Numerical results show that the proposed method exhibits
a convergence rate that degrades smoothly with the dimensionality of the
feature vectors.
| [
"['Aryan Mokhtari' 'Alejandro Ribeiro']",
"Aryan Mokhtari and Alejandro Ribeiro"
]
|
stat.ML cs.LG | null | 1402.4862 | null | null | http://arxiv.org/pdf/1402.4862v1 | 2014-02-20T01:54:37Z | 2014-02-20T01:54:37Z | Learning the Parameters of Determinantal Point Process Kernels | Determinantal point processes (DPPs) are well-suited for modeling repulsion
and have proven useful in many applications where diversity is desired. While
DPPs have many appealing properties, such as efficient sampling, learning the
parameters of a DPP is still considered a difficult problem due to the
non-convex nature of the likelihood function. In this paper, we propose using
Bayesian methods to learn the DPP kernel parameters. These methods are
applicable in large-scale and continuous DPP settings even when the exact form
of the eigendecomposition is unknown. We demonstrate the utility of our DPP
learning methods in studying the progression of diabetic neuropathy based on
spatial distribution of nerve fibers, and in studying human perception of
diversity in images.
| [
"['Raja Hafiz Affandi' 'Emily B. Fox' 'Ryan P. Adams' 'Ben Taskar']",
"Raja Hafiz Affandi, Emily B. Fox, Ryan P. Adams and Ben Taskar"
]
|
cs.IR cs.CV cs.LG stat.ML | 10.14445/22312803/IJCTT-V8P106 | 1402.4888 | null | null | http://arxiv.org/abs/1402.4888v1 | 2014-02-20T04:32:40Z | 2014-02-20T04:32:40Z | Survey on Sparse Coded Features for Content Based Face Image Retrieval | Content based image retrieval is a technique which uses the visual contents of an
image to search images from large scale image databases according to users'
interests. This paper provides a comprehensive survey of recent technology used
in the area of content based face image retrieval. Nowadays, as digital devices and
photo sharing sites gain popularity, large numbers of human face photos are
available in databases. Multiple types of facial features are used to represent
discriminability on large scale human facial image databases. Searching and mining
of facial images are challenging problems and important research issues. Sparse
representation of features provides a significant improvement in indexing images
related to a query image.
| [
"D. Johnvictor, G. Selvavinayagam",
"['D. Johnvictor' 'G. Selvavinayagam']"
]
|
cs.LG cs.CV stat.ML | null | 1402.5077 | null | null | http://arxiv.org/pdf/1402.5077v1 | 2014-02-20T17:08:34Z | 2014-02-20T17:08:34Z | Group-sparse Matrix Recovery | We apply the OSCAR (octagonal selection and clustering algorithms for
regression) in recovering group-sparse matrices (two-dimensional---2D---arrays)
from compressive measurements. We propose a 2D version of OSCAR (2OSCAR)
consisting of the $\ell_1$ norm and the pair-wise $\ell_{\infty}$ norm, which
is convex but non-differentiable. We show that the proximity operator of 2OSCAR
can be computed based on that of OSCAR. The 2OSCAR problem can thus be
efficiently solved by state-of-the-art proximal splitting algorithms.
Experiments on group-sparse 2D array recovery show that 2OSCAR regularization
solved by the SpaRSA algorithm is the fastest choice, while the PADMM algorithm
(with debiasing) yields the most accurate results.
| [
"Xiangrong Zeng and M\\'ario A. T. Figueiredo",
"['Xiangrong Zeng' 'Mário A. T. Figueiredo']"
]
|
cs.LG math.OC stat.ML | null | 1402.5131 | null | null | http://arxiv.org/pdf/1402.5131v6 | 2015-07-07T00:13:55Z | 2014-02-20T20:48:10Z | Multi-Step Stochastic ADMM in High Dimensions: Applications to Sparse
Optimization and Noisy Matrix Decomposition | We propose an efficient ADMM method with guarantees for high-dimensional
problems. We provide explicit bounds for the sparse optimization problem and
the noisy matrix decomposition problem. For sparse optimization, we establish
that the modified ADMM method has an optimal convergence rate of
$\mathcal{O}(s\log d/T)$, where $s$ is the sparsity level, $d$ is the data
dimension and $T$ is the number of steps. This matches with the minimax lower
bounds for sparse estimation. For matrix decomposition into sparse and low rank
components, we provide the first guarantees for any online method, and prove a
convergence rate of $\tilde{\mathcal{O}}((s+r)\beta^2(p) /T) +
\mathcal{O}(1/p)$ for a $p\times p$ matrix, where $s$ is the sparsity level,
$r$ is the rank and $\Theta(\sqrt{p})\leq \beta(p)\leq \Theta(p)$. Our
guarantees match the minimax lower bound with respect to $s,r$ and $T$. In
addition, we match the minimax lower bound with respect to the matrix dimension
$p$, i.e. $\beta(p)=\Theta(\sqrt{p})$, for many important statistical models
including the independent noise model, the linear Bayesian network and the
latent Gaussian graphical model under some conditions. Our ADMM method is based
on epoch-based annealing and consists of inexpensive steps which involve
projections on to simple norm balls. Experiments show that for both sparse
optimization and matrix decomposition problems, our algorithm outperforms the
state-of-the-art methods. In particular, we reach higher accuracy with same
time complexity.
| [
"Hanie Sedghi and Anima Anandkumar and Edmond Jonckheere",
"['Hanie Sedghi' 'Anima Anandkumar' 'Edmond Jonckheere']"
]
|
cs.LG cs.CC cs.DS | null | 1402.5164 | null | null | http://arxiv.org/pdf/1402.5164v1 | 2014-02-20T22:41:39Z | 2014-02-20T22:41:39Z | Distribution-Independent Reliable Learning | We study several questions in the reliable agnostic learning framework of
Kalai et al. (2009), which captures learning tasks in which one type of error
is costlier than others. A positive reliable classifier is one that makes no
false positive errors. The goal in the positive reliable agnostic framework is
to output a hypothesis with the following properties: (i) its false positive
error rate is at most $\epsilon$, (ii) its false negative error rate is at most
$\epsilon$ more than that of the best positive reliable classifier from the
class. A closely related notion is fully reliable agnostic learning, which
considers partial classifiers that are allowed to predict "unknown" on some
inputs. The best fully reliable partial classifier is one that makes no errors
and minimizes the probability of predicting "unknown", and the goal in fully
reliable learning is to output a hypothesis that is almost as good as the best
fully reliable partial classifier from a class.
For distribution-independent learning, the best known algorithms for PAC
learning typically utilize polynomial threshold representations, while the
state of the art agnostic learning algorithms use point-wise polynomial
approximations. We show that one-sided polynomial approximations, an
intermediate notion between polynomial threshold representations and point-wise
polynomial approximations, suffice for learning in the reliable agnostic
settings. We then show that majorities can be fully reliably learned and
disjunctions of majorities can be positive reliably learned, through
constructions of appropriate one-sided polynomial approximations. Our fully
reliable algorithm for majorities provides the first evidence that fully
reliable learning may be strictly easier than agnostic learning. Our algorithms
also satisfy strong attribute-efficiency properties, and provide smooth
tradeoffs between sample complexity and running time.
| [
"Varun Kanade and Justin Thaler",
"['Varun Kanade' 'Justin Thaler']"
]
|
cs.IR cs.LG stat.ML | 10.1109/TIP.2014.2378057 | 1402.5176 | null | null | http://arxiv.org/abs/1402.5176v1 | 2014-02-21T00:42:48Z | 2014-02-21T00:42:48Z | Pareto-depth for Multiple-query Image Retrieval | Most content-based image retrieval systems consider either one single query,
or multiple queries that include the same object or represent the same semantic
information. In this paper we consider the content-based image retrieval
problem for multiple query images corresponding to different image semantics.
We propose a novel multiple-query information retrieval algorithm that combines
the Pareto front method (PFM) with efficient manifold ranking (EMR). We show
that our proposed algorithm outperforms state of the art multiple-query
retrieval algorithms on real-world image databases. We attribute this
performance improvement to concavity properties of the Pareto fronts, and prove
a theoretical result that characterizes the asymptotic concavity of the fronts.
| [
"Ko-Jen Hsiao, Jeff Calder, Alfred O. Hero III",
"['Ko-Jen Hsiao' 'Jeff Calder' 'Alfred O. Hero III']"
]
|
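As an illustration of the Pareto-front idea used above (the EMR ranking and the full PFM pipeline are not reproduced), a sketch that extracts the first Pareto front from two per-query dissimilarity scores, where lower is better:

```python
import numpy as np

def first_pareto_front(scores):
    """scores: (n_items, 2) dissimilarities to the two queries; lower is better."""
    front = []
    for i in range(scores.shape[0]):
        dominated = np.any(np.all(scores <= scores[i], axis=1) &
                           np.any(scores < scores[i], axis=1))
        if not dominated:
            front.append(i)
    return front                     # indices of non-dominated database items

scores = np.random.rand(100, 2)
print(first_pareto_front(scores))
```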
cs.LG math.NA stat.ML | null | 1402.5180 | null | null | http://arxiv.org/pdf/1402.5180v4 | 2015-03-04T20:40:42Z | 2014-02-21T01:37:02Z | Guaranteed Non-Orthogonal Tensor Decomposition via Alternating Rank-$1$
Updates | In this paper, we provide local and global convergence guarantees for
recovering CP (Candecomp/Parafac) tensor decomposition. The main step of the
proposed algorithm is a simple alternating rank-$1$ update which is the
alternating version of the tensor power iteration adapted for asymmetric
tensors. Local convergence guarantees are established for third order tensors
of rank $k$ in $d$ dimensions, when $k=o \bigl( d^{1.5} \bigr)$ and the tensor
components are incoherent. Thus, we can recover overcomplete tensor
decomposition. We also strengthen the results to global convergence guarantees
under stricter rank condition $k \le \beta d$ (for arbitrary constant $\beta >
1$) through a simple initialization procedure where the algorithm is
initialized by top singular vectors of random tensor slices. Furthermore, the
approximate local convergence guarantees for $p$-th order tensors are also
provided under the rank condition $k=o \bigl( d^{p/2} \bigr)$. The guarantees also
include a tight perturbation analysis given a noisy tensor.
| [
"Animashree Anandkumar and Rong Ge and Majid Janzamin",
"['Animashree Anandkumar' 'Rong Ge' 'Majid Janzamin']"
]
|
math.OC cs.LG math.NA | null | 1402.5284 | null | null | http://arxiv.org/pdf/1402.5284v3 | 2015-04-22T17:15:02Z | 2014-02-21T12:49:51Z | Convergence results for projected line-search methods on varieties of
low-rank matrices via \L{}ojasiewicz inequality | The aim of this paper is to derive convergence results for projected
line-search methods on the real-algebraic variety $\mathcal{M}_{\le k}$ of real
$m \times n$ matrices of rank at most $k$. Such methods extend Riemannian
optimization methods, which are successfully used on the smooth manifold
$\mathcal{M}_k$ of rank-$k$ matrices, to its closure by taking steps along
gradient-related directions in the tangent cone, and afterwards projecting back
to $\mathcal{M}_{\le k}$. Considering such a method circumvents the
difficulties which arise from the nonclosedness and the unbounded curvature of
$\mathcal{M}_k$. The pointwise convergence is obtained for real-analytic
functions on the basis of a \L{}ojasiewicz inequality for the projection of the
antigradient to the tangent cone. If the derived limit point lies on the smooth
part of $\mathcal{M}_{\le k}$, i.e. in $\mathcal{M}_k$, this boils down to more
or less known results, but with the benefit that asymptotic convergence rate
estimates (for specific step-sizes) can be obtained without an a priori
curvature bound, simply from the fact that the limit lies on a smooth manifold.
At the same time, one can give a convincing justification for assuming critical
points to lie in $\mathcal{M}_k$: if $X$ is a critical point of $f$ on
$\mathcal{M}_{\le k}$, then either $X$ has rank $k$, or $\nabla f(X) = 0$.
| [
"['Reinhold Schneider' 'André Uschmajew']",
"Reinhold Schneider and Andr\\'e Uschmajew"
]
|
cs.LG stat.AP stat.ML | null | 1402.5360 | null | null | http://arxiv.org/pdf/1402.5360v1 | 2014-02-21T17:24:53Z | 2014-02-21T17:24:53Z | Important Molecular Descriptors Selection Using Self Tuned Reweighted
Sampling Method for Prediction of Antituberculosis Activity | In this paper, a new descriptor selection method for selecting an optimal
combination of important descriptors from sulfonamide derivatives data, named
self tuned reweighted sampling (STRS), is developed. Important descriptors are
defined as those with large absolute coefficients in a multivariate linear
regression model such as partial least squares (PLS). In this study, the
absolute values of the regression coefficients of the PLS model are used as an
index for evaluating the importance of each descriptor. Then, based on the
importance level of each descriptor, STRS sequentially selects N subsets of
descriptors from N Monte Carlo (MC) sampling runs in an iterative and
competitive manner. In each sampling run, a fixed ratio (e.g. 80%) of samples
is first randomly selected to establish a regression model. Next, based on the
regression coefficients, a two-step procedure including rapidly decreasing
function (RDF) based enforced descriptor selection and self tuned sampling
(STS) based competitive descriptor selection is adopted to select the important
descriptors. After running the loops, a number of subsets of descriptors are
obtained and the root mean squared error of cross validation (RMSECV) of PLS
models established with each subset of descriptors is computed. The subset of
descriptors with the lowest RMSECV is considered the optimal descriptor subset.
The performance of the proposed algorithm is evaluated on a sulfonamide
derivative dataset. The results reveal a good characteristic of STRS: it can
usually locate an optimal combination of important descriptors that are
interpretable with respect to the biological property of interest.
Additionally, our study shows that better prediction is obtained by STRS when
compared to full descriptor set PLS modeling and Monte Carlo uninformative
variable elimination (MC-UVE).
| [
"['Doreswamy' 'Chanabasayya M. Vastrad']",
"Doreswamy, Chanabasayya M. Vastrad"
]
|
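A rough sketch of the Monte Carlo loop described in the abstract above, using scikit-learn's PLS regression. The RDF/STS selection steps are simplified here to keeping the descriptors with the largest absolute PLS coefficients, and `n_runs`, `sample_ratio` and `keep` are hypothetical parameters, not values from the paper:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def strs_like_selection(X, y, n_runs=50, sample_ratio=0.8, keep=20):
    n = X.shape[0]
    best_subset, best_rmsecv = None, np.inf
    for _ in range(n_runs):
        # fit PLS on a random subsample and rank descriptors by |coefficient|
        idx = np.random.choice(n, int(sample_ratio * n), replace=False)
        pls = PLSRegression(n_components=5).fit(X[idx], y[idx])
        importance = np.abs(np.ravel(pls.coef_))
        subset = np.argsort(importance)[-keep:]
        # score the candidate subset by RMSECV and keep the best one
        scores = cross_val_score(PLSRegression(n_components=5), X[:, subset], y,
                                 cv=5, scoring="neg_mean_squared_error")
        rmsecv = float(np.sqrt(-scores.mean()))
        if rmsecv < best_rmsecv:
            best_subset, best_rmsecv = subset, rmsecv
    return best_subset, best_rmsecv
```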
stat.ML cs.LG math.OC | null | 1402.5481 | null | null | http://arxiv.org/pdf/1402.5481v4 | 2018-07-19T15:36:29Z | 2014-02-22T05:10:56Z | From Predictive to Prescriptive Analytics | In this paper, we combine ideas from machine learning (ML) and operations
research and management science (OR/MS) in developing a framework, along with
specific methods, for using data to prescribe optimal decisions in OR/MS
problems. In a departure from other work on data-driven optimization and
reflecting our practical experience with the data available in applications of
OR/MS, we consider data consisting, not only of observations of quantities with
direct effect on costs/revenues, such as demand or returns, but predominantly
of observations of associated auxiliary quantities. The main problem of
interest is a conditional stochastic optimization problem, given imperfect
observations, where the joint probability distributions that specify the
problem are unknown. We demonstrate that our proposed solution methods, which
are inspired by ML methods such as local regression, CART, and random forests,
are generally applicable to a wide range of decision problems. We prove that
they are tractable and asymptotically optimal even when data is not iid and may
be censored. We extend this to the case where decision variables may directly
affect uncertainty in unknown ways, such as pricing's effect on demand. As an
analogue to R^2, we develop a metric P termed the coefficient of
prescriptiveness to measure the prescriptive content of data and the efficacy
of a policy from an operations perspective. To demonstrate the power of our
approach in a real-world setting we study an inventory management problem faced
by the distribution arm of an international media conglomerate, which ships an
average of 1bil units per year. We leverage internal data and public online
data harvested from IMDb, Rotten Tomatoes, and Google to prescribe operational
decisions that outperform baseline measures. Specifically, the data we collect,
leveraged by our methods, accounts for an 88\% improvement as measured by our
P.
| [
"['Dimitris Bertsimas' 'Nathan Kallus']",
"Dimitris Bertsimas, Nathan Kallus"
]
|
cs.LG cs.CV | null | 1402.5497 | null | null | http://arxiv.org/pdf/1402.5497v1 | 2014-02-22T09:39:52Z | 2014-02-22T09:39:52Z | Efficient Semidefinite Spectral Clustering via Lagrange Duality | We propose an efficient approach to semidefinite spectral clustering (SSC),
which addresses the Frobenius normalization with the positive semidefinite
(p.s.d.) constraint for spectral clustering. Compared with the original
Frobenius norm approximation based algorithm, the proposed algorithm can more
accurately find the closest doubly stochastic approximation to the affinity
matrix by considering the p.s.d. constraint. In this paper, SSC is formulated
as a semidefinite programming (SDP) problem. In order to solve the high
computational complexity of SDP, we present a dual algorithm based on the
Lagrange dual formalization. Two versions of the proposed algorithm are
proffered: one with less memory usage and the other with faster convergence
rate. The proposed algorithm has much lower time complexity than that of the
standard interior-point based SDP solvers. Experimental results on both UCI
data sets and real-world image data sets demonstrate that 1) compared with the
state-of-the-art spectral clustering methods, the proposed algorithm achieves
better clustering performance; and 2) our algorithm is much more efficient and
can solve larger-scale SSC problems than those standard interior-point SDP
solvers.
| [
"['Yan Yan' 'Chunhua Shen' 'Hanzi Wang']",
"Yan Yan, Chunhua Shen, Hanzi Wang"
]
|
stat.ML cs.IR cs.LG | null | 1402.5565 | null | null | http://arxiv.org/pdf/1402.5565v1 | 2014-02-23T00:26:48Z | 2014-02-23T00:26:48Z | Semi-Supervised Nonlinear Distance Metric Learning via Forests of
Max-Margin Cluster Hierarchies | Metric learning is a key problem for many data mining and machine learning
applications, and has long been dominated by Mahalanobis methods. Recent
advances in nonlinear metric learning have demonstrated the potential power of
non-Mahalanobis distance functions, particularly tree-based functions. We
propose a novel nonlinear metric learning method that uses an iterative,
hierarchical variant of semi-supervised max-margin clustering to construct a
forest of cluster hierarchies, where each individual hierarchy can be
interpreted as a weak metric over the data. By introducing randomness during
hierarchy training and combining the output of many of the resulting
semi-random weak hierarchy metrics, we can obtain a powerful and robust
nonlinear metric model. This method has two primary contributions: first, it is
semi-supervised, incorporating information from both constrained and
unconstrained points. Second, we take a relaxed approach to constraint
satisfaction, allowing the method to satisfy different subsets of the
constraints at different levels of the hierarchy rather than attempting to
simultaneously satisfy all of them. This leads to a more robust learning
algorithm. We compare our method to a number of state-of-the-art benchmarks on
$k$-nearest neighbor classification, large-scale image retrieval and
semi-supervised clustering problems, and find that our algorithm yields results
comparable or superior to the state-of-the-art, and is significantly more
robust to noise.
| [
"David M. Johnson, Caiming Xiong and Jason J. Corso",
"['David M. Johnson' 'Caiming Xiong' 'Jason J. Corso']"
]
|
stat.ME cs.LG math.ST stat.ML stat.TH | null | 1402.5596 | null | null | http://arxiv.org/pdf/1402.5596v2 | 2014-02-28T00:28:21Z | 2014-02-23T10:30:21Z | Exact Post Model Selection Inference for Marginal Screening | We develop a framework for post model selection inference, via marginal
screening, in linear regression. At the core of this framework is a result that
characterizes the exact distribution of linear functions of the response $y$,
conditional on the model being selected (``condition on selection" framework).
This allows us to construct valid confidence intervals and hypothesis tests for
regression coefficients that account for the selection procedure. In contrast
to recent work in high-dimensional statistics, our results are exact
(non-asymptotic) and require no eigenvalue-like assumptions on the design
matrix $X$. Furthermore, the computational cost of marginal regression,
constructing confidence intervals and hypothesis testing is negligible compared
to the cost of linear regression, thus making our methods particularly suitable
for extremely large datasets. Although we focus on marginal screening to
illustrate the applicability of the condition on selection framework, this
framework is much more broadly applicable. We show how to apply the proposed
framework to several other selection procedures including orthogonal matching
pursuit, non-negative least squares, and marginal screening+Lasso.
| [
"Jason D Lee and Jonathan E Taylor",
"['Jason D Lee' 'Jonathan E Taylor']"
]
|
cs.LG | null | 1402.5634 | null | null | http://arxiv.org/pdf/1402.5634v1 | 2014-02-23T16:51:51Z | 2014-02-23T16:51:51Z | To go deep or wide in learning? | To achieve acceptable performance for AI tasks, one can either use
sophisticated feature extraction methods as the first layer in a two-layered
supervised learning model, or learn the features directly using a deep
(multi-layered) model. While the first approach is very problem-specific, the
second approach has computational overheads in learning multiple layers and
fine-tuning of the model. In this paper, we propose an approach called wide
learning based on arc-cosine kernels, that learns a single layer of infinite
width. We propose exact and inexact learning strategies for wide learning and
show that wide learning with a single layer outperforms single-layer as well as
deep architectures of finite width on some benchmark datasets.
| [
"['Gaurav Pandey' 'Ambedkar Dukkipati']",
"Gaurav Pandey and Ambedkar Dukkipati"
]
|
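The wide-learning approach above builds on arc-cosine kernels, which correspond to single layers of infinite width. A sketch of the standard order-1 arc-cosine kernel of Cho and Saul (2009) is shown below as background; the paper's exact and inexact learning strategies are not reproduced:

```python
import numpy as np

def arccos_kernel_order1(X, Y):
    """k_1(x, y) = (1/pi) * |x| * |y| * (sin(t) + (pi - t) * cos(t)), t = angle(x, y)."""
    nx = np.linalg.norm(X, axis=1, keepdims=True)            # (n, 1)
    ny = np.linalg.norm(Y, axis=1, keepdims=True)            # (m, 1)
    cos_t = np.clip((X @ Y.T) / (nx * ny.T), -1.0, 1.0)
    t = np.arccos(cos_t)
    return (nx * ny.T) / np.pi * (np.sin(t) + (np.pi - t) * cos_t)

X = np.random.randn(5, 3)
K = arccos_kernel_order1(X, X)        # kernel matrix usable in any kernel machine
print(K.shape)
```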
cs.IT cs.LG math.IT | null | 1402.5666 | null | null | http://arxiv.org/pdf/1402.5666v2 | 2014-05-12T17:06:23Z | 2014-02-23T20:16:41Z | Dynamic Rate and Channel Selection in Cognitive Radio Systems | In this paper, we investigate dynamic channel and rate selection in cognitive
radio systems which exploit a large number of channels free from primary users.
In such systems, transmitters may rapidly change the selected (channel, rate)
pair to opportunistically learn and track the pair offering the highest
throughput. We formulate the problem of sequential channel and rate selection
as an online optimization problem, and show its equivalence to a {\it
structured} Multi-Armed Bandit problem. The structure stems from inherent
properties of the achieved throughput as a function of the selected channel and
rate. We derive fundamental performance limits satisfied by {\it any} channel
and rate adaptation algorithm, and propose algorithms that achieve (or
approach) these limits. In turn, the proposed algorithms optimally exploit the
inherent structure of the throughput. We illustrate the efficiency of our
algorithms using both test-bed and simulation experiments, in both stationary
and non-stationary radio environments. In stationary environments, the packet
successful transmission probabilities at the various channel and rate pairs do
not evolve over time, whereas in non-stationary environments, they may evolve.
In practical scenarios, the proposed algorithms are able to track the best
channel and rate quite accurately without the need of any explicit measurement
and feedback of the quality of the various channels.
| [
"Richard Combes, Alexandre Proutiere",
"['Richard Combes' 'Alexandre Proutiere']"
]
|
cs.AI cs.CE cs.CV cs.LG | null | 1402.5684 | null | null | http://arxiv.org/pdf/1402.5684v2 | 2014-03-06T19:02:07Z | 2014-02-23T22:01:11Z | Discriminative Functional Connectivity Measures for Brain Decoding | We propose a statistical learning model for classifying cognitive processes
based on distributed patterns of neural activation in the brain, acquired via
functional magnetic resonance imaging (fMRI). In the proposed learning method,
local meshes are formed around each voxel. The distance between voxels in the
mesh is determined by using a functional neighbourhood concept. In order to
define the functional neighbourhood, the similarities between the time series
recorded for voxels are measured and functional connectivity matrices are
constructed. Then, the local mesh for each voxel is formed by including the
functionally closest neighbouring voxels in the mesh. The relationship between
the voxels within a mesh is estimated by using a linear regression model. These
relationship vectors, called Functional Connectivity aware Local Relational
Features (FC-LRF) are then used to train a statistical learning machine. The
proposed method was tested on a recognition memory experiment, including data
pertaining to encoding and retrieval of words belonging to ten different
semantic categories. Two popular classifiers, namely k-nearest neighbour (k-nn)
and Support Vector Machine (SVM), are trained in order to predict the semantic
category of the item being retrieved, based on activation patterns during
encoding. The classification performance of the Functional Mesh Learning model,
which ranges between 62% and 71%, is superior to that of the classical multi-voxel
pattern analysis (MVPA) methods, which ranges between 40% and 48%, for ten
semantic categories.
| [
"['Orhan Firat' 'Mete Ozay' 'Ilke Oztekin' 'Fatos T. Yarman Vural']",
"Orhan Firat and Mete Ozay and Ilke Oztekin and Fatos T. Yarman Vural"
]
|
stat.ML cs.LG | null | 1402.5715 | null | null | http://arxiv.org/pdf/1402.5715v3 | 2015-12-06T04:40:24Z | 2014-02-24T03:58:16Z | Variational Particle Approximations | Approximate inference in high-dimensional, discrete probabilistic models is a
central problem in computational statistics and machine learning. This paper
describes discrete particle variational inference (DPVI), a new approach that
combines key strengths of Monte Carlo, variational and search-based techniques.
DPVI is based on a novel family of particle-based variational approximations
that can be fit using simple, fast, deterministic search techniques. Like Monte
Carlo, DPVI can handle multiple modes, and yields exact results in a
well-defined limit. Like unstructured mean-field, DPVI is based on optimizing a
lower bound on the partition function; when this quantity is not of intrinsic
interest, it facilitates convergence assessment and debugging. Like both Monte
Carlo and combinatorial search, DPVI can take advantage of factorization,
sequential structure, and custom search operators. This paper defines the DPVI
particle-based approximation family and partition function lower bounds, along
with the sequential DPVI and local DPVI algorithm templates for optimizing
them. DPVI is illustrated and evaluated via experiments on lattice Markov
Random Fields, nonparametric Bayesian mixtures and block-models, and parametric
as well as non-parametric hidden Markov models. Results include applications to
real-world spike-sorting and relational modeling problems, and show that DPVI
can offer appealing time/accuracy trade-offs as compared to multiple
alternatives.
| [
"Ardavan Saeedi, Tejas D Kulkarni, Vikash Mansinghka, Samuel Gershman",
"['Ardavan Saeedi' 'Tejas D Kulkarni' 'Vikash Mansinghka' 'Samuel Gershman']"
]
|
q-bio.QM cs.LG stat.ML | 10.1098/rspa.2014.0081 | 1402.5728 | null | null | http://arxiv.org/abs/1402.5728v1 | 2014-02-24T06:07:56Z | 2014-02-24T06:07:56Z | Machine Learning Methods in the Computational Biology of Cancer | The objectives of this "perspective" paper are to review some recent advances
in sparse feature selection for regression and classification, as well as
compressed sensing, and to discuss how these might be used to develop tools to
advance personalized cancer therapy. As an illustration of the possibilities, a
new algorithm for sparse regression is presented, and is applied to predict the
time to tumor recurrence in ovarian cancer. A new algorithm for sparse feature
selection in classification problems is presented, and its validation in
endometrial cancer is briefly discussed. Some open problems are also presented.
| [
"Mathukumalli Vidyasagar",
"['Mathukumalli Vidyasagar']"
]
|
cs.IT cs.LG math.IT math.ST stat.TH | null | 1402.5731 | null | null | http://arxiv.org/pdf/1402.5731v2 | 2014-04-29T19:18:08Z | 2014-02-24T06:20:34Z | Information-Theoretic Bounds for Adaptive Sparse Recovery | We derive an information-theoretic lower bound for sample complexity in
sparse recovery problems where inputs can be chosen sequentially and
adaptively. This lower bound is in terms of a simple mutual information
expression and unifies many different linear and nonlinear observation models.
Using this formula we derive bounds for adaptive compressive sensing (CS),
group testing and 1-bit CS problems. We show that adaptivity cannot decrease
sample complexity in group testing, 1-bit CS and CS with linear sparsity. In
contrast, we show there might be mild performance gains for CS in the sublinear
regime. Our unified analysis also allows characterization of gains due to
adaptivity from a wider perspective on sparse problems.
| [
"['Cem Aksoylar' 'Venkatesh Saligrama']",
"Cem Aksoylar and Venkatesh Saligrama"
]
|
cs.LG | null | 1402.5758 | null | null | http://arxiv.org/pdf/1402.5758v1 | 2014-02-24T09:27:18Z | 2014-02-24T09:27:18Z | Bandits with concave rewards and convex knapsacks | In this paper, we consider a very general model for exploration-exploitation
tradeoff which allows arbitrary concave rewards and convex constraints on the
decisions across time, in addition to the customary limitation on the time
horizon. This model subsumes the classic multi-armed bandit (MAB) model, and
the Bandits with Knapsacks (BwK) model of Badanidiyuru et al.[2013]. We also
consider an extension of this model to allow linear contexts, similar to the
linear contextual extension of the MAB model. We demonstrate that a natural and
simple extension of the UCB family of algorithms for MAB provides a polynomial
time algorithm that has near-optimal regret guarantees for this substantially
more general model, and matches the bounds provided by Badanidiyuru et
al.[2013] for the special case of BwK, which is quite surprising. We also
provide computationally more efficient algorithms by establishing interesting
connections between this problem and other well studied problems/algorithms
such as the Blackwell approachability problem, online convex optimization, and
the Frank-Wolfe technique for convex optimization. We give examples of several
concrete applications, where this more general model of bandits allows for
richer and/or more efficient formulations of the problem.
| [
"Shipra Agrawal and Nikhil R. Devanur",
"['Shipra Agrawal' 'Nikhil R. Devanur']"
]
|
cs.LG cs.CV | null | 1402.5766 | null | null | http://arxiv.org/pdf/1402.5766v1 | 2014-02-24T09:49:04Z | 2014-02-24T09:49:04Z | No more meta-parameter tuning in unsupervised sparse feature learning | We propose a meta-parameter free, off-the-shelf, simple and fast unsupervised
feature learning algorithm, which exploits a new way of optimizing for
sparsity. Experiments on STL-10 show that the method presents state-of-the-art
performance and provides discriminative features that generalize well.
| [
"['Adriana Romero' 'Petia Radeva' 'Carlo Gatta']",
"Adriana Romero, Petia Radeva and Carlo Gatta"
]
|
cs.IT cs.LG math.IT | null | 1402.5803 | null | null | http://arxiv.org/pdf/1402.5803v1 | 2014-02-24T11:55:04Z | 2014-02-24T11:55:04Z | Sparse phase retrieval via group-sparse optimization | This paper deals with sparse phase retrieval, i.e., the problem of estimating
a vector from quadratic measurements under the assumption that few components
are nonzero. In particular, we consider the problem of finding the sparsest
vector consistent with the measurements and reformulate it as a group-sparse
optimization problem with linear constraints. Then, we analyze the convex
relaxation of the latter based on the minimization of a block l1-norm and show
various exact recovery and stability results in the real and complex cases.
Invariance to circular shifts and reflections are also discussed for real
vectors measured via complex matrices.
| [
"Fabien Lauer (LORIA), Henrik Ohlsson",
"['Fabien Lauer' 'Henrik Ohlsson']"
]
|
stat.ML cs.LG | null | 1402.5836 | null | null | http://arxiv.org/pdf/1402.5836v3 | 2016-07-08T22:59:45Z | 2014-02-24T14:27:40Z | Avoiding pathologies in very deep networks | Choosing appropriate architectures and regularization strategies for deep
networks is crucial to good predictive performance. To shed light on this
problem, we analyze the analogous problem of constructing useful priors on
compositions of functions. Specifically, we study the deep Gaussian process, a
type of infinitely-wide, deep neural network. We show that in standard
architectures, the representational capacity of the network tends to capture
fewer degrees of freedom as the number of layers increases, retaining only a
single degree of freedom in the limit. We propose an alternate network
architecture which does not suffer from this pathology. We also examine deep
covariance functions, obtained by composing infinitely many feature transforms.
Lastly, we characterize the class of models obtained by performing dropout on
Gaussian processes.
| [
"['David Duvenaud' 'Oren Rippel' 'Ryan P. Adams' 'Zoubin Ghahramani']",
"David Duvenaud, Oren Rippel, Ryan P. Adams, Zoubin Ghahramani"
]
|
cs.LG stat.ML | null | 1402.5874 | null | null | http://arxiv.org/pdf/1402.5874v2 | 2016-03-21T10:56:40Z | 2014-02-24T16:16:17Z | Predictive Interval Models for Non-parametric Regression | Having a regression model, we are interested in finding two-sided intervals
that are guaranteed to contain at least a desired proportion of the conditional
distribution of the response variable given a specific combination of
predictors. We name such intervals predictive intervals. This work presents a
new method to find two-sided predictive intervals for non-parametric least
squares regression without the homoscedasticity assumption. Our predictive
intervals are built by using tolerance intervals on prediction errors in the
query point's neighborhood. We propose a predictive interval model test and
also use it as a constraint in our hyper-parameter tuning algorithm. This
yields an algorithm that finds the smallest reliable predictive intervals for a
given dataset. We also introduce a measure for comparing different interval
prediction methods that yield intervals of different size and coverage. Our
experiments show that our methods are more reliable, effective and precise than
other interval prediction methods.
| [
"['Mohammad Ghasemi Hamed' 'Mathieu Serrurier' 'Nicolas Durand']",
"Mohammad Ghasemi Hamed, Mathieu Serrurier, Nicolas Durand"
]
|
stat.ML cs.LG | null | 1402.5876 | null | null | http://arxiv.org/pdf/1402.5876v4 | 2016-04-11T11:07:31Z | 2014-02-24T16:19:51Z | Manifold Gaussian Processes for Regression | Off-the-shelf Gaussian Process (GP) covariance functions encode smoothness
assumptions on the structure of the function to be modeled. To model complex
and non-differentiable functions, these smoothness assumptions are often too
restrictive. One way to alleviate this limitation is to find a different
representation of the data by introducing a feature space. This feature space
is often learned in an unsupervised way, which might lead to data
representations that are not useful for the overall regression task. In this
paper, we propose Manifold Gaussian Processes, a novel supervised method that
jointly learns a transformation of the data into a feature space and a GP
regression from the feature space to the observed space. The Manifold GP is a full
GP and allows learning data representations that are useful for the overall
regression task. As a proof-of-concept, we evaluate our approach on complex
non-smooth functions where standard GPs perform poorly, such as step functions
and robotics tasks with contacts.
| [
"['Roberto Calandra' 'Jan Peters' 'Carl Edward Rasmussen'\n 'Marc Peter Deisenroth']",
"Roberto Calandra and Jan Peters and Carl Edward Rasmussen and Marc\n Peter Deisenroth"
]
|
cs.LG cs.AI | null | 1402.5886 | null | null | http://arxiv.org/pdf/1402.5886v1 | 2014-02-24T16:59:21Z | 2014-02-24T16:59:21Z | Near Optimal Bayesian Active Learning for Decision Making | How should we gather information to make effective decisions? We address
Bayesian active learning and experimental design problems, where we
sequentially select tests to reduce uncertainty about a set of hypotheses.
Instead of minimizing uncertainty per se, we consider a set of overlapping
decision regions of these hypotheses. Our goal is to drive uncertainty into a
single decision region as quickly as possible.
We identify necessary and sufficient conditions for correctly identifying a
decision region that contains all hypotheses consistent with observations. We
develop a novel Hyperedge Cutting (HEC) algorithm for this problem, and prove
that it is competitive with the intractable optimal policy. Our efficient
implementation of the algorithm relies on computing subsets of the complete
homogeneous symmetric polynomials. Finally, we demonstrate its effectiveness on
two practical applications: approximate comparison-based learning and active
localization using a robot manipulator.
| [
"Shervin Javdani, Yuxin Chen, Amin Karbasi, Andreas Krause, J. Andrew\n Bagnell, Siddhartha Srinivasa",
"['Shervin Javdani' 'Yuxin Chen' 'Amin Karbasi' 'Andreas Krause'\n 'J. Andrew Bagnell' 'Siddhartha Srinivasa']"
]
|
stat.ML cs.LG | null | 1402.5902 | null | null | http://arxiv.org/pdf/1402.5902v2 | 2015-02-11T23:38:42Z | 2014-02-24T17:40:09Z | On Learning from Label Proportions | Learning from Label Proportions (LLP) is a learning setting, where the
training data is provided in groups, or "bags", and only the proportion of each
class in each bag is known. The task is to learn a model to predict the class
labels of the individual instances. LLP has broad applications in political
science, marketing, healthcare, and computer vision. This work answers the
fundamental question, when and why LLP is possible, by introducing a general
framework, Empirical Proportion Risk Minimization (EPRM). EPRM learns an
instance label classifier to match the given label proportions on the training
data. Our result is based on a two-step analysis. First, we provide a VC bound
on the generalization error of the bag proportions. We show that the bag sample
complexity is only mildly sensitive to the bag size. Second, we show that under
some mild assumptions, good bag proportion prediction guarantees good instance
label prediction. The results together provide a formal guarantee that the
individual labels can indeed be learned in the LLP setting. We discuss
applications of the analysis, including justification of LLP algorithms,
learning with population proportions, and a paradigm for learning algorithms
with privacy guarantees. We also demonstrate the feasibility of LLP based on a
case study in real-world setting: predicting income based on census data.
| [
"['Felix X. Yu' 'Krzysztof Choromanski' 'Sanjiv Kumar' 'Tony Jebara'\n 'Shih-Fu Chang']",
"Felix X. Yu, Krzysztof Choromanski, Sanjiv Kumar, Tony Jebara, Shih-Fu\n Chang"
]
|
cs.LG cs.AI | null | 1402.5988 | null | null | http://arxiv.org/pdf/1402.5988v2 | 2014-11-22T13:24:29Z | 2014-02-24T21:22:51Z | Incremental Learning of Event Definitions with Inductive Logic
Programming | Event recognition systems rely on properly engineered knowledge bases of
event definitions to infer occurrences of events in time. The manual
development of such knowledge is a tedious and error-prone task, thus
event-based applications may benefit from automated knowledge construction
techniques, such as Inductive Logic Programming (ILP), which combines machine
learning with the declarative and formal semantics of First-Order Logic.
However, learning temporal logical formalisms, which are typically utilized by
logic-based Event Recognition systems is a challenging task, which most ILP
systems cannot fully undertake. In addition, event-based data is usually
massive and collected at different times and under various circumstances.
Ideally, systems that learn from temporal data should be able to operate in an
incremental mode, that is, revise prior constructed knowledge in the face of
new evidence. Most ILP systems are batch learners, in the sense that in order
to account for new evidence they have no alternative but to forget past
knowledge and learn from scratch. Given the increased inherent complexity of
ILP and the volumes of real-life temporal data, this results in algorithms that
scale poorly. In this work we present an incremental method for learning and
revising event-based knowledge, in the form of Event Calculus programs. The
proposed algorithm relies on abductive-inductive learning and comprises a
scalable clause refinement methodology, based on a compressive summarization of
clause coverage in a stream of examples. We present an empirical evaluation of
our approach on real and synthetic data from activity recognition and city
transport applications.
| [
"Nikos Katzouris, Alexander Artikis, George Paliouras",
"['Nikos Katzouris' 'Alexander Artikis' 'George Paliouras']"
]
|
cs.LG cs.DL | null | 1402.6013 | null | null | http://arxiv.org/pdf/1402.6013v1 | 2014-02-24T23:12:42Z | 2014-02-24T23:12:42Z | Open science in machine learning | We present OpenML and mldata, open science platforms that provide easy
access to machine learning data, software and results to encourage further
study and application. They go beyond the more traditional repositories for
data sets and software packages in that they allow researchers to also easily
share the results they obtained in experiments and to compare their solutions
with those of others.
| [
"Joaquin Vanschoren and Mikio L. Braun and Cheng Soon Ong",
"['Joaquin Vanschoren' 'Mikio L. Braun' 'Cheng Soon Ong']"
]
|
cs.AI cs.LG | null | 1402.6028 | null | null | http://arxiv.org/pdf/1402.6028v1 | 2014-02-25T01:34:43Z | 2014-02-25T01:34:43Z | Algorithms for multi-armed bandit problems | Although many algorithms for the multi-armed bandit problem are
well-understood theoretically, empirical confirmation of their effectiveness is
generally scarce. This paper presents a thorough empirical study of the most
popular multi-armed bandit algorithms. Three important observations can be made
from our results. Firstly, simple heuristics such as epsilon-greedy and
Boltzmann exploration outperform theoretically sound algorithms on most
settings by a significant margin. Secondly, the performance of most algorithms
varies dramatically with the parameters of the bandit problem. Our study
identifies for each algorithm the settings where it performs well, and the
settings where it performs poorly. Thirdly, the algorithms' performance
relative to each other is affected only by the number of bandit arms and the
variance of the rewards. This finding may guide the design of subsequent
empirical evaluations. In the second part of the paper, we turn our attention
to an important area of application of bandit algorithms: clinical trials.
Although the design of clinical trials has been one of the principal practical
problems motivating research on multi-armed bandits, bandit algorithms have
never been evaluated as potential treatment allocation strategies. Using data
from a real study, we simulate the outcome that a 2001-2002 clinical trial
would have had if bandit algorithms had been used to allocate patients to
treatments. We find that an adaptive trial would have successfully treated at
least 50% more patients, while significantly reducing the number of adverse
effects and increasing patient retention. At the end of the trial, the best
treatment could have still been identified with a high level of statistical
confidence. Our findings demonstrate that bandit algorithms are attractive
alternatives to current adaptive treatment allocation strategies.
| [
"Volodymyr Kuleshov and Doina Precup",
"['Volodymyr Kuleshov' 'Doina Precup']"
]
|
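For reference, a minimal sketch of the two simple heuristics highlighted in the abstract above, epsilon-greedy and Boltzmann (softmax) exploration, acting on empirical mean rewards; the bookkeeping is the usual one and not specific to this paper:

```python
import numpy as np

def epsilon_greedy(means, epsilon=0.1):
    if np.random.rand() < epsilon:
        return np.random.randint(len(means))   # explore a random arm
    return int(np.argmax(means))               # exploit the empirically best arm

def boltzmann(means, temperature=0.1):
    z = np.asarray(means, dtype=float) / temperature
    p = np.exp(z - z.max())                    # numerically stable softmax
    p /= p.sum()
    return int(np.random.choice(len(means), p=p))
```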
cs.LG cs.MS stat.ML | null | 1402.6076 | null | null | http://arxiv.org/pdf/1402.6076v1 | 2014-02-25T07:50:50Z | 2014-02-25T07:50:50Z | Machine Learning at Scale | It takes skill to build a meaningful predictive model even with the abundance
of implementations of modern machine learning algorithms and readily available
computing resources. Building a model becomes challenging if hundreds of
terabytes of data need to be processed to produce the training data set. In a
digital advertising technology setting, we are faced with the need to build
thousands of such models that predict user behavior and power advertising
campaigns in a 24/7 chaotic real-time production environment. As data
scientists, we also have to convince other internal departments critical to
implementation success, our management, and our customers that our machine
learning system works. In this paper, we present the details of the design and
implementation of an automated, robust machine learning platform that impacts
billions of advertising impressions monthly. This platform enables us to
continuously optimize thousands of campaigns over hundreds of millions of
users, on multiple continents, against varying performance objectives.
| [
"['Sergei Izrailev' 'Jeremy M. Stanley']",
"Sergei Izrailev and Jeremy M. Stanley"
]
|
cs.LG cs.AI | null | 1402.6077 | null | null | http://arxiv.org/pdf/1402.6077v1 | 2014-02-25T07:53:49Z | 2014-02-25T07:53:49Z | Inductive Logic Boosting | Recent years have seen a surge of interest in Probabilistic Logic Programming
(PLP) and Statistical Relational Learning (SRL) models that combine logic with
probabilities. Structure learning of these systems is an intersection area of
Inductive Logic Programming (ILP) and statistical learning (SL). However, ILP
cannot deal with probabilities, and SL cannot model relational hypotheses. The
biggest challenge of integrating these two machine learning frameworks is how
to estimate the probability of a logic clause only from the observation of
grounded logic atoms. Many current methods model a joint probability by
representing a clause as a graphical model and literals as vertices in it. This
model is still too complicated and can only be approximated by
pseudo-likelihood. We propose the Inductive Logic Boosting framework, which
transforms the relational dataset into a feature-based dataset, induces logic
rules by boosting Problog Rule Trees, and relaxes the independence constraint
of pseudo-likelihood. Experimental evaluation on benchmark datasets
demonstrates that the AUC-PR and AUC-ROC values of the ILP-learned rules are
higher than those of current state-of-the-art SRL methods.
| [
"['Wang-Zhou Dai' 'Zhi-Hua Zhou']",
"Wang-Zhou Dai and Zhi-Hua Zhou"
]
|
stat.ML cs.LG | null | 1402.6133 | null | null | http://arxiv.org/pdf/1402.6133v1 | 2014-02-25T11:11:28Z | 2014-02-25T11:11:28Z | Bayesian Sample Size Determination of Vibration Signals in Machine
Learning Approach to Fault Diagnosis of Roller Bearings | Sample size determination for a data set is an important statistical process
for analyzing the data to an optimum level of accuracy and using minimum
computational work. The applications of this process are credible in every
domain which deals with large data sets and high computational work. This study
uses Bayesian analysis for determination of minimum sample size of vibration
signals to be considered for fault diagnosis of a bearing using pre-defined
parameters such as the inverse standard probability and the acceptable margin
of error. Thus an analytical formula for sample size determination is
introduced. The fault diagnosis of the bearing is done using a machine learning
approach using an entropy-based J48 algorithm. The following method will help
researchers involved in fault diagnosis to determine the minimum sample size of
data needed for analysis with good statistical stability and precision.
| [
"['Siddhant Sahu' 'V. Sugumaran']",
"Siddhant Sahu, V. Sugumaran"
]
|
cs.IR cs.CL cs.LG | null | 1402.6238 | null | null | http://arxiv.org/pdf/1402.6238v1 | 2014-02-25T16:52:05Z | 2014-02-25T16:52:05Z | Improving Collaborative Filtering based Recommenders using Topic
Modelling | Standard Collaborative Filtering (CF) algorithms make use of interactions
between users and items in the form of implicit or explicit ratings alone for
generating recommendations. Similarity among users or items is calculated
purely based on rating overlap in this case, without considering explicit
properties of users or items involved, limiting their applicability in domains
with very sparse rating spaces. In many domains such as movies, news or
electronic commerce recommenders, considerable contextual data in text form
describing item properties is available along with the rating data, which could
be utilized to improve recommendation quality. In this paper, we propose a novel
approach to improve standard CF based recommenders by utilizing latent
Dirichlet allocation (LDA) to learn latent properties of items, expressed in
terms of topic proportions, derived from their textual description. We infer
user's topic preferences or persona in the same latent space, based on her
historical ratings. While computing similarity between users, we make use of a
combined similarity measure involving rating overlap as well as similarity in
the latent topic space. This approach alleviates sparsity problem as it allows
calculation of similarity between users even if they have not rated any items
in common. Our experiments on multiple public datasets indicate that the
proposed hybrid approach significantly outperforms standard user-based and
item-based CF recommenders in terms of classification accuracy metrics such as
precision, recall and f-measure.
| [
"['Jobin Wilson' 'Santanu Chaudhury' 'Brejesh Lall' 'Prateek Kapadia']",
"Jobin Wilson, Santanu Chaudhury, Brejesh Lall, Prateek Kapadia"
]
|
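A minimal sketch of the combined similarity described above: a convex combination of rating-overlap similarity and similarity of LDA topic proportions. The weight `alpha` is a hypothetical parameter, not a value from the paper:

```python
import numpy as np

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v) / denom if denom > 0 else 0.0

def combined_similarity(ratings_u, ratings_v, topics_u, topics_v, alpha=0.5):
    # the topic term still provides a signal when the rating vectors share no items
    return alpha * cosine(ratings_u, ratings_v) + (1 - alpha) * cosine(topics_u, topics_v)
```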
cs.DS cs.CC cs.LG | null | 1402.6278 | null | null | http://arxiv.org/pdf/1402.6278v4 | 2015-09-13T04:53:25Z | 2014-02-25T19:00:15Z | Sample Complexity Bounds on Differentially Private Learning via
Communication Complexity | In this work we analyze the sample complexity of classification by
differentially private algorithms. Differential privacy is a strong and
well-studied notion of privacy introduced by Dwork et al. (2006) that ensures
that the output of an algorithm leaks little information about the data point
provided by any of the participating individuals. Sample complexity of private
PAC and agnostic learning was studied in a number of prior works starting with
(Kasiviswanathan et al., 2008) but a number of basic questions still remain
open, most notably whether learning with privacy requires more samples than
learning without privacy.
We show that the sample complexity of learning with (pure) differential
privacy can be arbitrarily higher than the sample complexity of learning
without the privacy constraint or the sample complexity of learning with
approximate differential privacy. Our second contribution and the main tool is
an equivalence between the sample complexity of (pure) differentially private
learning of a concept class $C$ (or $SCDP(C)$) and the randomized one-way
communication complexity of the evaluation problem for concepts from $C$. Using
this equivalence we prove the following bounds:
1. $SCDP(C) = \Omega(LDim(C))$, where $LDim(C)$ is the Littlestone's (1987)
dimension characterizing the number of mistakes in the online-mistake-bound
learning model. Known bounds on $LDim(C)$ then imply that $SCDP(C)$ can be much
higher than the VC-dimension of $C$.
2. For any $t$, there exists a class $C$ such that $LDim(C)=2$ but $SCDP(C)
\geq t$.
3. For any $t$, there exists a class $C$ such that the sample complexity of
(pure) $\alpha$-differentially private PAC learning is $\Omega(t/\alpha)$ but
the sample complexity of the relaxed $(\alpha,\beta)$-differentially private
PAC learning is $O(\log(1/\beta)/\alpha)$. This resolves an open problem of
Beimel et al. (2013b).
| [
"Vitaly Feldman and David Xiao",
"['Vitaly Feldman' 'David Xiao']"
]
|
math.OC cs.LG | null | 1402.6361 | null | null | http://arxiv.org/pdf/1402.6361v1 | 2014-02-25T22:06:58Z | 2014-02-25T22:06:58Z | Oracle-Based Robust Optimization via Online Learning | Robust optimization is a common framework in optimization under uncertainty
when the problem parameters are not known, but it is rather known that the
parameters belong to some given uncertainty set. In the robust optimization
framework the problem solved is a min-max problem where a solution is judged
according to its performance on the worst possible realization of the
parameters. In many cases, a straightforward solution of the robust
optimization problem of a certain type requires solving an optimization problem
of a more complicated type, which is in some cases even NP-hard. For example,
solving a robust conic quadratic program, such as those arising in robust SVM
with ellipsoidal uncertainty, leads in general to a semidefinite program. In this
paper we develop a method for approximately solving a robust optimization
problem using tools from online convex optimization, where in every stage a
standard (non-robust) optimization program is solved. Our algorithms find an
approximate robust solution using a number of calls to an oracle that solves
the original (non-robust) problem that is inversely proportional to the square
of the target accuracy.
| [
"Aharon Ben-Tal, Elad Hazan, Tomer Koren, Shie Mannor",
"['Aharon Ben-Tal' 'Elad Hazan' 'Tomer Koren' 'Shie Mannor']"
]
|
cs.LG | null | 1402.6552 | null | null | http://arxiv.org/pdf/1402.6552v1 | 2014-02-26T14:29:33Z | 2014-02-26T14:29:33Z | Renewable Energy Prediction using Weather Forecasts for Optimal
Scheduling in HPC Systems | The objective of the GreenPAD project is to use green energy (wind, solar and
biomass) for powering data-centers that are used to run HPC jobs. As a part of
this it is important to predict the Renewable (Wind) energy for efficient
scheduling (executing jobs that require higher energy when there is more green
energy available and vice-versa). For predicting the wind energy we first
analyze the historical data to find a statistical model that gives relation
between wind energy and weather attributes. Then we use this model based on the
weather forecast data to predict the green energy availability in the future.
Using the green energy prediction obtained from the statistical model we are
able to precompute job schedules for maximizing the green energy utilization in
the future. We propose a model which uses live weather data in addition to
machine learning techniques (which can predict future deviations in weather
conditions based on current deviations from the forecast) to make on-the-fly
changes to the precomputed schedule (based on green energy prediction).
For this we first analyze the data using histograms and simple statistical
tools such as correlation. In addition we build a (correlation) regression model
for finding the relation between wind energy availability and weather
attributes (temperature, cloud cover, air pressure, wind speed / direction,
precipitation and sunshine). We also analyze different algorithms and machine
learning techniques for optimizing the job schedules for maximizing the green
energy utilization.
| [
"Ankur Sahai",
"['Ankur Sahai']"
]
|
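A minimal sketch of the prediction step described above, assuming a plain linear regression from weather attributes to wind energy; the project's actual statistical model and feature set may differ:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# hist_weather: hourly weather attributes (wind speed, pressure, temperature, ...)
# hist_energy:  measured wind energy for the same hours (synthetic here)
hist_weather = np.random.rand(1000, 4)
hist_energy = 3.0 * hist_weather[:, 0] + 0.1 * np.random.randn(1000)

model = LinearRegression().fit(hist_weather, hist_energy)
forecast_weather = np.random.rand(24, 4)              # next-day weather forecast
predicted_energy = model.predict(forecast_weather)    # fed to the job scheduler
```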
cs.LG cs.DS cs.GT | null | 1402.6779 | null | null | http://arxiv.org/pdf/1402.6779v6 | 2015-07-31T18:31:27Z | 2014-02-27T03:17:19Z | Resourceful Contextual Bandits | We study contextual bandits with ancillary constraints on resources, which
are common in real-world applications such as choosing ads or dynamic pricing
of items. We design the first algorithm for solving these problems that handles
constrained resources other than time, and improves over a trivial reduction to
the non-contextual case. We consider very general settings for both contextual
bandits (arbitrary policy sets, e.g. Dudik et al. (UAI'11)) and bandits with
resource constraints (bandits with knapsacks, Badanidiyuru et al. (FOCS'13)),
and prove a regret guarantee with near-optimal statistical properties.
| [
"['Ashwinkumar Badanidiyuru' 'John Langford' 'Aleksandrs Slivkins']",
"Ashwinkumar Badanidiyuru and John Langford and Aleksandrs Slivkins"
]
|
cs.LG cs.DB | null | 1402.6859 | null | null | http://arxiv.org/pdf/1402.6859v1 | 2014-02-27T11:07:00Z | 2014-02-27T11:07:00Z | Outlier Detection using Improved Genetic K-means | The outlier detection problem in some cases is similar to the classification
problem. For example, the main concern of clustering-based outlier detection
algorithms is to find clusters and outliers, which are often regarded as noise
that should be removed in order to make more reliable clustering. In this
article, we present an algorithm that provides outlier detection and data
clustering simultaneously. The algorithm improves the estimation of the
centroids of the generative distribution during the process of clustering and
outlier discovery. The proposed algorithm consists of two stages. The first
stage applies an improved genetic k-means algorithm (IGK), while the second
stage iteratively removes the vectors which are far from their cluster
centroids.
| [
"['M. H. Marghny' 'Ahmed I. Taloba']",
"M. H. Marghny, Ahmed I. Taloba"
]
|
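A sketch of the second stage described above, with plain k-means standing in for the improved genetic k-means (IGK) of the first stage; the distance threshold (mean plus two standard deviations) is an assumption for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

def remove_outliers(X, n_clusters=3, n_iters=5):
    keep = np.arange(len(X))
    for _ in range(n_iters):
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(X[keep])
        d = np.linalg.norm(X[keep] - km.cluster_centers_[km.labels_], axis=1)
        inliers = d < d.mean() + 2 * d.std()
        if inliers.all():
            break
        keep = keep[inliers]
    return keep            # indices of vectors retained after outlier removal
```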
cs.IR cs.LG cs.SD | 10.1109/TASLP.2014.2357676 | 1402.6926 | null | null | http://arxiv.org/abs/1402.6926v3 | 2014-09-28T23:33:44Z | 2014-02-27T14:51:48Z | Sequential Complexity as a Descriptor for Musical Similarity | We propose string compressibility as a descriptor of temporal structure in
audio, for the purpose of determining musical similarity. Our descriptors are
based on computing track-wise compression rates of quantised audio features,
using multiple temporal resolutions and quantisation granularities. To verify
that our descriptors capture musically relevant information, we incorporate our
descriptors into similarity rating prediction and song year prediction tasks.
We base our evaluation on a dataset of 15500 track excerpts of Western popular
music, for which we obtain 7800 web-sourced pairwise similarity ratings. To
assess the agreement among similarity ratings, we perform an evaluation under
controlled conditions, obtaining a rank correlation of 0.33 between intersected
sets of ratings. Combined with bag-of-features descriptors, we obtain
performance gains of 31.1% and 10.9% for similarity rating prediction and song
year prediction. For both tasks, analysis of selected descriptors reveals that
representing features at multiple time scales benefits prediction accuracy.
| [
"Peter Foster, Matthias Mauch and Simon Dixon",
"['Peter Foster' 'Matthias Mauch' 'Simon Dixon']"
]
|
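A sketch of a compressibility descriptor in the spirit of the abstract above: quantise a feature sequence and take the zlib compression rate of the resulting byte string. The quantisation granularity and feature choice are placeholders, not the paper's configuration:

```python
import zlib
import numpy as np

def compression_rate(features, n_levels=16):
    """features: (n_frames, n_dims) array; returns compressed-to-raw size ratio."""
    lo, hi = features.min(axis=0), features.max(axis=0)
    q = np.floor((features - lo) / (hi - lo + 1e-12) * (n_levels - 1))
    raw = q.astype(np.uint8).tobytes()
    return len(zlib.compress(raw)) / len(raw)
```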
cs.LG cs.DC cs.NA stat.ML | null | 1402.6964 | null | null | http://arxiv.org/pdf/1402.6964v1 | 2014-02-27T16:41:26Z | 2014-02-27T16:41:26Z | Scalable methods for nonnegative matrix factorizations of near-separable
tall-and-skinny matrices | Numerous algorithms are used for nonnegative matrix factorization under the
assumption that the matrix is nearly separable. In this paper, we show how to
make these algorithms efficient for data matrices that have many more rows than
columns, so-called "tall-and-skinny matrices". One key component to these
improved methods is an orthogonal matrix transformation that preserves the
separability of the NMF problem. Our final methods need a single pass over the
data matrix and are suitable for streaming, multi-core, and MapReduce
architectures. We demonstrate the efficacy of these algorithms on
terabyte-sized synthetic matrices and real-world matrices from scientific
computing and bioinformatics.
| [
"['Austin R. Benson' 'Jason D. Lee' 'Bartek Rajwa' 'David F. Gleich']",
"Austin R. Benson, Jason D. Lee, Bartek Rajwa, David F. Gleich"
]
|
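A sketch of the single-pass idea described above, under the assumption that the orthogonal transformation is a QR factorization of the tall-and-skinny matrix, after which a simple successive-projection column selection runs on the small factor R; this is an illustration, not the paper's algorithm verbatim:

```python
import numpy as np

def select_anchor_columns(X, r):
    """Pick r 'anchor' columns of a tall-and-skinny nonnegative matrix X."""
    R = np.linalg.qr(X, mode='r')                 # small n x n factor; one pass over X
    cols = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))   # column of largest norm
        cols.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)                # project the chosen direction out
    return cols
```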
cs.LG | null | 1402.7001 | null | null | http://arxiv.org/pdf/1402.7001v1 | 2014-02-27T18:31:33Z | 2014-02-27T18:31:33Z | Marginalizing Corrupted Features | The goal of machine learning is to develop predictors that generalize well to
test data. Ideally, this is achieved by training on an almost infinitely large
training data set that captures all variations in the data distribution. In
practical learning settings, however, we do not have infinite data and our
predictors may overfit. Overfitting may be combatted, for example, by adding a
regularizer to the training objective or by defining a prior over the model
parameters and performing Bayesian inference. In this paper, we propose a
third, alternative approach to combat overfitting: we extend the training set
with infinitely many artificial training examples that are obtained by
corrupting the original training data. We show that this approach is practical
and efficient for a range of predictors and corruption models. Our approach,
called marginalized corrupted features (MCF), trains robust predictors by
minimizing the expected value of the loss function under the corruption model.
We show empirically on a variety of data sets that MCF classifiers can be
trained efficiently, may generalize substantially better to test data, and are
also more robust to feature deletion at test time.
| [
"Laurens van der Maaten, Minmin Chen, Stephen Tyree and Kilian\n Weinberger",
"['Laurens van der Maaten' 'Minmin Chen' 'Stephen Tyree'\n 'Kilian Weinberger']"
]
|
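A sketch of the marginalization idea for one concrete case: a linear predictor with squared loss under unbiased blankout noise (each feature zeroed with probability q, otherwise rescaled by 1/(1-q)), for which the expectation over infinitely many corrupted copies has a closed form. Other losses and corruption models treated in the paper differ:

```python
import numpy as np

def expected_sq_loss_blankout(w, X, y, q):
    """Average over the data of E_corruption[(w . x_tilde - y)^2] in closed form."""
    residual = X @ w - y                               # E[x_tilde] = x, so the mean term is unchanged
    var_term = (q / (1 - q)) * ((X ** 2) @ (w ** 2))   # per-example variance added by the corruption
    return float(np.mean(residual ** 2 + var_term))
```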
stat.ML cs.LG | null | 1402.7005 | null | null | http://arxiv.org/pdf/1402.7005v1 | 2014-02-27T18:38:02Z | 2014-02-27T18:38:02Z | Bayesian Multi-Scale Optimistic Optimization | Bayesian optimization is a powerful global optimization technique for
expensive black-box functions. One of its shortcomings is that it requires
auxiliary optimization of an acquisition function at each iteration. This
auxiliary optimization can be costly and very hard to carry out in practice.
Moreover, it creates serious theoretical concerns, as most of the convergence
results assume that the exact optimum of the acquisition function can be found.
In this paper, we introduce a new technique for efficient global optimization
that combines Gaussian process confidence bounds and treed simultaneous
optimistic optimization to eliminate the need for auxiliary optimization of
acquisition functions. The experiments with global optimization benchmarks and
a novel application to automatic information extraction demonstrate that the
resulting technique is more efficient than the two approaches from which it
draws inspiration. Unlike most theoretical analyses of Bayesian optimization
with Gaussian processes, our finite-time convergence rate proofs do not require
exact optimization of an acquisition function. That is, our approach eliminates
the unsatisfactory assumption that a difficult, potentially NP-hard, problem
has to be solved in order to obtain vanishing regret rates.
| [
"Ziyu Wang, Babak Shakibi, Lin Jin, Nando de Freitas",
"['Ziyu Wang' 'Babak Shakibi' 'Lin Jin' 'Nando de Freitas']"
]
|
cs.CE cs.LG | 10.1016/j.neuroimage.2014.09.060 | 1402.7015 | null | null | http://arxiv.org/abs/1402.7015v6 | 2014-11-07T11:27:19Z | 2014-02-27T18:50:58Z | Data-driven HRF estimation for encoding and decoding models | Despite the common usage of a canonical, data-independent, hemodynamic
response function (HRF), it is known that the shape of the HRF varies across
brain regions and subjects. This suggests that a data-driven estimation of this
function could lead to more statistical power when modeling BOLD fMRI data.
However, unconstrained estimation of the HRF can yield highly unstable results
when the number of free parameters is large. We develop a method for the joint
estimation of activation and HRF using a rank constraint causing the estimated
HRF to be equal across events/conditions, yet permitting it to be different
across voxels. Model estimation leads to an optimization problem that we
propose to solve with an efficient quasi-Newton method exploiting fast gradient
computations. This model, called GLM with Rank-1 constraint (R1-GLM), can be
extended to the setting of GLM with separate designs which has been shown to
improve decoding accuracy in brain activity decoding experiments. We compare 10
different HRF modeling methods in terms of encoding and decoding score in two
different datasets. Our results show that the R1-GLM model significantly
outperforms competing methods in both encoding and decoding settings,
positioning it as an attractive method in terms of both accuracy and
computational efficiency.
| [
"Fabian Pedregosa (INRIA Saclay - Ile de France, INRIA Paris -\n Rocquencourt), Michael Eickenberg (INRIA Saclay - Ile de France, LNAO),\n Philippe Ciuciu (INRIA Saclay - Ile de France, NEUROSPIN), Bertrand Thirion\n (INRIA Saclay - Ile de France, NEUROSPIN), Alexandre Gramfort (LTCI)",
"['Fabian Pedregosa' 'Michael Eickenberg' 'Philippe Ciuciu'\n 'Bertrand Thirion' 'Alexandre Gramfort']"
]
|
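A minimal sketch of the rank-1 structure described in the HRF entry above (arXiv:1402.7015): the basis-by-condition coefficient matrix is constrained to an outer product h * beta^T, so one HRF shape is shared across conditions while each condition keeps its own amplitude, and the squared residual is minimized with a quasi-Newton method. The design matrix, sizes, and initialization below are hypothetical, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def r1_glm_fit(X, y, n_basis, n_cond):
    """Sketch of a rank-1 GLM: the (n_basis x n_cond) coefficient matrix is
    constrained to outer(h, beta); fit by L-BFGS (quasi-Newton) on ||y - X b||^2,
    where b is the row-major vectorization of the rank-1 matrix."""
    def unpack(p):
        return p[:n_basis], p[n_basis:]
    def loss_and_grad(p):
        h, beta = unpack(p)
        B = np.outer(h, beta)                      # rank-1 coefficient matrix
        r = y - X @ B.ravel()
        G = -2.0 * (X.T @ r).reshape(n_basis, n_cond)   # gradient wrt B entries
        return r @ r, np.concatenate([G @ beta, G.T @ h])   # chain rule to (h, beta)
    p0 = np.concatenate([np.ones(n_basis) / n_basis, np.ones(n_cond)])
    res = minimize(loss_and_grad, p0, jac=True, method="L-BFGS-B")
    return unpack(res.x)

# Toy usage with a random design and hypothetical sizes.
rng = np.random.default_rng(0)
T, J, K = 300, 6, 4
X = rng.normal(size=(T, J * K))
h_true, b_true = rng.normal(size=J), rng.normal(size=K)
y = X @ np.outer(h_true, b_true).ravel() + 0.01 * rng.normal(size=T)
h_hat, b_hat = r1_glm_fit(X, y, J, K)
print(np.corrcoef(np.outer(h_hat, b_hat).ravel(),
                  np.outer(h_true, b_true).ravel())[0, 1])
```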
cs.LG | null | 1402.7025 | null | null | http://arxiv.org/pdf/1402.7025v2 | 2014-03-04T21:12:43Z | 2014-02-26T10:47:09Z | Exploiting the Statistics of Learning and Inference | When dealing with datasets containing a billion instances or with simulations
that require a supercomputer to execute, computational resources become part of
the equation. We can improve the efficiency of learning and inference by
exploiting their inherent statistical nature. We propose algorithms that
exploit the redundancy of data relative to a model by subsampling data-cases
for every update and reasoning about the uncertainty created in this process.
In the context of learning we propose to test for the probability that a
stochastically estimated gradient points more than 180 degrees in the wrong
direction. In the context of MCMC sampling we use stochastic gradients to
improve the efficiency of MCMC updates, and hypothesis tests based on adaptive
mini-batches to decide whether to accept or reject a proposed parameter update.
Finally, we argue that in the context of likelihood-free MCMC one needs to
store all the information revealed by all simulations, for instance in a
Gaussian process. We conclude that Bayesian methods will continue to play a
crucial role in the era of big data and big simulations, but only if we
overcome a number of computational challenges.
| [
"Max Welling",
"['Max Welling']"
]
|
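For the entry above, a loosely hedged sketch of one of its ideas, testing whether a stochastically estimated gradient points more than 180 degrees in the wrong direction: project per-example gradients onto the proposed direction and apply a one-sample t-test. The actual statistic and decision rule in the paper may differ; `direction_is_trustworthy` is hypothetical.

```python
import numpy as np
from scipy import stats

def direction_is_trustworthy(per_example_grads, direction, alpha=0.05):
    """Sketch: each row is one example's gradient. Test whether the mean
    projection onto `direction` is significantly positive, i.e. whether the
    mini-batch supports the claim that the true gradient is not pointing
    in the opposite half-space."""
    proj = per_example_grads @ direction
    t, p_two_sided = stats.ttest_1samp(proj, 0.0)
    p_one_sided = p_two_sided / 2.0 if t > 0 else 1.0 - p_two_sided / 2.0
    return p_one_sided < alpha

# Toy check: noisy per-example gradients whose mean points along [1, 0].
rng = np.random.default_rng(0)
grads = rng.normal(loc=[0.5, 0.0], scale=1.0, size=(128, 2))
print(direction_is_trustworthy(grads, np.array([1.0, 0.0])))
```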
cs.LG stat.ML | null | 1402.7344 | null | null | http://arxiv.org/pdf/1402.7344v2 | 2015-03-06T03:15:39Z | 2014-02-28T18:54:07Z | An Incidence Geometry approach to Dictionary Learning | We study the Dictionary Learning (aka Sparse Coding) problem of obtaining a
sparse representation of data points, by learning \emph{dictionary vectors}
upon which the data points can be written as sparse linear combinations. We
view this problem from a geometry perspective as the spanning set of a subspace
arrangement, and focus on understanding the case when the underlying hypergraph
of the subspace arrangement is specified. For this Fitted Dictionary Learning
problem, we completely characterize the combinatorics of the associated
subspace arrangements (i.e.\ their underlying hypergraphs). Specifically, a
combinatorial rigidity-type theorem is proven for a type of geometric incidence
system. The theorem characterizes the hypergraphs of subspace arrangements that
generically yield (a) at least one dictionary (b) a locally unique dictionary
(i.e.\ at most a finite number of isolated dictionaries) of the specified size.
We are unaware of prior application of combinatorial rigidity techniques in the
setting of Dictionary Learning, or even in machine learning. We also provide a
systematic classification of problems related to Dictionary Learning together
with various algorithms, their assumptions and performance.
| [
"['Meera Sitharam' 'Mohamad Tarifi' 'Menghan Wang']",
"Meera Sitharam, Mohamad Tarifi, Menghan Wang"
]
|
cs.SI cs.LG | null | 1403.0057 | null | null | http://arxiv.org/pdf/1403.0057v2 | 2014-11-21T08:23:40Z | 2014-03-01T07:18:42Z | Real-time Topic-aware Influence Maximization Using Preprocessing | Influence maximization is the task of finding a set of seed nodes in a social
network such that the influence spread of these seed nodes based on certain
influence diffusion model is maximized. Topic-aware influence diffusion models
have been recently proposed to address the issue that influence between a pair
of users is often topic-dependent and that the information, ideas, innovations,
etc. propagated in networks (referred to collectively as items in this paper)
are typically mixtures of topics. In this paper, we focus on the topic-aware
influence maximization task. In particular, we study preprocessing methods for
these topics to avoid redoing influence maximization for each item from
scratch. We explore two preprocessing algorithms with theoretical
justifications. Our empirical results on data obtained in a couple of existing
studies demonstrate that one of our algorithms stands out as a strong candidate
providing microsecond online response time and competitive influence spread,
with reasonable preprocessing effort.
| [
"Wei Chen, Tian Lin, Cheng Yang",
"['Wei Chen' 'Tian Lin' 'Cheng Yang']"
]
|
cs.LG | null | 1403.0156 | null | null | http://arxiv.org/pdf/1403.0156v1 | 2014-03-02T04:14:23Z | 2014-03-02T04:14:23Z | Sleep Analytics and Online Selective Anomaly Detection | We introduce a new problem, the Online Selective Anomaly Detection (OSAD), to
model a specific scenario emerging from research in sleep science. Scientists
have segmented sleep into several stages and stage two is characterized by two
patterns (or anomalies) in the EEG time series recorded on sleep subjects.
These two patterns are sleep spindle (SS) and K-complex. The OSAD problem was
introduced to design a residual system, where all anomalies (known and unknown)
are detected but the system only triggers an alarm when non-SS anomalies
appear. The solution of the OSAD problem required us to combine techniques from
both machine learning and control theory. Experiments on data from real
subjects attest to the effectiveness of our approach.
| [
"Tahereh Babaie, Sanjay Chawla, Romesh Abeysuriya",
"['Tahereh Babaie' 'Sanjay Chawla' 'Romesh Abeysuriya']"
]
|
cs.LG cs.NI | null | 1403.0157 | null | null | http://arxiv.org/pdf/1403.0157v1 | 2014-03-02T04:18:00Z | 2014-03-02T04:18:00Z | Network Traffic Decomposition for Anomaly Detection | In this paper we focus on the detection of network anomalies like Denial of
Service (DoS) attacks and port scans in a unified manner. While there has been
an extensive amount of research in network anomaly detection, current
state-of-the-art methods are only able to detect one class of anomalies at the
cost of others. The key tool we will use is based on the spectral decomposition
of a trajectory/Hankel matrix, which is able to detect deviations from both the
between- and within-correlation structure present in the observed network
traffic data. Detailed experiments on synthetic and real network traces show a
significant
improvement in detection capability over competing approaches. In the process
we also address the issue of robustness of anomaly detection systems in a
principled fashion.
| [
"['Tahereh Babaie' 'Sanjay Chawla' 'Sebastien Ardon']",
"Tahereh Babaie, Sanjay Chawla, Sebastien Ardon"
]
|
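A hedged sketch of the kind of trajectory/Hankel-matrix analysis the entry above builds on: embed the traffic series into a trajectory matrix of lagged windows, keep the leading singular subspace as the normal correlation structure, and score each window by its residual energy. Window length, rank, and the toy series are illustrative choices, not the paper's settings.

```python
import numpy as np

def hankel_residual_scores(x, window=50, rank=3):
    """Sketch: spectral decomposition of the trajectory/Hankel matrix of a series.
    Columns are lagged windows; the top-`rank` singular components model the
    normal correlation structure, and the residual norm per window is used as
    an anomaly score."""
    n = len(x) - window + 1
    H = np.column_stack([x[i:i + window] for i in range(n)])   # window x n trajectory matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]              # low-rank "normal" part
    return np.linalg.norm(H - approx, axis=0)                  # one score per window

# Toy traffic-like series with a short burst (anomaly) near t = 700.
rng = np.random.default_rng(0)
t = np.arange(1000)
x = np.sin(2 * np.pi * t / 100) + 0.1 * rng.normal(size=1000)
x[700:710] += 3.0
scores = hankel_residual_scores(x)
print(int(np.argmax(scores)))   # window index near the injected burst
```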
stat.ML cs.LG | null | 1403.0388 | null | null | http://arxiv.org/pdf/1403.0388v4 | 2015-02-02T17:18:43Z | 2014-03-03T11:05:10Z | Cascading Randomized Weighted Majority: A New Online Ensemble Learning
Algorithm | With the increasing volume of data in the world, the best approach for
learning from this data is to exploit an online learning algorithm. Online
ensemble methods are online algorithms which take advantage of an ensemble of
classifiers to predict labels of data. Prediction with expert advice is a
well-studied problem in the online ensemble learning literature. The Weighted
Majority algorithm and the randomized weighted majority (RWM) are the most
well-known solutions to this problem, aiming to converge to the best expert.
Since, among the experts, the best one does not necessarily have the minimum
error in all regions of the data space, defining specific regions and converging to
the best expert in each of these regions will lead to a better result. In this
paper, we aim to resolve this defect of RWM algorithms by proposing a novel
online ensemble algorithm for the problem of prediction with expert advice. We
propose a cascading version of RWM to achieve not only better experimental
results but also a better error bound for sufficiently large datasets.
| [
"Mohammadzaman Zamani, Hamid Beigy, and Amirreza Shaban",
"['Mohammadzaman Zamani' 'Hamid Beigy' 'Amirreza Shaban']"
]
|
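For the cascading RWM entry above, a minimal sketch of the plain randomized weighted majority baseline it extends: one weight per expert, an expert sampled in proportion to the weights, and multiplicative down-weighting of mistaken experts. The cascading/region logic of the paper is not reproduced here.

```python
import numpy as np

def rwm_predictions(expert_preds, labels, eta=0.1, seed=0):
    """Sketch of randomized weighted majority: sample an expert in proportion to
    the current weights at each round, then multiply the weights of mistaken
    experts by (1 - eta) once the true label is revealed."""
    rng = np.random.default_rng(seed)
    n_rounds, n_experts = expert_preds.shape
    w = np.ones(n_experts)
    out = np.empty(n_rounds, dtype=expert_preds.dtype)
    for t in range(n_rounds):
        chosen = rng.choice(n_experts, p=w / w.sum())
        out[t] = expert_preds[t, chosen]
        w *= np.where(expert_preds[t] != labels[t], 1.0 - eta, 1.0)
    return out

# Toy run: expert 0 is always right, the other two guess randomly.
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=500)
preds = np.column_stack([labels,
                         rng.integers(0, 2, size=500),
                         rng.integers(0, 2, size=500)])
print((rwm_predictions(preds, labels) == labels).mean())
```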
cs.LG stat.ML | null | 1403.0481 | null | null | http://arxiv.org/pdf/1403.0481v1 | 2014-03-03T16:34:38Z | 2014-03-03T16:34:38Z | Support Vector Machine Model for Currency Crisis Discrimination | Support Vector Machine (SVM) is a powerful classification technique based on
the idea of structural risk minimization. The use of a kernel function allows
the curse of dimensionality to be addressed. However, the proper kernel function
for a given problem depends on the specific dataset, and as such there is no
general method for choosing a kernel function. In this paper, SVM is used to
build empirical models of currency crises in Argentina. An estimation technique
is developed by training the model on a real-life dataset; it provides
reasonably accurate model outputs and helps policy makers identify situations in
which a currency crisis may happen. Third- and fourth-order polynomial kernels
are generally the best choice for achieving high generalization of classifier
performance. SVM is a mature technique, with algorithms that are simple, easy to
implement, tolerant of the curse of dimensionality, and empirically effective.
The satisfactory results show that the currency crisis situation is properly
emulated using only a small fraction of the database and that the model could be
used as an evaluation tool as well as an early warning system. To the best of
our knowledge, this is the first work on an SVM approach to currency crisis
evaluation for Argentina.
| [
"['Arindam Chaudhuri']",
"Arindam Chaudhuri"
]
|
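A hedged sketch of the modeling setup in the entry above, using scikit-learn rather than the author's code: a polynomial-kernel SVM (degree 3, one of the degrees the abstract highlights) trained on a small labeled fraction of standardized indicators. The features, data, and sizes below are entirely hypothetical.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical monthly indicators (e.g. reserves change, real exchange rate gap,
# short-term debt ratio) and a binary crisis label; real data would replace this.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 3))
y = (X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2] ** 2
     + 0.3 * rng.normal(size=600) > 1.0).astype(int)

# Train on a small fraction of the data, mirroring the abstract's claim that a
# small portion of the database suffices for a usable early-warning model.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=0.2, random_state=0, stratify=y)
model = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=1.0))
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.3f}")
```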
cs.LG | null | 1403.0598 | null | null | http://arxiv.org/pdf/1403.0598v1 | 2014-03-03T21:20:14Z | 2014-03-03T21:20:14Z | The Structurally Smoothed Graphlet Kernel | A commonly used paradigm for representing graphs is to use a vector that
contains normalized frequencies of occurrence of certain motifs or sub-graphs.
This vector representation can be used in a variety of applications, such as
computing similarity between graphs. The graphlet kernel of Shervashidze et
al. [32] uses induced sub-graphs of k nodes (christened as graphlets by Przulj
[28]) as motifs in the vector representation, and computes the kernel via a dot
product between these vectors. One can easily show that this is a valid kernel
between graphs. However, such a vector representation suffers from a few
drawbacks. As k becomes larger we encounter the sparsity problem; most higher
order graphlets will not occur in a given graph. This leads to diagonal
dominance, that is, a given graph is similar to itself but not to any other
graph in the dataset. On the other hand, since lower order graphlets tend to be
more numerous, using lower values of k does not provide enough discrimination
ability. We propose a smoothing technique to tackle the above problems. Our
method is based on a novel extension of Kneser-Ney and Pitman-Yor smoothing
techniques from natural language processing to graphs. We use the relationships
between lower order and higher order graphlets in order to derive our method.
Consequently, our smoothing algorithm not only respects the dependency between
sub-graphs but also tackles the diagonal dominance problem by distributing the
probability mass across graphlets. In our experiments, the smoothed graphlet
kernel outperforms graph kernels based on raw frequency counts.
| [
"Pinar Yanardag, S.V.N. Vishwanathan",
"['Pinar Yanardag' 'S. V. N. Vishwanathan']"
]
|
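A minimal sketch of the unsmoothed baseline that the entry above improves on: count the four isomorphism types of induced 3-node subgraphs (identified by the number of edges among each node triple), normalize the counts, and take dot products between graphs as the kernel. The structural Kneser-Ney/Pitman-Yor smoothing itself is not reproduced.

```python
import numpy as np
from itertools import combinations

def graphlet3_vector(adj):
    """Count induced 3-node graphlets of an undirected graph given as a 0/1
    adjacency matrix. The 4 types are distinguished by the number of edges
    (0, 1, 2, or 3) among each triple of nodes; counts are normalized."""
    counts = np.zeros(4)
    for i, j, k in combinations(range(adj.shape[0]), 3):
        counts[adj[i, j] + adj[i, k] + adj[j, k]] += 1
    return counts / counts.sum()

def graphlet_kernel(graphs):
    """Raw-frequency graphlet kernel: Gram matrix of dot products between
    normalized graphlet count vectors."""
    V = np.array([graphlet3_vector(a) for a in graphs])
    return V @ V.T

# Toy graphs: a triangle with a pendant vertex, and a 4-node path.
g1 = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]])
g2 = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
print(graphlet_kernel([g1, g2]))
```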
cs.LG | null | 1403.0628 | null | null | http://arxiv.org/pdf/1403.0628v2 | 2014-05-21T16:17:09Z | 2014-03-03T23:06:24Z | Unconstrained Online Linear Learning in Hilbert Spaces: Minimax
Algorithms and Normal Approximations | We study algorithms for online linear optimization in Hilbert spaces,
focusing on the case where the player is unconstrained. We develop a novel
characterization of a large class of minimax algorithms, recovering, and even
improving, several previous results as immediate corollaries. Moreover, using
our tools, we develop an algorithm that provides a regret bound of
$\mathcal{O}\Big(U \sqrt{T \log(U \sqrt{T} \log^2 T +1)}\Big)$, where $U$ is
the $L_2$ norm of an arbitrary comparator and both $T$ and $U$ are unknown to
the player. This bound is optimal up to $\sqrt{\log \log T}$ terms. When $T$ is
known, we derive an algorithm with an optimal regret bound (up to constant
factors). For both the known and unknown $T$ case, a Normal approximation to
the conditional value of the game proves to be the key analysis tool.
| [
"['H. Brendan McMahan' 'Francesco Orabona']",
"H. Brendan McMahan and Francesco Orabona"
]
|
cs.GT cs.LG q-fin.TR stat.ML | null | 1403.0648 | null | null | http://arxiv.org/pdf/1403.0648v1 | 2014-03-04T01:14:40Z | 2014-03-04T01:14:40Z | Multi-period Trading Prediction Markets with Connections to Machine
Learning | We present a new model for prediction markets, in which we use risk measures
to model agents and introduce a market maker to describe the trading process.
This specific choice of modelling tools brings us mathematical convenience. The
analysis shows that the whole market effectively approaches a global objective,
even though the market is designed such that each agent only cares about its
own goal. Additionally, the market dynamics provides a sensible algorithm for
optimising the global objective. An intimate connection between machine
learning and our markets is thus established, such that we could 1) analyse a
market by applying machine learning methods to the global objective, and 2)
solve machine learning problems by setting up and running certain markets.
| [
"['Jinli Hu' 'Amos Storkey']",
"Jinli Hu and Amos Storkey"
]
|
cs.LG stat.ML | null | 1403.0667 | null | null | http://arxiv.org/pdf/1403.0667v3 | 2016-05-04T18:10:13Z | 2014-03-04T02:48:20Z | The Hidden Convexity of Spectral Clustering | In recent years, spectral clustering has become a standard method for data
analysis used in a broad range of applications. In this paper we propose a new
class of algorithms for multiway spectral clustering based on optimization of a
certain "contrast function" over the unit sphere. These algorithms, partly
inspired by certain Independent Component Analysis techniques, are simple, easy
to implement and efficient.
Geometrically, the proposed algorithms can be interpreted as hidden basis
recovery by means of function optimization. We give a complete characterization
of the contrast functions admissible for provable basis recovery. We show how
these conditions can be interpreted as a "hidden convexity" of our optimization
problem on the sphere; interestingly, we use efficient convex maximization
rather than the more common convex minimization. We also show encouraging
experimental results on real and simulated data.
| [
"James Voss, Mikhail Belkin, Luis Rademacher",
"['James Voss' 'Mikhail Belkin' 'Luis Rademacher']"
]
|
stat.ML cs.LG | null | 1403.0736 | null | null | http://arxiv.org/pdf/1403.0736v3 | 2014-10-03T14:45:41Z | 2014-03-04T10:47:45Z | Fast Prediction with SVM Models Containing RBF Kernels | We present an approximation scheme for support vector machine models that use
an RBF kernel. A second-order Maclaurin series approximation is used for
exponentials of inner products between support vectors and test instances. The
approximation is applicable to all kernel methods featuring sums of kernel
evaluations and makes no assumptions regarding data normalization. The
prediction speed of approximated models no longer depends on the number of
support vectors but is quadratic in the number of input dimensions. If the
number of input dimensions is small compared to the number of support vectors,
the approximated model is significantly faster in prediction and has a
smaller memory footprint. An optimized C++ implementation was made to assess
the gain in prediction speed in a set of practical tests. We additionally
provide a method to verify the approximation accuracy, prior to training models
or during run-time, to ensure the loss in accuracy remains acceptable and
within known bounds.
| [
"['Marc Claesen' 'Frank De Smet' 'Johan A. K. Suykens' 'Bart De Moor']",
"Marc Claesen, Frank De Smet, Johan A.K. Suykens, Bart De Moor"
]
|
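A hedged sketch of the second-order Maclaurin idea from the entry above: writing exp(-g||x-s||^2) = exp(-g||x||^2) exp(-g||s||^2) exp(2g<x,s>) and replacing the last factor by 1 + z + z^2/2 with z = 2g<x,s> collapses the support-vector sum into a constant, a vector, and a matrix, so prediction cost becomes quadratic in the input dimension and independent of the number of support vectors. The class and variable names are illustrative, not the paper's C++ API.

```python
import numpy as np

class ApproxRBFModel:
    """Sketch: precompute constant / linear / quadratic terms so evaluating the
    approximated decision function no longer touches the support vectors."""
    def __init__(self, support_vectors, dual_coefs, gamma, bias=0.0):
        S, a = support_vectors, dual_coefs
        pre = a * np.exp(-gamma * (S ** 2).sum(axis=1))   # alpha_i * exp(-g ||s_i||^2)
        self.gamma, self.bias = gamma, bias
        self.c0 = pre.sum()                               # constant term
        self.w = 2.0 * gamma * (pre[:, None] * S).sum(axis=0)   # linear term
        self.M = 2.0 * gamma ** 2 * (S.T * pre) @ S             # quadratic term

    def decision(self, x):
        poly = self.c0 + self.w @ x + x @ self.M @ x      # aggregated 1 + z + z^2/2
        return np.exp(-self.gamma * (x @ x)) * poly + self.bias

# Compare with the exact kernel expansion on random data (small gamma keeps z small).
rng = np.random.default_rng(0)
S = rng.normal(size=(200, 5))
a = rng.normal(size=200)
gamma, x = 0.05, rng.normal(size=5)
exact = (a * np.exp(-gamma * ((S - x) ** 2).sum(axis=1))).sum()
print(exact, ApproxRBFModel(S, a, gamma).decision(x))
```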
stat.ML cs.LG | null | 1403.0745 | null | null | http://arxiv.org/pdf/1403.0745v1 | 2014-03-04T11:28:59Z | 2014-03-04T11:28:59Z | EnsembleSVM: A Library for Ensemble Learning Using Support Vector
Machines | EnsembleSVM is a free software package containing efficient routines to
perform ensemble learning with support vector machine (SVM) base models. It
currently offers ensemble methods based on binary SVM models. Our
implementation avoids duplicate storage and evaluation of support vectors which
are shared between constituent models. Experimental results show that using
ensemble approaches can drastically reduce training complexity while
maintaining high predictive accuracy. The EnsembleSVM software package is
freely available online at http://esat.kuleuven.be/stadius/ensemblesvm.
| [
"['Marc Claesen' 'Frank De Smet' 'Johan Suykens' 'Bart De Moor']",
"Marc Claesen, Frank De Smet, Johan Suykens, Bart De Moor"
]
|
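The entry above announces a C++ library; as an analogous (not equivalent) sketch, the snippet below builds a bagged ensemble of linear-SVM base models in scikit-learn, mirroring the ensemble-of-binary-SVMs idea without the library's shared-support-vector optimizations.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each base SVM sees only a bootstrap subsample, so individual training stays cheap
# while the aggregated ensemble keeps predictive accuracy high.
ensemble = BaggingClassifier(
    LinearSVC(dual=False),
    n_estimators=25,
    max_samples=0.2,
    random_state=0,
)
ensemble.fit(X_tr, y_tr)
print(f"ensemble accuracy: {ensemble.score(X_te, y_te):.3f}")
```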
cs.CV cs.LG stat.ML | null | 1403.0829 | null | null | http://arxiv.org/pdf/1403.0829v1 | 2014-03-03T01:11:40Z | 2014-03-03T01:11:40Z | Multiview Hessian regularized logistic regression for action recognition | With the rapid development of social media sharing, people often need to
manage a growing volume of multimedia data through tasks such as large-scale
video classification and annotation, especially to organize videos containing
human activities. Recently, manifold regularized semi-supervised learning
(SSL), which explores the intrinsic data probability distribution and then
improves the generalization ability with only a small number of labeled data,
has emerged as a promising paradigm for semiautomatic video classification. In
addition, human action videos often have multi-modal content and different
representations. To tackle the above problems, in this paper we propose
multiview Hessian regularized logistic regression (mHLR) for human action
recognition. Compared with existing work, the advantages of mHLR are threefold:
(1) mHLR combines multiple Hessian regularizers, each obtained from a particular
representation of an instance, to better exploit the local geometry; (2) mHLR
naturally handles multi-view instances with multiple representations; (3) mHLR
employs a smooth loss function and thus can be effectively optimized. We
carefully conduct extensive experiments on the
unstructured social activity attribute (USAA) dataset and the experimental
results demonstrate the effectiveness of the proposed multiview Hessian
regularized logistic regression for human action recognition.
| [
"W. Liu, H. Liu, D. Tao, Y. Wang, Ke Lu",
"['W. Liu' 'H. Liu' 'D. Tao' 'Y. Wang' 'Ke Lu']"
]
|
math.ST cs.DM cs.LG stat.ME stat.ML stat.TH | null | 1403.0873 | null | null | http://arxiv.org/pdf/1403.0873v1 | 2014-03-04T17:54:37Z | 2014-03-04T17:54:37Z | Matroid Regression | We propose an algebraic combinatorial method for solving large sparse linear
systems of equations locally - that is, a method which can compute single
evaluations of the signal without computing the whole signal. The method scales
only in the sparsity of the system and not in its size, and makes it possible to
provide error estimates for any solution method. At the heart of our approach is
the so-called regression matroid, a combinatorial object associated to sparsity
patterns, which allows us to replace inversion of the large matrix with the
inversion of a kernel matrix of constant size. We show that our method
provides the best linear unbiased estimator (BLUE) for this setting and the
minimum variance unbiased estimator (MVUE) under Gaussian noise assumptions,
and furthermore we show that the size of the kernel matrix which is to be
inverted can be traded off with accuracy.
| [
"['Franz J Király' 'Louis Theran']",
"Franz J Kir\\'aly and Louis Theran"
]
|
cs.SI cs.LG physics.soc-ph stat.ME | 10.1109/JSTSP.2014.2310294 | 1403.0921 | null | null | http://arxiv.org/abs/1403.0921v1 | 2014-03-04T19:54:07Z | 2014-03-04T19:54:07Z | Dynamic stochastic blockmodels for time-evolving social networks | Significant efforts have gone into the development of statistical models for
analyzing data in the form of networks, such as social networks. Most existing
work has focused on modeling static networks, which represent either a single
time snapshot or an aggregate view over time. There has been recent interest in
statistical modeling of dynamic networks, which are observed at multiple points
in time and offer a richer representation of many complex phenomena. In this
paper, we present a state-space model for dynamic networks that extends the
well-known stochastic blockmodel for static networks to the dynamic setting. We
fit the model in a near-optimal manner using an extended Kalman filter (EKF)
augmented with a local search. We demonstrate that the EKF-based algorithm
performs competitively with a state-of-the-art algorithm based on Markov chain
Monte Carlo sampling but is significantly less computationally demanding.
| [
"['Kevin S. Xu' 'Alfred O. Hero III']",
"Kevin S. Xu and Alfred O. Hero III"
]
|