title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Inverse Density as an Inverse Problem: The Fredholm Equation Approach | cs.LG stat.ML | In this paper we address the problem of estimating the ratio $\frac{q}{p}$
where $p$ is a density function and $q$ is another density, or, more generally
an arbitrary function. Knowing or approximating this ratio is needed in various
problems of inference and integration, in particular, when one needs to average
a function with respect to one probability distribution, given a sample from
another. It is often referred to as {\it importance sampling} in statistical
inference and is also closely related to the problem of {\it covariate shift}
in transfer learning as well as to various MCMC methods. It may also be useful
for separating the underlying geometry of a space, say a manifold, from the
density function defined on it.
Our approach is based on reformulating the problem of estimating
$\frac{q}{p}$ as an inverse problem in terms of an integral operator
corresponding to a kernel, and thus reducing it to an integral equation, known
as the Fredholm problem of the first kind. This formulation, combined with the
techniques of regularization and kernel methods, leads to a principled
kernel-based framework for constructing algorithms and for analyzing them
theoretically.
The resulting family of algorithms (FIRE, for Fredholm Inverse Regularized
Estimator) is flexible, simple and easy to implement.
We provide detailed theoretical analysis including concentration bounds and
convergence rates for the Gaussian kernel in the case of densities defined on
$\mathbb{R}^d$, compact domains in $\mathbb{R}^d$ and smooth $d$-dimensional sub-manifolds of
the Euclidean space.
We also show experimental results including applications to classification
and semi-supervised learning within the covariate shift framework and
demonstrate some encouraging experimental comparisons. We also show how the
parameters of our algorithms can be chosen in a completely unsupervised manner.
| Qichao Que and Mikhail Belkin | null | 1304.5575 | null | null |
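To make the reduction concrete, here is a minimal numerical sketch (not the authors' exact RKHS formulation): given samples from $p$ and $q$, the empirical Fredholm equation $\frac{1}{n}\sum_i k(x, x_i) f(x_i) \approx \frac{1}{m}\sum_j k(x, z_j)$ is solved at the $p$-sample points with Tikhonov regularization; the kernel width and regularizer below are illustrative placeholders.

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    # Pairwise Gaussian kernel matrix between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fire_density_ratio(Xp, Xq, sigma=0.5, lam=1e-2):
    """Estimate q/p at the points Xp ~ p given Xq ~ q, by solving a
    regularized discretization of a Fredholm equation of the first kind."""
    n = len(Xp)
    Kpp = gaussian_kernel(Xp, Xp, sigma)        # n x n operator on the p-sample
    b = gaussian_kernel(Xp, Xq, sigma).mean(1)  # (1/m) sum_j k(x_i, z_j)
    return np.linalg.solve(Kpp / n + lam * np.eye(n), b)  # f[i] ~ q(x_i)/p(x_i)

# Toy check: p = N(0,1), q = N(1,1) in 1-D; the true ratio is exp(x - 1/2).
rng = np.random.default_rng(0)
ratio = fire_density_ratio(rng.normal(0, 1, (200, 1)), rng.normal(1, 1, (200, 1)))
```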
Distributed Low-rank Subspace Segmentation | cs.CV cs.DC cs.LG stat.ML | Vision problems ranging from image clustering to motion segmentation to
semi-supervised learning can naturally be framed as subspace segmentation
problems, in which one aims to recover multiple low-dimensional subspaces from
noisy and corrupted input data. Low-Rank Representation (LRR), a convex
formulation of the subspace segmentation problem, is provably and empirically
accurate on small problems but does not scale to the massive sizes of modern
vision datasets. Moreover, past work aimed at scaling up low-rank matrix
factorization is not applicable to LRR given its non-decomposable constraints.
In this work, we propose a novel divide-and-conquer algorithm for large-scale
subspace segmentation that can cope with LRR's non-decomposable constraints and
maintains LRR's strong recovery guarantees. This has immediate implications for
the scalability of subspace segmentation, which we demonstrate on a benchmark
face recognition dataset and in simulations. We then introduce novel
applications of LRR-based subspace segmentation to large-scale semi-supervised
learning for multimedia event detection, concept detection, and image tagging.
In each case, we obtain state-of-the-art results and order-of-magnitude
speed-ups.
| Ameet Talwalkar, Lester Mackey, Yadong Mu, Shih-Fu Chang, Michael I.
Jordan | null | 1304.5583 | null | null |
A Survey on Multi-view Learning | cs.LG | In recent years, a great many methods of learning from multi-view data by
considering the diversity of different views have been proposed. These views
may be obtained from multiple sources or different feature subsets. In trying
to organize and highlight similarities and differences between the variety of
multi-view learning approaches, we review a number of representative multi-view
learning algorithms in different areas and classify them into three groups: 1)
co-training, 2) multiple kernel learning, and 3) subspace learning. Notably,
co-training style algorithms train alternately to maximize the mutual agreement
on two distinct views of the data; multiple kernel learning algorithms exploit
kernels that naturally correspond to different views and combine kernels either
linearly or non-linearly to improve learning performance; and subspace learning
algorithms aim to obtain a latent subspace shared by multiple views by assuming
that the input views are generated from this latent subspace. Though there is
significant variance in the approaches to integrating multiple views to improve
learning performance, they mainly exploit either the consensus principle or the
complementary principle to ensure the success of multi-view learning. Since
access to multiple views is the foundation of multi-view learning, beyond
studying how to learn a model from multiple views, it is also valuable to
study how to construct multiple views and how to evaluate these views.
Overall, by exploring the consistency and complementary properties of
different views, multi-view learning is rendered more effective and more
promising, and generalizes better than single-view learning.
| Chang Xu, Dacheng Tao, Chao Xu | null | 1304.5634 | null | null |
Analytic Feature Selection for Support Vector Machines | cs.LG stat.ML | Support vector machines (SVMs) rely on the inherent geometry of a data set to
classify training data. Because of this, we believe SVMs are an excellent
candidate to guide the development of an analytic feature selection algorithm,
as opposed to the more commonly used heuristic methods. We propose a
filter-based feature selection algorithm based on the inherent geometry of a
feature set. Through observation, we identified six geometric properties that
differ between optimal and suboptimal feature sets, and have statistically
significant correlations to classifier performance. Our algorithm is based on
logistic and linear regression models using these six geometric properties as
predictor variables. The proposed algorithm achieves excellent results on high
dimensional text data sets, with features that can be organized into a handful
of feature types; for example, unigrams, bigrams or semantic structural
features. We believe this algorithm is a novel and effective approach to
solving the feature selection problem for linear SVMs.
| Carly Stambaugh, Hui Yang, Felix Breuer | null | 1304.5678 | null | null |
Prior-free and prior-dependent regret bounds for Thompson Sampling | stat.ML cs.LG | We consider the stochastic multi-armed bandit problem with a prior
distribution on the reward distributions. We are interested in studying
prior-free and prior-dependent regret bounds, very much in the same spirit as
the usual distribution-free and distribution-dependent bounds for the
non-Bayesian stochastic bandit. Building on the techniques of Audibert and
Bubeck [2009] and Russo and Van Roy [2013], we first show that Thompson Sampling
attains an optimal prior-free bound in the sense that for any prior
distribution its Bayesian regret is bounded from above by $14 \sqrt{n K}$. This
result is unimprovable in the sense that there exists a prior distribution such
that any algorithm has a Bayesian regret bounded from below by $\frac{1}{20}
\sqrt{n K}$. We also study the case of priors for the setting of Bubeck et al.
[2013] (where the optimal mean is known as well as a lower bound on the
smallest gap) and we show that in this case the regret of Thompson Sampling is
in fact uniformly bounded over time, thus showing that Thompson Sampling can
take full advantage of the nice properties of these priors.
| S\'ebastien Bubeck and Che-Yu Liu | null | 1304.5758 | null | null |
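For reference, the algorithm whose Bayesian regret the bounds above control is simple to state; a sketch for Bernoulli arms with Beta(1,1) priors (the arm means and horizon in the usage line are illustrative):

```python
import numpy as np

def thompson_bernoulli(means, T, rng=np.random.default_rng(0)):
    """Thompson Sampling for a K-armed Bernoulli bandit with Beta(1,1) priors."""
    K = len(means)
    wins = np.ones(K)    # Beta posterior parameters: successes + 1
    losses = np.ones(K)  # failures + 1
    regret = 0.0
    for t in range(T):
        theta = rng.beta(wins, losses)       # one posterior sample per arm
        arm = int(np.argmax(theta))          # play the arm that looks best
        reward = rng.random() < means[arm]   # Bernoulli reward
        wins[arm] += reward
        losses[arm] += 1 - reward
        regret += max(means) - means[arm]
    return regret

print(thompson_bernoulli([0.3, 0.5, 0.55], T=5000))
```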
Continuum armed bandit problem of few variables in high dimensions | cs.LG | We consider the stochastic and adversarial settings of continuum armed
bandits where the arms are indexed by $[0,1]^d$. The reward functions
$r:[0,1]^d \to \mathbb{R}$ are assumed to intrinsically depend on at most $k$
coordinate variables, implying $r(x_1,\dots,x_d) = g(x_{i_1},\dots,x_{i_k})$
for distinct and unknown $i_1,\dots,i_k \in \{1,\dots,d\}$ and some locally
H\"older continuous $g:[0,1]^k \to \mathbb{R}$ with exponent $0 < \alpha \le
1$. Firstly, assuming $(i_1,\dots,i_k)$ to be fixed across time, we propose a
simple modification of the CAB1 algorithm where we construct the discrete set
of sampling points to obtain a bound of
$O\big(n^{(\alpha+k)/(2\alpha+k)} (\log n)^{\alpha/(2\alpha+k)} C(k,d)\big)$
on the regret, with $C(k,d)$ depending at most polynomially on $k$ and
sub-logarithmically on $d$. The construction is based on creating partitions
of $\{1,\dots,d\}$ into $k$ disjoint subsets and is probabilistic, hence our
result holds with high probability. Secondly, we extend our results to also
handle the more general case where $(i_1,\dots,i_k)$ can change over time and
derive regret bounds for the same.
| Hemant Tyagi and Bernd G\"artner | null | 1304.5793 | null | null |
Multi-Label Classifier Chains for Bird Sound | cs.LG cs.SD stat.ML | Bird sound data collected with unattended microphones for automatic surveys,
or mobile devices for citizen science, typically contain multiple
simultaneously vocalizing birds of different species. However, few works have
considered the multi-label structure in birdsong. We propose to use an ensemble
of classifier chains combined with a histogram-of-segments representation for
multi-label classification of birdsong. The proposed method is compared with
binary relevance and three multi-instance multi-label learning (MIML)
algorithms from prior work (which focus more on structure in the sound, and
less on structure in the label sets). Experiments are conducted on two
real-world birdsong datasets, and show that the proposed method usually
outperforms binary relevance (using the same features and base-classifier), and
is better in some cases and worse in others compared to the MIML algorithms.
| Forrest Briggs, Xiaoli Z. Fern, Jed Irvine | null | 1304.5862 | null | null |
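A single classifier chain is easy to sketch; the ensemble averages the predictions of chains with random label orders. A minimal version assuming binary label columns and scikit-learn's LogisticRegression as the base classifier (both illustrative choices, not the paper's exact setup):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class ClassifierChain:
    """One classifier chain: label j is predicted from the features
    augmented with the labels earlier in the chain."""
    def __init__(self, order):
        self.order = order      # a permutation of the label indices
        self.models = []

    def fit(self, X, Y):
        Xa = X.copy()
        for j in self.order:
            self.models.append(LogisticRegression().fit(Xa, Y[:, j]))
            Xa = np.hstack([Xa, Y[:, [j]]])        # chain the true label in
        return self

    def predict(self, X):
        Xa, out = X.copy(), {}
        for model, j in zip(self.models, self.order):
            out[j] = model.predict(Xa)
            Xa = np.hstack([Xa, out[j][:, None]])  # chain the prediction in
        return np.column_stack([out[j] for j in range(len(self.order))])
```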
Bayesian crack detection in ultra high resolution multimodal images of
paintings | cs.CV cs.LG | The preservation of our cultural heritage is of paramount importance. Thanks
to recent developments in digital acquisition techniques, powerful image
analysis algorithms have been developed that can serve as useful non-invasive tools to
assist in the restoration and preservation of art. In this paper we propose a
semi-supervised crack detection method that can be used for high-dimensional
acquisitions of paintings coming from different modalities. Our dataset
consists of a recently acquired collection of images of the Ghent Altarpiece
(1432), one of Northern Europe's most important art masterpieces. Our goal is
to build a classifier that is able to discern crack pixels from the background
consisting of non-crack pixels, making optimal use of the information that is
provided by each modality. To accomplish this we employ a recently developed
non-parametric Bayesian classifier, that uses tensor factorizations to
characterize any conditional probability. A prior is placed on the parameters
of the factorization such that every possible interaction between predictors is
allowed while still identifying a sparse subset among these predictors. The
proposed Bayesian classifier, which we will refer to as conditional Bayesian
tensor factorization or CBTF, is assessed by visually comparing classification
results with the Random Forest (RF) algorithm.
| Bruno Cornelis, Yun Yang, Joshua T. Vogelstein, Ann Dooms, Ingrid
Daubechies, David Dunson | null | 1304.5894 | null | null |
Dynamic stochastic blockmodels: Statistical models for time-evolving
networks | cs.SI cs.LG physics.soc-ph stat.ME | Significant efforts have gone into the development of statistical models for
analyzing data in the form of networks, such as social networks. Most existing
work has focused on modeling static networks, which represent either a single
time snapshot or an aggregate view over time. There has been recent interest in
statistical modeling of dynamic networks, which are observed at multiple points
in time and offer a richer representation of many complex phenomena. In this
paper, we propose a state-space model for dynamic networks that extends the
well-known stochastic blockmodel for static networks to the dynamic setting. We
then propose a procedure to fit the model using a modification of the extended
Kalman filter augmented with a local search. We apply the procedure to analyze
a dynamic social network of email communication.
| Kevin S. Xu and Alfred O. Hero III | 10.1007/978-3-642-37210-0_22 | 1304.5974 | null | null |
The Stochastic Gradient Descent for the Primal L1-SVM Optimization
Revisited | cs.LG cs.AI | We reconsider the stochastic (sub)gradient approach to the unconstrained
primal L1-SVM optimization. We observe that if the learning rate is inversely
proportional to the number of steps, i.e., the number of times any training
pattern is presented to the algorithm, the update rule may be transformed into
that of the classical perceptron with margin, in which the margin threshold
increases linearly with the number of steps. Moreover, if we cycle repeatedly
through the (possibly randomly permuted) training set, then the dual
variables, defined naturally via the expansion of the weight vector as a
linear combination of the patterns on which margin errors were made, are
shown to automatically satisfy, at the end of each complete cycle, the box
constraints arising in the dual optimization.
This renders the dual Lagrangian a running lower bound on the primal objective
tending to it at the optimum and makes available an upper bound on the relative
accuracy achieved which provides a meaningful stopping criterion. In addition,
we propose a mechanism of presenting the same pattern repeatedly to the
algorithm which maintains the above properties. Finally, we give experimental
evidence that algorithms constructed along these lines exhibit a considerably
improved performance.
| Constantinos Panagiotakopoulos and Petroula Tsampouka | null | 1304.6383 | null | null |
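A Pegasos-style sketch of the update analyzed above: with a learning rate inversely proportional to the step count, a margin error triggers exactly a perceptron-with-margin step after shrinkage. The regularization constant and epoch count are illustrative.

```python
import numpy as np

def sgd_l1svm(X, y, lam=0.01, epochs=10, rng=np.random.default_rng(0)):
    """Stochastic subgradient descent on the unconstrained primal L1-SVM,
    (lam/2)||w||^2 + mean_i max(0, 1 - y_i <w, x_i>), with learning rate
    eta_t = 1/(lam * t), i.e. inversely proportional to the step count."""
    n, d = X.shape
    w, t = np.zeros(d), 0
    for _ in range(epochs):
        for i in rng.permutation(n):        # cycle through a permuted set
            t += 1
            eta = 1.0 / (lam * t)
            margin_error = y[i] * (X[i] @ w) < 1
            w *= 1 - eta * lam              # shrinkage from the regularizer
            if margin_error:                # perceptron-with-margin step
                w += eta * y[i] * X[i]
    return w
```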
The K-modes algorithm for clustering | cs.LG stat.ME stat.ML | Many clustering algorithms exist that estimate a cluster centroid, such as
K-means, K-medoids or mean-shift, but no algorithm seems to exist that clusters
data by returning exactly K meaningful modes. We propose a natural definition
of a K-modes objective function by combining the notions of density and cluster
assignment. The algorithm becomes K-means and K-medoids in the limit of very
large and very small scales. Computationally, it is slightly slower than
K-means but much faster than mean-shift or K-medoids. Unlike K-means, it is
able to find centroids that are valid patterns, truly representative of a
cluster, even with nonconvex clusters, and appears robust to outliers and
misspecification of the scale and number of clusters.
| Miguel \'A. Carreira-Perpi\~n\'an and Weiran Wang | null | 1304.6478 | null | null |
A Theoretical Analysis of NDCG Type Ranking Measures | cs.LG cs.IR stat.ML | A central problem in ranking is to design a ranking measure for evaluation of
ranking functions. In this paper we study, from a theoretical perspective, the
widely used Normalized Discounted Cumulative Gain (NDCG)-type ranking measures.
Although there are extensive empirical studies of NDCG, little is known about
its theoretical properties. We first show that, whatever the ranking function
is, the standard NDCG which adopts a logarithmic discount, converges to 1 as
the number of items to rank goes to infinity. At first sight, this result is
very surprising. It seems to imply that NDCG cannot differentiate good and
bad ranking functions, contradicting the empirical success of NDCG in many
applications. In order to have a deeper understanding of ranking measures in
general, we propose a notion referred to as consistent distinguishability. This
notion captures the intuition that a ranking measure should have such a
property: For every pair of substantially different ranking functions, the
ranking measure can decide which one is better in a consistent manner on almost
all datasets. We show that NDCG with logarithmic discount has consistent
distinguishability although it converges to the same limit for all ranking
functions. We next characterize the set of all feasible discount functions for
NDCG according to the concept of consistent distinguishability. Specifically we
show that whether NDCG has consistent distinguishability depends on how fast
the discount decays, and 1/r is a critical point. We then turn to the cut-off
version of NDCG, i.e., NDCG@k. We analyze the distinguishability of NDCG@k for
various choices of k and the discount functions. Experimental results on real
Web search datasets agree well with the theory.
| Yining Wang, Liwei Wang, Yuanzhi Li, Di He, Tie-Yan Liu, Wei Chen | null | 1304.6480 | null | null |
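For concreteness, a sketch of NDCG with the standard logarithmic discount; the gain is taken to be the raw relevance grade (some variants use $2^{rel}-1$ instead):

```python
import numpy as np

def dcg(rels, k=None):
    """Discounted cumulative gain with the logarithmic discount 1/log2(r+1)."""
    rels = np.asarray(rels, float)[:k]
    ranks = np.arange(1, len(rels) + 1)
    return np.sum(rels / np.log2(ranks + 1))

def ndcg(rels_in_ranked_order, k=None):
    """NDCG = DCG of the produced ranking / DCG of the ideal ranking."""
    ideal = dcg(sorted(rels_in_ranked_order, reverse=True), k)
    return dcg(rels_in_ranked_order, k) / ideal if ideal > 0 else 0.0

# NDCG@3 for a ranking whose items have relevance grades (3, 2, 3, 0, 1):
print(ndcg([3, 2, 3, 0, 1], k=3))
```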
Locally linear representation for image clustering | cs.LG stat.ML | Constructing a similarity graph is key in graph-oriented subspace
learning and clustering. In a similarity graph, each vertex denotes a data
point and the edge weight represents the similarity between two points. There
are two popular schemes to construct a similarity graph, i.e., pairwise
distance based scheme and linear representation based scheme. Most existing
works have only involved one of the above schemes and suffered from some
limitations. Specifically, pairwise distance based methods are sensitive to the
noises and outliers compared with linear representation based methods. On the
other hand, linear representation based algorithms may wrongly select points
from other subspaces to represent a point, which degrades performance. In
this paper, we propose an algorithm, called
Locally Linear Representation (LLR), which integrates pairwise distance with
linear representation together to address the problems. The proposed algorithm
can automatically encode each data point over a set of points that not only
represent the target point with small residual error, but are also close to
it in Euclidean space. The experimental results show that our
approach is promising in subspace learning and subspace clustering.
| Liangli Zhen, Zhang Yi, Xi Peng, Dezhong Peng | null | 1304.6487 | null | null |
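A rough sketch of the core idea of coding each point over nearby points only; the neighborhood size, ridge term, and symmetrization are illustrative choices, not the paper's exact objective:

```python
import numpy as np

def llr_graph(X, k=10, reg=1e-3):
    """Build a similarity graph by coding each point over its k nearest
    neighbours with a regularized least-squares fit, combining pairwise
    distance (neighbour selection) with linear representation (the code)."""
    n = X.shape[0]
    W = np.zeros((n, n))
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)      # squared pairwise distances
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]           # closest points, self excluded
        A = X[nbrs].T                               # d x k dictionary
        # min_c ||x_i - A c||^2 + reg ||c||^2  (closed form)
        c = np.linalg.solve(A.T @ A + reg * np.eye(k), A.T @ X[i])
        W[i, nbrs] = np.abs(c)
    return (W + W.T) / 2                            # symmetrize the graph
```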
Low-rank optimization for distance matrix completion | math.OC cs.LG stat.ML | This paper addresses the problem of low-rank distance matrix completion. This
problem amounts to recovering the missing entries of a distance matrix when the
dimension of the data embedding space is possibly unknown but small compared to
the number of considered data points. The focus is on high-dimensional
problems. We recast the considered problem into an optimization problem over
the set of low-rank positive semidefinite matrices and propose two efficient
algorithms for low-rank distance matrix completion. In addition, we propose a
strategy to determine the dimension of the embedding space. The resulting
algorithms scale to high-dimensional problems and monotonically converge to a
global solution of the problem. Finally, numerical experiments illustrate the
good performance of the proposed algorithms on benchmarks.
| B. Mishra, G. Meyer and R. Sepulchre | 10.1109/CDC.2011.6160810 | 1304.6663 | null | null |
Inference and learning in probabilistic logic programs using weighted
Boolean formulas | cs.AI cs.LG cs.LO | Probabilistic logic programs are logic programs in which some of the facts
are annotated with probabilities. This paper investigates how classical
inference and learning tasks known from the graphical model community can be
tackled for probabilistic logic programs. Several such tasks such as computing
the marginals given evidence and learning from (partial) interpretations have
not really been addressed for probabilistic logic programs before.
The first contribution of this paper is a suite of efficient algorithms for
various inference tasks. It is based on a conversion of the program and the
queries and evidence to a weighted Boolean formula. This allows us to reduce
the inference tasks to well-studied tasks such as weighted model counting,
which can be solved using state-of-the-art methods known from the graphical
model and knowledge compilation literature. The second contribution is an
algorithm for parameter estimation in the learning from interpretations
setting. The algorithm employs Expectation Maximization, and is built on top of
the developed inference algorithms.
The proposed approach is experimentally evaluated. The results show that the
inference algorithms improve upon the state-of-the-art in probabilistic logic
programming and that it is indeed possible to learn the parameters of a
probabilistic logic program from interpretations.
| Daan Fierens, Guy Van den Broeck, Joris Renkens, Dimitar Shterionov,
Bernd Gutmann, Ingo Thon, Gerda Janssens, Luc De Raedt | 10.1017/S1471068414000076 | 1304.6810 | null | null |
An implementation of the relational k-means algorithm | cs.LG cs.CV cs.MS | A C# implementation of a generalized k-means variant called relational
k-means is described here. Relational k-means is a generalization of the
well-known k-means clustering method which works for non-Euclidean scenarios as
well. The input is an arbitrary distance matrix, as opposed to the traditional
k-means method, where the clustered objects need to be identified with vectors.
| Bal\'azs Szalkai | null | 1304.6899 | null | null |
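The referenced implementation is in C#; a compact Python sketch of the underlying update, which needs only the matrix of squared pairwise distances, might look like this (the centroid identity is exact when the distances are squared Euclidean):

```python
import numpy as np

def relational_kmeans(D2, k, iters=50, rng=np.random.default_rng(0)):
    """Relational k-means on a matrix D2 of *squared* pairwise distances.
    Distance of point i to the implicit centroid of cluster C uses
    d^2(i, mean(C)) = mean_j D2[i, j] - 0.5 * mean_{j, j'} D2[j, j']."""
    n = D2.shape[0]
    labels = rng.integers(k, size=n)
    for _ in range(iters):
        cost = np.full((n, k), np.inf)
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if len(idx) == 0:
                continue
            within = D2[np.ix_(idx, idx)].mean()          # mean within-cluster d^2
            cost[:, c] = D2[:, idx].mean(1) - 0.5 * within
        new = cost.argmin(1)
        if np.array_equal(new, labels):                   # converged
            break
        labels = new
    return labels
```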
An Algorithm for Training Polynomial Networks | cs.LG cs.AI stat.ML | We consider deep neural networks, in which the output of each node is a
quadratic function of its inputs. Similar to other deep architectures, these
networks can compactly represent any function on a finite training set. The
main goal of this paper is the derivation of an efficient layer-by-layer
algorithm for training such networks, which we denote as the \emph{Basis
Learner}. The algorithm is a universal learner in the sense that the training
error is guaranteed to decrease at every iteration, and can eventually reach
zero under mild conditions. We present practical implementations of this
algorithm, as well as preliminary experimental results. We also compare our
deep architecture to other shallow architectures for learning polynomials, in
particular kernel learning.
| Roi Livni, Shai Shalev-Shwartz, Ohad Shamir | null | 1304.7045 | null | null |
Irreflexive and Hierarchical Relations as Translations | cs.LG | We consider the problem of embedding entities and relations of knowledge
bases in low-dimensional vector spaces. Unlike most existing approaches, which
are primarily efficient for modeling equivalence relations, our approach is
designed to explicitly model irreflexive relations, such as hierarchies, by
interpreting them as translations operating on the low-dimensional embeddings
of the entities. Preliminary experiments show that, despite its simplicity and
a smaller number of parameters than previous approaches, our approach achieves
state-of-the-art performance according to standard evaluation protocols on data
from WordNet and Freebase.
| Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston
and Oksana Yakhnenko | null | 1304.7158 | null | null |
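A minimal sketch of translation-based embedding training under simplifying assumptions (squared-L2 score, tail-only corruption, plain SGD with a norm cap); all hyperparameters are illustrative:

```python
import numpy as np

def train_translations(triples, n_ent, n_rel, dim=20, margin=1.0,
                       lr=0.01, epochs=100, rng=np.random.default_rng(0)):
    """For each true triple (h, r, t), push ||E[h] + R[r] - E[t]||^2 below
    the score of a corrupted triple with a random tail, via a margin loss."""
    E = rng.normal(scale=0.1, size=(n_ent, dim))   # entity embeddings
    R = rng.normal(scale=0.1, size=(n_rel, dim))   # relation translations
    for _ in range(epochs):
        for h, r, t in triples:
            t_neg = rng.integers(n_ent)            # corrupted tail
            d_pos = E[h] + R[r] - E[t]
            d_neg = E[h] + R[r] - E[t_neg]
            if margin + d_pos @ d_pos - d_neg @ d_neg > 0:   # hinge is active
                E[h] -= lr * 2 * (d_pos - d_neg)
                R[r] -= lr * 2 * (d_pos - d_neg)
                E[t] += lr * 2 * d_pos
                E[t_neg] -= lr * 2 * d_neg
        # Cap entity norms at 1 to keep the embeddings from diverging.
        E /= np.maximum(np.linalg.norm(E, axis=1, keepdims=True), 1.0)
    return E, R
```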
Learning Densities Conditional on Many Interacting Features | stat.ML cs.LG | Learning a distribution conditional on a set of discrete-valued features is a
commonly encountered task. This becomes more challenging with a
high-dimensional feature set when there is the possibility of interaction
between the features. In addition, many frequently applied techniques consider
only prediction of the mean, but the complete conditional density is needed to
answer more complex questions. We demonstrate a novel nonparametric Bayes
method based upon a tensor factorization of feature-dependent weights for
Gaussian kernels. The method makes use of multistage feature selection for
dimension reduction. The resulting conditional density morphs flexibly with the
selected features.
| David C. Kessler and Jack Taylor and David B. Dunson | null | 1304.7230 | null | null |
Supervised Heterogeneous Multiview Learning for Joint Association Study
and Disease Diagnosis | cs.LG cs.CE stat.ML | Given genetic variations and various phenotypical traits, such as Magnetic
Resonance Imaging (MRI) features, we consider two important and related tasks
in biomedical research: i) to select genetic and phenotypical markers for
disease diagnosis and ii) to identify associations between genetic and
phenotypical data. These two tasks are tightly coupled because underlying
associations between genetic variations and phenotypical features contain the
biological basis for a disease. While a variety of sparse models have been
applied for disease diagnosis and canonical correlation analysis and its
extensions have been widely used in association studies (e.g., eQTL analysis),
these two tasks have been treated separately. To unify these two tasks, we
present a new sparse Bayesian approach for joint association study and disease
diagnosis. In this approach, common latent features are extracted from
different data sources based on sparse projection matrices and used to predict
multiple disease severity levels based on Gaussian process ordinal regression;
in return, the disease status is used to guide the discovery of relationships
between the data sources. The sparse projection matrices not only reveal
interactions between data sources but also select groups of biomarkers related
to the disease. To learn the model from data, we develop an efficient
variational expectation maximization algorithm. Simulation results demonstrate
that our approach achieves higher accuracy in both predicting ordinal labels
and discovering associations between data sources than alternative methods. We
apply our approach to an imaging genetics dataset for the study of Alzheimer's
Disease (AD). Our method identifies biologically meaningful relationships
between genetic variations, MRI features, and AD status, and achieves
significantly higher accuracy for predicting ordinal AD stages than the
competing methods.
| Shandian Zhe, Zenglin Xu, and Yuan Qi | null | 1304.7284 | null | null |
Deterministic Initialization of the K-Means Algorithm Using Hierarchical
Clustering | cs.LG cs.CV | K-means is undoubtedly the most widely used partitional clustering algorithm.
Unfortunately, due to its gradient descent nature, this algorithm is highly
sensitive to the initial placement of the cluster centers. Numerous
initialization methods have been proposed to address this problem. Many of
these methods, however, have superlinear complexity in the number of data
points, making them impractical for large data sets. On the other hand, linear
methods are often random and/or order-sensitive, which renders their results
unrepeatable. Recently, Su and Dy proposed two highly successful hierarchical
initialization methods named Var-Part and PCA-Part that are not only linear,
but also deterministic (non-random) and order-invariant. In this paper, we
propose a discriminant analysis based approach that addresses a common
deficiency of these two methods. Experiments on a large and diverse collection
of data sets from the UCI Machine Learning Repository demonstrate that Var-Part
and PCA-Part are highly competitive with one of the best random initialization
methods to date, i.e., k-means++, and that the proposed approach significantly
improves the performance of both hierarchical methods.
| M. Emre Celebi and Hassan A. Kingravi | 10.1142/S0218001412500188 | 1304.7465 | null | null |
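A sketch in the spirit of PCA-Part (a simplification, not Su and Dy's exact procedure): repeatedly split the cluster with the largest sum of squared errors at its mean, along its first principal direction, and use the resulting cluster means to seed k-means. Splits are assumed non-degenerate.

```python
import numpy as np

def pca_part(X, k):
    """Deterministic, order-invariant k-means seeding: recursively bisect the
    worst cluster along its first principal axis at the cluster mean."""
    clusters = [np.arange(len(X))]
    while len(clusters) < k:
        sse = [((X[idx] - X[idx].mean(0)) ** 2).sum() for idx in clusters]
        idx = clusters.pop(int(np.argmax(sse)))     # cluster with largest SSE
        Xc = X[idx] - X[idx].mean(0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        side = Xc @ Vt[0] >= 0                      # split along principal axis
        clusters += [idx[side], idx[~side]]
    return np.array([X[idx].mean(0) for idx in clusters])   # seeds for k-means
```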
Semi-supervised Eigenvectors for Large-scale Locally-biased Learning | cs.LG math.SP stat.ML | In many applications, one has side information, e.g., labels that are
provided in a semi-supervised manner, about a specific target region of a large
data set, and one wants to perform machine learning and data analysis tasks
"nearby" that prespecified target region. For example, one might be interested
in the clustering structure of a data graph near a prespecified "seed set" of
nodes, or one might be interested in finding partitions in an image that are
near a prespecified "ground truth" set of pixels. Locally-biased problems of
this sort are particularly challenging for popular eigenvector-based machine
learning and data analysis tools. At root, the reason is that eigenvectors are
inherently global quantities, thus limiting the applicability of
eigenvector-based methods in situations where one is interested in very local
properties of the data.
In this paper, we address this issue by providing a methodology to construct
semi-supervised eigenvectors of a graph Laplacian, and we illustrate how these
locally-biased eigenvectors can be used to perform locally-biased machine
learning. These semi-supervised eigenvectors capture
successively-orthogonalized directions of maximum variance, conditioned on
being well-correlated with an input seed set of nodes that is assumed to be
provided in a semi-supervised manner. We show that these semi-supervised
eigenvectors can be computed quickly as the solution to a system of linear
equations; and we also describe several variants of our basic method that have
improved scaling properties. We provide several empirical examples
demonstrating how these semi-supervised eigenvectors can be used to perform
locally-biased learning; and we discuss the relationship between our results
and recent machine learning algorithms that use global eigenvectors of the
graph Laplacian.
| Toke J. Hansen and Michael W. Mahoney | null | 1304.7528 | null | null |
Fractal structures in Adversarial Prediction | cs.LG | Fractals are self-similar recursive structures that have been used in
modeling several real world processes. In this work we study how "fractal-like"
processes arise in a prediction game where an adversary is generating a
sequence of bits and an algorithm is trying to predict them. We will see that
under a certain formalization of the predictive payoff for the algorithm, it
is optimal for the adversary to produce a fractal-like sequence to minimize
the algorithm's ability to predict. Indeed it has been suggested before that
financial markets exhibit a fractal-like behavior. We prove that a fractal-like
distribution arises naturally out of an optimization from the adversary's
perspective.
In addition, we give optimal trade-offs between predictability and expected
deviation (i.e. sum of bits) for our formalization of predictive payoff. This
result is motivated by the observation that several time series data exhibit
higher deviations than expected for a completely random walk.
| Rina Panigrahy and Preyas Popat | null | 1304.7576 | null | null |
Optimal amortized regret in every interval | cs.LG cs.DS stat.ML | Consider the classical problem of predicting the next bit in a sequence of
bits. A standard performance measure is {\em regret} (loss in payoff) with
respect to a set of experts. For example if we measure performance with respect
to two constant experts one that always predicts 0's and another that always
predicts 1's it is well known that one can get regret $O(\sqrt T)$ with respect
to the best expert by using, say, the weighted majority algorithm. But this
algorithm does not provide performance guarantee in any interval. There are
other algorithms that ensure regret $O(\sqrt {x \log T})$ in any interval of
length $x$. In this paper we show a randomized algorithm that in an amortized
sense gets a regret of $O(\sqrt x)$ for any interval when the sequence is
partitioned into intervals arbitrarily. We empirically estimated the constant
in the $O(\cdot)$ for $T$ up to 2000 and found it to be small, around 2.1. We also
experimentally evaluate the efficacy of this algorithm in predicting high
frequency stock data.
| Rina Panigrahy and Preyas Popat | null | 1304.7577 | null | null |
Learning Geo-Temporal Non-Stationary Failure and Recovery of Power
Distribution | cs.SY cs.LG physics.soc-ph | Smart energy grid is an emerging area for new applications of machine
learning in a non-stationary environment. Such a non-stationary environment
emerges when large-scale failures occur at power distribution networks due to
external disturbances such as hurricanes and severe storms. Power distribution
networks lie at the edge of the grid, and are especially vulnerable to external
disruptions. Quantifiable approaches are lacking, yet needed, to learn
non-stationary behaviors of large-scale failure and recovery of power
distribution. This work studies such non-stationary behaviors in three aspects.
First, a novel formulation is derived for an entire life cycle of large-scale
failure and recovery of power distribution. Second, spatial-temporal models of
failure and recovery of power distribution are developed as geo-location based
multivariate non-stationary $GI(t)/G(t)/\infty$ queues. Third, the
non-stationary spatial-temporal models identify a small number of parameters to
be learned. Learning is applied to two real-life examples of large-scale
disruptions. One is from Hurricane Ike, where data from an operational network
is exact on failures and recoveries. The other is from Hurricane Sandy, where
aggregated data is used for inferring failure and recovery processes at one of
the impacted areas. Model parameters are learned using real data. Two findings
emerge as results of learning: (a) Failure rates behave similarly at the two
different provider networks for two different hurricanes but differently at the
geographical regions. (b) Both rapid- and slow-recovery are present for
Hurricane Ike but only slow recovery is shown for a regional distribution
network from Hurricane Sandy.
| Yun Wei and Chuanyi Ji and Floyd Galvan and Stephen Couvillon and
George Orellana and James Momoh | null | 1304.7710 | null | null |
North Atlantic Right Whale Contact Call Detection | cs.LG cs.SD | The North Atlantic right whale (Eubalaena glacialis) is an endangered
species. These whales continuously suffer deadly vessel impacts along the
eastern coast of North America. There have been countless efforts to save the
remaining 350-400 of them. One of the most prominent is by Marinexplore and
Cornell University: a system of hydrophones linked to satellite-connected
buoys has been deployed in the whales' habitat. These hydrophones record and
transmit live sounds to a base station. These recordings may contain the
right whale contact call as well as many other noises. The
noise rate increases rapidly in vessel-busy areas such as by the Boston harbor.
This paper presents and studies the problem of detecting the North Atlantic
right whale contact call with the presence of noise and other marine life
sounds. A novel algorithm was developed to preprocess the sound waves before a
tree based hierarchical classifier is used to classify the data and provide a
score. The developed model was trained with 30,000 data points made available
through the Cornell University Whale Detection Challenge program. Results
showed that the developed algorithm had close to 85% success rate in detecting
the presence of the North Atlantic right whale.
| Rami Abousleiman, Guangzhi Qu, Osamah Rawashdeh | null | 1304.7851 | null | null |
Semi-Supervised Information-Maximization Clustering | cs.LG stat.ML | Semi-supervised clustering aims to introduce prior knowledge in the decision
process of a clustering algorithm. In this paper, we propose a novel
semi-supervised clustering algorithm based on the information-maximization
principle. The proposed method is an extension of a previous unsupervised
information-maximization clustering algorithm based on squared-loss mutual
information to effectively incorporate must-links and cannot-links. The
proposed method is computationally efficient because the clustering solution
can be obtained analytically via eigendecomposition. Furthermore, the proposed
method allows systematic optimization of tuning parameters such as the kernel
width, given the degree of belief in the must-links and cannot-links. The
usefulness of the proposed method is demonstrated through experiments.
| Daniele Calandriello, Gang Niu, Masashi Sugiyama | null | 1304.8020 | null | null |
Uniqueness of Tensor Decompositions with Applications to Polynomial
Identifiability | cs.DS cs.LG math.ST stat.TH | We give a robust version of the celebrated result of Kruskal on the
uniqueness of tensor decompositions: we prove that given a tensor whose
decomposition satisfies a robust form of Kruskal's rank condition, it is
possible to approximately recover the decomposition if the tensor is known up
to a sufficiently small (inverse polynomial) error.
Kruskal's theorem has found many applications in proving the identifiability
of parameters for various latent variable models and mixture models such as
Hidden Markov models, topic models etc. Our robust version immediately implies
identifiability using only polynomially many samples in many of these settings.
This polynomial identifiability is an essential first step towards efficient
learning algorithms for these models.
Recently, algorithms based on tensor decompositions have been used to
estimate the parameters of various hidden variable models efficiently in
special cases as long as they satisfy certain "non-degeneracy" properties. Our
methods give a way to go beyond this non-degeneracy barrier, and establish
polynomial identifiability of the parameters under much milder conditions.
Given the importance of Kruskal's theorem in the tensor literature, we expect
that this robust version will have several applications beyond the settings we
explore in this work.
| Aditya Bhaskara, Moses Charikar, Aravindan Vijayaraghavan | null | 1304.8087 | null | null |
Inferring ground truth from multi-annotator ordinal data: a
probabilistic approach | stat.ML cs.LG | A popular approach for large scale data annotation tasks is crowdsourcing,
wherein each data point is labeled by multiple noisy annotators. We consider
the problem of inferring ground truth from noisy ordinal labels obtained from
multiple annotators of varying and unknown expertise levels. Annotation models
for ordinal data have been proposed mostly as extensions of their
binary/categorical counterparts and have received little attention in the
crowdsourcing literature. We propose a new model for crowdsourced ordinal data
that accounts for instance difficulty as well as annotator expertise, and
derive a variational Bayesian inference algorithm for parameter estimation. We
analyze the ordinal extensions of several state-of-the-art annotator models for
binary/categorical labels and evaluate the performance of all the models on two
real world datasets containing ordinal query-URL relevance scores, collected
through Amazon's Mechanical Turk. Our results indicate that the proposed model
performs better or as well as existing state-of-the-art methods and is more
resistant to `spammy' annotators (i.e., annotators who assign labels randomly
without actually looking at the instance) than popular baselines such as mean,
median, and majority vote which do not account for annotator expertise.
| Balaji Lakshminarayanan and Yee Whye Teh | null | 1305.0015 | null | null |
Revealing social networks of spammers through spectral clustering | cs.SI cs.LG physics.soc-ph stat.ML | To date, most studies on spam have focused only on the spamming phase of the
spam cycle and have ignored the harvesting phase, which consists of the mass
acquisition of email addresses. It has been observed that spammers conceal
their identity to a lesser degree in the harvesting phase, so it may be
possible to gain new insights into spammers' behavior by studying the behavior
of harvesters, which are individuals or bots that collect email addresses. In
this paper, we reveal social networks of spammers by identifying communities of
harvesters with high behavioral similarity using spectral clustering. The data
analyzed was collected through Project Honey Pot, a distributed system for
monitoring harvesting and spamming. Our main findings are (1) that most
spammers either send only phishing emails or no phishing emails at all, (2)
that most communities of spammers also send only phishing emails or no phishing
emails at all, and (3) that several groups of spammers within communities
exhibit coherent temporal behavior and have similar IP addresses. Our findings
reveal some previously unknown behavior of spammers and suggest that there is
indeed social structure between spammers to be discovered.
| Kevin S. Xu, Mark Kliger, Yilun Chen, Peter J. Woolf, and Alfred O.
Hero III | 10.1109/ICC.2009.5199418 | 1305.0051 | null | null |
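A generic sketch of normalized spectral clustering of the kind used above (Ng-Jordan-Weiss style), assuming a dense symmetric similarity matrix; the scipy/scikit-learn calls are standard:

```python
import numpy as np
from scipy.sparse.linalg import eigsh
from sklearn.cluster import KMeans

def spectral_clustering(S, k):
    """Embed with the top-k eigenvectors of D^{-1/2} S D^{-1/2},
    row-normalize, then run k-means in the spectral embedding."""
    d = S.sum(1)
    Dhalf = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = Dhalf[:, None] * S * Dhalf[None, :]       # normalized affinity
    vals, vecs = eigsh(L, k=k, which='LA')        # k largest eigenvectors
    U = vecs / np.maximum(np.linalg.norm(vecs, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)
```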
Clustering Unclustered Data: Unsupervised Binary Labeling of Two
Datasets Having Different Class Balances | cs.LG | We consider the unsupervised learning problem of assigning labels to
unlabeled data. A naive approach is to use clustering methods, but this works
well only when data is properly clustered and each cluster corresponds to an
underlying class. In this paper, we first show that this unsupervised labeling
problem in balanced binary cases can be solved if two unlabeled datasets having
different class balances are available. More specifically, estimation of the
sign of the difference between probability densities of two unlabeled datasets
gives the solution. We then introduce a new method to directly estimate the
sign of the density difference without density estimation. Finally, we
demonstrate the usefulness of the proposed method against several clustering
methods on various toy problems and real-world datasets.
| Marthinus Christoffel du Plessis and Masashi Sugiyama | null | 1305.0103 | null | null |
Perceptron Mistake Bounds | cs.LG | We present a brief survey of existing mistake bounds and introduce novel
bounds for the Perceptron or the kernel Perceptron algorithm. Our novel bounds
generalize beyond standard margin-loss type bounds, allow for any convex and
Lipschitz loss function, and admit a very simple proof.
| Mehryar Mohri, Afshin Rostamizadeh | null | 1305.0208 | null | null |
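For reference, the algorithm the surveyed bounds apply to, in its kernelized form (the Gram matrix K is assumed precomputed):

```python
import numpy as np

def kernel_perceptron(K, y, epochs=5):
    """Kernel Perceptron: K is the n x n Gram matrix, y in {-1, +1}.
    alpha counts the mistakes on each example; mistake bounds of the kind
    surveyed above control sum(alpha) via margin/loss quantities."""
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(epochs):
        for i in range(n):
            if y[i] * ((alpha * y) @ K[:, i]) <= 0:   # mistake: update
                alpha[i] += 1
    return alpha   # predict sign(sum_j alpha_j y_j k(x_j, x))
```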
Model Selection for High-Dimensional Regression under the Generalized
Irrepresentability Condition | math.ST cs.IT cs.LG math.IT stat.ME stat.ML stat.TH | In the high-dimensional regression model a response variable is linearly
related to $p$ covariates, but the sample size $n$ is smaller than $p$. We
assume that only a small subset of covariates is `active' (i.e., the
corresponding coefficients are non-zero), and consider the model-selection
problem of identifying the active covariates. A popular approach is to estimate
the regression coefficients through the Lasso ($\ell_1$-regularized least
squares). This is known to correctly identify the active set only if the
irrelevant covariates are roughly orthogonal to the relevant ones, as
quantified through the so-called `irrepresentability' condition. In this paper
we study the `Gauss-Lasso' selector, a simple two-stage method that first
solves the Lasso, and then performs ordinary least squares restricted to the
Lasso active set. We formulate `generalized irrepresentability condition'
(GIC), an assumption that is substantially weaker than irrepresentability. We
prove that, under GIC, the Gauss-Lasso correctly recovers the active set.
| Adel Javanmard and Andrea Montanari | null | 1305.0355 | null | null |
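The two-stage procedure is short enough to sketch with scikit-learn; the penalty `alpha` is an illustrative placeholder (note that scikit-learn scales the least-squares term by 1/(2n), so `alpha` does not map one-to-one onto the paper's regularization parameter):

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def gauss_lasso(X, y, alpha=0.1):
    """Two-stage Gauss-Lasso: run the Lasso, then refit by ordinary least
    squares restricted to the Lasso active set."""
    lasso = Lasso(alpha=alpha).fit(X, y)
    active = np.flatnonzero(lasso.coef_)          # estimated active set
    beta = np.zeros(X.shape[1])
    if active.size:
        beta[active] = LinearRegression().fit(X[:, active], y).coef_
    return beta, active
```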
Tensor Decompositions: A New Concept in Brain Data Analysis? | cs.NA cs.LG q-bio.NC stat.ML | Matrix factorizations and their extensions to tensor factorizations and
decompositions have become prominent techniques for linear and multilinear
blind source separation (BSS), especially multiway Independent Component
Analysis (ICA), Nonnegative Matrix and Tensor Factorization (NMF/NTF), Smooth
Component Analysis (SmoCA) and Sparse Component Analysis (SCA). Moreover,
tensor decompositions have many other potential applications beyond multilinear
BSS, especially feature extraction, classification, dimensionality reduction
and multiway clustering. In this paper, we briefly overview new and emerging
models and approaches for tensor decompositions in applications to group and
linked multiway BSS/ICA, feature extraction, classification and Multiway Partial
Least Squares (MPLS) regression problems. Keywords: Multilinear BSS, linked
multiway BSS/ICA, tensor factorizations and decompositions, constrained Tucker
and CP models, Penalized Tensor Decompositions (PTD), feature extraction,
classification, multiway PLS and CCA.
| Andrzej Cichocki | null | 1305.0395 | null | null |
Testing Hypotheses by Regularized Maximum Mean Discrepancy | cs.LG cs.AI stat.ML | Do two data samples come from different distributions? Recent studies of this
fundamental problem have focused on embedding probability distributions into
sufficiently rich characteristic Reproducing Kernel Hilbert Spaces (RKHSs), to
compare distributions by the distance between their embeddings. We show that
Regularized Maximum Mean Discrepancy (RMMD), our novel measure for kernel-based
hypothesis testing, yields substantial improvements even when sample sizes are
small, and excels at hypothesis tests involving multiple comparisons with power
control. We derive asymptotic distributions under the null and alternative
hypotheses, and assess power control. Outstanding results are obtained on:
challenging EEG data, MNIST, the Berkeley Covertype, and the Flare-Solar
dataset.
| Somayeh Danafar, Paola M.V. Rancoita, Tobias Glasmachers, Kevin
Whittingstall, Juergen Schmidhuber | null | 1305.0423 | null | null |
Deep Learning of Representations: Looking Forward | cs.LG | Deep learning research aims at discovering learning algorithms that discover
multiple levels of distributed representations, with higher levels representing
more abstract concepts. Although the study of deep learning has already led to
impressive theoretical results, learning algorithms and breakthrough
experiments, several challenges lie ahead. This paper proposes to examine some
of these challenges, centering on the questions of scaling deep learning
algorithms to much larger models and datasets, reducing optimization
difficulties due to ill-conditioning or local minima, designing more efficient
and powerful inference and sampling procedures, and learning to disentangle the
factors of variation underlying the observed data. It also proposes a few
forward-looking research directions aimed at overcoming these challenges.
| Yoshua Bengio | null | 1305.0445 | null | null |
An Improved EM algorithm | cs.LG cs.AI stat.ML | In this paper, we first give a brief introduction to the expectation
maximization (EM) algorithm, and then discuss its sensitivity to initial
values. Subsequently, we give a short proof of EM's convergence. Then, we
implement experiments with the expectation maximization algorithm (all
experiments are on the Gaussian mixture model (GMM)). Our experiments with
expectation maximization are performed in the following three cases:
initialize randomly; initialize with the result of K-means; initialize with
the result of K-medoids. The experimental results show that the expectation
maximization algorithm depends on its initial state or parameters, and that
EM initialized with K-medoids performed better than both the one initialized
with K-means and the one initialized randomly.
| Fuqiang Chen | null | 1305.0626 | null | null |
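A sketch of EM for a Gaussian mixture initialized from k-means, the middle one of the three schemes compared above; spherical covariances are a simplification for brevity:

```python
import numpy as np
from sklearn.cluster import KMeans

def em_gmm(X, k, iters=100):
    """EM for a spherical Gaussian mixture, initialized from k-means."""
    n, d = X.shape
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(X)
    mu = np.array([X[labels == c].mean(0) for c in range(k)])
    var = np.full(k, X.var())
    pi = np.bincount(labels, minlength=k) / n
    for _ in range(iters):
        # E-step: responsibilities under spherical Gaussians.
        logp = (-0.5 * ((X[:, None] - mu[None]) ** 2).sum(-1) / var
                - 0.5 * d * np.log(2 * np.pi * var) + np.log(pi))
        logp -= logp.max(1, keepdims=True)          # stabilize the softmax
        r = np.exp(logp)
        r /= r.sum(1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        Nk = r.sum(0) + 1e-12
        pi = Nk / n
        mu = (r.T @ X) / Nk[:, None]
        var = np.array([(r[:, c] * ((X - mu[c]) ** 2).sum(1)).sum()
                        for c in range(k)]) / (d * Nk)
    return pi, mu, var
```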
Feature Selection Based on Term Frequency and T-Test for Text
Categorization | cs.LG cs.IR stat.ML | Much work has been done on feature selection. Existing methods are based on
document frequency, such as Chi-Square Statistic, Information Gain etc.
However, these methods have two shortcomings: one is that they are not reliable
for low-frequency terms, and the other is that they only count whether one term
occurs in a document and ignore the term frequency. Actually, high-frequency
terms within a specific category are often regarded as discriminators.
This paper focuses on how to construct the feature selection function based
on term frequency, and proposes a new approach based on $t$-test, which is used
to measure the diversity of the distributions of a term between the specific
category and the entire corpus. Extensive comparative experiments on two text
corpora using three classifiers show that our new approach is comparable to
or slightly better than the state-of-the-art feature selection methods (i.e.,
$\chi^2$, and IG) in terms of macro-$F_1$ and micro-$F_1$.
| Deqing Wang, Hui Zhang, Rui Liu, Weifeng Lv | null | 1305.0638 | null | null |
Spectral Classification Using Restricted Boltzmann Machine | cs.LG | In this study, a novel machine learning algorithm, restricted Boltzmann
machine (RBM), is introduced. The algorithm is applied for the spectral
classification in astronomy. RBM is a bipartite generative graphical model with
two separate layers (one visible layer and one hidden layer), which can extract
higher level features to represent the original data. Although generative, an RBM
can be used for classification when modified with a free energy and a soft-max
function. Before spectral classification, the original data is binarized
according to some rule. Then we resort to the binary RBM to classify
cataclysmic variables (CVs) and non-CVs (one half of all the given data for
training and the other half for testing). The experiment result shows
state-of-the-art accuracy of 100%, which indicates the efficiency of the binary
RBM algorithm.
| Fuqiang Chen, Yan Wu, Yude Bu, Guodong Zhao | null | 1305.0665 | null | null |
Learning from Imprecise and Fuzzy Observations: Data Disambiguation
through Generalized Loss Minimization | cs.LG | Methods for analyzing or learning from "fuzzy data" have attracted increasing
attention in recent years. In many cases, however, existing methods (for
precise, non-fuzzy data) are extended to the fuzzy case in an ad-hoc manner,
and without carefully considering the interpretation of a fuzzy set when being
used for modeling data. Distinguishing between an ontic and an epistemic
interpretation of fuzzy set-valued data, and focusing on the latter, we argue
that a "fuzzification" of learning algorithms based on an application of the
generic extension principle is not appropriate. In fact, the extension
principle fails to properly exploit the inductive bias underlying statistical
and machine learning methods, although this bias, at least in principle, offers
a means for "disambiguating" the fuzzy data. Alternatively, we therefore
propose a method which is based on the generalization of loss functions in
empirical risk minimization, and which performs model identification and data
disambiguation simultaneously. Elaborating on the fuzzification of specific
types of losses, we establish connections to well-known loss functions in
regression and classification. We compare our approach with related methods and
illustrate its use in logistic regression for binary classification.
| Eyke H\"ullermeier | null | 1305.0698 | null | null |
On Comparison between Evolutionary Programming Network-based Learning
and Novel Evolution Strategy Algorithm-based Learning | cs.NE cs.LG | This paper presents two different evolutionary systems - Evolutionary
Programming Network (EPNet) and Novel Evolutions Strategy (NES) Algorithm.
EPNet does both training and architecture evolution simultaneously, whereas NES
uses a fixed network architecture and only trains the network. Five mutation
operators are proposed in EPNet to reflect the emphasis on evolving ANN
behaviors. Close
behavioral links between parents and their offspring are maintained by various
mutations, such as partial training and node splitting. On the other hand, NES
uses two new genetic operators - subpopulation-based max-mean arithmetical
crossover and time-variant mutation. The above-mentioned two algorithms have
been tested on a number of benchmark problems, such as the medical diagnosis
problems (breast cancer, diabetes, and heart disease). The results and the
comparison between them are also presented in this paper.
| M.A. Khayer Azad, Md. Shafiqul Islam and M.M.A. Hashem | null | 1305.0922 | null | null |
Efficient Estimation of the number of neighbours in Probabilistic K
Nearest Neighbour Classification | cs.LG stat.ML | Probabilistic k-nearest neighbour (PKNN) classification has been introduced
to improve the performance of original k-nearest neighbour (KNN) classification
algorithm by explicitly modelling uncertainty in the classification of each
feature vector. However, an issue common to both KNN and PKNN is to select the
optimal number of neighbours, $k$. The contribution of this paper is to
incorporate the uncertainty in $k$ into the decision making, and in so doing
use Bayesian model averaging to provide improved classification. Indeed the
problem of assessing the uncertainty in $k$ can be viewed as one of statistical
model selection which is one of the most important technical issues in the
statistics and machine learning domain. In this paper, a new functional
approximation algorithm is proposed to reconstruct the density of the model
(order) without relying on time-consuming Monte Carlo simulations. In addition,
this algorithm avoids cross validation by adopting a Bayesian framework. The
algorithm yielded very good performance on several real experimental
datasets.
| Ji Won Yoon and Nial Friel | null | 1305.1002 | null | null |
Simple Deep Random Model Ensemble | cs.LG | Representation learning and unsupervised learning are two central topics of
machine learning and signal processing. Deep learning is one of the most
effective unsupervised representation learning approach. The main contributions
of this paper to the topics are as follows. (i) We propose to view the
representative deep learning approaches as special cases of the knowledge reuse
framework of clustering ensemble. (ii) We propose to view sparse coding when
used as a feature encoder as the consensus function of clustering ensemble, and
view dictionary learning as the training process of the base clusterings of
clustering ensemble. (iii) Based on the above two views, we propose a very
simple deep learning algorithm, named deep random model ensemble (DRME). It is
a stack of random model ensembles. Each random model ensemble is a special
k-means ensemble that discards the expectation-maximization optimization of
each base k-means but only preserves the default initialization method of the
base k-means. (iv) We propose to select the most powerful representation among
the layers by applying DRME to clustering where the single-linkage is used as
the clustering algorithm. Moreover, the DRME based clustering can also detect
the number of the natural clusters accurately. Extensive experimental
comparisons with 5 representation learning methods on 19 benchmark data sets
demonstrate the effectiveness of DRME.
| Xiao-Lei Zhang, Ji Wu | null | 1305.1019 | null | null |
Regret Bounds for Reinforcement Learning with Policy Advice | stat.ML cs.LG | In some reinforcement learning problems an agent may be provided with a set
of input policies, perhaps learned from prior experience or provided by
advisors. We present a reinforcement learning with policy advice (RLPA)
algorithm which leverages this input set and learns to use the best policy in
the set for the reinforcement learning task at hand. We prove that RLPA has a
sub-linear regret of $\tilde O(\sqrt{T})$ relative to the best input policy, and
that both this regret and its computational complexity are independent of the
size of the state and action space. Our empirical simulations support our
theoretical analysis. This suggests RLPA may offer significant advantages in
large domains where some prior good policies are provided.
| Mohammad Gheshlaghi Azar and Alessandro Lazaric and Emma Brunskill | null | 1305.1027 | null | null |
On the Convergence and Consistency of the Blurring Mean-Shift Process | stat.ML cs.LG | The mean-shift algorithm is a popular algorithm in computer vision and image
processing. It can also be cast as a minimum gamma-divergence estimation. In
this paper we focus on the "blurring" mean shift algorithm, which is one
version of the mean-shift process that successively blurs the dataset. The
analysis of the blurring mean-shift is relatively more complicated compared to
the nonblurring version, yet the algorithm convergence and the estimation
consistency have not been well studied in the literature. In this paper we
prove both the convergence and the consistency of the blurring mean-shift. We
also perform simulation studies to compare the efficiency of the blurring and
the nonblurring versions of the mean-shift algorithms. Our results show that
the blurring mean-shift is more efficient.
| Ting-Li Chen | null | 1305.1040 | null | null |
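A minimal sketch of the blurring mean-shift process analyzed above, assuming a Gaussian kernel: at every iteration the whole dataset is blurred by replacing each point with a kernel-weighted mean of the current points. The bandwidth and stopping rule here are illustrative assumptions.

```python
import numpy as np

def blurring_mean_shift(X, bandwidth=1.0, n_iter=50, tol=1e-6):
    """Blurring mean-shift: at each step, every point is replaced by the
    kernel-weighted mean of the *current* (already blurred) dataset."""
    X = np.asarray(X, dtype=float).copy()
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
        W = np.exp(-d2 / (2.0 * bandwidth ** 2))      # Gaussian kernel weights
        X_new = W @ X / W.sum(axis=1, keepdims=True)  # weighted means
        if np.abs(X_new - X).max() < tol:             # dataset stopped moving
            return X_new
        X = X_new
    return X
```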
Gromov-Hausdorff Approximation of Metric Spaces with Linear Structure | cs.CG cs.LG math.MG | In many real-world applications data come as discrete metric spaces sampled
around 1-dimensional filamentary structures that can be seen as metric graphs.
In this paper we address the metric reconstruction problem of such filamentary
structures from data sampled around them. We prove that they can be
approximated, with respect to the Gromov-Hausdorff distance, by well-chosen Reeb
graphs (and some of their variants), and we provide an efficient and
easy-to-implement algorithm to compute such approximations in almost linear
time. We illustrate the performance of our algorithm on a few synthetic and real data
sets.
| Fr\'ed\'eric Chazal (INRIA Sophia Antipolis / INRIA Saclay - Ile de
France), Jian Sun | null | 1305.1172 | null | null |
A Differential Equations Approach to Optimizing Regret Trade-offs | cs.LG | We consider the classical question of predicting binary sequences and study
the {\em optimal} algorithms for obtaining the best possible regret and payoff
functions for this problem. The question turns out to be also equivalent to the
problem of optimal trade-offs between the regrets of two experts in an "experts
problem", studied before by \cite{kearns-regret}. While, say, a regret of
$\Theta(\sqrt{T})$ is known, we argue that it is important to ask what the
provably optimal algorithm for this problem is --- both because it leads to
natural algorithms, as well as because regret is in fact often comparable in
magnitude to the final payoffs and hence is a non-negligible term.
In the basic setting, the result essentially follows from a classical result
of Cover from '65. Here instead, we focus on another standard setting, of
time-discounted payoffs, where the final "stopping time" is not specified. We
exhibit an explicit characterization of the optimal regret for this setting.
To obtain our main result, we show that the optimal payoff functions have to
satisfy the Hermite differential equation, and hence are given by the solutions
to this equation. It turns out that characterization of the payoff function is
qualitatively different from the classical (non-discounted) setting; namely,
there is essentially a unique optimal solution.
| Alexandr Andoni and Rina Panigrahy | null | 1305.1359 | null | null |
One-Pass AUC Optimization | cs.LG | AUC is an important performance measure and many algorithms have been devoted
to AUC optimization, mostly by minimizing a surrogate convex loss on a training
data set. In this work, we focus on one-pass AUC optimization that requires
only going through the training data once without storing the entire training
dataset, where conventional online learning algorithms cannot be applied
directly because AUC is measured by a sum of losses defined over pairs of
instances from different classes. We develop a regression-based algorithm which
only needs to maintain the first and second order statistics of training data
in memory, resulting in a storage requirement independent of the size of the
training data. To efficiently handle high-dimensional data, we develop a
randomized algorithm that approximates the covariance matrices by low rank
matrices. We verify, both theoretically and empirically, the effectiveness of
the proposed algorithm.
| Wei Gao and Rong Jin and Shenghuo Zhu and Zhi-Hua Zhou | null | 1305.1363 | null | null |
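The following hedged sketch illustrates the statistics-based idea described above: only running class means and second moments are kept in memory, and the weight vector is updated from the expected squared pairwise loss, whose gradient depends on the data only through those statistics. This is not the authors' exact algorithm; the surrogate loss, learning rate, and update are illustrative.

```python
import numpy as np

class OnePassAUCSketch:
    """One-pass, statistics-based AUC optimization sketch: only running class
    means and (uncentered) second moments are stored, never the data itself.
    Uses the squared pairwise surrogate E[(1 - w'(x+ - x-))^2], whose gradient
    depends on the data only through these statistics."""

    def __init__(self, dim, lr=0.01):
        self.w = np.zeros(dim)
        self.lr = lr
        self.m = {1: np.zeros(dim), 0: np.zeros(dim)}         # class means
        self.S = {1: np.zeros((dim, dim)), 0: np.zeros((dim, dim))}
        self.n = {1: 0, 0: 0}

    def update(self, x, y):
        y = int(y)
        self.n[y] += 1
        self.m[y] += (x - self.m[y]) / self.n[y]              # running mean
        self.S[y] += (np.outer(x, x) - self.S[y]) / self.n[y] # running E[xx']
        if self.n[1] == 0 or self.n[0] == 0:
            return                                            # need both classes
        d = self.m[1] - self.m[0]
        # class covariances recovered from the stored moments
        C = (self.S[1] - np.outer(self.m[1], self.m[1])
             + self.S[0] - np.outer(self.m[0], self.m[0]))
        grad = -2 * (1 - self.w @ d) * d + 2 * C @ self.w
        self.w -= self.lr * grad
```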
A new framework for optimal classifier design | cs.CV cs.LG stat.ML | The use of alternative measures to evaluate classifier performance is gaining
attention, especially for imbalanced problems. However, the use of these
measures in the classifier design process remains an open problem. In this work we
propose a classifier designed specifically to optimize one of these alternative
measures, namely, the so-called F-measure. Nevertheless, the technique is
general, and it can be used to optimize other evaluation measures. An algorithm
to train the novel classifier is proposed, and the numerical scheme is tested
with several databases, showing the optimality and robustness of the presented
classifier.
| Mat\'ias Di Martino, Guzman Hern\'andez, Marcelo Fiori, Alicia
Fern\'andez | 10.1016/j.patcog.2013.01.006 | 1305.1396 | null | null |
High Level Pattern Classification via Tourist Walks in Networks | cs.AI cs.LG | Complex networks refer to large-scale graphs with nontrivial connection
patterns. The salient and interesting features that the complex network study
offers in comparison to graph theory are the emphasis on the dynamical
properties of the networks and the ability of inherently uncovering pattern
formation of the vertices. In this paper, we present a hybrid data
classification technique combining a low level and a high level classifier. The
low level term can be equipped with any traditional classification techniques,
which realize the classification task considering only physical features (e.g.,
geometrical or statistical features) of the input data. On the other hand, the
high level term has the ability of detecting data patterns with semantic
meanings. In this way, the classification is realized by means of the
extraction of the underlying network's features constructed from the input
data. As a result, the high level classification process measures the
compliance of the test instances with the pattern formation of the training
data. Out of various high level perspectives that can be utilized to capture
semantic meaning, we utilize the dynamical features that are generated from a
tourist walker in a networked environment. Specifically, a weighted combination
of transient and cycle lengths generated by the tourist walk is employed for
that end. Interestingly, our study shows that the proposed technique is able to
further improve the already optimized performance of traditional classification
techniques.
| Thiago Christiano Silva and Liang Zhao | null | 1305.1679 | null | null |
Class Imbalance Problem in Data Mining Review | cs.LG | In the last few years, the
classification of data has undergone major changes and evolution. As the
application areas of technology grow, the size of data grows as well.
Classification of data becomes difficult because of its unbounded size and
imbalanced nature. The class imbalance problem has become a major issue in data
mining. An imbalance problem occurs when one of the two classes has more
samples than the other. Most algorithms focus on classifying the majority
samples while ignoring or misclassifying the minority samples. The minority
samples are those that occur rarely but are very important. Different methods
are available for the classification of imbalanced data sets, divided into
three main categories: the algorithmic approach, the data-preprocessing
approach, and the feature selection approach. Each of these techniques has its
own advantages and disadvantages. This paper presents a systematic study of
each approach, giving the right direction for research on the class imbalance
problem.
| Rushi Longadge and Snehalata Dongre | null | 1305.1707 | null | null |
Cover Tree Bayesian Reinforcement Learning | stat.ML cs.LG | This paper proposes an online tree-based Bayesian approach for reinforcement
learning. For inference, we employ a generalised context tree model. This
defines a distribution on multivariate Gaussian piecewise-linear models, which
can be updated in closed form. The tree structure itself is constructed using
the cover tree method, which remains efficient in high dimensional spaces. We
combine the model with Thompson sampling and approximate dynamic programming to
obtain effective exploration policies in unknown environments. The flexibility
and computational simplicity of the model render it suitable for many
reinforcement learning problems in continuous state spaces. We demonstrate this
in an experimental comparison with least squares policy iteration.
| Nikolaos Tziortziotis and Christos Dimitrakakis and Konstantinos
Blekas | null | 1305.1809 | null | null |
Joint Topic Modeling and Factor Analysis of Textual Information and
Graded Response Data | stat.ML cs.LG | Modern machine learning methods are critical to the development of
large-scale personalized learning systems that cater directly to the needs of
individual learners. The recently developed SPARse Factor Analysis (SPARFA)
framework provides a new statistical model and algorithms for machine
learning-based learning analytics, which estimate a learner's knowledge of the
latent concepts underlying a domain, and content analytics, which estimate the
relationships among a collection of questions and the latent concepts. SPARFA
estimates these quantities given only the binary-valued graded responses to a
collection of questions. In order to better interpret the estimated latent
concepts, SPARFA relies on a post-processing step that utilizes user-defined
tags (e.g., topics or keywords) available for each question. In this paper, we
relax the need for user-defined tags by extending SPARFA to jointly process
both graded learner responses and the text of each question and its associated
answer(s) or other feedback. Our purely data-driven approach (i) enhances the
interpretability of the estimated latent concepts without the need of
explicitly generating a set of tags or performing a post-processing step, (ii)
improves the prediction performance of SPARFA, and (iii) scales to large
test/assessments where human annotation would prove burdensome. We demonstrate
the efficacy of the proposed approach on two real educational datasets.
| Andrew S. Lan, Christoph Studer, Andrew E. Waters and Richard G.
Baraniuk | null | 1305.1956 | null | null |
Calibrated Multivariate Regression with Application to Neural Semantic
Basis Discovery | stat.ML cs.LG | We propose a calibrated multivariate regression method named CMR for fitting
high dimensional multivariate regression models. Compared with existing
methods, CMR calibrates regularization for each regression task with respect to
its noise level so that it simultaneously attains improved finite-sample
performance and tuning insensitiveness. Theoretically, we provide sufficient
conditions under which CMR achieves the optimal rate of convergence in
parameter estimation. Computationally, we propose an efficient smoothed
proximal gradient algorithm with a worst-case numerical rate of convergence
$\mathcal{O}(1/\epsilon)$, where $\epsilon$ is a pre-specified accuracy of the
objective function value. We conduct thorough numerical simulations to
illustrate that CMR consistently outperforms other high dimensional
multivariate regression methods. We also apply CMR to solve a brain activity
prediction problem and find that it is as competitive as a handcrafted model
created by human experts. The R package \texttt{camel} implementing the
proposed method is available on the Comprehensive R Archive Network
\url{http://cran.r-project.org/web/packages/camel/}.
| Han Liu and Lie Wang and Tuo Zhao | null | 1305.2238 | null | null |
Revisiting Bayesian Blind Deconvolution | cs.CV cs.LG stat.ML | Blind deconvolution involves the estimation of a sharp signal or image given
only a blurry observation. Because this problem is fundamentally ill-posed,
strong priors on both the sharp image and blur kernel are required to
regularize the solution space. While this naturally leads to a standard MAP
estimation framework, performance is compromised by unknown trade-off parameter
settings, optimization heuristics, and convergence issues stemming from
non-convexity and/or poor prior selections. To mitigate some of these problems,
a number of authors have recently proposed substituting a variational Bayesian
(VB) strategy that marginalizes over the high-dimensional image space leading
to better estimates of the blur kernel. However, the underlying cost function
now involves both integrals with no closed-form solution and complex,
function-valued arguments, thus losing the transparency of MAP. Beyond standard
Bayesian-inspired intuitions, it thus remains unclear by exactly what mechanism
these methods are able to operate, rendering understanding, improvements and
extensions more difficult. To elucidate these issues, we demonstrate that the
VB methodology can be recast as an unconventional MAP problem with a very
particular penalty/prior that couples the image, blur kernel, and noise level
in a principled way. This unique penalty has a number of useful characteristics
pertaining to relative concavity, local minima avoidance, and scale-invariance
that allow us to rigorously explain the success of VB including its existing
implementational heuristics and approximations. It also provides strict
criteria for choosing the optimal image prior that, perhaps
counter-intuitively, need not reflect the statistics of natural scenes. In so
doing we challenge the prevailing notion of why VB is successful for blind
deconvolution while providing a transparent platform for introducing
enhancements.
| David Wipf and Haichao Zhang | null | 1305.2362 | null | null |
Fast Feature Reduction in intrusion detection datasets | cs.CR cs.LG | In most intrusion
detection systems (IDS), the system tries to learn the characteristics of
different types of attacks by analyzing packets sent or received in the
network. These packets have many features, but not all of them need to be
analyzed to detect a specific type of attack. Detection speed and computational
cost are also vital concerns, because datasets in these problems are typically
very large. In this paper we propose a very simple and fast feature selection
method that eliminates features carrying no helpful information, resulting in
faster learning through the omission of redundant features. We compared our
proposed method with three of the most successful similarity-based feature
selection algorithms: Correlation Coefficient, Least Square Regression Error,
and Maximal Information Compression Index. We then used the features
recommended by each of these algorithms in two popular classifiers, Bayes and
KNN, to measure the quality of the recommendations. Experimental results show
that although the proposed method does not outperform the evaluated algorithms
by a large margin in accuracy, it has a huge advantage over them in
computational cost.
| Shafigh Parsazad, Ehsan Saboori, Amin Allahyar | null | 1305.2388 | null | null |
Stochastic Collapsed Variational Bayesian Inference for Latent Dirichlet
Allocation | cs.LG | In the internet era there has been an explosion in the amount of digital text
information available, leading to difficulties of scale for traditional
inference algorithms for topic models. Recent advances in stochastic
variational inference algorithms for latent Dirichlet allocation (LDA) have
made it feasible to learn topic models on large-scale corpora, but these
methods do not currently take full advantage of the collapsed representation of
the model. We propose a stochastic algorithm for collapsed variational Bayesian
inference for LDA, which is simpler and more efficient than the state of the
art method. We show connections between collapsed variational Bayesian
inference and MAP estimation for LDA, and leverage these connections to prove
convergence properties of the proposed algorithm. In experiments on large-scale
text corpora, the algorithm was found to converge faster and often to a better
solution than the previous method. Human-subject experiments also demonstrated
that the method can learn coherent topics in seconds on small corpora,
facilitating the use of topic models in interactive document analysis software.
| James Foulds, Levi Boyles, Christopher Dubois, Padhraic Smyth, Max
Welling | null | 1305.2452 | null | null |
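A rough sketch of a stochastic CVB0-style update for LDA, under the assumption of simple decaying step sizes and per-document re-initialization; the paper's exact step-size schedule, burn-in passes, and count corrections differ. Expected counts replace hard topic assignments, and each observed token nudges them toward its variational posterior.

```python
import numpy as np

def scvb0_lda_sketch(docs, V, K=20, alpha=0.1, eta=0.01, n_sweeps=5, seed=0):
    """docs: list of lists of word ids in [0, V). Keeps expected counts only."""
    rng = np.random.default_rng(seed)
    N_phi = rng.random((K, V))          # expected topic-word counts
    N_z = N_phi.sum(axis=1)             # expected per-topic totals
    C = sum(len(doc) for doc in docs)   # total number of tokens in the corpus
    t = 0
    for _ in range(n_sweeps):
        for doc in docs:
            N_theta = np.full(K, alpha)         # doc-topic counts (fresh per pass;
            for w in doc:                       # the paper uses a burn-in instead)
                # CVB0-style variational posterior over topics for this token
                gamma = (N_phi[:, w] + eta) / (N_z + V * eta) * (N_theta + alpha)
                gamma /= gamma.sum()
                t += 1
                rho = 1.0 / (10 + t) ** 0.6     # decaying step size (assumed form)
                # stochastic updates of the expected counts
                N_theta = (1 - rho) * N_theta + rho * len(doc) * gamma
                N_phi[:, w] = (1 - rho) * N_phi[:, w] + rho * C * gamma
                N_z = (1 - rho) * N_z + rho * C * gamma
    return N_phi / N_phi.sum(axis=1, keepdims=True)   # rough topic-word estimates
```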
On the Generalization Ability of Online Learning Algorithms for Pairwise
Loss Functions | cs.LG stat.ML | In this paper, we study the generalization properties of online learning
based stochastic methods for supervised learning problems where the loss
function is dependent on more than one training sample (e.g., metric learning,
ranking). We present a generic decoupling technique that enables us to provide
Rademacher complexity-based generalization error bounds. Our bounds are in
general tighter than those obtained by Wang et al (COLT 2012) for the same
problem. Using our decoupling technique, we are further able to obtain fast
convergence rates for strongly convex pairwise loss functions. We are also able
to analyze a class of memory efficient online learning algorithms for pairwise
learning problems that use only a bounded subset of past training samples to
update the hypothesis at each step. Finally, in order to complement our
generalization bounds, we propose a novel memory efficient online learning
algorithm for higher order learning problems with bounded regret guarantees.
| Purushottam Kar, Bharath K Sriperumbudur, Prateek Jain and Harish C
Karnick | null | 1305.2505 | null | null |
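A minimal sketch of the bounded-memory idea discussed above for pairwise losses: the learner keeps only a fixed-size reservoir sample of past examples and updates with a pairwise hinge loss against buffered examples of the opposite class. The buffer policy and loss are illustrative, not the paper's exact algorithm.

```python
import random
import numpy as np

def online_pairwise_learn(stream, dim, buffer_size=100, lr=0.01, seed=0):
    """Online pairwise learning with a hinge loss, using only a bounded
    reservoir sample of past (x, y) examples instead of the full history."""
    rng = random.Random(seed)
    w, buffer, seen = np.zeros(dim), [], 0
    for x, y in stream:
        # pair the new example with buffered examples of the opposite class
        for xb, yb in buffer:
            if yb == y:
                continue
            diff = (x - xb) if y == 1 else (xb - x)   # positive minus negative
            if w @ diff < 1.0:                        # pairwise hinge margin violated
                w += lr * diff
        # reservoir sampling keeps the buffer a uniform sample of the past
        seen += 1
        if len(buffer) < buffer_size:
            buffer.append((x, y))
        else:
            j = rng.randrange(seen)
            if j < buffer_size:
                buffer[j] = (x, y)
    return w
```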
Learning Policies for Contextual Submodular Prediction | cs.LG stat.ML | Many prediction domains, such as ad placement, recommendation, trajectory
prediction, and document summarization, require predicting a set or list of
options. Such lists are often evaluated using submodular reward functions that
measure both quality and diversity. We propose a simple, efficient, and
provably near-optimal approach to optimizing such prediction problems based on
no-regret learning. Our method leverages a surprising result from online
submodular optimization: a single no-regret online learner can compete with an
optimal sequence of predictions. Compared to previous work, which either learns
a sequence of classifiers or relies on stronger assumptions such as
realizability, we ensure both data-efficiency and performance guarantees
in the fully agnostic setting. Experiments validate the efficiency and
applicability of the approach on a wide range of problems including manipulator
trajectory optimization, news recommendation and document summarization.
| Stephane Ross, Jiaji Zhou, Yisong Yue, Debadeepta Dey, J. Andrew
Bagnell | null | 1305.2532 | null | null |
Bandits with Knapsacks | cs.DS cs.LG | Multi-armed bandit problems are the predominant theoretical model of
exploration-exploitation tradeoffs in learning, and they have countless
applications ranging from medical trials, to communication networks, to Web
search and advertising. In many of these application domains the learner may be
constrained by one or more supply (or budget) limits, in addition to the
customary limitation on the time horizon. The literature lacks a general model
encompassing these sorts of problems. We introduce such a model, called
"bandits with knapsacks", that combines aspects of stochastic integer
programming with online learning. A distinctive feature of our problem, in
comparison to the existing regret-minimization literature, is that the optimal
policy for a given latent distribution may significantly outperform the policy
that plays the optimal fixed arm. Consequently, achieving sublinear regret in
the bandits-with-knapsacks problem is significantly more challenging than in
conventional bandit problems.
We present two algorithms whose reward is close to the information-theoretic
optimum: one is based on a novel "balanced exploration" paradigm, while the
other is a primal-dual algorithm that uses multiplicative updates. Further, we
prove that the regret achieved by both algorithms is optimal up to
polylogarithmic factors. We illustrate the generality of the problem by
presenting applications in a number of different domains including electronic
commerce, routing, and scheduling. As one example of a concrete application, we
consider the problem of dynamic posted pricing with limited supply and obtain
the first algorithm whose regret, with respect to the optimal dynamic policy,
is sublinear in the supply.
| Ashwinkumar Badanidiyuru, Robert Kleinberg and Aleksandrs Slivkins | 10.1109/FOCS.2013.30 | 1305.2545 | null | null |
Accelerated Mini-Batch Stochastic Dual Coordinate Ascent | stat.ML cs.LG | Stochastic dual coordinate ascent (SDCA) is an effective technique for
solving regularized loss minimization problems in machine learning. This paper
considers an extension of SDCA under the mini-batch setting that is often used
in practice. Our main contribution is to introduce an accelerated mini-batch
version of SDCA and prove a fast convergence rate for this method. We discuss
an implementation of our method over a parallel computing system, and compare
the results to both the vanilla stochastic dual coordinate ascent and to the
accelerated deterministic gradient descent method of
\cite{nesterov2007gradient}.
| Shai Shalev-Shwartz and Tong Zhang | null | 1305.2581 | null | null |
Boosting with the Logistic Loss is Consistent | cs.LG stat.ML | This manuscript provides optimization guarantees, generalization bounds, and
statistical consistency results for AdaBoost variants which replace the
exponential loss with the logistic and similar losses (specifically, twice
differentiable convex losses which are Lipschitz and tend to zero on one side).
The heart of the analysis is to show that, in lieu of explicit regularization
and constraints, the structure of the problem is fairly rigidly controlled by
the source distribution itself. The first control of this type is in the
separable case, where a distribution-dependent relaxed weak learning rate
induces speedy convergence with high probability over any sample. Otherwise, in
the nonseparable case, the convex surrogate risk itself exhibits
distribution-dependent levels of curvature, and consequently the algorithm's
output has small norm with high probability.
| Matus Telgarsky | null | 1305.2648 | null | null |
An efficient algorithm for learning with semi-bandit feedback | cs.LG | We consider the problem of online combinatorial optimization under
semi-bandit feedback. The goal of the learner is to sequentially select its
actions from a combinatorial decision set so as to minimize its cumulative
loss. We propose a learning algorithm for this problem based on combining the
Follow-the-Perturbed-Leader (FPL) prediction method with a novel loss
estimation procedure called Geometric Resampling (GR). Contrary to previous
solutions, the resulting algorithm can be efficiently implemented for any
decision set where efficient offline combinatorial optimization is possible at
all. Assuming that the elements of the decision set can be described with
d-dimensional binary vectors with at most m non-zero entries, we show that the
expected regret of our algorithm after T rounds is O(m sqrt(dT log d)). As a
side result, we also improve the best known regret bounds for FPL in the full
information setting to O(m^(3/2) sqrt(T log d)), gaining a factor of sqrt(d/m)
over previous bounds for this algorithm.
| Gergely Neu and G\'abor Bart\'ok | null | 1305.2732 | null | null |
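The Geometric Resampling idea lends itself to a short sketch: since FPL's action probabilities are intractable, 1/p_t(a) is estimated by redrawing perturbations until the same action is chosen again and using the (capped) number of redraws as an importance weight. The sketch below specializes to the plain multi-armed case with exponential perturbations; the combinatorial machinery of the paper is omitted, and eta and M are illustrative.

```python
import numpy as np

def fpl_action(loss_est, eta, rng):
    """Follow-the-Perturbed-Leader: perturb cumulative loss estimates."""
    Z = rng.exponential(size=loss_est.shape)
    return int(np.argmin(eta * loss_est - Z))

def geometric_resampling(loss_est, eta, action, rng, M=1000):
    """Estimate 1/p(action) by counting redraws until FPL picks it again;
    the count K is geometric with mean 1/p(action)."""
    for k in range(1, M + 1):
        if fpl_action(loss_est, eta, rng) == action:
            return k
    return M                  # cap keeps variance and running time bounded

def fpl_gr(losses, eta=0.1, seed=0):
    """losses: (T, d) array of per-action losses -- the simple bandit special
    case of the combinatorial setting in the paper."""
    rng = np.random.default_rng(seed)
    T, d = losses.shape
    L_hat = np.zeros(d)
    for t in range(T):
        a = fpl_action(L_hat, eta, rng)
        K = geometric_resampling(L_hat, eta, a, rng)
        L_hat[a] += K * losses[t, a]      # importance-weighted loss estimate
    return L_hat
```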
HRF estimation improves sensitivity of fMRI encoding and decoding models | cs.LG stat.AP | Extracting activation patterns from functional Magnetic Resonance Images
(fMRI) datasets remains challenging in rapid-event designs due to the inherent
delay of blood oxygen level-dependent (BOLD) signal. The general linear model
(GLM) allows one to estimate the activation from a design matrix and a fixed
hemodynamic response function (HRF). However, the HRF is known to vary
substantially between subjects and brain regions. In this paper, we propose a
model for jointly estimating the hemodynamic response function (HRF) and the
activation patterns via a low-rank representation of task effects. This model is
based on the linearity assumption behind the GLM and can be computed using
standard gradient-based solvers. We use the activation patterns computed by our
model as input data for encoding and decoding studies and report performance
improvement in both settings.
| Fabian Pedregosa (INRIA Paris - Rocquencourt, INRIA Saclay - Ile de
France), Michael Eickenberg (INRIA Saclay - Ile de France, LNAO), Bertrand
Thirion (INRIA Saclay - Ile de France, LNAO), Alexandre Gramfort (LTCI) | null | 1305.2788 | null | null |
Estimating or Propagating Gradients Through Stochastic Neurons | cs.LG | Stochastic neurons can be useful for a number of reasons in deep learning
models, but in many cases they pose a challenging problem: how to estimate the
gradient of a loss function with respect to the input of such stochastic
neurons, i.e., can we "back-propagate" through these stochastic neurons? We
examine this question, existing approaches, and present two novel families of
solutions, applicable in different settings. In particular, it is demonstrated
that a simple biologically plausible formula gives rise to an unbiased (but
noisy) estimator of the gradient with respect to a binary stochastic neuron
firing probability. Unlike other estimators which view the noise as a small
perturbation in order to estimate gradients by finite differences, this
estimator is unbiased even without assuming that the stochastic perturbation is
small. This estimator is also interesting because it can be applied in very
general settings which do not allow gradient back-propagation, including the
estimation of the gradient with respect to future rewards, as required in
reinforcement learning setups. We also propose an approach to approximating
this unbiased but high-variance estimator by learning to predict it using a
biased estimator. The second approach we propose assumes that an estimator of
the gradient can be back-propagated and it provides an unbiased estimator of
the gradient, but it only works with non-linearities that, unlike the hard
threshold but like the rectifier, are not flat over their entire range. This is
similar to traditional sigmoidal units but has the advantage that for many
inputs, a hard decision (e.g., a 0 output) can be produced, which would be
convenient for conditional computation and achieving sparse representations and
sparse gradients.
| Yoshua Bengio | null | 1305.2982 | null | null |
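Two of the estimator families discussed above admit compact sketches for a single binary stochastic neuron h ~ Bernoulli(sigmoid(a)): a REINFORCE-style centered estimator, which is unbiased for the gradient of the expected loss with respect to the pre-sigmoid activation without assuming small noise, and a biased straight-through pass-through. The exact formulas and variance-reduction choices in the paper may differ; the baseline here is an illustrative option.

```python
import numpy as np

def binary_stochastic_forward(a, rng):
    """Sample h ~ Bernoulli(p) with p = sigmoid(a)."""
    p = 1.0 / (1.0 + np.exp(-a))
    h = (rng.random(np.shape(p)) < p).astype(float)
    return h, p

def unbiased_grad_estimate(h, p, loss, baseline=0.0):
    """REINFORCE-style unbiased (but noisy) estimator of dE[loss]/da for a
    Bernoulli(sigmoid(a)) neuron: E[(h - p)(loss - baseline)] equals
    p(1-p)(loss(1) - loss(0)) = dE[loss]/da, with no small-noise assumption."""
    return (h - p) * (loss - baseline)

def straight_through_grad(upstream_grad):
    """Biased 'straight-through' alternative: pretend the hard sampling step
    was the identity and pass the gradient through unchanged."""
    return upstream_grad
```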
Real Time Bid Optimization with Smooth Budget Delivery in Online
Advertising | cs.GT cs.LG | Today, billions of display ad impressions are purchased on a daily basis
through a public auction hosted by real time bidding (RTB) exchanges. A
decision has to be made for advertisers to submit a bid for each selected RTB
ad request in milliseconds. Restricted by the budget, the goal is to buy a set
of ad impressions to reach as many targeted users as possible. A desired action
(conversion), advertiser specific, includes purchasing a product, filling out a
form, signing up for emails, etc. In addition, advertisers typically prefer to
spend their budget smoothly over time in order to reach a wider range of
audience accessible throughout a day and have a sustainable impact. However,
since the conversions occur rarely and the occurrence feedback is normally
delayed, it is very challenging to achieve both budget and performance goals at
the same time. In this paper, we present an online approach to the smooth
budget delivery while optimizing for the conversion performance. Our algorithm
tries to select high quality impressions and adjust the bid price based on the
prior performance distribution in an adaptive manner by distributing the budget
optimally across time. Our experimental results from real advertising campaigns
demonstrate the effectiveness of our proposed approach.
| Kuang-Chih Lee, Ali Jalali and Ali Dasdan | null | 1305.3011 | null | null |
Scalable Audience Reach Estimation in Real-time Online Advertising | cs.LG cs.DB | Online advertising has been introduced as one of the most efficient methods
of advertising throughout the recent years. Yet, advertisers are concerned
about the efficiency of their online advertising campaigns and consequently,
would like to restrict their ad impressions to certain websites and/or certain
groups of audience. These restrictions, known as targeting criteria, limit the
reachability for better performance. This trade-off between reachability and
performance illustrates a need for a forecasting system that can quickly
predict/estimate (with good accuracy) this trade-off. Designing such a system
is challenging due to (a) the huge amount of data to process, and, (b) the need
for fast and accurate estimates. In this paper, we propose a distributed fault
tolerant system that can generate such estimates fast with good accuracy. The
main idea is to keep a small representative sample in memory across multiple
machines and formulate the forecasting problem as queries against the sample.
The key challenge is to find the best strata across the past data, perform
multivariate stratified sampling while ensuring fuzzy fall-back to cover the
small minorities. Our results show a significant improvement over the uniform
and simple stratified sampling strategies which are currently widely used in
the industry.
| Ali Jalali, Santanu Kolay, Peter Foldes and Ali Dasdan | null | 1305.3014 | null | null |
Optimization with First-Order Surrogate Functions | stat.ML cs.LG math.OC | In this paper, we study optimization methods consisting of iteratively
minimizing surrogates of an objective function. By proposing several
algorithmic variants and simple convergence analyses, we make two main
contributions. First, we provide a unified viewpoint for several first-order
optimization techniques such as accelerated proximal gradient, block coordinate
descent, or Frank-Wolfe algorithms. Second, we introduce a new incremental
scheme that experimentally matches or outperforms state-of-the-art solvers for
large-scale optimization problems typically arising in machine learning.
| Julien Mairal (INRIA Grenoble Rh\^one-Alpes / LJK Laboratoire Jean
Kuntzmann) | null | 1305.3120 | null | null |
Qualitative detection of oil adulteration with machine learning
approaches | cs.CE cs.LG | This study focused on machine learning approaches to qualitatively
identify the adulteration of 9 kinds of edible oil and answered the following
three questions: Is the oil sample adulterated? What does it consist of? What is
the main ingredient of the adulterated oil? After extracting the
high-performance liquid chromatography (HPLC) data on triglyceride from 370 oil
samples, we applied the adaptive boosting with multi-class Hamming loss
(AdaBoost.MH) to distinguish the oil adulteration in contrast with the support
vector machine (SVM). Further, we regarded the adulterated oil and the pure oil
samples as ones with multiple labels and with only one label, respectively.
Then multi-label AdaBoost.MH and multi-label learning vector quantization
(ML-LVQ) model were built to determine the ingredients and their relative ratio
in the adulterated oil. The experimental results on six measures show that
ML-LVQ achieves better performance than multi-label AdaBoost.MH.
| Xiao-Bo Jin, Qiang Lu, Feng Wang, Quan-gong Huo | null | 1305.3149 | null | null |
Efficient Density Estimation via Piecewise Polynomial Approximation | cs.LG cs.DS stat.ML | We give a highly efficient "semi-agnostic" algorithm for learning univariate
probability distributions that are well approximated by piecewise polynomial
density functions. Let $p$ be an arbitrary distribution over an interval $I$
which is $\tau$-close (in total variation distance) to an unknown probability
distribution $q$ that is defined by an unknown partition of $I$ into $t$
intervals and $t$ unknown degree-$d$ polynomials specifying $q$ over each of
the intervals. We give an algorithm that draws $\tilde{O}(t(d+1)/\epsilon^2)$
samples from $p$, runs in time $\mathrm{poly}(t,d,1/\epsilon)$, and with high probability
outputs a piecewise polynomial hypothesis distribution $h$ that is
$(O(\tau)+\epsilon)$-close (in total variation distance) to $p$. This sample
complexity is essentially optimal; we show that even for $\tau=0$, any
algorithm that learns an unknown $t$-piecewise degree-$d$ probability
distribution over $I$ to accuracy $\epsilon$ must use
$\Omega\left(\frac{t(d+1)}{\mathrm{poly}(1+\log(d+1))} \cdot \frac{1}{\epsilon^2}\right)$ samples from the
distribution, regardless of its running time. Our algorithm combines tools from
approximation theory, uniform convergence, linear programming, and dynamic
programming.
We apply this general algorithm to obtain a wide range of results for many
natural problems in density estimation over both continuous and discrete
domains. These include state-of-the-art results for learning mixtures of
log-concave distributions; mixtures of $t$-modal distributions; mixtures of
Monotone Hazard Rate distributions; mixtures of Poisson Binomial Distributions;
mixtures of Gaussians; and mixtures of $k$-monotone densities. Our general
technique yields computationally efficient algorithms for all these problems,
in many cases with provably optimal sample complexities (up to logarithmic
factors) in all parameters.
| Siu-On Chan, Ilias Diakonikolas, Rocco A. Servedio, Xiaorui Sun | null | 1305.3207 | null | null |
Online Learning in a Contract Selection Problem | cs.LG cs.GT math.OC stat.ML | In an online contract selection problem there is a seller which offers a set
of contracts to sequentially arriving buyers whose types are drawn from an
unknown distribution. If there exists a profitable contract for the buyer in
the offered set, i.e., a contract with payoff higher than the payoff of not
accepting any contracts, the buyer chooses the contract that maximizes its
payoff. In this paper we consider the online contract selection problem to
maximize the seller's profit. Assuming that a structural property called ordered
preferences holds for the buyer's payoff function, we propose online learning
algorithms that have sub-linear regret with respect to the best set of
contracts given the distribution over the buyer's type. This problem has many
applications including spectrum contracts, wireless service provider data plans
and recommendation systems.
| Cem Tekin and Mingyan Liu | null | 1305.3334 | null | null |
Transfer Learning for Content-Based Recommender Systems using Tree
Matching | cs.LG cs.IR | In this paper we present a new approach to content-based transfer learning
for solving the data sparsity problem in cases when the users' preferences in
the target domain are either scarce or unavailable, but the necessary
information on the preferences exists in another domain. We show that training
a system to use such information across domains can produce better performance.
Specifically, we represent users' behavior patterns based on topological graph
structures. Each behavior pattern represents the behavior of a set of users,
when the users' behavior is defined as the items they rated and the items'
rating values. In the next step we find a correlation between behavior patterns
in the source domain and behavior patterns in the target domain. This mapping
is considered a bridge between the two domains. Based on the correlation and
content-attributes of the items, we train a machine learning model to predict
users' ratings in the target domain. When we compare our approach to the
popularity approach and KNN-cross-domain on a real-world dataset, the results
show that our approach outperforms both methods in 83% of the cases on
average.
| Naseem Biadsy, Lior Rokach, Armin Shmilovici | null | 1305.3384 | null | null |
Noisy Subspace Clustering via Thresholding | cs.IT cs.LG math.IT math.ST stat.ML stat.TH | We consider the problem of clustering noisy high-dimensional data points into
a union of low-dimensional subspaces and a set of outliers. The number of
subspaces, their dimensions, and their orientations are unknown. A
probabilistic performance analysis of the thresholding-based subspace
clustering (TSC) algorithm introduced recently in [1] shows that TSC succeeds
in the noisy case, even when the subspaces intersect. Our results reveal an
explicit tradeoff between the allowed noise level and the affinity of the
subspaces. We furthermore find that the simple outlier detection scheme
introduced in [1] provably succeeds in the noisy case.
| Reinhard Heckel and Helmut B\"olcskei | null | 1305.3486 | null | null |
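A hedged sketch of the TSC pipeline analyzed above: normalize the points, keep for each point its q largest absolute inner products as graph weights, and spectrally cluster the resulting adjacency. The exact weighting used in [1] may differ; q and the clustering backend here are illustrative.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def tsc_sketch(X, n_clusters, q=5):
    """Thresholding-based subspace clustering sketch: adjacency from the q
    largest absolute correlations per point, then spectral clustering."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)   # normalize rows
    C = np.abs(Xn @ Xn.T)
    np.fill_diagonal(C, 0.0)
    A = np.zeros_like(C)
    for i in range(len(C)):
        nn = np.argsort(C[i])[-q:]        # keep only the q largest correlations
        A[i, nn] = C[i, nn]
    A = np.maximum(A, A.T)                # symmetrize the graph
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(A)
```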
Evolution of Covariance Functions for Gaussian Process Regression using
Genetic Programming | cs.NE cs.LG stat.ML | In this contribution we describe an approach to evolve composite covariance
functions for Gaussian processes using genetic programming. A critical aspect
of Gaussian processes and similar kernel-based models such as SVM is that the
covariance function should be adapted to the modeled data. Frequently, the
squared exponential covariance function is used as a default. However, this can
lead to a misspecified model, which does not fit the data well. In the proposed
approach we use a grammar for the composition of covariance functions and
genetic programming to search over the space of sentences that can be derived
from the grammar. We tested the proposed approach on synthetic data from
two-dimensional test functions, and on the Mauna Loa CO2 time series. The
results show that our approach is feasible, finding covariance functions that
perform much better than a default covariance function. For the CO2 data set a
composite covariance function is found, that matches the performance of a
hand-tuned covariance function.
| Gabriel Kronberger and Michael Kommenda | null | 1305.3794 | null | null |
Multi-View Learning for Web Spam Detection | cs.IR cs.LG | Spam pages are designed to maliciously appear among the top search results by
excessive usage of popular terms. Therefore, spam pages should be removed using
an effective and efficient spam detection system. Previous methods for web spam
classification used several features from various information sources (page
contents, web graph, access logs, etc.) to detect web spam. In this paper, we
follow a page-level classification approach to build fast and scalable spam
filters. We show that each web page can be classified with satisfactory accuracy
using only its own HTML content. In order to design a multi-view classification
system, we used state-of-the-art spam classification methods with distinct
feature sets (views) as the base classifiers. Then, a fusion model is learned
to combine the output of the base classifiers and make final prediction.
Results show that multi-view learning significantly improves the classification
performance, namely AUC by 22%, while providing linear speedup for parallel
execution.
| Ali Hadian, Behrouz Minaei-Bidgoli | null | 1305.3814 | null | null |
Inferring the Origin Locations of Tweets with Quantitative Confidence | cs.SI cs.HC cs.LG | Social Internet content plays an increasingly critical role in many domains,
including public health, disaster management, and politics. However, its
utility is limited by missing geographic information; for example, fewer than
1.6% of Twitter messages (tweets) contain a geotag. We propose a scalable,
content-based approach to estimate the location of tweets using a novel yet
simple variant of Gaussian mixture models. Further, because real-world
applications depend on quantified uncertainty for such estimates, we propose
novel metrics of accuracy, precision, and calibration, and we evaluate our
approach accordingly. Experiments on 13 million global, comprehensively
multi-lingual tweets show that our approach yields reliable, well-calibrated
results competitive with previous computationally intensive methods. We also
show that a relatively small number of training data are required for good
estimates (roughly 30,000 tweets) and models are quite time-invariant
(effective on tweets many weeks newer than the training set). Finally, we show
that toponyms and languages with small geographic footprint provide the most
useful location signals.
| Reid Priedhorsky (1), Aron Culotta (2), Sara Y. Del Valle (1) ((1) Los
Alamos National Laboratory, (2) Illinois Institute of Technology) | 10.1145/2531602.2531607 | 1305.3932 | null | null |
Contractive De-noising Auto-encoder | cs.LG | Auto-encoder is a special kind of neural network based on reconstruction.
De-noising auto-encoder (DAE) is an improved auto-encoder which is robust to
the input by corrupting the original data first and then reconstructing the
original input by minimizing the reconstruction error function. And contractive
auto-encoder (CAE) is another kind of improved auto-encoder to learn robust
feature by introducing the Frobenius norm of the Jacobian matrix of the learned
feature with respect to the original input. In this paper, we combine
de-noising auto-encoder and contractive auto-encoder, and propose another
improved auto-encoder, the contractive de-noising auto-encoder (CDAE), which is
robust to both the original input and the learned feature. We stack CDAE to
extract more abstract features and apply SVM for classification. The
experimental results on the benchmark dataset MNIST show that our proposed CDAE
performed better than both DAE and CAE, proving the effectiveness of our method.
| Fu-qiang Chen, Yan Wu, Guo-dong Zhao, Jun-ming Zhang, Ming Zhu, Jing
Bai | null | 1305.4076 | null | null |
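The combined objective can be sketched in a few lines for a one-layer sigmoid encoder with tied weights: the reconstruction term uses a corrupted input while targeting the clean one (the DAE part), and the contractive term is the closed-form Frobenius norm of the encoder Jacobian (the CAE part). The noise level and penalty weight lam are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cdae_loss(x, W, b, c, noise_std=0.1, lam=0.1, rng=None):
    """Contractive de-noising auto-encoder objective for one example:
    reconstruct the clean x from a corrupted copy, plus the contractive
    penalty ||J||_F^2 of the encoder evaluated at the corrupted input."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x_tilde = x + noise_std * rng.standard_normal(x.shape)   # corruption (DAE)
    h = sigmoid(W @ x_tilde + b)                             # encoder
    x_hat = sigmoid(W.T @ h + c)                             # tied-weight decoder
    recon = np.sum((x - x_hat) ** 2)                         # de-noising term
    # closed-form Frobenius norm of the sigmoid encoder's Jacobian (CAE term)
    contractive = np.sum((h * (1 - h)) ** 2 * np.sum(W ** 2, axis=1))
    return recon + lam * contractive
```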
Conditions for Convergence in Regularized Machine Learning Objectives | cs.LG cs.NA math.OC | Analysis of the convergence rates of modern convex optimization algorithms
can be achieved through two means: analysis of empirical convergence, or
analysis of theoretical convergence. These two pathways of capturing
information diverge in efficacy when moving to the world of distributed
computing, due to the introduction of non-intuitive, non-linear slowdowns
associated with broadcasting, and in some cases, gathering operations. Despite
these nuances in the rates of convergence, we can still show the existence of
convergence, and lower bounds for the rates. This paper will serve as a helpful
cheat-sheet for machine learning practitioners encountering this problem class
in the field.
| Patrick Hop, Xinghao Pan | null | 1305.4081 | null | null |
Machine learning on images using a string-distance | cs.LG cs.CV | We present a new method for image feature-extraction which is based on
representing an image by a finite-dimensional vector of distances that measure
how different the image is from a set of image prototypes. We use the recently
introduced Universal Image Distance (UID) \cite{RatsabyChesterIEEE2012} to
compare the similarity between an image and a prototype image. The advantage in
using the UID is the fact that neither domain knowledge nor any image analysis
needs to be done. Each image is represented by a finite-dimensional feature vector
whose components are the UID values between the image and a finite set of image
prototypes from each of the feature categories. The method is automatic since
once the user selects the prototype images, the feature vectors are
automatically calculated without the need to do any image analysis. The
prototype images can be of different size, in particular, different than the
image size. Based on a collection of such cases any supervised or unsupervised
learning algorithm can be used to train and produce an image classifier or
image cluster analysis. In this paper we present the image feature-extraction
method and use it on several supervised and unsupervised learning experiments
for satellite image data.
| Uzi Chester, Joel Ratsaby | null | 1305.4204 | null | null |
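A sketch of the prototype-distance representation described above, with a zlib-based normalized compression distance standing in for the UID (the actual UID from the cited work differs); like the UID, this stand-in needs no domain knowledge or image analysis.

```python
import zlib
import numpy as np

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance -- used here only as a stand-in for
    the Universal Image Distance; both require no domain knowledge."""
    ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

def prototype_features(image_bytes, prototypes):
    """Represent an image by its vector of distances to prototype images."""
    return np.array([ncd(image_bytes, p) for p in prototypes])

# usage: feed prototype_features(img, protos) for each image into any
# supervised or unsupervised learner (SVM, k-means, ...)
```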
Horizon-Independent Optimal Prediction with Log-Loss in Exponential
Families | cs.LG stat.ML | We study online learning under logarithmic loss with regular parametric
models. Hedayati and Bartlett (2012b) showed that a Bayesian prediction
strategy with Jeffreys prior and sequential normalized maximum likelihood
(SNML) coincide and are optimal if and only if the latter is exchangeable, and
if and only if the optimal strategy can be calculated without knowing the time
horizon in advance. They put forward the question what families have
exchangeable SNML strategies. This paper fully answers this open problem for
one-dimensional exponential families. The exchangeability can happen only for
three classes of natural exponential family distributions, namely the Gaussian,
Gamma, and the Tweedie exponential family of order 3/2. Keywords: SNML
Exchangeability, Exponential Family, Online Learning, Logarithmic Loss,
Bayesian Strategy, Jeffreys Prior, Fisher Information
| Peter Bartlett, Peter Grunwald, Peter Harremoes, Fares Hedayati,
Wojciech Kotlowski | null | 1305.4324 | null | null |
Generalized Centroid Estimators in Bioinformatics | q-bio.QM cs.LG | In a number of estimation problems in bioinformatics, accuracy measures of
the target problem are usually given, and it is important to design estimators
that are suitable to those accuracy measures. However, there is often a
discrepancy between an employed estimator and a given accuracy measure of the
problem. In this study, we introduce a general class of efficient estimators
for estimation problems on high-dimensional binary spaces, which represent many
fundamental problems in bioinformatics. Theoretical analysis reveals that the
proposed estimators generally fit with commonly-used accuracy measures (e.g.
sensitivity, PPV, MCC and F-score), can be computed efficiently in
many cases, and cover a wide range of problems in bioinformatics from the
viewpoint of the principle of maximum expected accuracy (MEA). It is also shown
that some important algorithms in bioinformatics can be interpreted in a
unified manner. Not only does the concept presented in this paper give a useful
framework for designing MEA-based estimators, but it is also highly extendable and
sheds new light on many problems in bioinformatics.
| Michiaki Hamada, Hisanori Kiryu, Wataru Iwasaki and Kiyoshi Asai | null | 1305.4339 | null | null |
Ensembles of Classifiers based on Dimensionality Reduction | cs.LG | We present a novel approach for the construction of ensemble classifiers
based on dimensionality reduction. Dimensionality reduction methods represent
datasets using a small number of attributes while preserving the information
conveyed by the original dataset. The ensemble members are trained based on
dimension-reduced versions of the training set. These versions are obtained by
applying dimensionality reduction to the original training set using different
values of the input parameters. This construction meets both the diversity and
accuracy criteria which are required to construct an ensemble classifier where
the former criterion is obtained by the various input parameter values and the
latter is achieved due to the decorrelation and noise reduction properties of
dimensionality reduction. In order to classify a test sample, it is first
embedded into the dimension reduced space of each individual classifier by
using an out-of-sample extension algorithm. Each classifier is then applied to
the embedded sample and the classification is obtained via a voting scheme. We
present three variations of the proposed approach based on the Random
Projections, the Diffusion Maps and the Random Subspaces dimensionality
reduction algorithms. We also present a multi-strategy ensemble which combines
AdaBoost and Diffusion Maps. A comparison is made with the Bagging, AdaBoost,
Rotation Forest ensemble classifiers and also with the base classifier which
does not incorporate dimensionality reduction. Our experiments used seventeen
benchmark datasets from the UCI repository. The results obtained by the
proposed algorithms were superior in many cases to other algorithms.
| Alon Schclar and Lior Rokach and Amir Amit | null | 1305.4345 | null | null |
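The random projections variant admits a short sketch: each ensemble member trains on the data seen through its own random projection, a test sample is mapped by the same matrices (for linear projections no separate out-of-sample extension is needed), and classification is by majority vote. The member count, output dimension, and base learner below are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_rp_ensemble(X, y, n_members=10, out_dim=20, seed=0):
    """Each member sees the data through its own random projection."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        R = rng.standard_normal((X.shape[1], out_dim)) / np.sqrt(out_dim)
        clf = DecisionTreeClassifier().fit(X @ R, y)
        members.append((R, clf))
    return members

def predict_rp_ensemble(members, X):
    """Project each test sample into every member's space, then vote."""
    votes = np.stack([clf.predict(X @ R) for R, clf in members])
    # majority vote across members (assumes non-negative integer class labels)
    return np.apply_along_axis(lambda v: np.bincount(v.astype(int)).argmax(),
                               0, votes)
```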
Meta Path-Based Collective Classification in Heterogeneous Information
Networks | cs.LG stat.ML | Collective classification has been intensively studied due to its impact in
many important applications, such as web mining, bioinformatics and citation
analysis. Collective classification approaches exploit the dependencies of a
group of linked objects whose class labels are correlated and need to be
predicted simultaneously. In this paper, we focus on studying the collective
classification problem in heterogeneous networks, which involves multiple types
of data objects interconnected by multiple types of links. Intuitively, two
objects are correlated if they are linked by many paths in the network.
However, most existing approaches measure the dependencies among objects
through directly links or indirect links without considering the different
semantic meanings behind different paths. In this paper, we study the
collective classification problem that is defined among the same type of
objects in heterogeneous networks. Moreover, by considering different linkage
paths in the network, one can capture the subtlety of different types of
dependencies among objects. We introduce the concept of meta path-based
dependencies among objects, where a meta path is a path consisting of a certain
sequence of link types. We show that the quality of collective classification
results strongly depends upon the meta paths used. To accommodate the large
network size, a novel solution, called HCC (meta-path based Heterogeneous
Collective Classification), is developed to effectively assign labels to a
group of instances that are interconnected through different meta-paths. The
proposed HCC model can capture different types of dependencies among objects
with respect to different meta paths. Empirical studies on real-world networks
demonstrate the effectiveness of the proposed meta path-based collective
classification approach.
| Xiangnan Kong, Bokai Cao, Philip S. Yu, Ying Ding and David J. Wild | null | 1305.4433 | null | null |
Robustness of Random Forest-based gene selection methods | cs.LG q-bio.QM | Gene selection is an important part of microarray data analysis because it
provides information that can lead to a better mechanistic understanding of an
investigated phenomenon. At the same time, gene selection is very difficult
because of the noisy nature of microarray data. As a consequence, gene
selection is often performed with machine learning methods. The Random Forest
method is particularly well suited for this purpose. In this work, four
state-of-the-art Random Forest-based feature selection methods were compared in
a gene selection context. The analysis focused on the stability of selection
because, although it is necessary for determining the significance of results,
it is often ignored in similar studies.
The comparison of post-selection accuracy in the validation of Random Forest
classifiers revealed that all investigated methods were equivalent in this
context. However, the methods substantially differed with respect to the number
of selected genes and the stability of selection. Of the analysed methods, the
Boruta algorithm predicted the most genes as potentially important.
The post-selection classifier error rate, which is a frequently used measure,
was found to be a potentially deceptive measure of gene selection quality. When
the number of consistently selected genes was considered, the Boruta algorithm
was clearly the best. Although it was also the most computationally intensive
method, the Boruta algorithm's computational demands could be reduced to levels
comparable to those of other algorithms by replacing the Random Forest
importance with a comparable measure from Random Ferns (a similar but
simplified classifier). Despite their design assumptions, the minimal optimal
selection methods were found to select a high fraction of false positives.
| Miron B. Kursa | null | 1305.4525 | null | null |
On the Complexity Analysis of Randomized Block-Coordinate Descent
Methods | math.OC cs.LG cs.NA math.NA stat.ML | In this paper we analyze the randomized block-coordinate descent (RBCD)
methods proposed in [8,11] for minimizing the sum of a smooth convex function
and a block-separable convex function. In particular, we extend Nesterov's
technique developed in [8] for analyzing the RBCD method for minimizing a
smooth convex function over a block-separable closed convex set to the
aforementioned more general problem and obtain a sharper expected-value type of
convergence rate than the one implied in [11]. Also, we obtain a better
high-probability type of iteration complexity, which improves upon the one in
[11] by at least the amount $O(n/\epsilon)$, where $\epsilon$ is the target
solution accuracy and $n$ is the number of problem blocks. In addition, for
unconstrained smooth convex minimization, we develop a new technique called
{\it randomized estimate sequence} to analyze the accelerated RBCD method
proposed by Nesterov [11] and establish a sharper expected-value type of
convergence rate than the one given in [11].
| Zhaosong Lu and Lin Xiao | null | 1305.4723 | null | null |
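For readers wanting the basic loop behind the analysis, here is a generic randomized block-coordinate step for minimizing a smooth function plus a block-separable term: sample a block uniformly and take a proximal gradient step with that block's Lipschitz constant. The sketch recomputes the full gradient for simplicity, which a real implementation would avoid; function and parameter names are illustrative.

```python
import numpy as np

def rbcd(grad_f, L_blocks, prox_blocks, x0, n_iters=1000, seed=0):
    """Randomized block-coordinate descent for F(x) = f(x) + sum_i psi_i(x_i),
    with coordinate blocks for simplicity. grad_f(x): gradient of the smooth
    part; L_blocks[i]: block Lipschitz constant; prox_blocks[i](v, t): proximal
    operator of psi_i with step size t."""
    rng = np.random.default_rng(seed)
    x, n = x0.copy(), len(L_blocks)
    for _ in range(n_iters):
        i = rng.integers(n)              # uniformly random block
        g = grad_f(x)[i]                 # only block i of the gradient is needed
        x[i] = prox_blocks[i](x[i] - g / L_blocks[i], 1.0 / L_blocks[i])
    return x

# example block term: psi_i = lam * |x_i| (lasso), whose prox is soft-thresholding
def soft_threshold(v, t, lam=0.1):
    return np.sign(v) * max(abs(v) - t * lam, 0.0)
```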
Power to the Points: Validating Data Memberships in Clusterings | cs.LG cs.CG | A clustering is an implicit assignment of labels to points, based on
proximity to other points. It is these labels that are then used for downstream
analysis (either focusing on individual clusters, or identifying
representatives of clusters and so on). Thus, in order to trust a clustering as
a first step in exploratory data analysis, we must trust the labels assigned to
individual data. Without supervision, how can we validate this assignment? In
this paper, we present a method to attach affinity scores to the implicit
labels of individual points in a clustering. The affinity scores capture the
confidence level of the cluster that claims to "own" the point. This method is
very general: it can be used with clusterings derived from Euclidean data,
kernelized data, or even data derived from information spaces. It smoothly
incorporates importance functions on clusters, allowing us to weight different
clusters differently. It is also efficient: assigning an affinity score to a
point depends only polynomially on the number of clusters and is independent of
the number of points in the data. The dimensionality of the underlying space
only appears in preprocessing. We demonstrate the value of our approach with an
experimental study that illustrates the use of these scores in different data
analysis tasks, as well as the efficiency and flexibility of the method. We
also demonstrate useful visualizations of these scores; these might prove
useful within an interactive analytics framework.
| Parasaran Raman and Suresh Venkatasubramanian | null | 1305.4757 | null | null |
Zero-sum repeated games: Counterexamples to the existence of the
asymptotic value and the conjecture
$\operatorname{maxmin}=\operatorname{lim}v_n$ | math.OC cs.LG | Mertens [In Proceedings of the International Congress of Mathematicians
(Berkeley, Calif., 1986) (1987) 1528-1577 Amer. Math. Soc.] proposed two
general conjectures about repeated games: the first one is that, in any
two-person zero-sum repeated game, the asymptotic value exists, and the second
one is that, when Player 1 is more informed than Player 2, in the long run
Player 1 is able to guarantee the asymptotic value. We disprove these two
long-standing conjectures by providing an example of a zero-sum repeated game
with public signals and perfect observation of the actions, where the value of
the $\lambda$-discounted game does not converge when $\lambda$ goes to 0. The
aforementioned example involves seven states, two actions and two signals for
each player. Remarkably, players observe the payoffs, and play in turn.
| Bruno Ziliotto | 10.1214/14-AOP997 | 1305.4778 | null | null |
A Data Mining Approach to Solve the Goal Scoring Problem | cs.AI cs.LG | In soccer, scoring goals is a fundamental objective which depends on many
conditions and constraints. Considering the RoboCup soccer 2D-simulator, this
paper presents a data mining-based decision system to identify the best time
and direction to kick the ball towards the goal to maximize the overall chances
of scoring during a simulated soccer match. Following the CRISP-DM methodology,
data for modeling were extracted from matches of major international
tournaments (10691 kicks), knowledge about soccer was embedded via
transformation of variables and a Multilayer Perceptron was used to estimate
the scoring chance. Experimental performance assessment to compare this
approach against previous LDA-based approach was conducted from 100 matches.
Several statistical metrics were used to analyze the performance of the system
and the results showed an increase of 7.7% in the number of kicks, producing an
overall increase of 78% in the number of goals scored.
| Renato Oliveira and Paulo Adeodato and Arthur Carvalho and Icamaan
Viegas and Christian Diego and Tsang Ing-Ren | 10.1109/IJCNN.2009.5178616 | 1305.4955 | null | null |
Robust Logistic Regression using Shift Parameters (Long Version) | cs.AI cs.LG stat.ML | Annotation errors can significantly hurt classifier performance, yet datasets
are only growing noisier with the increased use of Amazon Mechanical Turk and
techniques like distant supervision that automatically generate labels. In this
paper, we present a robust extension of logistic regression that incorporates
the possibility of mislabelling directly into the objective. Our model can be
trained through nearly the same means as logistic regression, and retains its
efficiency on high-dimensional datasets. Through named entity recognition
experiments, we demonstrate that our approach can provide a significant
improvement over the standard model when annotation errors are present.
| Julie Tibshirani and Christopher D. Manning | null | 1305.4987 | null | null |
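
One plausible reading of such an objective is logistic regression augmented with a per-example shift parameter that can absorb a flipped label, kept sparse by an L1 penalty. A minimal proximal-gradient sketch under that assumption; the learning rate, penalty scaling, and variable names are illustrative, not the paper's settings.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def fit_shift_logreg(X, y, lam=1.0, lr=0.1, iters=2000):
    """Sketch: logistic regression with per-example shifts gamma,
    P(y_i = 1) = sigmoid(x_i . w + gamma_i), plus an L1 penalty
    (lam / n) * ||gamma||_1 handled by a soft-thresholding step."""
    n, d = X.shape
    w, gamma = np.zeros(d), np.zeros(n)
    for _ in range(iters):
        p = sigmoid(X @ w + gamma)
        g = p - y                       # gradient of the average logistic loss
        w -= lr * (X.T @ g) / n
        gamma -= lr * g / n
        # proximal step: soft-threshold gamma toward zero
        gamma = np.sign(gamma) * np.maximum(np.abs(gamma) - lr * lam / n, 0)
    return w, gamma

# Usage: examples with large |gamma_i| are the likely mislabelled ones.
```
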
Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with
Minimax Optimal Rates | math.ST cs.LG stat.ML stat.TH | We establish optimal convergence rates for a decomposition-based scalable
approach to kernel ridge regression. The method is simple to describe: it
randomly partitions a dataset of size N into m subsets of equal size, computes
an independent kernel ridge regression estimator for each subset, then averages
the local solutions into a global predictor. This partitioning leads to a
substantial reduction in computation time versus the standard approach of
performing kernel ridge regression on all N samples. Our two main theorems
establish that despite the computational speed-up, statistical optimality is
retained: as long as m is not too large, the partition-based estimator achieves
the statistical minimax rate over all estimators using the set of N samples. As
concrete examples, our theory guarantees that the number of processors m may
grow nearly linearly for finite-rank kernels and Gaussian kernels and
polynomially in N for Sobolev spaces, which in turn allows for substantial
reductions in computational cost. We conclude with experiments on both
simulated data and a music-prediction task that complement our theoretical
results, exhibiting the computational and statistical benefits of our approach.
| Yuchen Zhang and John C. Duchi and Martin J. Wainwright | null | 1305.5029 | null | null |
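
The procedure is simple enough to sketch directly in NumPy; the Gaussian kernel and the regularization value below are placeholders.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def dc_krr_fit(X, y, m, lam=1e-2, gamma=1.0, seed=0):
    """Divide-and-conquer KRR: randomly split the N points into m equal
    parts and fit an independent kernel ridge regressor on each part."""
    rng = np.random.default_rng(seed)
    parts = np.array_split(rng.permutation(len(X)), m)
    models = []
    for p in parts:
        Xp, yp = X[p], y[p]
        K = rbf(Xp, Xp, gamma)
        alpha = np.linalg.solve(K + lam * len(p) * np.eye(len(p)), yp)
        models.append((Xp, alpha))
    return models

def dc_krr_predict(models, Xtest, gamma=1.0):
    preds = [rbf(Xtest, Xp, gamma) @ alpha for Xp, alpha in models]
    return np.mean(preds, axis=0)   # global predictor = average of locals

# Toy check: N = 1200 points, m = 4 subsets.
X = np.random.rand(1200, 1)
y = np.sin(6 * X[:, 0]) + 0.1 * np.random.randn(1200)
yhat = dc_krr_predict(dc_krr_fit(X, y, m=4), X[:5])
```

Each subset solves a linear system of size N/m rather than N, which is where the cubic-cost savings over standard KRR come from.
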
A Comparison of Random Forests and Ferns on Recognition of Instruments
in Jazz Recordings | cs.LG cs.IR cs.SD | In this paper, we first apply random ferns to the classification of real music
recordings of a jazz band. No initial segmentation of audio data is assumed,
i.e., no onset, offset, nor pitch data are needed. The notion of random ferns
is described in the paper, to familiarize the reader with this classification
algorithm, which was introduced quite recently and applied so far in image
recognition tasks. The performance of random ferns is compared with random
forests for the same data. The results of experiments are presented in the
paper, and conclusions are drawn.
| Alicja A. Wieczorkowska, Miron B. Kursa | null | 1305.5078 | null | null |
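
For readers who want a concrete picture of random ferns before reading the paper, here is a minimal sketch. Using random thresholds on continuous feature vectors is an assumption made for this illustration; the paper's construction of the binary tests may differ.

```python
import numpy as np

class RandomFerns:
    """Minimal random ferns classifier. Each fern is S random
    (feature, threshold) tests whose binary outcomes index one of 2^S
    bins; training just counts classes per bin, and prediction sums
    log class-conditional bin probabilities across ferns."""
    def __init__(self, n_ferns=30, depth=5, seed=0):
        self.M, self.S = n_ferns, depth
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.classes, y = np.unique(y, return_inverse=True)
        C, d = len(self.classes), X.shape[1]
        self.feats = self.rng.integers(0, d, (self.M, self.S))
        lo, hi = X.min(0), X.max(0)
        self.thr = self.rng.uniform(lo[self.feats], hi[self.feats])
        self.counts = np.ones((self.M, 2 ** self.S, C))  # Dirichlet smoothing
        codes = self._codes(X)
        for m in range(self.M):
            np.add.at(self.counts[m], (codes[:, m], y), 1)
        self.logp = np.log(self.counts / self.counts.sum(2, keepdims=True))
        return self

    def _codes(self, X):
        bits = (X[:, self.feats] > self.thr).astype(int)   # (n, M, S)
        return bits @ (1 << np.arange(self.S))             # bin index per fern

    def predict(self, X):
        codes = self._codes(X)
        scores = sum(self.logp[m][codes[:, m]] for m in range(self.M))
        return self.classes[np.argmax(scores, 1)]
```
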
A Supervised Neural Autoregressive Topic Model for Simultaneous Image
Classification and Annotation | cs.CV cs.LG stat.ML | Topic modeling based on latent Dirichlet allocation (LDA) has been a
framework of choice to perform scene recognition and annotation. Recently, a
new type of topic model called the Document Neural Autoregressive Distribution
Estimator (DocNADE) was proposed and demonstrated state-of-the-art performance
for document modeling. In this work, we show how to successfully apply and
extend this model to the context of visual scene modeling. Specifically, we
propose SupDocNADE, a supervised extension of DocNADE, that increases the
discriminative power of the hidden topic features by incorporating label
information into the training objective of the model. We also describe how to
leverage information about the spatial position of the visual words and how to
embed additional image annotations, so as to simultaneously perform image
classification and annotation. We test our model on the Scene15, LabelMe and
UIUC-Sports datasets and show that it compares favorably to other topic models
such as the supervised variant of LDA.
| Yin Zheng, Yu-Jin Zhang, Hugo Larochelle | null | 1305.5306 | null | null |
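
A heavily simplified sketch of the kind of joint objective described: an autoregressive word likelihood plus a label term computed from the final hidden state. The full softmax (in place of a binary tree), the omission of the spatial-position and annotation terms, and the weight `lam` are all simplifications assumed here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def supdocnade_loss(doc, label, W, V, b, c, U, d, lam=1.0):
    """Negative log joint objective for one document. `doc` is a list
    of visual-word indices; W is (H, vocab), V is (vocab, H), U is
    (n_classes, H). The hidden state after the first i words is
    h_i = sigmoid(c + sum_{k<i} W[:, w_k]); each word is predicted
    autoregressively from it, and the label from the final state."""
    acc = np.zeros(W.shape[0])               # running sum of word embeddings
    nll = 0.0
    for w in doc:
        h = sigmoid(c + acc)                 # hidden state given earlier words
        nll -= np.log(softmax(V @ h + b)[w]) # -log p(w | preceding words)
        acc += W[:, w]
    h_final = sigmoid(c + acc)
    nll -= lam * np.log(softmax(U @ h_final + d)[label])  # supervised term
    return nll
```
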
A Primal Condition for Approachability with Partial Monitoring | math.OC cs.GT cs.LG stat.ML | In approachability with full monitoring there are two types of conditions
that are known to be equivalent for convex sets: a primal and a dual condition.
The primal one is of the form: a set C is approachable if and only if all
half-spaces containing it are approachable in the one-shot game; while the dual
one is of the form: a convex set C is approachable if and only if it intersects
all payoff sets of a certain form. We consider approachability in games with
partial monitoring. In previous works (Perchet 2011; Mannor et al. 2011) we
provided a dual characterization of approachable convex sets; we also exhibited
efficient strategies in the case where C is a polytope. In this paper we
provide primal conditions on a convex set to be approachable with partial
monitoring. They depend on a modified reward function and lead to
approachability strategies, based on modified payoff functions, that proceed by
projections similarly to Blackwell's (1956) strategy; this is in contrast with
previously studied strategies in this context that relied mostly on the
signaling structure and aimed at estimating well the distributions of the
signals received. Our results generalize classical results by Kohlberg 1975
(see also Mertens et al. 1994) and apply to games with arbitrary signaling
structure as well as to arbitrary convex sets.
| Shie Mannor (EE-Technion), Vianney Perchet (LPMA), Gilles Stoltz
(INRIA Paris - Rocquencourt, DMA, GREGH) | null | 1305.5399 | null | null |
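
For context, the classical projection strategy of Blackwell (1956) that these strategies resemble can be sketched in the full-monitoring setting; the paper's contribution, projection strategies built from modified payoffs under partial monitoring, is not reproduced here. `R` holds the vector payoffs r(i, j) and `proj_C` is an assumed projection oracle onto the convex target set C.

```python
import numpy as np
from scipy.optimize import linprog

def blackwell_action(R, xbar, proj_C):
    """One step of Blackwell's projection strategy (full monitoring).
    R[i, j] is the vector payoff r(i, j) in R^d, xbar is the current
    average payoff, and proj_C projects onto C (e.g.,
    proj_C = lambda x: np.minimum(x, 0) for the negative orthant).
    Returns a mixed action pushing the average payoff toward C."""
    u = xbar - proj_C(xbar)                # outward normal at the projection
    nI, nJ, _ = R.shape
    if np.allclose(u, 0):                  # average payoff already inside C
        return np.ones(nI) / nI
    A = R @ u                              # scalar zero-sum game u . r(i, j)
    # Solve min_p max_j sum_i p_i A[i, j] as an LP over (p, v).
    c = np.zeros(nI + 1); c[-1] = 1.0
    A_ub = np.hstack([A.T, -np.ones((nJ, 1))])         # A^T p - v <= 0
    A_eq = np.hstack([np.ones((1, nI)), np.zeros((1, 1))])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(nJ), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * nI + [(None, None)])
    return res.x[:nI]
```
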
Characterizing A Database of Sequential Behaviors with Latent Dirichlet
Hidden Markov Models | stat.ML cs.LG | This paper proposes a generative model, the latent Dirichlet hidden Markov
model (LDHMM), for characterizing a database of sequential behaviors
(sequences). LDHMMs posit that each sequence is generated by an underlying
Markov chain process, which is controlled by the corresponding parameters
(i.e., the initial state vector, transition matrix and the emission matrix).
These sequence-level latent parameters for each sequence are modeled as latent
Dirichlet random variables and parameterized by a set of deterministic
database-level hyper-parameters. In this way, the sequences are modeled at
two levels: the database level, via deterministic hyper-parameters, and the
sequence level, via latent parameters. To learn the deterministic
hyper-parameters and approximate posteriors of parameters in LDHMMs, we propose
an iterative algorithm under the variational EM framework, which consists of E
and M steps. We examine two different schemes, the fully-factorized and
partially-factorized forms, for the framework, based on different assumptions.
We present empirical results of behavior modeling and sequence classification
on three real-world data sets, and compare them to other related models. The
experimental results show that the proposed LDHMMs produce better
generalization performance in terms of log-likelihood and deliver competitive
results on the sequence classification problem.
| Yin Song, Longbing Cao, Xuhui Fan, Wei Cao and Jian Zhang | null | 1305.5734 | null | null |
Adapting the Stochastic Block Model to Edge-Weighted Networks | stat.ML cs.LG cs.SI physics.data-an | We generalize the stochastic block model to the important case in which edges
are annotated with weights drawn from an exponential family distribution. This
generalization introduces several technical difficulties for model estimation,
which we solve using a Bayesian approach. We introduce a variational algorithm
that efficiently approximates the model's posterior distribution for dense
graphs. In specific numerical experiments on edge-weighted networks, this
weighted stochastic block model outperforms the common approach of first
applying a single threshold to all weights and then applying the classic
stochastic block model, which can obscure latent block structure in networks.
This model will enable the recovery of latent structure in a broader range of
network data than was previously possible.
| Christopher Aicher, Abigail Z. Jacobs, Aaron Clauset | null | 1305.5782 | null | null |
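
The generative side of such a model is easy to make concrete. Here is a sketch of a sampler assuming normally distributed edge weights (one exponential-family choice; the model treats the family generically).

```python
import numpy as np

def sample_weighted_sbm(n, pi, mu, sigma, seed=0):
    """Generative sketch of a weighted stochastic block model with
    Gaussian edge weights. pi gives block proportions; mu[a, b] and
    sigma[a, b] parameterize weights between blocks a and b."""
    rng = np.random.default_rng(seed)
    z = rng.choice(len(pi), size=n, p=pi)          # latent block labels
    W = rng.normal(mu[z[:, None], z[None, :]],
                   sigma[z[:, None], z[None, :]])
    W = np.triu(W, 1)                              # keep upper triangle
    return z, W + W.T                              # symmetric, no self-loops

# Two assortative blocks: heavier weights inside blocks than between.
z, W = sample_weighted_sbm(
    n=100, pi=[0.5, 0.5],
    mu=np.array([[2.0, 0.2], [0.2, 2.0]]),
    sigma=np.full((2, 2), 0.5))
```

Thresholding W here would discard exactly the weight information that separates the blocks, which is the failure mode the abstract describes.
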
Parallel Gaussian Process Regression with Low-Rank Covariance Matrix
Approximations | stat.ML cs.DC cs.LG | Gaussian processes (GP) are Bayesian non-parametric models that are widely
used for probabilistic regression. Unfortunately, they cannot scale well to
large data nor support real-time prediction due to their cubic time cost in the
data size. This paper presents two parallel GP regression methods that exploit
low-rank covariance matrix approximations for distributing the computational
load among parallel machines to achieve time efficiency and scalability. We
theoretically guarantee the predictive performances of our proposed parallel
GPs to be equivalent to those of some centralized approximate GP regression
methods: the computation of their centralized counterparts can be distributed
among parallel machines, hence achieving greater time efficiency and
scalability. We analytically compare the properties of our parallel GPs such as
time, space, and communication complexity. Empirical evaluation on two
real-world datasets in a cluster of 20 computing nodes shows that our parallel
GPs are significantly more time-efficient and scalable than their centralized
counterparts and exact/full GP while achieving predictive performances
comparable to full GP.
| Jie Chen, Nannan Cao, Kian Hsiang Low, Ruofei Ouyang, Colin Keng-Yan
Tan, Patrick Jaillet | null | 1305.5826 | null | null |
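
To make the distribution pattern concrete, here is a sketch using the subset-of-regressors low-rank approximation; the paper's specific approximations and protocols may differ. Each machine reduces its data shard to small m x m and m x 1 summaries, so the master only sums them and solves one small system, matching the centralized approximation exactly.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def parallel_sor_gp(X_parts, y_parts, Z, Xtest, noise=0.1):
    """Subset-of-regressors GP with inducing inputs Z, computed as a sum
    of per-shard sufficient statistics K_zx K_xz and K_zx y."""
    m = len(Z)
    A, r = np.zeros((m, m)), np.zeros(m)
    for Xp, yp in zip(X_parts, y_parts):   # embarrassingly parallel loop
        Kzx = rbf(Z, Xp)
        A += Kzx @ Kzx.T
        r += Kzx @ yp
    B = A + noise ** 2 * rbf(Z, Z)         # SoR posterior system (m x m)
    w = np.linalg.solve(B + 1e-8 * np.eye(m), r)
    return rbf(Xtest, Z) @ w               # predictive mean

# Toy usage: 2000 points on 4 machines, 20 inducing points.
X = np.random.rand(2000, 1)
y = np.sin(6 * X[:, 0]) + 0.1 * np.random.randn(2000)
parts = np.array_split(np.arange(2000), 4)
Z = np.linspace(0, 1, 20)[:, None]
mu = parallel_sor_gp([X[p] for p in parts], [y[p] for p in parts], Z, X[:5])
```
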
A Symmetric Rank-one Quasi Newton Method for Non-negative Matrix
Factorization | math.NA cs.LG cs.NA | As is well known, nonnegative matrix factorization (NMF) is a
dimension-reduction method that has been widely used in image processing, text
compression, signal processing, and related areas. In this paper, an algorithm
for nonnegative matrix approximation is proposed. The method is based mainly
on an active-set strategy and a quasi-Newton-type algorithm, using symmetric
rank-one updates and negative-curvature directions to approximate the Hessian
matrix. Our method improves on the recent results of the methods in [Pattern
Recognition, 45(2012)3557-3565; SIAM J. Sci. Comput., 33(6)(2011)3261-3281;
Neural Computation, 19(10)(2007)2756-2779, etc.]. Moreover, the objective
function decreases faster than in many other NMF methods. In addition,
numerical experiments are presented on synthetic data, image processing, and
text clustering. Comparisons with six other nonnegative matrix approximation
methods confirm our analysis.
| Shu-Zhen Lai, Hou-Biao Li, Zu-Tao Zhang | null | 1305.5829 | null | null |
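
The symmetric rank-one update at the core of such a method is compact enough to show directly. This is the generic SR1 building block with its standard safeguard, not the paper's full active-set NMF algorithm.

```python
import numpy as np

def sr1_update(B, s, y, eps=1e-8):
    """Symmetric rank-one quasi-Newton update of a Hessian approximation
    B, given a step s and gradient change y = g_new - g_old:
        B <- B + (y - B s)(y - B s)^T / ((y - B s)^T s).
    The standard safeguard skips the update when the denominator is
    tiny, since SR1 denominators can vanish or change sign."""
    v = y - B @ s
    denom = v @ s
    if abs(denom) > eps * np.linalg.norm(s) * np.linalg.norm(v):
        B = B + np.outer(v, v) / denom
    return B
```

Unlike BFGS, the SR1 update does not force B to stay positive definite, which is what lets it capture the negative-curvature directions the abstract mentions.
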
Supervised Feature Selection for Diagnosis of Coronary Artery Disease
Based on Genetic Algorithm | cs.LG cs.CE | Feature Selection (FS) has become the focus of much research on decision
support systems, where data sets with a tremendous number of variables are
analyzed. In this paper we present a new method for the diagnosis of Coronary
Artery Disease (CAD) founded on Genetic Algorithm (GA)-wrapped, Naive Bayes
(NB)-based FS. The CAD dataset contains two classes defined by 13 features. In
the GA-NB algorithm, the GA generates at each iteration a subset of attributes
that is then evaluated using NB in the second step of the selection procedure.
The final set of attributes contains the most relevant features, which
increase the accuracy. The algorithm produces 85.50% classification accuracy
in the diagnosis of CAD. Its performance is then compared with that of Support
Vector Machines (SVM), a MultiLayer Perceptron (MLP), and the C4.5 decision
tree algorithm, whose classification accuracies are 83.5%, 83.16%, and 80.85%,
respectively. The GA-wrapped NB algorithm is also compared with other FS
algorithms. The obtained results show very promising outcomes for the
diagnosis of CAD.
| Sidahmed Mokeddem, Baghdad Atmani and Mostefa Mokaddem | 10.5121/csit.2013.3305 | 1305.6046 | null | null |
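
A minimal sketch of the wrapper loop described: chromosomes are binary feature masks over the 13 attributes, and fitness is cross-validated Naive Bayes accuracy on the selected columns. GaussianNB, 5-fold evaluation, and the GA settings are assumptions, not the paper's choices.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

def ga_nb_select(X, y, pop=20, gens=50, p_mut=0.05, seed=0):
    """GA-wrapped feature selection: fitness of a boolean feature mask
    is the 5-fold cross-validated Naive Bayes accuracy using it."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]

    def fitness(mask):
        return (cross_val_score(GaussianNB(), X[:, mask], y, cv=5).mean()
                if mask.any() else 0.0)

    P = rng.random((pop, d)) < 0.5                    # initial population
    scores = np.array([fitness(m) for m in P])
    for _ in range(gens):
        # two tournaments pick the parents
        i = rng.choice(pop, 2, replace=False)
        j = rng.choice(pop, 2, replace=False)
        pa, pb = P[i[np.argmax(scores[i])]], P[j[np.argmax(scores[j])]]
        child = np.where(rng.random(d) < 0.5, pa, pb)  # uniform crossover
        child ^= rng.random(d) < p_mut                 # bit-flip mutation
        worst = np.argmin(scores)                      # steady-state replacement
        P[worst], scores[worst] = child, fitness(child)
    best = np.argmax(scores)
    return P[best], scores[best]

# Usage: mask, acc = ga_nb_select(X, y) with X of shape (n, 13).
```
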
Information-Theoretic Approach to Efficient Adaptive Path Planning for
Mobile Robotic Environmental Sensing | cs.LG cs.AI cs.MA cs.RO | Recent research in robot exploration and mapping has focused on sampling
environmental hotspot fields. This exploration task is formalized by Low,
Dolan, and Khosla (2008) in a sequential decision-theoretic planning under
uncertainty framework called MASP. The time complexity of solving MASP
approximately depends on the map resolution, which limits its use in
large-scale, high-resolution exploration and mapping. To alleviate this
computational difficulty, this paper presents an information-theoretic approach
to MASP (iMASP) for efficient adaptive path planning; by reformulating the
cost-minimizing iMASP as a reward-maximizing problem, its time complexity
becomes independent of map resolution and is less sensitive to increasing robot
team size as demonstrated both theoretically and empirically. Using the
reward-maximizing dual, we derive a novel adaptive variant of maximum entropy
sampling, thus improving the induced exploration policy performance. It also
allows us to establish theoretical bounds quantifying the performance advantage
of optimal adaptive over non-adaptive policies and the performance quality of
approximately optimal vs. optimal adaptive policies. We show analytically and
empirically the superior performance of iMASP-based policies for sampling the
log-Gaussian process to that of policies for the widely-used Gaussian process
in mapping the hotspot field. Lastly, we provide sufficient conditions that,
when met, guarantee adaptivity has no benefit under an assumed environment
model.
| Kian Hsiang Low, John M. Dolan, Pradeep Khosla | null | 1305.6129 | null | null |
Fast and accurate sentiment classification using an enhanced Naive Bayes
model | cs.CL cs.IR cs.LG | We have explored different methods of improving the accuracy of a Naive Bayes
classifier for sentiment analysis. We observed that a combination of methods
like negation handling, word n-grams and feature selection by mutual
information results in a significant improvement in accuracy. This implies that
a highly accurate and fast sentiment classifier can be built using a simple
Naive Bayes model that has linear training and testing time complexities. We
achieved an accuracy of 88.80% on the popular IMDB movie reviews dataset.
| Vivek Narayanan, Ishan Arora, Arjun Bhatia | 10.1007/978-3-642-41278-3_24 | 1305.6143 | null | null |
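
The negation-handling step lends itself to a short sketch; the exact negation scope rule, the feature count k, and the rest of the pipeline below are assumptions rather than the paper's settings.

```python
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def handle_negation(text):
    """Prefix tokens following a negation word with NOT_ until the next
    punctuation mark, so 'not good' yields the feature NOT_good."""
    out, negate = [], False
    for tok in re.findall(r"[\w']+|[.,!?;]", text):
        low = tok.lower()
        if low in {"not", "no", "never"} or low.endswith("n't"):
            negate = True
            out.append(low)
        elif tok in ".,!?;":
            negate = False
            out.append(tok)
        else:
            out.append("NOT_" + low if negate else low)
    return " ".join(out)

# Word unigrams and bigrams, mutual-information feature selection, then a
# multinomial Naive Bayes model with linear train/test time.
clf = make_pipeline(
    CountVectorizer(preprocessor=handle_negation, ngram_range=(1, 2)),
    SelectKBest(mutual_info_classif, k=5000),   # k is a placeholder
    MultinomialNB())
# clf.fit(train_texts, train_labels); clf.predict(test_texts)
```
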