title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
The Linearization of Belief Propagation on Pairwise Markov Networks | cs.AI cs.LG cs.SI | Belief Propagation (BP) is a widely used approximation for exact
probabilistic inference in graphical models, such as Markov Random Fields
(MRFs). In graphs with cycles, however, no exact convergence guarantees for BP
are known, in general. For the case when all edges in the MRF carry the same
symmetric, doubly stochastic potential, recent works have proposed to
approximate BP by linearizing the update equations around default values, which
was shown to work well for the problem of node classification. The present
paper generalizes all prior work and derives an approach that approximates
loopy BP on any pairwise MRF with the problem of solving a linear equation
system. This approach combines exact convergence guarantees and a fast matrix
implementation with the ability to model heterogeneous networks. Experiments on
synthetic graphs with planted edge potentials show that the linearization
achieves labeling accuracy comparable to BP for graphs with weak potentials,
while speeding up inference by orders of magnitude.
| Wolfgang Gatterbauer | null | 1502.04956 | null | null |
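The paper's central reduction, replacing iterative message passing with one sparse linear solve, can be caricatured in a few lines. A minimal sketch assuming a toy linearized fixed point of the form b = phi + eps * A b; the matrix A, the scaling eps, and the priors phi are illustrative placeholders, not the paper's actual linearized system.

```python
import numpy as np
from scipy.sparse import identity, random as sprandom
from scipy.sparse.linalg import spsolve

n, eps = 1000, 0.05
A = sprandom(n, n, density=0.01, random_state=0)
A = A + A.T                          # symmetric "network" of edge potentials
phi = np.zeros(n)
phi[:10] = 1.0                       # prior beliefs on a few seed nodes
# Instead of iterating b <- phi + eps * A @ b until convergence,
# solve the fixed-point equation (I - eps * A) b = phi directly.
b = spsolve((identity(n) - eps * A).tocsc(), phi)
print(b[:12])
```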
A New Sampling Technique for Tensors | stat.ML cs.DS cs.IT cs.LG math.IT | In this paper we propose new techniques to sample arbitrary third-order
tensors, with an objective of speeding up tensor algorithms that have recently
gained popularity in machine learning. Our main contribution is a new way to
select, in a biased random way, only $O(n^{1.5}/\epsilon^2)$ of the possible
$n^3$ elements while still achieving each of the three goals: \\ {\em (a)
tensor sparsification}: for a tensor that has to be formed from arbitrary
samples, compute very few elements to get a good spectral approximation, and
for arbitrary orthogonal tensors {\em (b) tensor completion:} recover an
exactly low-rank tensor from a small number of samples via alternating least
squares, or {\em (c) tensor factorization:} approximate the factors of a low-rank
tensor corrupted by noise. \\ Our sampling can be used along with existing
tensor-based algorithms to speed them up, removing the computational bottleneck
in these methods.
| Srinadh Bhojanapalli, Sujay Sanghavi | null | 1502.05023 | null | null |
On Sex, Evolution, and the Multiplicative Weights Update Algorithm | cs.LG cs.GT | We consider a recent innovative theory by Chastain et al. on the role of sex
in evolution [PNAS'14]. In short, the theory suggests that the evolutionary
process of gene recombination implements the celebrated multiplicative weights
updates algorithm (MWUA). They prove that the population dynamics induced by
sexual reproduction can be precisely modeled by genes that use MWUA as their
learning strategy in a particular coordination game. The result holds in the
regime of \emph{weak selection}, under the assumption that the population
frequencies remain a product distribution.
We revisit the theory, eliminating both the requirement of weak selection and
any assumption on the distribution of the population. Removing the assumption
of product distributions is crucial, since as we show, this assumption is
inconsistent with the population dynamics. We show that the marginal allele
distributions induced by the population dynamics precisely match the marginals
induced by a multiplicative weights update algorithm in this general setting,
thereby affirming and substantially generalizing these earlier results.
We further revise the implications for convergence and utility or fitness
guarantees in coordination games. In contrast to the claim of Chastain et
al.[PNAS'14], we conclude that the sexual evolutionary dynamics does not entail
any property of the population distribution, beyond those already implied by
convergence.
| Reshef Meir and David Parkes | null | 1502.05056 | null | null |
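For concreteness, the multiplicative weights update algorithm at the heart of the abstract above fits in a few lines. A minimal sketch assuming losses in [0, 1] and the standard (1 - eta)^loss update; under the paper's interpretation, the "experts" play the role of alleles.

```python
import numpy as np

def mwua(losses, eta=0.1):
    """Multiplicative weights updates: losses has shape (T, n), one loss
    in [0, 1] per round per expert. Returns the played distributions."""
    T, n = losses.shape
    w = np.ones(n)
    history = []
    for t in range(T):
        p = w / w.sum()                   # normalize weights to a distribution
        history.append(p)
        w = w * (1.0 - eta) ** losses[t]  # multiplicative penalty per loss
    return np.array(history)

rng = np.random.default_rng(0)
dist = mwua(rng.random((100, 3)))
print(dist[-1])                           # final mixed strategy
```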
Real time clustering of time series using triangular potentials | cs.LG | Motivated by the problem of computing investment portfolio weightings we
investigate various methods of clustering as alternatives to traditional
mean-variance approaches. Such methods can have significant benefits from a
practical point of view since they remove the need to invert a sample
covariance matrix, which can suffer from estimation error and will almost
certainly be non-stationary. The general idea is to find groups of assets which
share similar return characteristics over time and treat each group as a single
composite asset. We then apply inverse volatility weightings to these new
composite assets. In the course of our investigation we devise a method of
clustering based on triangular potentials and we present associated theoretical
results as well as various examples based on synthetic data.
| Aldo Pacchiano, Oliver Williams | null | 1502.05090 | null | null |
CSAL: Self-adaptive Labeling based Clustering Integrating Supervised
Learning on Unlabeled Data | cs.LG | Supervised classification approaches can predict labels for unknown data
because of the supervised training process. The success of classification is
heavily dependent on the labeled training data. In contrast, clustering is
effective in revealing the aggregation property of unlabeled data, but the
performance of most clustering methods is limited by the absence of labeled
data. In real applications, however, it is time-consuming and sometimes
impossible to obtain labeled data. The combination of clustering and
classification is a promising and active approach that can substantially
improve performance. In this paper, we propose an innovative and effective
clustering
framework based on self-adaptive labeling (CSAL) which integrates clustering
and classification on unlabeled data. Clustering is first employed to partition
data and a certain proportion of clustered data are selected by our proposed
labeling approach for training classifiers. In order to refine the trained
classifiers, an iterative Expectation-Maximization process is built into the
proposed clustering framework CSAL. Experiments are conducted
on public data sets to test different combinations of clustering algorithms
and classification models as well as various training data labeling methods.
The experimental results show that our approach along with the self-adaptive
method outperforms other methods.
| Fangfang Li, Guandong Xu, Longbing Cao | null | 1502.05111 | null | null |
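A minimal cluster-then-classify loop in the spirit of CSAL: cluster the unlabeled data, pseudo-label the points closest to their centroids, train a classifier, and refine it EM-style. This is a hypothetical simplification in scikit-learn; the paper's labeling rule and EM details differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (100, 2)),    # two synthetic blobs
               rng.normal(4, 1, (100, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
dist = km.transform(X)                        # distance to each centroid
conf_idx = np.argsort(dist.min(axis=1))[:40]  # most confident 20% of points
clf = LogisticRegression().fit(X[conf_idx], km.labels_[conf_idx])

for _ in range(5):                            # EM-style refinement
    pseudo = clf.predict(X)                   # E-step: relabel all points
    clf = LogisticRegression().fit(X, pseudo) # M-step: retrain
print(clf.score(X, km.labels_))               # agreement with cluster labels
```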
Temporal Embedding in Convolutional Neural Networks for Robust Learning
of Abstract Snippets | cs.LG cs.NE | The prediction of periodical time-series remains challenging due to various
types of data distortions and misalignments. Here, we propose a novel model
called Temporal embedding-enhanced convolutional neural Network (TeNet) to
learn repeatedly-occurring-yet-hidden structural elements in periodical
time-series, called abstract snippets, for predicting future changes. Our model
uses convolutional neural networks and embeds a time-series with its potential
neighbors in the temporal domain for aligning it to the dominant patterns in
the dataset. The model is robust to distortions and misalignments in the
temporal domain and demonstrates strong prediction power for periodical
time-series.
We conduct extensive experiments and discover that the proposed model shows
significant and consistent advantages over existing methods on a variety of
data modalities ranging from human mobility to household power consumption
records. Empirical results indicate that the model is robust to various factors
such as number of samples, variance of data, numerical ranges of data etc. The
experiments also verify that the intuition behind the model can be generalized
to multiple data types and applications and promises significant improvement in
prediction performances across the datasets studied.
| Jiajun Liu, Kun Zhao, Brano Kusy, Ji-rong Wen, Raja Jurdak | 10.1109/TKDE.2016.2598171 | 1502.05113 | null | null |
Supervised cross-modal factor analysis for multiple modal data
classification | cs.LG | In this paper we study the problem of learning from multiple modal data for
purpose of document classification. In this problem, each document is composed
of two different modalities of data, i.e., an image and a text. Cross-modal
factor analysis (CFA) has been proposed to project the two modalities of data
to a shared data space, so that the classification of an image or a text can be
performed directly in this space. A disadvantage of CFA is that it ignores
the supervision information. In this paper, we improve CFA by incorporating the
supervision information to represent and classify both the image and text
modalities of documents. We project both image and text data to a shared data
space by factor
analysis, and then train a class label predictor in the shared space to use the
class label information. The factor analysis parameter and the predictor
parameter are learned jointly by solving one single objective function. With
this objective function, we minimize the distance between the projections of
image and text of the same document, and the classification error of the
projection measured by hinge loss function. The objective function is optimized
by an alternate optimization strategy in an iterative algorithm. Experiments in
two different multiple modal document data sets show the advantage of the
proposed algorithm over other CFA methods.
| Jingbin Wang, Yihua Zhou, Kanghong Duan, Jim Jing-Yan Wang, Halima
Bensmail | null | 1502.05134 | null | null |
Dengue disease prediction using weka data mining tool | cs.CY cs.LG | Dengue is a life-threatening disease prevalent in several developed as well
as developing countries like India. In this paper we discuss various
algorithmic approaches of data mining that have been utilized for dengue
disease
prediction. Data mining is a well known technique used by health organizations
for classification of diseases such as dengue, diabetes and cancer in
bioinformatics research. In the proposed approach we have used WEKA with 10
cross validation to evaluate data and compare results. Weka has an extensive
collection of different machine learning and data mining algorithms. In this
paper we have firstly classified the dengue data set and then compared the
different data mining techniques in weka through Explorer, knowledge flow and
Experimenter interfaces. Furthermore, in order to validate our approach we
used a dengue dataset with 108 instances, of which weka used 99 rows and 18
attributes to predict the disease and assess accuracy across the
classifications of different algorithms to find the best performer. The
main objective of this paper is to classify data and assist users in
extracting useful information from data and easily identifying a suitable
algorithm for building an accurate predictive model from it. From the findings
of this paper it can be concluded that Na\"ive Bayes and J48 are the best
performing algorithms for classification accuracy, because they achieved a
maximum accuracy of 100% with 99 correctly classified instances, a maximum
ROC of 1, the least mean absolute error, and the minimum time for building
the model through the Explorer and Knowledge Flow results.
| Kashish Ara Shakil, Shadma Anis and Mansaf Alam | null | 1502.05167 | null | null |
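The evaluation protocol above, 10-fold cross-validation of Naive Bayes and J48 on 99 instances with 18 attributes, translates directly to code. WEKA itself is Java-based, so the following is only an analogous sketch in Python with scikit-learn, on a placeholder dataset of the same shape (a decision tree stands in for J48/C4.5).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier  # rough analogue of J48 (C4.5)

rng = np.random.default_rng(1)
X = rng.random((99, 18))           # 99 instances, 18 attributes, as in the paper
y = rng.integers(0, 2, size=99)    # placeholder labels, not the dengue data

for name, clf in [("NaiveBayes", GaussianNB()),
                  ("J48-like tree", DecisionTreeClassifier(random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
    print(name, scores.mean())
```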
F0 Modeling In Hmm-Based Speech Synthesis System Using Deep Belief
Network | cs.LG cs.NE | In recent years, multilayer perceptrons (MLPs) with many hidden layers, known
as Deep Neural Networks (DNNs), have performed surprisingly well in many speech
tasks, e.g. speech recognition, speaker verification, speech synthesis etc.
However, in the context of F0 modeling these techniques have not been exploited
properly. In this paper, the Deep Belief Network (DBN), a class of the DNN
family, is employed and applied to model the F0 contour of synthesized speech
generated by an HMM-based speech synthesis system. The experiment was done on
the Bengali language. Several DBN-DNN architectures ranging from four to seven
hidden layers and up to 200 hidden units per hidden layer were presented and
evaluated. The results were compared against the clustering tree techniques
popularly found in statistical parametric speech synthesis. We show that from
textual inputs DBN-DNN learns a high-level structure which in turn improves the
F0 contour in terms of objective and subjective tests.
| Sankar Mukherjee, Shyamal Kumar Das Mandal | null | 1502.05213 | null | null |
On learning k-parities with and without noise | cs.DS cs.DM cs.LG | We first consider the problem of learning $k$-parities in the on-line
mistake-bound model: given a hidden vector $x \in \{0,1\}^n$ with $|x|=k$ and a
sequence of "questions" $a_1, a_2, ...\in \{0,1\}^n$, where the algorithm must
reply to each question with $\langle a_i, x \rangle \pmod 2$, what is the best tradeoff
between the number of mistakes made by the algorithm and its time complexity?
We improve the previous best result of Buhrman et al. by an $\exp(k)$ factor in
the time complexity.
Second, we consider the problem of learning $k$-parities in the presence of
classification noise of rate $\eta \in (0,1/2)$. A polynomial time algorithm
for this problem (when $\eta > 0$ and $k = \omega(1)$) is a longstanding
challenge in learning theory. Grigorescu et al. showed an algorithm running in
time ${n \choose k/2}^{1 + 4\eta^2 +o(1)}$. Note that this algorithm inherently
requires time ${n \choose k/2}$ even when the noise rate $\eta$ is polynomially
small. We observe that for sufficiently small noise rate, it is possible to
break the $n \choose k/2$ barrier. In particular, if for some function $f(n) =
\omega(1)$ and $\alpha \in [1/2, 1)$, $k = n/f(n)$ and $\eta = o(f(n)^{-
\alpha}/\log n)$, then there is an algorithm for the problem with running time
$poly(n)\cdot {n \choose k}^{1-\alpha} \cdot e^{-k/4.01}$.
| Arnab Bhattacharyya, Ameet Gadekar, Ninad Rajgopal | null | 1502.05375 | null | null |
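In the noiseless batch setting, learning a parity is just linear algebra over GF(2): stack the questions into a matrix and solve. A sketch of that baseline (the paper's contribution concerns the much harder online mistake-bound and noisy regimes):

```python
import numpy as np

def solve_parity_gf2(A, b):
    """Find x with A @ x = b (mod 2) by Gaussian elimination over GF(2).
    A: (m, n) 0/1 matrix of questions; b: length-m 0/1 answer vector.
    Returns one solution (free variables set to 0), or None if inconsistent."""
    A, b = A.copy() % 2, b.copy() % 2
    m, n = A.shape
    pivots, row = [], 0
    for col in range(n):
        piv = next((r for r in range(row, m) if A[r, col]), None)
        if piv is None:
            continue
        A[[row, piv]], b[[row, piv]] = A[[piv, row]], b[[piv, row]]
        for r in range(m):
            if r != row and A[r, col]:
                A[r] ^= A[row]          # eliminate this column elsewhere
                b[r] ^= b[row]
        pivots.append(col)
        row += 1
        if row == m:
            break
    if any(b[r] and not A[r].any() for r in range(row, m)):
        return None                      # inconsistent system
    x = np.zeros(n, dtype=int)
    for r, col in enumerate(pivots):
        x[col] = b[r]
    return x

rng = np.random.default_rng(2)
n, k, m = 20, 3, 40
x_true = np.zeros(n, dtype=int)
x_true[rng.choice(n, k, replace=False)] = 1
A = rng.integers(0, 2, size=(m, n))
b = (A @ x_true) % 2
assert np.array_equal(solve_parity_gf2(A, b) @ A.T % 2, b)
```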
On the Effects of Low-Quality Training Data on Information Extraction
from Clinical Reports | cs.LG cs.CL cs.IR | In the last five years there has been a flurry of work on information
extraction from clinical documents, i.e., on algorithms capable of extracting,
from the informal and unstructured texts that are generated during everyday
clinical practice, mentions of concepts relevant to such practice. Most of this
literature is about methods based on supervised learning, i.e., methods for
training an information extraction system from manually annotated examples.
While a lot of work has been devoted to devising learning methods that generate
more and more accurate information extractors, no work has been devoted to
investigating the effect of the quality of training data on the learning
process. Low quality in training data often derives from the fact that the
person who has annotated the data is different from the one against whose
judgment the automatically annotated data must be evaluated. In this paper we
test the impact of such data quality issues on the accuracy of information
extraction systems as applied to the clinical domain. We do this by comparing
the accuracy deriving from training data annotated by the authoritative coder
(i.e., the one who has also annotated the test data, and by whose judgment we
must abide), with the accuracy deriving from training data annotated by a
different coder. The results indicate that, although the disagreement between
the two coders (as measured on the training set) is substantial, the difference
is (surprisingly enough) not always statistically significant.
| Diego Marcheggiani and Fabrizio Sebastiani | 10.1145/3106235 | 1502.05472 | null | null |
Trust Region Policy Optimization | cs.LG | We describe an iterative procedure for optimizing policies, with guaranteed
monotonic improvement. By making several approximations to the
theoretically-justified procedure, we develop a practical algorithm, called
Trust Region Policy Optimization (TRPO). This algorithm is similar to natural
policy gradient methods and is effective for optimizing large nonlinear
policies such as neural networks. Our experiments demonstrate its robust
performance on a wide variety of tasks: learning simulated robotic swimming,
hopping, and walking gaits; and playing Atari games using images of the screen
as input. Despite its approximations that deviate from the theory, TRPO tends
to give monotonic improvement, with little tuning of hyperparameters.
| John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan,
Pieter Abbeel | null | 1502.05477 | null | null |
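To make the trust-region idea concrete, here is a heavily simplified single-state sketch: take a gradient step on the surrogate objective and backtrack the step size until the KL divergence between the old and new policies is within the trust region. The toy setup and names are illustrative only; the actual TRPO algorithm uses conjugate gradients, a surrogate line search, and neural-network policies.

```python
import numpy as np

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def trpo_step(theta, advantages, delta=0.01, step=1.0):
    pi_old = softmax(theta)
    # gradient at theta of the surrogate sum_a pi_theta(a) * A(a)
    g = pi_old * (advantages - pi_old @ advantages)
    while True:
        theta_new = theta + step * g
        if kl(pi_old, softmax(theta_new)) <= delta:
            return theta_new             # accept a step inside the region
        step *= 0.5                      # backtrack until the constraint holds

theta = np.zeros(4)
adv = np.array([1.0, 0.2, -0.5, 0.0])
for _ in range(50):
    theta = trpo_step(theta, adv)
print(softmax(theta))                    # mass concentrates on action 0
```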
Optimizing Text Quantifiers for Multivariate Loss Functions | cs.LG cs.IR | We address the problem of \emph{quantification}, a supervised learning task
whose goal is, given a class, to estimate the relative frequency (or
\emph{prevalence}) of the class in a dataset of unlabelled items.
Quantification has several applications in data and text mining, such as
estimating the prevalence of positive reviews in a set of reviews of a given
product, or estimating the prevalence of a given support issue in a dataset of
transcripts of phone calls to tech support. So far, quantification has been
addressed by learning a general-purpose classifier, counting the unlabelled
items which have been assigned the class, and tuning the obtained counts
according to some heuristics. In this paper we depart from the tradition of
using general-purpose classifiers, and use instead a supervised learning model
for \emph{structured prediction}, capable of generating classifiers directly
optimized for the (multivariate and non-linear) function used for evaluating
quantification accuracy. The experiments that we have run on 5500 binary
high-dimensional datasets (averaging more than 14,000 documents each) show that
this method is more accurate, more stable, and more efficient than existing,
state-of-the-art quantification methods.
| Andrea Esuli and Fabrizio Sebastiani | 10.1145/2700406 | 1502.05491 | null | null |
NeuroSVM: A Graphical User Interface for Identification of Liver
Patients | cs.LG cs.HC | Diagnosis of liver infection at preliminary stage is important for better
treatment. In today's scenario, devices like sensors are used for detection of
infections. Accurate classification techniques are required for automatic
identification of disease samples. In this context, this study utilizes data
mining approaches for classification of liver patients from healthy
individuals. Four algorithms (Naive Bayes, Bagging, Random forest and SVM) were
implemented for classification using the R platform. Further, to improve the
accuracy of classification, a hybrid NeuroSVM model was developed using SVM and
feed-forward artificial neural network (ANN). The hybrid model was tested for
its performance using statistical parameters like root mean square error (RMSE)
and mean absolute percentage error (MAPE). The model resulted in a prediction
accuracy of 98.83%. The results suggested that development of hybrid model
improved the accuracy of prediction. To serve the medical community in
predicting liver disease among patients, a graphical user interface (GUI)
has been developed using R. The GUI is deployed as a package in the local
repository of the R platform for users to perform prediction.
| Kalyan Nagaraj and Amulyashree Sridhar | null | 1502.05534 | null | null |
Just Sort It! A Simple and Effective Approach to Active Preference
Learning | stat.ML cs.LG | We address the problem of learning a ranking by using adaptively chosen
pairwise comparisons. Our goal is to recover the ranking accurately but to
sample the comparisons sparingly. If all comparison outcomes are consistent
with the ranking, the optimal solution is to use an efficient sorting
algorithm, such as Quicksort. But how do sorting algorithms behave if some
comparison outcomes are inconsistent with the ranking? We give favorable
guarantees for Quicksort for the popular Bradley-Terry model, under natural
assumptions on the parameters. Furthermore, we empirically demonstrate that
sorting algorithms lead to a very simple and effective active learning
strategy: repeatedly sort the items. This strategy performs as well as
state-of-the-art methods (and much better than random sampling) at a minuscule
fraction of the computational cost.
| Lucas Maystre, Matthias Grossglauser | null | 1502.05556 | null | null |
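The paper's strategy, repeatedly Quicksort the items even though individual comparisons are noisy, is easy to simulate. A sketch assuming Bradley-Terry outcomes with hypothetical skill parameters theta (item i beats item j with probability theta_i / (theta_i + theta_j)):

```python
import numpy as np

rng = np.random.default_rng(3)

def noisy_quicksort(items, theta):
    """Quicksort, strongest first, where each comparison against the pivot
    is resolved by a noisy Bradley-Terry coin flip."""
    if len(items) <= 1:
        return list(items)
    pivot, rest = items[0], items[1:]
    beats = [i for i in rest
             if rng.random() < theta[i] / (theta[i] + theta[pivot])]
    loses = [i for i in rest if i not in beats]
    return noisy_quicksort(beats, theta) + [pivot] + noisy_quicksort(loses, theta)

n = 10
theta = np.exp(np.linspace(2, 0, n))     # item 0 is the strongest
ranks = np.zeros(n)
for _ in range(20):                      # the paper's strategy: sort repeatedly
    for pos, item in enumerate(noisy_quicksort(list(range(n)), theta)):
        ranks[item] += pos
print(np.argsort(ranks))                 # aggregated ranking estimate
```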
Adaptive system optimization using random directions stochastic
approximation | math.OC cs.LG | We present novel algorithms for simulation optimization using random
directions stochastic approximation (RDSA). These include first-order
(gradient) as well as second-order (Newton) schemes. We incorporate both
continuous-valued as well as discrete-valued perturbations into both our
algorithms. The former are chosen to be independent and identically distributed
(i.i.d.) symmetric, uniformly distributed random variables (r.v.), while the
latter are i.i.d., asymmetric, Bernoulli r.v.s. Our Newton algorithm, with a
novel Hessian estimation scheme, requires N-dimensional perturbations and three
loss measurements per iteration, whereas the simultaneous perturbation Newton
search algorithm of [1] requires 2N-dimensional perturbations and four loss
measurements per iteration. We prove the unbiasedness of both gradient and
Hessian estimates and asymptotic (strong) convergence for both first-order and
second-order schemes. We also provide asymptotic normality results, which in
particular establish that the asymmetric Bernoulli variant of Newton RDSA
method is better than 2SPSA of [1]. Numerical experiments are used to validate
the theoretical results.
| Prashanth L.A., Shalabh Bhatnagar, Michael Fu and Steve Marcus | null | 1502.05577 | null | null |
NP-Hardness and Inapproximability of Sparse PCA | cs.LG cs.CC cs.DS math.CO stat.ML | We give a reduction from {\sc clique} to establish that sparse PCA is
NP-hard. The reduction has a gap which we use to exclude an FPTAS for sparse
PCA (unless P=NP). Under weaker complexity assumptions, we also exclude
polynomial constant-factor approximation algorithms.
| Malik Magdon-Ismail | null | 1502.05675 | null | null |
Approval Voting and Incentives in Crowdsourcing | cs.GT cs.AI cs.LG cs.MA | The growing need for labeled training data has made crowdsourcing an
important part of machine learning. The quality of crowdsourced labels is,
however, adversely affected by three factors: (1) the workers are not experts;
(2) the incentives of the workers are not aligned with those of the requesters;
and (3) the interface does not allow workers to convey their knowledge
accurately, by forcing them to make a single choice among a set of options. In
this paper, we address these issues by introducing approval voting to utilize
the expertise of workers who have partial knowledge of the true answer, and
coupling it with a ("strictly proper") incentive-compatible compensation
mechanism. We show rigorous theoretical guarantees of optimality of our
mechanism together with a simple axiomatic characterization. We also conduct
preliminary empirical studies on Amazon Mechanical Turk which validate our
approach.
| Nihar B. Shah, Dengyong Zhou, Yuval Peres | null | 1502.05696 | null | null |
Scale-Free Algorithms for Online Linear Optimization | cs.LG math.OC | We design algorithms for online linear optimization that have optimal regret
and at the same time do not need to know any upper or lower bounds on the norm
of the loss vectors. We achieve adaptiveness to norms of loss vectors by scale
invariance, i.e., our algorithms make exactly the same decisions if the
sequence of loss vectors is multiplied by any positive constant. Our algorithms
work for any decision set, bounded or unbounded. For unbounded decision sets,
these are the first truly adaptive algorithms for online linear optimization.
| Francesco Orabona and David Pal | null | 1502.05744 | null | null |
Pairwise Constraint Propagation: A Survey | cs.CV cs.LG stat.ML | As one of the most important types of (weaker) supervised information in
machine learning and pattern recognition, pairwise constraint, which specifies
whether a pair of data points occur together, has recently received significant
attention, especially the problem of pairwise constraint propagation. At least
two reasons account for this trend: the first is that compared to data
labels, pairwise constraints are more general and easier to collect, and the
second is that since the available pairwise constraints are usually limited,
the constraint propagation problem is thus important.
This paper provides an up-to-date critical survey of pairwise constraint
propagation research. There are two underlying motivations for us to write this
survey paper: the first is to provide an up-to-date review of the existing
literature, and the second is to offer some insights into the studies of
pairwise constraint propagation. To provide a comprehensive survey, we not only
categorize existing propagation techniques but also present detailed
descriptions of representative methods within each category.
| Zhenyong Fu and Zhiwu Lu | null | 1502.05752 | null | null |
Automatic differentiation in machine learning: a survey | cs.SC cs.LG stat.ML | Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in
machine learning. Automatic differentiation (AD), also called algorithmic
differentiation or simply "autodiff", is a family of techniques similar to but
more general than backpropagation for efficiently and accurately evaluating
derivatives of numeric functions expressed as computer programs. AD is a small
but established field with applications in areas including computational fluid
dynamics, atmospheric sciences, and engineering design optimization. Until very
recently, the fields of machine learning and AD have largely been unaware of
each other and, in some cases, have independently discovered each other's
results. Despite its relevance, general-purpose AD has been missing from the
machine learning toolbox, a situation slowly changing with its ongoing adoption
under the names "dynamic computational graphs" and "differentiable
programming". We survey the intersection of AD and machine learning, cover
applications where AD has direct relevance, and address the main implementation
techniques. By precisely defining the main differentiation techniques and their
interrelationships, we aim to bring clarity to the usage of the terms
"autodiff", "automatic differentiation", and "symbolic differentiation" as
these are encountered more and more in machine learning settings.
| Atilim Gunes Baydin, Barak A. Pearlmutter, Alexey Andreyevich Radul,
Jeffrey Mark Siskind | null | 1502.05767 | null | null |
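The simplest concrete instance of AD is forward mode via dual numbers: carry a (value, derivative) pair through the computation, and every primitive applies the chain rule exactly, with no symbolic expression swell and no finite-difference error. A minimal sketch supporting addition and multiplication:

```python
class Dual:
    """Forward-mode AD: a dual number carries (value, derivative)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule
    __rmul__ = __mul__

def derivative(f, x):
    return f(Dual(x, 1.0)).dot   # seed dx/dx = 1, read off f'(x)

# d/dx (x*x + 3*x) at x = 2 is 2*2 + 3 = 7
print(derivative(lambda x: x * x + 3 * x, 2.0))
```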
Low-Cost Learning via Active Data Procurement | cs.GT cs.AI cs.LG stat.ML | We design mechanisms for online procurement of data held by strategic agents
for machine learning tasks. The challenge is to use past data to actively price
future data and give learning guarantees even when an agent's cost for
revealing her data may depend arbitrarily on the data itself. We achieve this
goal by showing how to convert a large class of no-regret algorithms into
online posted-price and learning mechanisms. Our results in a sense parallel
classic sample complexity guarantees, but with the key resource being money
rather than quantity of data: With a budget constraint $B$, we give robust risk
(predictive error) bounds on the order of $1/\sqrt{B}$. Because we use an
active approach, we can often guarantee to do significantly better by
leveraging correlations between costs and data.
Our algorithms and analysis go through a model of no-regret learning with $T$
arriving pairs (cost, data) and a budget constraint of $B$. Our regret bounds
for this model are on the order of $T/\sqrt{B}$ and we give lower bounds on the
same order.
| Jacob Abernethy, Yiling Chen, Chien-Ju Ho, Bo Waggoner | 10.1145/2764468.2764519 | 1502.05774 | null | null |
Spike Event Based Learning in Neural Networks | cs.NE cs.LG | A scheme is derived for learning connectivity in spiking neural networks. The
scheme learns instantaneous firing rates that are conditional on the activity
in other parts of the network. The scheme is independent of the choice of
neuron dynamics or activation function, and network architecture. It involves
two simple, online, local learning rules that are applied only in response to
occurrences of spike events. This scheme provides a direct method for
transferring ideas between the fields of deep learning and computational
neuroscience. This learning scheme is demonstrated using a layered feedforward
spiking neural network trained self-supervised on a prediction and
classification task for moving MNIST images collected using a Dynamic Vision
Sensor.
| James A. Henderson, TingTing A. Gibson, Janet Wiles | null | 1502.05777 | null | null |
A provably convergent alternating minimization method for mean field
inference | cs.LG math.OC | Mean-Field is an efficient way to approximate a posterior distribution in
complex graphical models and constitutes the most popular class of Bayesian
variational approximation methods. In most applications, the mean field
distribution parameters are computed using alternating coordinate
minimization. However, the convergence properties of this algorithm remain
unclear. In this paper, we show how, by adding an appropriate penalization
term, we can guarantee convergence to a critical point, while keeping a closed
form update at each step. A convergence rate estimate can also be derived based
on recent results in non-convex optimization.
| Pierre Baqu\'e, Jean-Hubert Hours, Fran\c{c}ois Fleuret, Pascal Fua | null | 1502.05832 | null | null |
On predictability of rare events leveraging social media: a machine
learning perspective | cs.SI cs.LG physics.data-an physics.soc-ph | Information extracted from social media streams has been leveraged to
forecast the outcome of a large number of real-world events, from political
elections to stock market fluctuations. An increasing number of studies
demonstrates how the analysis of social media conversations provides cheap
access to the wisdom of the crowd. However, the extents and contexts in which
such forecasting power can be effectively leveraged remain unverified, at least
in a systematic way. It is also unclear how social-media-based predictions compare
to those based on alternative information sources. To address these issues,
here we develop a machine learning framework that leverages social media
streams to automatically identify and predict the outcomes of soccer matches.
We focus in particular on matches in which at least one of the possible
outcomes is deemed as highly unlikely by professional bookmakers. We argue that
sport events offer a systematic approach for testing the predictive power of
social media, and allow us to compare such power against the rigorous baselines
set by external sources. Despite such strict baselines, our framework yields
above 8% marginal profit when used to inform simple betting strategies. The
system is based on real-time sentiment analysis and exploits data collected
immediately before the games, allowing for informed bets. We discuss the
rationale behind our approach, describe the learning framework, its prediction
performance and the return it provides as compared to a set of betting
strategies. To test our framework we use both historical Twitter data from the
2014 FIFA World Cup games, and real-time Twitter data collected by monitoring
the conversations about all soccer matches of four major European tournaments
(FA Premier League, Serie A, La Liga, and Bundesliga), and the 2014 UEFA
Champions League, during the period between Oct. 25th 2014 and Nov. 26th 2014.
| Lei Le, Emilio Ferrara, Alessandro Flammini | 10.1145/2817946.2817949 | 1502.05886 | null | null |
Contextual Semibandits via Supervised Learning Oracles | cs.LG stat.ML | We study an online decision making problem where on each round a learner
chooses a list of items based on some side information, receives a scalar
feedback value for each individual item, and a reward that is linearly related
to this feedback. These problems, known as contextual semibandits, arise in
crowdsourcing, recommendation, and many other domains. This paper reduces
contextual semibandits to supervised learning, allowing us to leverage powerful
supervised learning methods in this partial-feedback setting. Our first
reduction applies when the mapping from feedback to reward is known and leads
to a computationally efficient algorithm with near-optimal regret. We show that
this algorithm outperforms state-of-the-art approaches on real-world
learning-to-rank datasets, demonstrating the advantage of oracle-based
algorithms. Our second reduction applies to the previously unstudied setting
when the linear mapping from feedback to reward is unknown. Our regret
guarantees are superior to prior techniques that ignore the feedback.
| Akshay Krishnamurthy, Alekh Agarwal, Miroslav Dudik | null | 1502.05890 | null | null |
A Data Mining framework to model Consumer Indebtedness with
Psychological Factors | cs.LG cs.CE | Modelling Consumer Indebtedness has proven to be a complex problem.
In this work we utilise Data Mining techniques and methods to explore the
multifaceted nature of Consumer Indebtedness by examining the contribution of
Psychological Factors, like Impulsivity, to the analysis of Consumer Debt. Our
results confirm the beneficial impact of Psychological Factors in modelling
Consumer Indebtedness and suggest a new approach in analysing Consumer Debt,
that would take into consideration more Psychological characteristics of
consumers and adopt techniques and practices from Data Mining.
| Alexandros Ladas, Eamonn Ferguson, Uwe Aickelin and Jon Garibaldi | null | 1502.05911 | null | null |
Feature-Budgeted Random Forest | stat.ML cs.LG | We seek decision rules for prediction-time cost reduction, where complete
data is available for training, but during prediction-time, each feature can
only be acquired for an additional cost. We propose a novel random forest
algorithm to minimize prediction error for a user-specified {\it average}
feature acquisition budget. While random forests yield strong generalization
performance, they do not explicitly account for feature costs and furthermore
require low correlation among trees, which amplifies costs. Our random forest
grows trees with low acquisition cost and high strength based on greedy minimax
cost-weighted-impurity splits. Theoretically, we establish near-optimal
acquisition cost guarantees for our algorithm. Empirically, on a number of
benchmark datasets we demonstrate superior accuracy-cost curves against
state-of-the-art prediction-time algorithms.
| Feng Nan, Joseph Wang, Venkatesh Saligrama | null | 1502.05925 | null | null |
Achieving All with No Parameters: Adaptive NormalHedge | cs.LG | We study the classic online learning problem of predicting with expert
advice, and propose a truly parameter-free and adaptive algorithm that achieves
several objectives simultaneously without using any prior information. The main
component of this work is an improved version of the NormalHedge.DT algorithm
(Luo and Schapire, 2014), called AdaNormalHedge. On one hand, this new
algorithm ensures small regret when the competitor has small loss and almost
constant regret when the losses are stochastic. On the other hand, the
algorithm is able to compete with any convex combination of the experts
simultaneously, with a regret in terms of the relative entropy of the prior and
the competitor. This resolves an open problem proposed by Chaudhuri et al.
(2009) and Chernov and Vovk (2010). Moreover, we extend the results to the
sleeping expert setting and provide two applications to illustrate the power of
AdaNormalHedge: 1) competing with time-varying unknown competitors and 2)
predicting almost as well as the best pruning tree. Our results on these
applications significantly improve previous work from different aspects, and a
special case of the first application resolves another open problem proposed by
Warmuth and Koolen (2014) on whether one can simultaneously achieve optimal
shifting regret for both adversarial and stochastic losses.
| Haipeng Luo and Robert E. Schapire | null | 1502.05934 | null | null |
Refining Adverse Drug Reactions using Association Rule Mining for
Electronic Healthcare Data | cs.DB cs.CE cs.LG | Side effects of prescribed medications are a common occurrence. Electronic
healthcare databases present the opportunity to identify new side effects
efficiently, but currently the methods are limited due to confounding (i.e. when
an association between two variables is identified due to both of them being
associated with a third variable).
In this paper we propose a proof of concept method that learns common
associations and uses this knowledge to automatically refine side effect
signals (i.e. exposure-outcome associations) by removing instances of the
exposure-outcome associations that are caused by confounding. This leaves the
signal instances that are most likely to correspond to true side effect
occurrences. We then calculate a novel measure termed the confounding-adjusted
risk value, a more accurate absolute risk value of a patient experiencing the
outcome within 60 days of the exposure.
Tentative results suggest that the method works. For the four signals (i.e.
exposure-outcome associations) investigated, we are able to correctly filter out the
majority of exposure-outcome instances that were unlikely to correspond to true
side effects. The method is likely to improve when tuning the association rule
mining parameters for specific health outcomes.
This paper shows that it may be possible to filter signals at a patient level
based on association rules learned from considering patients' medical
histories. However, additional work is required to develop a way to automate
the tuning of the method's parameters.
| Jenna M. Reps, Uwe Aickelin, Jiangang Ma, Yanchun Zhang | 10.1109/ICDMW.2014.53 | 1502.05943 | null | null |
Deep Learning for Multi-label Classification | cs.LG cs.AI | In multi-label classification, the main focus has been to develop ways of
learning the underlying dependencies between labels, and to take advantage of
this at classification time. Developing better feature-space representations
has been predominantly employed to reduce complexity, e.g., by eliminating
non-helpful feature attributes from the input space prior to (or during)
training. This is an important task, since many multi-label methods typically
create many different copies or views of the same input data as they transform
it, and considerable memory can be saved by taking advantage of redundancy. In
this paper, we show that a proper development of the feature space can make
labels less interdependent and easier to model and predict at inference time.
For this task we use a deep learning approach with restricted Boltzmann
machines. We present a deep network that, in an empirical evaluation,
outperforms a number of competitive methods from the literature.
| Jesse Read, Fernando Perez-Cruz | null | 1502.05988 | null | null |
MILJS : Brand New JavaScript Libraries for Matrix Calculation and
Machine Learning | stat.ML cs.LG cs.MS | MILJS is a collection of state-of-the-art, platform-independent, scalable,
fast JavaScript libraries for matrix calculation and machine learning. Our core
library for matrix calculation is called Sushi, which exhibits far
better performance than any other leading machine learning library written in
JavaScript. In particular, our matrix multiplication is 177 times faster than the
fastest JavaScript benchmark. Based on Sushi, a machine learning library called
Tempura is provided, which supports various algorithms widely used in machine
learning research. We also provide Soba as a visualization library. The
implementations of our libraries are clearly written, properly documented, and
thus easy to get started with, as long as there is a web browser. These
libraries are available from http://mil-tokyo.github.io/ under the MIT license.
| Ken Miura, Tetsuaki Mano, Atsushi Kanehira, Yuichiro Tsuchiya and
Tatsuya Harada | null | 1502.06064 | null | null |
Regularization and Kernelization of the Maximin Correlation Approach | cs.CV cs.LG | Robust classification becomes challenging when each class consists of
multiple subclasses. Examples include multi-font optical character recognition
and automated protein function prediction. In correlation-based
nearest-neighbor classification, the maximin correlation approach (MCA)
provides the worst-case optimal solution by minimizing the maximum
misclassification risk through an iterative procedure. Despite the optimality,
the original MCA has drawbacks that have limited its wide applicability in
practice. That is, the MCA tends to be sensitive to outliers, cannot
effectively handle nonlinearities in datasets, and suffers from high
computational complexity. To address these limitations, we propose an improved
solution, named regularized maximin correlation approach (R-MCA). We first
reformulate MCA as a quadratically constrained linear programming (QCLP)
problem, incorporate regularization by introducing slack variables in the
primal problem of the QCLP, and derive the corresponding Lagrangian dual. The
dual formulation enables us to apply the kernel trick to R-MCA so that it can
better handle nonlinearities. Our experimental results demonstrate that the
regularization and kernelization make the proposed R-MCA more robust and
accurate for various classification tasks than the original MCA. Furthermore,
when the data size or dimensionality grows, R-MCA runs substantially faster by
solving either the primal or dual (whichever has a smaller variable dimension)
of the QCLP.
| Taehoon Lee, Taesup Moon, Seung Jean Kim, Sungroh Yoon | 10.1109/ACCESS.2016.2551727 | 1502.06105 | null | null |
Universal Memory Architectures for Autonomous Machines | cs.AI cs.LG cs.RO math.MG | We propose a self-organizing memory architecture for perceptual experience,
capable of supporting autonomous learning and goal-directed problem solving in
the absence of any prior information about the agent's environment. The
architecture is simple enough to ensure (1) a quadratic bound (in the number of
available sensors) on space requirements, and (2) a quadratic bound on the
time-complexity of the update-execute cycle. At the same time, it is
sufficiently complex to provide the agent with an internal representation which
is (3) minimal among all representations of its class which account for every
sensory equivalence class subject to the agent's belief state; (4) capable, in
principle, of recovering the homotopy type of the system's state space; (5)
learnable with arbitrary precision through a random application of the
available actions. The provable properties of an effectively trained memory
structure exploit a duality between weak poc sets -- a symbolic (discrete)
representation of subset nesting relations -- and non-positively curved cubical
complexes, whose rich convexity theory underlies the planning cycle of the
proposed architecture.
| Dan P. Guralnik and Daniel E. Koditschek | null | 1502.06132 | null | null |
Learning with Square Loss: Localization through Offset Rademacher
Complexity | stat.ML cs.LG math.ST stat.TH | We consider regression with square loss and general classes of functions
without the boundedness assumption. We introduce a notion of offset Rademacher
complexity that provides a transparent way to study localization both in
expectation and in high probability. For any (possibly non-convex) class, the
excess loss of a two-step estimator is shown to be upper bounded by this offset
complexity through a novel geometric inequality. In the convex case, the
estimator reduces to an empirical risk minimizer. The method recovers the
results of \citep{RakSriTsy15} for the bounded case while also providing
guarantees without the boundedness assumption.
| Tengyuan Liang, Alexander Rakhlin, Karthik Sridharan | null | 1502.06134 | null | null |
Detection of Planted Solutions for Flat Satisfiability Problems | math.ST cs.CC cs.LG stat.TH | We study the detection problem of finding planted solutions in random
instances of flat satisfiability problems, a generalization of boolean
satisfiability formulas. We describe the properties of random instances of flat
satisfiability, as well as the optimal rates of detection for the associated
hypothesis testing problem. We also study the performance of an algorithmically
efficient testing procedure. We introduce a modification of our model, the
light planting of solutions, and show that it is as hard as the problem of
learning parity with noise. This hints strongly at the difficulty of detecting
planted flat satisfiability for a wide class of tests.
| Quentin Berthet and Jordan S. Ellenberg | null | 1502.06144 | null | null |
Using NLP to measure democracy | cs.CL cs.IR cs.LG stat.ML | This paper uses natural language processing to create the first machine-coded
democracy index, which I call Automated Democracy Scores (ADS). The ADS are
based on 42 million news articles from 6,043 different sources and cover all
independent countries in the 1993-2012 period. Unlike the democracy indices we
have today, the ADS are replicable and have standard errors small enough to
actually distinguish between cases.
The ADS are produced with supervised learning. Three approaches are tried: a)
a combination of Latent Semantic Analysis and tree-based regression methods; b)
a combination of Latent Dirichlet Allocation and tree-based regression methods;
and c) the Wordscores algorithm. The Wordscores algorithm outperforms the
alternatives, so it is the one on which the ADS are based.
There is a web application where anyone can change the training set and see
how the results change: democracy-scores.org
| Thiago Marzag\~ao | null | 1502.06161 | null | null |
SDCA without Duality | cs.LG | Stochastic Dual Coordinate Ascent is a popular method for solving regularized
loss minimization for the case of convex losses. In this paper we show how a
variant of SDCA can be applied for non-convex losses. We prove linear
convergence rate even if individual loss functions are non-convex as long as
the expected loss is convex.
| Shai Shalev-Shwartz | null | 1502.06177 | null | null |
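For the convex case that this paper generalizes, SDCA admits a closed-form coordinate update for some losses. A sketch for ridge regression, assuming the standard squared-loss update delta_i = (y_i - x_i.w - alpha_i) / (1 + ||x_i||^2 / (lam * n)) from the convex SDCA literature:

```python
import numpy as np

def sdca_ridge(X, y, lam=0.01, epochs=20, seed=0):
    """SDCA for min (1/n) sum 0.5*(x_i.w - y_i)^2 + (lam/2)*||w||^2,
    maintaining the primal-dual link w = X.T @ alpha / (lam * n)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha, w = np.zeros(n), np.zeros(d)
    sq_norms = (X ** 2).sum(axis=1)
    for _ in range(epochs):
        for i in rng.permutation(n):     # one dual coordinate at a time
            delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + sq_norms[i] / (lam * n))
            alpha[i] += delta
            w += delta * X[i] / (lam * n)
    return w

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
w_true = np.arange(1.0, 6.0)
y = X @ w_true + 0.1 * rng.normal(size=200)
print(sdca_ridge(X, y))                  # close to w_true for small lam
```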
Teaching and compressing for low VC-dimension | cs.LG | In this work we study the quantitative relation between VC-dimension and two
other basic parameters related to learning and teaching. Namely, the quality of
sample compression schemes and of teaching sets for classes of low
VC-dimension. Let $C$ be a binary concept class of size $m$ and VC-dimension
$d$. Prior to this work, the best known upper bounds for both parameters were
$\log(m)$, while the best lower bounds are linear in $d$. We present
significantly better upper bounds on both as follows. Set $k = O(d 2^d \log
\log |C|)$.
We show that there always exists a concept $c$ in $C$ with a teaching set
(i.e. a list of $c$-labeled examples uniquely identifying $c$ in $C$) of size
$k$. This problem was studied by Kuhlmann (1999). Our construction implies that
the recursive teaching (RT) dimension of $C$ is at most $k$ as well. The
RT-dimension was suggested by Zilles et al. and Doliwa et al. (2010). The same
notion (under the name partial-ID width) was independently studied by Wigderson
and Yehudayoff (2013). An upper bound on this parameter that depends only on
$d$ is known just for the very simple case $d=1$, and is open even for $d=2$.
We also make small progress towards this seemingly modest goal.
We further construct sample compression schemes of size $k$ for $C$, with
additional information of $k \log(k)$ bits. Roughly speaking, given any list of
$C$-labelled examples of arbitrary length, we can retain only $k$ labeled
examples in a way that allows one to recover the labels of all other examples
in the list, using an additional $k\log (k)$ information bits. This problem was
first suggested by Littlestone and Warmuth (1986).
| Shay Moran, Amir Shpilka, Avi Wigderson, and Amir Yehudayoff | null | 1502.06187 | null | null |
Two-stage Sampling, Prediction and Adaptive Regression via Correlation
Screening (SPARCS) | stat.ML cs.LG | This paper proposes a general adaptive procedure for budget-limited predictor
design in high dimensions called two-stage Sampling, Prediction and Adaptive
Regression via Correlation Screening (SPARCS). SPARCS can be applied to high
dimensional prediction problems in experimental science, medicine, finance, and
engineering, as illustrated by the following. Suppose one wishes to run a
sequence of experiments to learn a sparse multivariate predictor of a dependent
variable $Y$ (disease prognosis for instance) based on a $p$ dimensional set of
independent variables $\mathbf X=[X_1,\ldots, X_p]^T$ (assayed biomarkers).
Assume that the cost of acquiring the full set of variables $\mathbf X$
increases linearly in its dimension. SPARCS breaks the data collection into two
stages in order to achieve an optimal tradeoff between sampling cost and
predictor performance. In the first stage we collect a few ($n$) expensive
samples $\{y_i,\mathbf x_i\}_{i=1}^n$, at the full dimension $p\gg n$ of
$\mathbf X$, winnowing the number of variables down to a smaller dimension $l <
p$ using a type of cross-correlation or regression coefficient screening. In
the second stage we collect a larger number $(t-n)$ of cheaper samples of the
$l$ variables that passed the screening of the first stage. At the second
stage, a low dimensional predictor is constructed by solving the standard
regression problem using all $t$ samples of the selected variables. SPARCS is
an adaptive online algorithm that implements false positive control on the
selected variables, is well suited to small sample sizes, and is scalable to
high dimensions. We establish asymptotic bounds for the Familywise Error Rate
(FWER), specify high dimensional convergence rates for support recovery, and
establish optimal sample allocation rules to the first and second stages.
| Hamed Firouzi, Alfred Hero, Bala Rajaratnam | 10.1109/TIT.2016.2621111 | 1502.06189 | null | null |
On Online Control of False Discovery Rate | stat.ME cs.LG math.ST stat.AP stat.TH | Multiple hypotheses testing is a core problem in statistical inference and
arises in almost every scientific field. Given a sequence of null hypotheses
$\mathcal{H}(n) = (H_1,..., H_n)$, Benjamini and Hochberg
\cite{benjamini1995controlling} introduced the false discovery rate (FDR)
criterion, which is the expected proportion of false positives among rejected
null hypotheses, and proposed a testing procedure that controls FDR below a
pre-assigned significance level. They also proposed a different criterion,
called mFDR, which does not control a property of the realized set of tests;
rather it controls the ratio of expected number of false discoveries to the
expected number of discoveries.
In this paper, we propose two procedures for multiple hypotheses testing that
we will call "LOND" and "LORD". These procedures control FDR and mFDR in an
\emph{online manner}. Concretely, we consider an ordered --possibly infinite--
sequence of null hypotheses $\mathcal{H} = (H_1,H_2,H_3,...)$ where, at each
step $i$, the statistician must decide whether to reject hypothesis $H_i$
having access only to the previous decisions. To the best of our knowledge, our
work is the first that controls FDR in this setting. This model was introduced
by Foster and Stine \cite{alpha-investing}, whose alpha-investing rule only
controls mFDR in an online manner.
In order to compare different procedures, we develop lower bounds on the
total discovery rate under the mixture model and prove that both LOND and LORD
have a nearly linear number of discoveries. We further propose an adjustment
to LOND to address arbitrary correlation among the $p$-values. Finally, we
evaluate the
performance of our procedures on both synthetic and real data comparing them
with the alpha-investing rule, the Benjamini-Hochberg method and a Bonferroni procedure.
| Adel Javanmard and Andrea Montanari | null | 1502.06197 | null | null |
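A sketch of the LOND rule as described above: hypothesis i is tested at a level that grows with the number of discoveries made so far, alpha_i = alpha * gamma_i * (D_{i-1} + 1), where gamma is a fixed nonnegative sequence summing to one. The particular gamma below is an illustrative choice, and this omits the LORD variant and the correlation adjustment.

```python
import numpy as np

def lond(pvalues, alpha=0.05):
    """Online FDR control via (a simplified) LOND: reject p_i when
    p_i <= alpha * gamma_i * (discoveries so far + 1)."""
    m = len(pvalues)
    gamma = 1.0 / np.arange(1, m + 1) ** 1.5   # illustrative weight sequence
    gamma /= gamma.sum()                       # normalize to sum to one
    discoveries, decisions = 0, []
    for i, p in enumerate(pvalues):
        level = alpha * gamma[i] * (discoveries + 1)
        reject = p <= level
        decisions.append(reject)
        discoveries += int(reject)
    return decisions

rng = np.random.default_rng(6)
pvals = np.concatenate([rng.uniform(0, 1e-4, 5), rng.uniform(0, 1, 95)])
print(sum(lond(pvals)), "discoveries")
```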
Nearly optimal classification for semimetrics | cs.LG cs.CC cs.DS | We initiate the rigorous study of classification in semimetric spaces, which
are point sets with a distance function that is non-negative and symmetric, but
need not satisfy the triangle inequality. For metric spaces, the doubling
dimension essentially characterizes both the runtime and sample complexity of
classification algorithms --- yet we show that this is not the case for
semimetrics. Instead, we define the {\em density dimension} and discover that
it plays a central role in the statistical and algorithmic feasibility of
learning in semimetric spaces. We present nearly optimal sample compression
algorithms and use these to obtain generalization guarantees, including fast
rates. The latter hold for general sample compression schemes and may be of
independent interest.
| Lee-Ad Gottlieb and Aryeh Kontorovich | null | 1502.06208 | null | null |
The fundamental nature of the log loss function | cs.LG stat.ME | The standard loss functions used in the literature on probabilistic
prediction are the log loss function, the Brier loss function, and the
spherical loss function; however, any computable proper loss function can be
used for comparison of prediction algorithms. This note shows that the log loss
function is most selective in that any prediction algorithm that is optimal for
a given data sequence (in the sense of the algorithmic theory of randomness)
under the log loss function will be optimal under any computable proper mixable
loss function; on the other hand, there is a data sequence and a prediction
algorithm that is optimal for that sequence under either of the two other
standard loss functions but not under the log loss function.
| Vladimir Vovk | null | 1502.06254 | null | null |
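For reference, the three standard loss functions named in the abstract are one-liners. Sign and scaling conventions vary across the literature; the versions below are one common negatively-oriented choice (smaller is better), for a predicted probability vector p and realized outcome index y:

```python
import numpy as np

def log_loss(p, y):
    return -np.log(p[y])

def brier_loss(p, y):
    e = np.zeros_like(p)
    e[y] = 1.0
    return float(np.sum((p - e) ** 2))

def spherical_loss(p, y):
    return -p[y] / np.linalg.norm(p)

p = np.array([0.7, 0.2, 0.1])
for loss in (log_loss, brier_loss, spherical_loss):
    print(loss.__name__, loss(p, 0))
```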
Spaced seeds improve k-mer-based metagenomic classification | q-bio.GN cs.CE cs.LG | Metagenomics is a powerful approach to study genetic content of environmental
samples that has been strongly promoted by NGS technologies. To cope with
massive data involved in modern metagenomic projects, recent tools [4, 39] rely
on the analysis of k-mers shared between the read to be classified and sampled
reference genomes. Within this general framework, we show in this work that
spaced seeds provide a significant improvement of classification accuracy as
opposed to traditional contiguous k-mers. We support this thesis through a
series of different computational experiments, including simulations of
large-scale metagenomic projects. Scripts and programs used in this study, as
well as supplementary material, are available from
http://github.com/gregorykucherov/spaced-seeds-for-metagenomics.
| Karel Brinda and Maciej Sykulski and Gregory Kucherov | 10.1093/bioinformatics/btv419 | 1502.06256 | null | null |
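Mechanically, a spaced seed is just a 0/1 mask slid along the sequence, keeping only the characters under the 1s; contiguous k-mers are the special case of an all-ones mask. A minimal sketch:

```python
def spaced_kmers(seq, mask):
    """Extract spaced seeds: mask is a string over {'1','0'} where '1'
    marks a match position and '0' a don't-care position."""
    keep = [i for i, c in enumerate(mask) if c == "1"]
    span = len(mask)
    return ["".join(seq[off + i] for i in keep)
            for off in range(len(seq) - span + 1)]

print(spaced_kmers("ACGTACGT", "1101"))
# ['ACT', 'CGA', 'GTC', 'TAG', 'ACT']
```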
Learning with Differential Privacy: Stability, Learnability and the
Sufficiency and Necessity of ERM Principle | stat.ML cs.CR cs.LG | While machine learning has proven to be a powerful data-driven solution to
many real-life problems, its use in sensitive domains has been limited due to
privacy concerns. A popular approach known as **differential privacy** offers
provable privacy guarantees, but it is often observed in practice that it could
substantially hamper learning accuracy. In this paper we study the learnability
(whether a problem can be learned by any algorithm) under Vapnik's general
learning setting with differential privacy constraint, and reveal some
intricate relationships between privacy, stability and learnability.
In particular, we show that a problem is privately learnable **if and only
if** there is a private algorithm that asymptotically minimizes the empirical
risk (AERM). In contrast, for non-private learning AERM alone is not sufficient
for learnability. This result suggests that when searching for private learning
algorithms, we can restrict the search to algorithms that are AERM. In light of
this, we propose a conceptual procedure that always finds a universally
consistent algorithm whenever the problem is learnable under privacy
constraint. We also propose a generic and practical algorithm and show that
under very general conditions it privately learns a wide class of learning
problems. Lastly, we extend some of the results to the more practical
$(\epsilon,\delta)$-differential privacy and establish the existence of a
phase-transition on the class of problems that are approximately privately
learnable with respect to how small $\delta$ needs to be.
| Yu-Xiang Wang, Jing Lei, Stephen E. Fienberg | null | 1502.06309 | null | null |
First-order regret bounds for combinatorial semi-bandits | cs.LG stat.ML | We consider the problem of online combinatorial optimization under
semi-bandit feedback, where a learner has to repeatedly pick actions from a
combinatorial decision set in order to minimize the total losses associated
with its decisions. After making each decision, the learner observes the losses
associated with its action, but not other losses. For this problem, there are
several learning algorithms that guarantee that the learner's expected regret
grows as $\widetilde{O}(\sqrt{T})$ with the number of rounds $T$. In this
paper, we propose an algorithm that improves this scaling to
$\widetilde{O}(\sqrt{{L_T^*}})$, where $L_T^*$ is the total loss of the best
action. Our algorithm is among the first to achieve such guarantees in a
partial-feedback scheme, and the first one to do so in a combinatorial setting.
| Gergely Neu | null | 1502.06354 | null | null |
Contextual Dueling Bandits | cs.LG | We consider the problem of learning to choose actions using contextual
information when provided with limited feedback in the form of relative
pairwise comparisons. We study this problem in the dueling-bandits framework of
Yue et al. (2009), which we extend to incorporate context. Roughly, the
learner's goal is to find the best policy, or way of behaving, in some space of
policies, although "best" is not always so clearly defined. Here, we propose a
new and natural solution concept, rooted in game theory, called a von Neumann
winner, a randomized policy that beats or ties every other policy. We show that
this notion overcomes important limitations of existing solutions, particularly
the Condorcet winner which has typically been used in the past, but which
requires strong and often unrealistic assumptions. We then present three
efficient algorithms for online learning in our setting, and for approximating
a von Neumann winner from batch-like data. The first of these algorithms
achieves particularly low regret, even when data is adversarial, although its
time and space requirements are linear in the size of the policy space. The
other two algorithms require time and space only logarithmic in the size of the
policy space when provided access to an oracle for solving classification
problems on the space.
| Miroslav Dud\'ik and Katja Hofmann and Robert E. Schapire and
Aleksandrs Slivkins and Masrour Zoghi | null | 1502.06362 | null | null |
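In the batch setting, the von Neumann winner described above is the maximin strategy of a symmetric zero-sum game and can be computed by linear programming. A minimal sketch (the preference-margin matrix and the LP formulation are the standard game-theoretic construction, not the paper's online algorithms):

```python
import numpy as np
from scipy.optimize import linprog

def von_neumann_winner(M):
    """Maximin strategy for the antisymmetric preference-margin matrix M
    (M[i, j] = P(i beats j) - 1/2): a distribution p with p @ M >= 0,
    i.e. p beats or ties every pure policy."""
    n = M.shape[0]
    c = np.zeros(n + 1)
    c[-1] = -1.0                                   # maximize the game value t
    A_ub = np.hstack([-M.T, np.ones((n, 1))])      # t - (p @ M)_j <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])  # sum(p) = 1
    res = linprog(c, A_ub, b_ub, A_eq, np.array([1.0]),
                  bounds=[(0, None)] * n + [(None, None)])
    return res.x[:n]

# Cyclic (rock-paper-scissors-style) preferences: no Condorcet winner exists,
# but the uniform mixture beats or ties every policy.
M = np.array([[0.0, 0.5, -0.5], [-0.5, 0.0, 0.5], [0.5, -0.5, 0.0]])
print(von_neumann_winner(M))   # approximately [1/3, 1/3, 1/3]
```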
Bandit Convex Optimization: sqrt{T} Regret in One Dimension | cs.LG math.OC | We analyze the minimax regret of the adversarial bandit convex optimization
problem. Focusing on the one-dimensional case, we prove that the minimax regret
is $\widetilde\Theta(\sqrt{T})$ and partially resolve a decade-old open
problem. Our analysis is non-constructive, as we do not present a concrete
algorithm that attains this regret rate. Instead, we use minimax duality to
reduce the problem to a Bayesian setting, where the convex loss functions are
drawn from a worst-case distribution, and then we solve the Bayesian version of
the problem with a variant of Thompson Sampling. Our analysis features a novel
use of convexity, formalized as a "local-to-global" property of convex
functions, that may be of independent interest.
| S\'ebastien Bubeck, Ofer Dekel, Tomer Koren, Yuval Peres | null | 1502.06398 | null | null |
ANN Model to Predict Stock Prices at Stock Exchange Markets | q-fin.ST cs.CE cs.LG cs.NE | Stock exchanges are considered major players in financial sectors of many
countries. Most Stockbrokers, who execute stock trade, use technical,
fundamental or time series analysis in trying to predict stock prices, so as to
advise clients. However, these strategies do not usually guarantee good returns
because they are guided by trends rather than the most likely price. It is therefore
necessary to explore improved methods of prediction.
The research proposes the use of Artificial Neural Network that is
feedforward multi-layer perceptron with error backpropagation and develops a
model of configuration 5:21:21:1 with 80% training data in 130,000 cycles. The
research develops a prototype and tests it on 2008-2012 data from stock markets
e.g. Nairobi Securities Exchange and New York Stock Exchange, where prediction
results show MAPE of between 0.71% and 2.77%. Validation done with Encog and
Neuroph realized comparable results. The model is thus capable of prediction on
typical stock markets.
| B. W. Wanjawa and L. Muchemi | null | 1502.06434 | null | null |
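A hedged sketch of the network configuration described above (5 inputs, two hidden layers of 21 units, one output, 80% training split). The random-walk price series is synthetic stand-in data rather than the paper's exchange data, and scikit-learn's solver differs from plain backpropagation run for a fixed number of cycles:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for a daily closing-price series (random walk).
rng = np.random.default_rng(0)
prices = 100.0 + np.cumsum(rng.normal(size=600))

X = np.column_stack([prices[i:i - 5] for i in range(5)])  # 5 lagged prices
y = prices[5:]                                            # next-day price

# Configuration 5:21:21:1 with an 80% training split, as in the abstract.
split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(21, 21), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mape = 100 * np.mean(np.abs((y[split:] - pred) / y[split:]))
print(f"MAPE: {mape:.2f}%")
```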
Rectified Factor Networks | cs.LG cs.CV cs.NE stat.ML | We propose rectified factor networks (RFNs) to efficiently construct very
sparse, non-linear, high-dimensional representations of the input. RFN models
identify rare and small events in the input, have a low interference between
code units, have a small reconstruction error, and explain the data covariance
structure. RFN learning is a generalized alternating minimization algorithm
derived from the posterior regularization method which enforces non-negative
and normalized posterior means. We prove convergence and correctness of the RFN
learning algorithm. On benchmarks, RFNs are compared to other unsupervised
methods like autoencoders, RBMs, factor analysis, ICA, and PCA. In contrast to
previous sparse coding methods, RFNs yield sparser codes, capture the data's
covariance structure more precisely, and have a significantly smaller
reconstruction error. We test RFNs as pretraining technique for deep networks
on different vision datasets, where RFNs were superior to RBMs and
autoencoders. On gene expression data from two pharmaceutical drug discovery
studies, RFNs detected small and rare gene modules that revealed highly
relevant new biological insights which were so far missed by other unsupervised
methods.
| Djork-Arn\'e Clevert, Andreas Mayr, Thomas Unterthiner, Sepp
Hochreiter | null | 1502.06464 | null | null |
Scalable Variational Inference in Log-supermodular Models | cs.LG stat.ML | We consider the problem of approximate Bayesian inference in log-supermodular
models. These models encompass regular pairwise MRFs with binary variables, but
allow us to capture high-order interactions, which are intractable for existing
approximate inference techniques such as belief propagation, mean field, and
variants. We show that a recently proposed variational approach to inference in
log-supermodular models, called L-FIELD, reduces to the widely-studied minimum norm
problem for submodular minimization. This insight allows us to leverage powerful
existing tools, and hence to solve the variational problem orders of magnitude
more efficiently than previously possible. We then provide another natural
interpretation of L-FIELD, demonstrating that it exactly minimizes a specific
type of R\'enyi divergence measure. This insight sheds light on the nature of
the variational approximations produced by L-FIELD. Furthermore, we show how to
perform parallel inference as message passing in a suitable factor graph at a
linear convergence rate, without having to sum up over all the configurations
of the factor. Finally, we apply our approach to a challenging image
segmentation task. Our experiments confirm scalability of our approach, high
quality of the marginals, and the benefit of incorporating higher-order
potentials.
| Josip Djolonga and Andreas Krause | null | 1502.06531 | null | null |
Optimal Sparse Linear Auto-Encoders and Sparse PCA | cs.LG cs.AI cs.IT math.IT stat.CO stat.ML | Principal components analysis (PCA) is the optimal linear auto-encoder of
data, and it is often used to construct features. Enforcing sparsity on the
principal components can promote better generalization, while improving the
interpretability of the features. We study the problem of constructing optimal
sparse linear auto-encoders. Two natural questions in such a setting are: i)
Given a level of sparsity, what is the best approximation to PCA that can be
achieved? ii) Are there low-order polynomial-time algorithms which can
asymptotically achieve this optimal tradeoff between the sparsity and the
approximation quality?
In this work, we answer both questions by giving efficient low-order
polynomial-time algorithms for constructing asymptotically \emph{optimal}
linear auto-encoders (in particular, sparse features with near-PCA
reconstruction error) and demonstrate the performance of our algorithms on real
data.
| Malik Magdon-Ismail, Christos Boutsidis | null | 1502.06626 | null | null |
On The Identifiability of Mixture Models from Grouped Samples | stat.ML cs.LG math.ST stat.TH | Finite mixture models are statistical models which appear in many problems in
statistics and machine learning. In such models it is assumed that data are
drawn from random probability measures, called mixture components, which are
themselves drawn from a probability measure P over probability measures. When
estimating mixture models, it is common to make assumptions on the mixture
components, such as parametric assumptions. In this paper, we make no
assumption on the mixture components, and instead assume that observations from
the mixture model are grouped, such that observations in the same group are
known to be drawn from the same component. We show that any mixture of m
probability measures can be uniquely identified provided there are 2m-1
observations per group. Moreover we show that, for any m, there exists a
mixture of m probability measures that cannot be uniquely identified when
groups have 2m-2 observations. Our results hold for any sample space with more
than one element.
| Robert A. Vandermeulen and Clayton D. Scott | null | 1502.06644 | null | null |
Reified Context Models | cs.LG | A classic tension exists between exact inference in a simple model and
approximate inference in a complex model. The latter offers expressivity and
thus accuracy, but the former provides coverage of the space, an important
property for confidence estimation and learning with indirect supervision. In
this work, we introduce a new approach, reified context models, to reconcile
this tension. Specifically, we let the amount of context (the arity of the
factors in a graphical model) be chosen "at run-time" by reifying it---that is,
letting this choice itself be a random variable inside the model. Empirically,
we show that our approach obtains expressivity and coverage on three natural
language tasks.
| Jacob Steinhardt and Percy Liang | null | 1502.06665 | null | null |
Learning Fast-Mixing Models for Structured Prediction | cs.LG | Markov Chain Monte Carlo (MCMC) algorithms are often used for approximate
inference inside learning, but their slow mixing can be difficult to diagnose
and the approximations can seriously degrade learning. To alleviate these
issues, we define a new model family using strong Doeblin Markov chains, whose
mixing times can be precisely controlled by a parameter. We also develop an
algorithm to learn such models, which involves maximizing the data likelihood
under the induced stationary distribution of these chains. We show empirical
improvements on two challenging inference tasks.
| Jacob Steinhardt and Percy Liang | null | 1502.06668 | null | null |
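A strong Doeblin chain, the model family used above, can be sketched generically: with a fixed probability the chain restarts from a reference distribution, which caps the mixing time. The kernels below are illustrative placeholders, not the models learned in the paper:

```python
import numpy as np

def doeblin_step(x, base_step, restart, eps, rng):
    """One step of a strong Doeblin chain: with probability eps, restart
    from a fixed reference distribution; otherwise follow the base kernel.
    The mixing time is O(1/eps), controlled explicitly by eps."""
    return restart(rng) if rng.random() < eps else base_step(x, rng)

rng = np.random.default_rng(0)
x = 0.0
for _ in range(1000):
    x = doeblin_step(
        x,
        base_step=lambda x, r: x + r.normal(scale=0.1),  # slow random walk
        restart=lambda r: r.normal(),                    # reference N(0, 1)
        eps=0.05,
        rng=rng,
    )
print(x)
```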
On the Equivalence between Kernel Quadrature Rules and Random Feature
Expansions | cs.LG math.NA stat.ML | We show that kernel-based quadrature rules for computing integrals can be
seen as a special case of random feature expansions for positive definite
kernels, for a particular decomposition that always exists for such kernels. We
provide a theoretical analysis of the number of required samples for a given
approximation error, leading to both upper and lower bounds that are based
solely on the eigenvalues of the associated integral operator and match up to
logarithmic terms. In particular, we show that the upper bound may be obtained
from independent and identically distributed samples from a specific
non-uniform distribution, while the lower bound is valid for any set of points.
Although our results are fairly general, applying them to kernel-based
quadrature recovers the known upper and lower bounds for the special cases of
Sobolev spaces. Moreover, our results extend to the more general problem of
full function approximations (beyond simply computing an integral), with
results in L2- and L$\infty$-norm that match known results for special cases.
Applying our results to random features, we show an improvement of the number
of random features needed to preserve the generalization guarantees for
learning with Lipschitz-continuous losses.
| Francis Bach (LIENS, SIERRA) | null | 1502.06800 | null | null |
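For context on the random-feature side of the result above, here is the standard i.i.d. random Fourier feature construction for the RBF kernel, i.e. the baseline that non-uniform sampling improves on; the bandwidth and sizes are arbitrary choices for illustration:

```python
import numpy as np

def rbf_random_features(X, n_features, gamma, rng):
    """Random Fourier features for k(x, y) = exp(-gamma * ||x - y||^2),
    sampling frequencies from the kernel's Gaussian spectral measure."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Z = rbf_random_features(X, n_features=2000, gamma=0.5, rng=rng)

K_approx = Z @ Z.T
K_exact = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
print(np.abs(K_approx - K_exact).max())  # small approximation error
```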
On the consistency theory of high dimensional variable screening | math.ST cs.LG stat.ML stat.TH | Variable screening is a fast dimension reduction technique for assisting high
dimensional feature selection. As a preselection method, it selects a moderate
size subset of candidate variables for further refining via feature selection
to produce the final model. The performance of variable screening depends on
both computational efficiency and the ability to dramatically reduce the number
of variables without discarding the important ones. When the data dimension $p$
is substantially larger than the sample size $n$, variable screening becomes
crucial as 1) Faster feature selection algorithms are needed; 2) Conditions
guaranteeing selection consistency might fail to hold. This article studies a
class of linear screening methods and establishes consistency theory for this
special class. In particular, we prove the restricted diagonally dominant (RDD)
condition is a necessary and sufficient condition for strong screening
consistency. As concrete examples, we show two screening methods $SIS$ and
$HOLP$ are both strong screening consistent (subject to additional constraints)
with large probability if $n > O((\rho s + \sigma/\tau)^2\log p)$ under random
designs. In addition, we relate the RDD condition to the irrepresentable
condition, and highlight limitations of $SIS$.
| Xiangyu Wang, Chenlei Leng, David B. Dunson | null | 1502.06895 | null | null |
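As a concrete reference point for the screening methods analyzed above, a minimal sketch of SIS (marginal correlation screening) on synthetic data; HOLP instead ranks variables by $X^\top(XX^\top)^{-1}y$, and neither sketch includes the paper's consistency conditions:

```python
import numpy as np

def sis_screen(X, y, d):
    """Sure Independence Screening: rank variables by absolute marginal
    correlation with the response and keep the top d."""
    Xc = (X - X.mean(0)) / X.std(0)
    yc = (y - y.mean()) / y.std()
    score = np.abs(Xc.T @ yc) / len(y)
    return np.argsort(score)[::-1][:d]

rng = np.random.default_rng(0)
n, p = 100, 5000                       # p >> n regime
X = rng.normal(size=(n, p))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=n)
print(sis_screen(X, y, d=20))          # indices 0 and 1 should survive
```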
Deep Sentence Embedding Using Long Short-Term Memory Networks: Analysis
and Application to Information Retrieval | cs.CL cs.IR cs.LG cs.NE | This paper develops a model that addresses sentence embedding, a hot topic in
current natural language processing research, using recurrent neural networks
with Long Short-Term Memory (LSTM) cells. Due to its ability to capture long
term memory, the LSTM-RNN accumulates increasingly richer information as it
goes through the sentence, and when it reaches the last word, the hidden layer
of the network provides a semantic representation of the whole sentence. In
this paper, the LSTM-RNN is trained in a weakly supervised manner on user
click-through data logged by a commercial web search engine. Visualization and
analysis are performed to understand how the embedding process works. The model
is found to automatically attenuate the unimportant words and detects the
salient keywords in the sentence. Furthermore, these detected keywords are
found to automatically activate different cells of the LSTM-RNN, where words
belonging to a similar topic activate the same cell. As a semantic
representation of the sentence, the embedding vector can be used in many
different applications. These automatic keyword detection and topic allocation
abilities enabled by the LSTM-RNN allow the network to perform document
retrieval, a difficult language processing task, where the similarity between
the query and documents can be measured by the distance between their
corresponding sentence embedding vectors computed by the LSTM-RNN. On a web
search task, the LSTM-RNN embedding is shown to significantly outperform
several existing state of the art methods. We emphasize that the proposed model
generates sentence embedding vectors that are specially useful for web document
retrieval tasks. A comparison with a well known general sentence embedding
method, the Paragraph Vector, is performed. The results show that the proposed
method in this paper significantly outperforms it for web document retrieval
task.
| Hamid Palangi, Li Deng, Yelong Shen, Jianfeng Gao, Xiaodong He,
Jianshu Chen, Xinying Song, Rabab Ward | 10.1109/TASLP.2016.2520371 | 1502.06922 | null | null |
Evaluation of Deep Convolutional Nets for Document Image Classification
and Retrieval | cs.CV cs.IR cs.LG cs.NE | This paper presents a new state-of-the-art for document image classification
and retrieval, using features learned by deep convolutional neural networks
(CNNs). In object and scene analysis, deep neural nets are capable of learning
a hierarchical chain of abstraction from pixel inputs to concise and
descriptive representations. The current work explores this capacity in the
realm of document analysis, and confirms that this representation strategy is
superior to a variety of popular hand-crafted alternatives. Experiments also
show that (i) features extracted from CNNs are robust to compression, (ii) CNNs
trained on non-document images transfer well to document analysis tasks, and
(iii) enforcing region-specific feature-learning is unnecessary given
sufficient training data. This work also makes available a new labelled subset
of the IIT-CDIP collection, containing 400,000 document images across 16
categories, useful for training new CNNs for document analysis.
| Adam W. Harley, Alex Ufkes, and Konstantinos G. Derpanis | null | 1502.07058 | null | null |
Strongly Adaptive Online Learning | cs.LG | Strongly adaptive algorithms are algorithms whose performance on every time
interval is close to optimal. We present a reduction that can transform
standard low-regret algorithms to strongly adaptive. As a consequence, we
derive simple, yet efficient, strongly adaptive algorithms for a handful of
problems.
| Amit Daniely, Alon Gonen, Shai Shalev-Shwartz | null | 1502.07073 | null | null |
The VC-Dimension of Similarity Hypotheses Spaces | cs.LG | Given a set $X$ and a function $h:X\longrightarrow\{0,1\}$ which labels each
element of $X$ with either $0$ or $1$, we may define a function $h^{(s)}$ to
measure the similarity of pairs of points in $X$ according to $h$.
Specifically, for $h\in \{0,1\}^X$ we define $h^{(s)}\in \{0,1\}^{X\times X}$
by $h^{(s)}(w,x):= \mathbb{1}[h(w) = h(x)]$. This idea can be extended to a set
of functions, or hypothesis space $\mathcal{H} \subseteq \{0,1\}^X$ by defining
a similarity hypothesis space $\mathcal{H}^{(s)}:=\{h^{(s)}:h\in\mathcal{H}\}$.
We show that $\mathrm{VCdim}(\mathcal{H}^{(s)}) \in
\Theta(\mathrm{VCdim}(\mathcal{H}))$.
| Mark Herbster, Paul Rubenstein, James Townsend | null | 1502.07143 | null | null |
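The lifting from a hypothesis $h$ to its similarity hypothesis $h^{(s)}$ is simple enough to state in a few lines; a minimal sketch with an illustrative parity labeling:

```python
import itertools

def similarity_hypothesis(h):
    """Lift a labeling h: X -> {0,1} to h^(s): X x X -> {0,1}, where
    h^(s)(w, x) = 1 iff h labels w and x identically."""
    return lambda w, x: int(h(w) == h(x))

h = lambda x: int(x % 2 == 0)          # label integers by parity
hs = similarity_hypothesis(h)
for w, x in itertools.product(range(3), repeat=2):
    print(w, x, hs(w, x))
```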
Topic-adjusted visibility metric for scientific articles | stat.ML cs.LG | Measuring the impact of scientific articles is important for evaluating the
research output of individual scientists, academic institutions and journals.
While citations are raw data for constructing impact measures, there exist
biases and potential issues if factors affecting citation patterns are not
properly accounted for. In this work, we address the problem of field variation
and introduce an article level metric useful for evaluating individual
articles' visibility. This measure derives from joint probabilistic modeling of
the content in the articles and the citations amongst them using latent
Dirichlet allocation (LDA) and the mixed membership stochastic blockmodel
(MMSB). Our proposed model provides a visibility metric for individual articles
adjusted for field variation in citation rates, a structural understanding of
citation behavior in different fields, and article recommendations which take
into account article visibility and citation patterns. We develop an efficient
algorithm for model fitting using variational methods. To scale up to large
networks, we develop an online variant using stochastic gradient methods and
case-control likelihood approximation. We apply our methods to the benchmark
KDD Cup 2003 dataset with approximately 30,000 high energy physics papers.
| Linda S. L. Tan, Aik Hui Chan and Tian Zheng | 10.1214/15-AOAS887 | 1502.07190 | null | null |
Online Pairwise Learning Algorithms with Kernels | stat.ML cs.LG | Pairwise learning usually refers to a learning task which involves a loss
function depending on pairs of examples, among which most notable ones include
ranking, metric learning and AUC maximization. In this paper, we study an
online algorithm for pairwise learning with a least-square loss function in an
unconstrained setting of a reproducing kernel Hilbert space (RKHS), which we
refer to as the Online Pairwise lEaRning Algorithm (OPERA). In contrast to
existing works \cite{Kar,Wang} which require that the iterates are restricted
to a bounded domain or the loss function is strongly-convex, OPERA is
associated with a non-strongly convex objective function and learns the target
function in an unconstrained RKHS. Specifically, we establish a general theorem
which guarantees the almost surely convergence for the last iterate of OPERA
without any assumptions on the underlying distribution. Explicit convergence
rates are derived under the condition of polynomially decaying step sizes. We
also establish an interesting property for a family of widely-used kernels in
the setting of pairwise learning and illustrate the above convergence results
using such kernels. Our methodology mainly depends on the characterization of
RKHSs using its associated integral operators and probability inequalities for
random variables with values in a Hilbert space.
| Yiming Ying and Ding-Xuan Zhou | null | 1502.07229 | null | null |
Online Learning with Feedback Graphs: Beyond Bandits | cs.LG | We study a general class of online learning problems where the feedback is
specified by a graph. This class includes online prediction with expert advice
and the multi-armed bandit problem, but also several learning problems where
the online player does not necessarily observe his own loss. We analyze how the
structure of the feedback graph controls the inherent difficulty of the induced
$T$-round learning problem. Specifically, we show that any feedback graph
belongs to one of three classes: strongly observable graphs, weakly observable
graphs, and unobservable graphs. We prove that the first class induces learning
problems with $\widetilde\Theta(\alpha^{1/2} T^{1/2})$ minimax regret, where
$\alpha$ is the independence number of the underlying graph; the second class
induces problems with $\widetilde\Theta(\delta^{1/3}T^{2/3})$ minimax regret,
where $\delta$ is the domination number of a certain portion of the graph; and
the third class induces problems with linear minimax regret. Our results
subsume much of the previous work on learning with feedback graphs and reveal
new connections to partial monitoring games. We also show how the regret is
affected if the graphs are allowed to vary with time.
| Noga Alon, Nicol\`o Cesa-Bianchi, Ofer Dekel, Tomer Koren | null | 1502.07617 | null | null |
ROCKET: Robust Confidence Intervals via Kendall's Tau for
Transelliptical Graphical Models | math.ST cs.LG stat.TH | Undirected graphical models are used extensively in the biological and social
sciences to encode a pattern of conditional independences between variables,
where the absence of an edge between two nodes $a$ and $b$ indicates that the
corresponding two variables $X_a$ and $X_b$ are believed to be conditionally
independent, after controlling for all other measured variables. In the
Gaussian case, conditional independence corresponds to a zero entry in the
precision matrix $\Omega$ (the inverse of the covariance matrix $\Sigma$). Real
data often exhibits heavy tail dependence between variables, which cannot be
captured by the commonly-used Gaussian or nonparanormal (Gaussian copula)
graphical models. In this paper, we study the transelliptical model, an
elliptical copula model that generalizes Gaussian and nonparanormal models to a
broader family of distributions. We propose the ROCKET method, which constructs
an estimator of $\Omega_{ab}$ that we prove to be asymptotically normal under
mild assumptions. Empirically, ROCKET outperforms the nonparanormal and
Gaussian models in terms of achieving accurate inference on simulated data. We
also compare the three methods on real data (daily stock returns), and find
that the ROCKET estimator is the only method whose behavior across subsamples
agrees with the distribution predicted by the theory.
| Rina Foygel Barber and Mladen Kolar | null | 1502.07641 | null | null |
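The rank-based building block behind estimators of this kind is the Kendall's-tau-to-correlation transform for elliptical copulas, sketched below; ROCKET's full inferential procedure for precision-matrix entries is more involved than this snippet:

```python
import numpy as np
from scipy.stats import kendalltau

def transelliptical_corr(x, y):
    """Rank-based correlation estimate for elliptical copula models:
    under such models, correlation = sin(pi/2 * Kendall's tau), which is
    robust to heavy tails and monotone marginal transformations."""
    tau, _ = kendalltau(x, y)
    return np.sin(np.pi / 2 * tau)

rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=5000)
x, y = np.exp(z[:, 0]), z[:, 1] ** 3   # heavy-tailed monotone transforms
print(transelliptical_corr(x, y))      # close to the latent 0.6
```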
Privacy for Free: Posterior Sampling and Stochastic Gradient Monte Carlo | stat.ML cs.LG | We consider the problem of Bayesian learning on sensitive datasets and
present two simple but somewhat surprising results that connect Bayesian
learning to "differential privacy:, a cryptographic approach to protect
individual-level privacy while permiting database-level utility. Specifically,
we show that that under standard assumptions, getting one single sample from a
posterior distribution is differentially private "for free". We will see that
estimator is statistically consistent, near optimal and computationally
tractable whenever the Bayesian model of interest is consistent, optimal and
tractable. Similarly but separately, we show that a recent line of works that
use stochastic gradient for Hybrid Monte Carlo (HMC) sampling also preserve
differentially privacy with minor or no modifications of the algorithmic
procedure at all, these observations lead to an "anytime" algorithm for
Bayesian learning under privacy constraint. We demonstrate that it performs
much better than the state-of-the-art differential private methods on synthetic
and real datasets.
| Yu-Xiang Wang, Stephen E. Fienberg, Alex Smola | null | 1502.07645 | null | null |
A Chaining Algorithm for Online Nonparametric Regression | stat.ML cs.LG | We consider the problem of online nonparametric regression with arbitrary
deterministic sequences. Using ideas from the chaining technique, we design an
algorithm that achieves a Dudley-type regret bound similar to the one obtained
in a non-constructive fashion by Rakhlin and Sridharan (2014). Our regret bound
is expressed in terms of the metric entropy in the sup norm, which yields
optimal guarantees when the metric and sequential entropies are of the same
order of magnitude. In particular our algorithm is the first one that achieves
optimal rates for online regression over H{\"o}lder balls. In addition we show
for this example how to adapt our chaining algorithm to get a reasonable
computational efficiency with similar regret guarantees (up to a log factor).
| Pierre Gaillard (GREGHEC, EDF R\&D), S\'ebastien Gerchinovitz (IMT,
UPS) | null | 1502.07697 | null | null |
Efficient Geometric-based Computation of the String Subsequence Kernel | cs.LG cs.CG | Kernel methods are powerful tools in machine learning. They have to be
computationally efficient. In this paper, we present a novel Geometric-based
approach to compute efficiently the string subsequence kernel (SSK). Our main
idea is that the SSK computation reduces to a range query problem. We start by
constructing a match list $L(s,t)=\{(i,j):s_{i}=t_{j}\}$, where $s$ and
$t$ are the strings to be compared; such a match list contains only the
required data that contribute to the result. To compute the SSK efficiently, we extended
the layered range tree data structure to a layered range sum tree, a
range-aggregation data structure. The whole process takes $O(p|L|\log|L|)$
time and $O(|L|\log|L|)$ space, where $|L|$ is the size of the match list and
$p$ is the length of the SSK. We present empirical evaluations of our approach
against the dynamic and the sparse programming approaches both on synthetically
generated data and on newswire article data. Such experiments show the
efficiency of our approach for large alphabet sizes, except for very short
strings. Moreover, compared to the sparse dynamic approach, the proposed
approach is clearly superior for long strings.
| Slimane Bellaouar, Hadda Cherroun, and Djelloul Ziadi | null | 1502.07776 | null | null |
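The match-list construction described above is straightforward; a minimal sketch (the layered range-sum tree that then evaluates the kernel in $O(p|L|\log|L|)$ time is beyond this snippet):

```python
from collections import defaultdict

def match_list(s, t):
    """Build L(s, t) = {(i, j) : s_i = t_j}, the only index pairs that
    can contribute to the string subsequence kernel."""
    pos = defaultdict(list)
    for j, c in enumerate(t):
        pos[c].append(j)
    return [(i, j) for i, c in enumerate(s) for j in pos[c]]

L = match_list("science", "sentence")
print(len(L), L[:5])
```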
Minimum message length estimation of mixtures of multivariate Gaussian
and von Mises-Fisher distributions | cs.LG stat.ML | Mixture modelling involves explaining some observed evidence using a
combination of probability distributions. The crux of the problem is the
inference of an optimal number of mixture components and their corresponding
parameters. This paper discusses unsupervised learning of mixture models using
the Bayesian Minimum Message Length (MML) criterion. To demonstrate the
effectiveness of search and inference of mixture parameters using the proposed
approach, we select two key probability distributions, each handling
fundamentally different types of data: the multivariate Gaussian distribution
to address mixture modelling of data distributed in Euclidean space, and the
multivariate von Mises-Fisher (vMF) distribution to address mixture modelling
of directional data distributed on a unit hypersphere. The key contributions of
this paper, in addition to the general search and inference methodology,
include the derivation of MML expressions for encoding the data using
multivariate Gaussian and von Mises-Fisher distributions, and the analytical
derivation of the MML estimates of the parameters of the two distributions. Our
approach is tested on simulated and real world data sets. For instance, we
infer vMF mixtures that concisely explain experimentally determined
three-dimensional protein conformations, providing an effective null model
description of protein structures that is central to many inference problems in
structural bioinformatics. The experimental results demonstrate that the
performance of our proposed search and inference method along with the encoding
schemes improve on the state of the art mixture modelling techniques.
| Parthan Kasarapu and Lloyd Allison | null | 1502.07813 | null | null |
Non-stochastic Best Arm Identification and Hyperparameter Optimization | cs.LG stat.ML | Motivated by the task of hyperparameter optimization, we introduce the
non-stochastic best-arm identification problem. Within the multi-armed bandit
literature, the cumulative regret objective enjoys algorithms and analyses for
both the non-stochastic and stochastic settings while to the best of our
knowledge, the best-arm identification framework has only been considered in
the stochastic setting. We introduce the non-stochastic setting under this
framework, identify a known algorithm that is well-suited for this setting, and
analyze its behavior. Next, by leveraging the iterative nature of standard
machine learning algorithms, we cast hyperparameter optimization as an instance
of non-stochastic best-arm identification, and empirically evaluate our
proposed algorithm on this task. Our empirical results show that, by allocating
more resources to promising hyperparameter settings, we typically achieve
comparable test accuracies an order of magnitude faster than baseline methods.
| Kevin Jamieson, Ameet Talwalkar | null | 1502.07943 | null | null |
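The algorithm identified above for the non-stochastic setting is, in essence, successive halving. A hedged sketch with a toy loss surface (the budget-doubling schedule here is illustrative, not the paper's exact allocation):

```python
import numpy as np

def successive_halving(configs, eval_loss, budget):
    """Non-stochastic best-arm identification for hyperparameter search:
    repeatedly train all surviving configurations for more iterations,
    then discard the worse half, focusing resources on promising arms."""
    rung = 1
    while len(configs) > 1:
        rung *= 2                                   # double per-arm budget
        losses = [eval_loss(c, rung * budget) for c in configs]
        order = np.argsort(losses)
        configs = [configs[i] for i in order[: max(1, len(configs) // 2)]]
    return configs[0]

# Toy example: each "configuration" is a learning rate whose validation
# loss improves with iterations but plateaus at a rate-dependent level.
eval_loss = lambda lr, iters: abs(np.log10(lr) + 2) + 1.0 / iters
best = successive_halving([10 ** -e for e in range(5)], eval_loss, budget=10)
print(best)   # ~1e-2, which minimizes the plateau term
```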
Error-Correcting Factorization | cs.CV cs.LG | Error Correcting Output Codes (ECOC) is a successful technique in multi-class
classification, which is a core problem in Pattern Recognition and Machine
Learning. A major advantage of ECOC over other methods is that the multi-class
problem is decoupled into a set of binary problems that are solved
independently. However, literature defines a general error-correcting
capability for ECOCs without analyzing how it distributes among classes,
hindering a deeper analysis of pair-wise error-correction. To address these
limitations this paper proposes an Error-Correcting Factorization (ECF) method,
our contribution is fourfold: (I) We propose a novel representation of the
error-correction capability, called the design matrix, that enables us to build
an ECOC on the basis of allocating correction to pairs of classes. (II) We
derive the optimal code length of an ECOC using rank properties of the design
matrix. (III) ECF is formulated as a discrete optimization problem, and a
relaxed solution is found using an efficient constrained block coordinate
descent approach. (IV) Enabled by the flexibility introduced with the design
matrix, we propose to allocate the error-correction to classes that are prone to
confusion. Experimental results in several databases show that when allocating
the error-correction to confusable classes ECF outperforms state-of-the-art
approaches.
| Miguel Angel Bautista, Oriol Pujol, Fernando de la Torre and Sergio
Escalera | null | 1502.07976 | null | null |
Second-order Quantile Methods for Experts and Combinatorial Games | cs.LG stat.ML | We aim to design strategies for sequential decision making that adjust to the
difficulty of the learning problem. We study this question both in the setting
of prediction with expert advice, and for more general combinatorial decision
tasks. We are not satisfied with just guaranteeing minimax regret rates, but we
want our algorithms to perform significantly better on easy data. Two popular
ways to formalize such adaptivity are second-order regret bounds and quantile
bounds. The underlying notions of 'easy data', which may be paraphrased as "the
learning problem has small variance" and "multiple decisions are useful", are
synergetic. But even though there are sophisticated algorithms that exploit one
of the two, no existing algorithm is able to adapt to both.
In this paper we outline a new method for obtaining such adaptive algorithms,
based on a potential function that aggregates a range of learning rates (which
are essential tuning parameters). By choosing the right prior we construct
efficient algorithms and show that they reap both benefits by proving the first
bounds that are both second-order and incorporate quantiles.
| Wouter M. Koolen and Tim van Erven | null | 1502.08009 | null | null |
Describing Videos by Exploiting Temporal Structure | stat.ML cs.AI cs.CL cs.CV cs.LG | Recent progress in using recurrent neural networks (RNNs) for image
description has motivated the exploration of their application for video
description. However, while images are static, working with videos requires
modeling their dynamic temporal structure and then properly integrating that
information into a natural language description. In this context, we propose an
approach that successfully takes into account both the local and global
temporal structure of videos to produce descriptions. First, our approach
incorporates a spatio-temporal 3-D convolutional neural network (3-D CNN)
representation of the short temporal dynamics. The 3-D CNN representation is
trained on video action recognition tasks, so as to produce a representation
that is tuned to human motion and behavior. Second, we propose a temporal
attention mechanism that allows us to go beyond local temporal modeling and learns
to automatically select the most relevant temporal segments given the
text-generating RNN. Our approach exceeds the current state-of-the-art for both
BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on
a new, larger and more challenging dataset of paired video and natural language
descriptions.
| Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal,
Hugo Larochelle, Aaron Courville | null | 1502.08029 | null | null |
Author Name Disambiguation by Using Deep Neural Network | cs.DL cs.CL cs.LG | Author name ambiguity decreases the quality and reliability of information
retrieved from digital libraries. Existing methods have tried to solve this
problem by predefining a feature set based on expert's knowledge for a specific
dataset. In this paper, we propose a new approach which uses deep neural
network to learn features automatically from data. Additionally, we propose the
general system architecture for author name disambiguation on any dataset. In
this research, we evaluate the proposed method on a dataset containing
Vietnamese author names. The results show that this method significantly
outperforms other methods that use predefined feature set. The proposed method
achieves 99.31% in terms of accuracy. The prediction error rate decreases from
1.83% to 0.69%, i.e., it drops by 1.14 percentage points, a 62.3% relative
reduction compared with other methods that use a predefined feature set (Table 3).
| Hung Nghiep Tran, Tin Huynh, Tien Do | 10.1007/978-3-319-05476-6_13 | 1502.08030 | null | null |
Probabilistic Zero-shot Classification with Semantic Rankings | cs.LG cs.AI cs.CV | In this paper we propose a non-metric ranking-based representation of
semantic similarity that allows natural aggregation of semantic information
from multiple heterogeneous sources. We apply the ranking-based representation
to zero-shot learning problems, and present deterministic and probabilistic
zero-shot classifiers which can be built from pre-trained classifiers without
retraining. We demonstrate their advantages on two large real-world image
datasets. In particular, we show that aggregating different sources of semantic
information, including crowd-sourcing, leads to more accurate classification.
| Jihun Hamm, Mikhail Belkin | null | 1502.08039 | null | null |
Stochastic Dual Coordinate Ascent with Adaptive Probabilities | math.OC cs.LG stat.ML | This paper introduces AdaSDCA: an adaptive variant of stochastic dual
coordinate ascent (SDCA) for solving the regularized empirical risk
minimization problems. Our modification consists in allowing the method to
adaptively change the probability distribution over the dual variables
throughout the iterative process. AdaSDCA achieves provably better complexity
bound than SDCA with the best fixed probability distribution, known as
importance sampling. However, it is of a theoretical character as it is
expensive to implement. We also propose AdaSDCA+: a practical variant which in
our experiments outperforms existing non-adaptive methods.
| Dominik Csiba, Zheng Qu, Peter Richt\'arik | null | 1502.08053 | null | null |
Influence Maximization with Bandits | cs.SI cs.LG stat.ML | We consider the problem of \emph{influence maximization}, the problem of
maximizing the number of people that become aware of a product by finding the
`best' set of `seed' users to expose the product to. Most prior work on this
topic assumes that we know the probability of each user influencing each other
user, or we have data that lets us estimate these influences. However, this
information is typically not initially available or is difficult to obtain. To
avoid this assumption, we adopt a combinatorial multi-armed bandit paradigm
that estimates the influence probabilities as we sequentially try different
seed sets. We establish bounds on the performance of this procedure under the
existing edge-level feedback as well as a novel and more realistic node-level
feedback. Beyond our theoretical results, we describe a practical
implementation and experimentally demonstrate its efficiency and effectiveness
on four real datasets.
| Sharan Vaswani, Laks.V.S. Lakshmanan and Mark Schmidt | null | 1503.00024 | null | null |
Norm-Based Capacity Control in Neural Networks | cs.LG cs.AI cs.NE stat.ML | We investigate the capacity, convexity and characterization of a general
family of norm-constrained feed-forward networks.
| Behnam Neyshabur, Ryota Tomioka, Nathan Srebro | null | 1503.00036 | null | null |
Sequential Feature Explanations for Anomaly Detection | cs.AI cs.LG stat.ML | In many applications, an anomaly detection system presents the most anomalous
data instance to a human analyst, who then must determine whether the instance
is truly of interest (e.g. a threat in a security setting). Unfortunately, most
anomaly detectors provide no explanation about why an instance was considered
anomalous, leaving the analyst with no guidance about where to begin the
investigation. To address this issue, we study the problems of computing and
evaluating sequential feature explanations (SFEs) for anomaly detectors. An SFE
of an anomaly is a sequence of features, which are presented to the analyst one
at a time (in order) until the information contained in the highlighted
features is enough for the analyst to make a confident judgement about the
anomaly. Since analyst effort is related to the amount of information that they
consider in an investigation, an explanation's quality is related to the number
of features that must be revealed to attain confidence. One of our main
contributions is to present a novel framework for large scale quantitative
evaluations of SFEs, where the quality measure is based on analyst effort. To
do this we construct anomaly detection benchmarks from real data sets along
with artificial experts that can be simulated for evaluation. Our second
contribution is to evaluate several novel explanation approaches within the
framework and on traditional anomaly detection benchmarks, offering several
insights into the approaches.
| Md Amran Siddiqui, Alan Fern, Thomas G. Dietterich and Weng-Keen Wong | null | 1503.00038 | null | null |
Improved Semantic Representations From Tree-Structured Long Short-Term
Memory Networks | cs.CL cs.AI cs.LG | Because of their superior ability to preserve sequence information over time,
Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with
a more complex computational unit, have obtained strong results on a variety of
sequence modeling tasks. The only underlying LSTM structure that has been
explored so far is a linear chain. However, natural language exhibits syntactic
properties that would naturally combine words to phrases. We introduce the
Tree-LSTM, a generalization of LSTMs to tree-structured network topologies.
Tree-LSTMs outperform all existing systems and strong LSTM baselines on two
tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task
1) and sentiment classification (Stanford Sentiment Treebank).
| Kai Sheng Tai, Richard Socher, Christopher D. Manning | null | 1503.00075 | null | null |
Analysis of Crowdsourced Sampling Strategies for HodgeRank with Sparse
Random Graphs | stat.ML cs.LG | Crowdsourcing platforms are now extensively used for conducting subjective
pairwise comparison studies. In this setting, a pairwise comparison dataset is
typically gathered via random sampling, either \emph{with} or \emph{without}
replacement. In this paper, we use tools from random graph theory to analyze
these two random sampling methods for the HodgeRank estimator. Using the
Fiedler value of the graph as a measure of estimator stability
(informativeness), we provide a new estimate of the Fiedler value for these two
random graph models. In the asymptotic limit as the number of vertices tends to
infinity, we prove the validity of the estimate. Based on our findings, for a
small number of items to be compared, we recommend a two-stage sampling
strategy where a greedy sampling method is used initially and random sampling
\emph{without} replacement is used in the second stage. When a large number of
items is to be compared, we recommend random sampling with replacement as this
is computationally inexpensive and trivially parallelizable. Experiments on
synthetic and real-world datasets support our analysis.
| Braxton Osting and Jiechao Xiong and Qianqian Xu and Yuan Yao | 10.1016/j.acha.2016.03.007 | 1503.00164 | null | null |
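The stability measure used above is the Fiedler value (algebraic connectivity) of the comparison graph, computed as below; the two example graphs are illustrative:

```python
import numpy as np

def fiedler_value(A):
    """Algebraic connectivity: the second-smallest eigenvalue of the
    graph Laplacian L = D - A, used above to gauge HodgeRank stability."""
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))[1]

# Comparison graphs over 4 items: a path graph vs. the complete graph.
path = np.array([[0, 1, 0, 0], [1, 0, 1, 0],
                 [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
complete = np.ones((4, 4)) - np.eye(4)
print(fiedler_value(path), fiedler_value(complete))  # complete is larger
```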
23-bit Metaknowledge Template Towards Big Data Knowledge Discovery and
Management | cs.DB cs.AI cs.IR cs.LG | The global influence of Big Data is not only growing but seemingly endless.
The trend is leaning towards knowledge that is attained easily and quickly from
massive pools of Big Data. Today we are living in the technological world that
Dr. Usama Fayyad and his distinguished research fellows discussed in the
introductory explanations of Knowledge Discovery in Databases (KDD) predicted
nearly two decades ago. Indeed, they were precise in their outlook on Big Data
analytics. In fact, the continued improvement of the interoperability of
machine learning, statistics, database building and querying fused to create
this increasingly popular science: Data Mining and Knowledge Discovery. The
next generation computational theories are geared towards helping to extract
insightful knowledge from even larger volumes of data at higher rates of speed.
As the trend increases in popularity, the need for a highly adaptive solution
for knowledge discovery will be necessary. In this research paper, we
introduce the investigation and development of a 23-bit-question Metaknowledge
template for Big Data processing and clustering purposes. This research aims to
demonstrate the construction of this methodology and to establish its validity
and the benefits it brings to Knowledge Discovery from Big Data.
| Nima Bari, Roman Vichr, Kamran Kowsari, Simon Y. Berkovich | 10.1109/DSAA.2014.7058121 | 1503.00244 | null | null |
An Online Convex Optimization Approach to Blackwell's Approachability | cs.GT cs.LG | The notion of approachability in repeated games with vector payoffs was
introduced by Blackwell in the 1950s, along with geometric conditions for
approachability and corresponding strategies that rely on computing {\em
steering directions} as projections from the current average payoff vector to
the (convex) target set. Recently, Abernethy, Batlett and Hazan (2011) proposed
a class of approachability algorithms that rely on the no-regret properties of
Online Linear Programming for computing a suitable sequence of steering
directions. This is first carried out for target sets that are convex cones,
and then generalized to any convex set by embedding it in a higher-dimensional
convex cone. In this paper we present a more direct formulation that relies on
the support function of the set, along with suitable Online Convex Optimization
algorithms, which leads to a general class of approachability algorithms. We
further show that Blackwell's original algorithm and its convergence follow as
a special case.
| Nahum Shimkin | null | 1503.00255 | null | null |
Contrastive Pessimistic Likelihood Estimation for Semi-Supervised
Classification | stat.ML cs.LG stat.ME | Improvement guarantees for semi-supervised classifiers can currently only be
given under restrictive conditions on the data. We propose a general way to
perform semi-supervised parameter estimation for likelihood-based classifiers
for which, on the full training set, the estimates are never worse than the
supervised solution in terms of the log-likelihood. We argue, moreover, that we
may expect these solutions to really improve upon the supervised classifier in
particular cases. In a worked-out example for LDA, we take it one step further
and essentially prove that its semi-supervised version is strictly better than
its supervised counterpart. The two new concepts that form the core of our
estimation principle are contrast and pessimism. The former refers to the fact
that our objective function takes the supervised estimates into account,
enabling the semi-supervised solution to explicitly control the potential
improvements over this estimate. The latter refers to the fact that our
estimates are conservative and therefore resilient to whatever form the true
labeling of the unlabeled data takes on. Experiments demonstrate the
improvements in terms of both the log-likelihood and the classification error
rate on independent test sets.
| Marco Loog | null | 1503.00269 | null | null |
Sparse Approximation of a Kernel Mean | stat.ML cs.LG | Kernel means are frequently used to represent probability distributions in
machine learning problems. In particular, the well known kernel density
estimator and the kernel mean embedding both have the form of a kernel mean.
Unfortunately, kernel means are faced with scalability issues. A single point
evaluation of the kernel density estimator, for example, requires a computation
time linear in the training sample size. To address this challenge, we present
a method to efficiently construct a sparse approximation of a kernel mean. We
do so by first establishing an incoherence-based bound on the approximation
error, and then noticing that, for the case of radial kernels, the bound can be
minimized by solving the $k$-center problem. The outcome is a linear time
construction of a sparse kernel mean, which also lends itself naturally to an
automatic sparsity selection scheme. We show the computational gains of our
method by looking at three problems involving kernel means: Euclidean embedding
of distributions, class proportion estimation, and clustering using the
mean-shift algorithm.
| E. Cruz Cort\'es, C. Scott | null | 1503.00323 | null | null |
JUMP-Means: Small-Variance Asymptotics for Markov Jump Processes | stat.ML cs.LG | Markov jump processes (MJPs) are used to model a wide range of phenomena from
disease progression to RNA path folding. However, maximum likelihood estimation
of parametric models leads to degenerate trajectories and inferential
performance is poor in nonparametric models. We take a small-variance
asymptotics (SVA) approach to overcome these limitations. We derive the
small-variance asymptotics for parametric and nonparametric MJPs for both
directly observed and hidden state models. In the parametric case we obtain a
novel objective function which leads to non-degenerate trajectories. To derive
the nonparametric version we introduce the gamma-gamma process, a novel
extension to the gamma-exponential process. We propose algorithms for each of
these formulations, which we call \emph{JUMP-means}. Our experiments
demonstrate that JUMP-means is competitive with or outperforms widely used MJP
inference approaches in terms of both speed and reconstruction accuracy.
| Jonathan H. Huggins, Karthik Narasimhan, Ardavan Saeedi, Vikash K.
Mansinghka | null | 1503.00332 | null | null |
Learning Mixtures of Gaussians in High Dimensions | cs.LG | Efficiently learning mixture of Gaussians is a fundamental problem in
statistics and learning theory. Given samples coming from a random one out of $k$
Gaussian distributions in $\mathbb{R}^n$, the learning problem asks to estimate the means
and the covariance matrices of these Gaussians. This learning problem arises in
many areas ranging from the natural sciences to the social sciences, and has
also found many machine learning applications. Unfortunately, learning mixture
of Gaussians is an information theoretically hard problem: in order to learn
the parameters up to a reasonable accuracy, the number of samples required is
exponential in the number of Gaussian components in the worst case. In this
work, we show that provided we are in high enough dimensions, the class of
Gaussian mixtures is learnable in its most general form under a smoothed
analysis framework, where the parameters are randomly perturbed from an
adversarial starting point. In particular, given samples from a mixture of
Gaussians with randomly perturbed parameters, when $n > \Omega(k^2)$, we give
an algorithm that learns the parameters with polynomial running time and using
a polynomial number of samples. The central algorithmic ideas consist of new ways
to decompose the moment tensor of the Gaussian mixture by exploiting its
structural properties. The symmetries of this tensor are derived from the
combinatorial structure of higher order moments of Gaussian distributions
(sometimes referred to as Isserlis' theorem or Wick's theorem). We also develop
new tools for bounding smallest singular values of structured random matrices,
which could be useful in other smoothed analysis settings.
| Rong Ge, Qingqing Huang, Sham M. Kakade | null | 1503.00424 | null | null |
Utility-Theoretic Ranking for Semi-Automated Text Classification | cs.LG | \emph{Semi-Automated Text Classification} (SATC) may be defined as the task
of ranking a set $\mathcal{D}$ of automatically labelled textual documents in
such a way that, if a human annotator validates (i.e., inspects and corrects
where appropriate) the documents in a top-ranked portion of $\mathcal{D}$ with
the goal of increasing the overall labelling accuracy of $\mathcal{D}$, the
expected increase is maximized. An obvious SATC strategy is to rank
$\mathcal{D}$ so that the documents that the classifier has labelled with the
lowest confidence are top-ranked. In this work we show that this strategy is
suboptimal. We develop new utility-theoretic ranking methods based on the
notion of \emph{validation gain}, defined as the improvement in classification
effectiveness that would derive by validating a given automatically labelled
document. We also propose a new effectiveness measure for SATC-oriented ranking
methods, based on the expected reduction in classification error brought about
by partially validating a list generated by a given ranking method. We report
the results of experiments showing that, with respect to the baseline method
above, and according to the proposed measure, our utility-theoretic ranking
methods can achieve substantially higher expected reductions in classification
error.
| Giacomo Berardi, Andrea Esuli, Fabrizio Sebastiani | 10.1145/2742548 | 1503.00491 | null | null |
Matrix Product State for Feature Extraction of Higher-Order Tensors | cs.CV cs.DS cs.LG | This paper introduces matrix product state (MPS) decomposition as a
computational tool for extracting features of multidimensional data represented
by higher-order tensors. Regardless of tensor order, MPS extracts the relevant
features into a so-called core tensor of maximum order three, which can be used
for classification. Mainly based on a successive sequence of singular value
decompositions (SVD), MPS is quite simple to implement without any recursive
procedure needed for optimizing local tensors. Thus, it leads to substantial
computational savings compared to other tensor feature extraction methods such
as higher-order orthogonal iteration (HOOI) underlying the Tucker decomposition
(TD). Benchmark results show that MPS can significantly reduce the feature
space of data while achieving better classification performance compared to
HOOI.
| Johann A. Bengua, Ho N. Phien, Hoang D. Tuan and Minh N. Do | null | 1503.00516 | null | null |
Recovering PCA from Hybrid-$(\ell_1,\ell_2)$ Sparse Sampling of Data
Elements | cs.IT cs.LG math.IT stat.ML | This paper addresses how well we can recover a data matrix when only given a
few of its elements. We present a randomized algorithm that element-wise
sparsifies the data, retaining only a few of its elements. Our new algorithm
independently samples the data using sampling probabilities that depend on both
the squares ($\ell_2$ sampling) and absolute values ($\ell_1$ sampling) of the
entries. We prove that the hybrid algorithm recovers a near-PCA reconstruction
of the data from a sublinear sample-size: hybrid-($\ell_1,\ell_2$) inherits the
$\ell_2$-ability to sample the important elements as well as the regularization
properties of $\ell_1$ sampling, and gives strictly better performance than
either $\ell_1$ or $\ell_2$ on their own. We also give a one-pass version of
our algorithm and show experiments to corroborate the theory.
| Abhisek Kundu, Petros Drineas, Malik Magdon-Ismail | null | 1503.00547 | null | null |
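A hedged sketch of hybrid element-wise sampling in the spirit of the abstract above: probabilities mix $\ell_1$ and $\ell_2$ importance, and sampled entries are rescaled for unbiasedness. The mixing weight and probability form are illustrative assumptions, not the paper's derived choices:

```python
import numpy as np

def hybrid_sample(A, s, alpha, rng):
    """Element-wise sparsification with probabilities mixing l1 (|A_ij|)
    and l2 (A_ij^2) importance; kept entries are rescaled so that the
    sparse matrix is an unbiased estimate of A."""
    p = alpha * np.abs(A) / np.abs(A).sum() + (1 - alpha) * A**2 / (A**2).sum()
    q = np.minimum(1.0, s * p)          # inclusion probabilities, ~s samples
    keep = rng.random(A.shape) < q
    S = np.zeros_like(A)
    S[keep] = A[keep] / q[keep]         # rescale so E[S] = A
    return S

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 200))
S = hybrid_sample(A, s=8000, alpha=0.5, rng=rng)
print((S != 0).mean(), np.linalg.norm(A - S, 2) / np.linalg.norm(A, 2))
```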
Personalising Mobile Advertising Based on Users Installed Apps | cs.CY cs.LG | Mobile advertising is a billion pound industry that is rapidly expanding. The
success of an advert is measured based on how users interact with it. In this
paper we investigate whether the application of unsupervised learning and
association rule mining could be used to enable personalised targeting of
mobile adverts with the aim of increasing the interaction rate. Over May and
June 2014 we recorded advert interactions such as tapping the advert or
watching the whole advert video along with the set of apps a user has installed
at the time of the interaction. Based on the apps that the users have installed
we applied k-means clustering to profile the users into one of ten classes. Due
to the large number of apps considered, we implemented dimension reduction to
reduce the app feature space by mapping the apps to their iTunes category and
clustered users based on the percentage of their apps that correspond to each
iTunes app category. The clustering was externally validated by investigating
differences between the way the ten profiles interact with the various advert
genres (lifestyle, finance and entertainment adverts). In addition, association
rule mining was performed to determine whether the time of day at which the
advert is served and the number of apps a user has installed make certain
profiles more likely to interact with the advert genres. The results showed
clear differences in the way the profiles interact with the different advert
genres, and they suggest that profile-based targeting would increase the
frequency with which users interact with adverts.
| Jenna Reps, Uwe Aickelin, Jonathan Garibaldi, Chris Damski | 10.1109/ICDMW.2014.90 | 1503.00587 | null | null |
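To make the pipeline concrete, here is a toy sketch of the profiling step; the category list, the app-to-category mapping, and the user data are all hypothetical placeholders, and k is capped below the paper's ten only so the toy data fits:

    from collections import Counter
    import numpy as np
    from sklearn.cluster import KMeans

    CATEGORIES = ["Games", "Finance", "Lifestyle"]        # assumed category set
    APP_TO_CAT = {"chess": "Games", "bank": "Finance"}    # hypothetical mapping

    def user_vector(apps):
        # Fraction of a user's installed apps falling in each category.
        counts = Counter(APP_TO_CAT.get(a, "Lifestyle") for a in apps)
        return np.array([counts[c] / len(apps) for c in CATEGORIES])

    users = [["chess", "bank"], ["bank"], ["chess", "chess"],
             ["bank", "bank", "chess"]]
    X = np.vstack([user_vector(u) for u in users])
    k = min(10, len(users))                               # paper uses k = 10
    profiles = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)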
An $\mathcal{O}(n\log n)$ projection operator for weighted $\ell_1$-norm
regularization with sum constraint | cs.LG | We provide a simple and efficient algorithm for the projection operator for
weighted $\ell_1$-norm regularization subject to a sum constraint, together
with an elementary proof. The implementation of the proposed algorithm can be
downloaded from the author's homepage.
| Weiran Wang | null | 1503.00600 | null | null |
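The abstract does not spell out the objective, so the sketch below assumes the natural form $\min_x \tfrac{1}{2}\|x-y\|^2 + \sum_i w_i|x_i|$ subject to $\sum_i x_i = s$, and solves it by bisection on the Lagrange multiplier rather than by the paper's exact $\mathcal{O}(n\log n)$ routine:

    import numpy as np

    def prox_weighted_l1_sum(y, w, s, iters=60):
        # KKT: x_i(lam) = soft-threshold(y_i - lam, w_i); sum x_i(lam) is
        # nonincreasing in lam, so bisect until sum(x) = s.
        def x_of(lam):
            z = y - lam
            return np.sign(z) * np.maximum(np.abs(z) - w, 0.0)
        lo = y.min() - w.max() - abs(s)
        hi = y.max() + w.max() + abs(s)
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if x_of(mid).sum() > s else (lo, mid)
        return x_of(0.5 * (lo + hi))

At the lower bracket every coordinate is active, at the upper bracket all are zeroed, so the bisection is guaranteed to converge to the multiplier.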
Unregularized Online Learning Algorithms with General Loss Functions | cs.LG stat.ML | In this paper, we consider unregularized online learning algorithms in a
reproducing kernel Hilbert space (RKHS). Firstly, we derive explicit
convergence rates of the unregularized online learning algorithms for
classification associated with a general gamma-activating loss (see Definition
1 in the paper). Our results extend and refine the results in Ying and Pontil
(2008) for the least-square loss and the recent result in Bach and Moulines
(2011) for the loss function with a Lipschitz-continuous gradient. Moreover, we
establish a very general condition on the step sizes which guarantees the
convergence of the last iterate of such algorithms. Secondly, we establish, for
the first time, the convergence of the unregularized pairwise learning
algorithm with a general loss function and derive explicit rates under the
assumption of polynomially decaying step sizes. Concrete examples are used to
illustrate our main results. The main techniques are tools from convex
analysis, refined inequalities of Gaussian averages, and an induction approach.
| Yiming Ying and Ding-Xuan Zhou | null | 1503.00623 | null | null |
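For concreteness, a minimal sketch of an unregularized online kernel algorithm with polynomially decaying step sizes eta_t = eta0 / t**theta, instantiated for the least-squares loss (the paper's gamma-activating class is broader); the Gaussian kernel and step parameters are illustrative:

    import numpy as np

    def gauss_k(x, z, sigma=1.0):
        return np.exp(-np.sum((np.asarray(x) - np.asarray(z)) ** 2)
                      / (2 * sigma**2))

    def online_kernel_ls(stream, eta0=0.5, theta=0.5):
        # f_t is kept as a kernel expansion: f_t(.) = sum_j c_j K(x_j, .).
        pts, coefs = [], []
        for t, (x, y) in enumerate(stream, start=1):
            fx = sum(c * gauss_k(p, x) for p, c in zip(pts, coefs))
            yield fx                              # prediction before update
            eta = eta0 / t**theta                 # decaying step size
            pts.append(x)
            coefs.append(-eta * (fx - y))         # GD step for 0.5*(f(x)-y)^2

Note the absence of any regularization term: the only control on the iterates comes from the step-size schedule, which is exactly the regime the convergence results above address.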
A review of mean-shift algorithms for clustering | cs.LG cs.CV stat.ML | A natural way to characterize the cluster structure of a dataset is by
finding regions containing a high density of data. This can be done in a
nonparametric way with a kernel density estimate, whose modes and hence
clusters can be found using mean-shift algorithms. We describe the theory and
practice behind clustering based on kernel density estimates and mean-shift
algorithms. We discuss the blurring and non-blurring versions of mean-shift;
theoretical results about mean-shift algorithms and Gaussian mixtures;
relations with scale-space theory, spectral clustering and other algorithms;
extensions to tracking, to manifold and graph data, and to manifold denoising;
K-modes and Laplacian K-modes algorithms; acceleration strategies for large
datasets; and applications to image segmentation, manifold denoising and
multivalued regression.
| Miguel \'A. Carreira-Perpi\~n\'an | null | 1503.00687 | null | null |
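As a pointer to what the (non-blurring) iteration looks like, here is a compact Gaussian-kernel sketch; the bandwidth h and the convergence tolerance are free parameters:

    import numpy as np

    def mean_shift_mode(x, data, h=1.0, iters=100, tol=1e-6):
        # Iterate x <- kernel-weighted mean of the data until it settles
        # at a mode of the Gaussian kernel density estimate.
        for _ in range(iters):
            w = np.exp(-np.sum((data - x) ** 2, axis=1) / (2 * h**2))
            x_new = w @ data / w.sum()
            if np.linalg.norm(x_new - x) < tol:
                break
            x = x_new
        return x

Running this from every data point and grouping points whose iterates land on the same mode yields the clusters; the blurring variant instead replaces the whole dataset by its shifted version at each step.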
Bayesian Optimization of Text Representations | cs.CL cs.LG stat.ML | When applying machine learning to problems in NLP, there are many choices to
make about how to represent input texts. These choices can have a big effect on
performance, but they are often uninteresting to researchers or practitioners
who simply need a module that performs well. We propose an approach to
optimizing over this space of choices, formulating the problem as global
optimization. We apply a sequential model-based optimization technique and show
that our method makes standard linear models competitive with more
sophisticated, expensive state-of-the-art methods based on latent variable
models or neural networks on various topic classification and sentiment
analysis problems. Our approach is a first step towards black-box NLP systems
that work with raw text and do not require manual tuning.
| Dani Yogatama and Noah A. Smith | null | 1503.00693 | null | null |
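A toy sketch of the black-box setup using scikit-optimize's gp_minimize as the sequential model-based optimizer; the search space, the toy corpus, and the 15-call budget are placeholders, and the paper's own optimizer and representation choices may differ:

    import numpy as np
    from skopt import gp_minimize
    from skopt.space import Integer, Categorical
    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    texts = ["good fun movie", "dull bad film"] * 20      # placeholder corpus
    labels = [1, 0] * 20

    def objective(params):
        ngram_max, use_tfidf, lowercase = params
        Vec = TfidfVectorizer if use_tfidf else CountVectorizer
        X = Vec(ngram_range=(1, ngram_max),
                lowercase=lowercase).fit_transform(texts)
        # Negative accuracy, since gp_minimize minimizes.
        return -np.mean(cross_val_score(LogisticRegression(), X, labels, cv=3))

    space = [Integer(1, 3), Categorical([True, False]),
             Categorical([True, False])]
    best = gp_minimize(objective, space, n_calls=15, random_state=0)
    print(best.x, -best.fun)                              # chosen representation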
A Review of Relational Machine Learning for Knowledge Graphs | stat.ML cs.LG | Relational machine learning studies methods for the statistical analysis of
relational, or graph-structured, data. In this paper, we provide a review of
how such statistical models can be "trained" on large knowledge graphs, and
then used to predict new facts about the world (which is equivalent to
predicting new edges in the graph). In particular, we discuss two fundamentally
different kinds of statistical relational models, both of which can scale to
massive datasets. The first is based on latent feature models such as tensor
factorization and multiway neural networks. The second is based on mining
observable patterns in the graph. We also show how to combine these latent and
observable models to get improved modeling power at decreased computational
cost. Finally, we discuss how such statistical models of graphs can be combined
with text-based information extraction methods for automatically constructing
knowledge graphs from the Web. To this end, we also discuss Google's Knowledge
Vault project as an example of such combination.
| Maximilian Nickel, Kevin Murphy, Volker Tresp, Evgeniy Gabrilovich | 10.1109/JPROC.2015.2483592 | 1503.00759 | null | null |
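To fix ideas, a toy bilinear (RESCAL-style) latent-feature scorer for knowledge-graph triples; the embeddings below are random rather than trained, and the training loop is omitted:

    import numpy as np

    rng = np.random.default_rng(0)
    n_entities, n_relations, d = 5, 2, 4
    E = rng.normal(size=(n_entities, d))        # entity embeddings
    W = rng.normal(size=(n_relations, d, d))    # one matrix per relation

    def score(s, r, o):
        # Plausibility of the triple (subject s, relation r, object o).
        return E[s] @ W[r] @ E[o]

    # Rank candidate objects for a query (s=0, r=1, ?): predicting new edges.
    print(np.argsort([-score(0, 1, o) for o in range(n_entities)]))

An observable-pattern model would instead score a triple from path or rule features counted in the graph; the review discusses how to combine the two.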
Simple, Efficient, and Neural Algorithms for Sparse Coding | cs.LG cs.DS cs.NE stat.ML | Sparse coding is a basic task in many fields including signal processing,
neuroscience and machine learning where the goal is to learn a basis that
enables a sparse representation of a given set of data, if one exists. Its
standard formulation is as a non-convex optimization problem which is solved in
practice by heuristics based on alternating minimization. Recent work has
resulted in several algorithms for sparse coding with provable guarantees, but
somewhat surprisingly these are outperformed by the simple alternating
minimization heuristics. Here we give a general framework for understanding
alternating minimization which we leverage to analyze existing heuristics and
to design new ones also with provable guarantees. Some of these algorithms seem
implementable on simple neural architectures, which was the original motivation
of Olshausen and Field (1997a) in introducing sparse coding. We also give the
first efficient algorithm for sparse coding that works almost up to the
information theoretic limit for sparse recovery on incoherent dictionaries. All
previous algorithms that approached or surpassed this limit run in time
exponential in some natural parameter. Finally, our algorithms improve upon the
sample complexity of existing approaches. We believe that our analysis
framework will have applications in other settings where simple iterative
algorithms are used.
| Sanjeev Arora, Rong Ge, Tengyu Ma, Ankur Moitra | null | 1503.00778 | null | null |
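The alternating-minimization skeleton the abstract refers to can be sketched in a few lines; the hard threshold, the pseudoinverse dictionary update, and the iteration count are illustrative stand-ins for the heuristics the paper analyzes:

    import numpy as np

    def sparse_coding(Y, k, iters=20, thresh=0.5, seed=0):
        # Y: (d, n) data matrix; learn a dictionary A (d, k) and codes X (k, n).
        rng = np.random.default_rng(seed)
        d, n = Y.shape
        A = rng.normal(size=(d, k))
        A /= np.linalg.norm(A, axis=0)
        for _ in range(iters):
            X = A.T @ Y                              # decode against dictionary
            X[np.abs(X) < thresh] = 0.0              # hard-threshold the codes
            A = Y @ np.linalg.pinv(X)                # least-squares update of A
            A /= np.linalg.norm(A, axis=0) + 1e-12   # renormalize columns
        return A, X

Each half-step is simple and local, which is why variants of this loop are plausible on neural architectures in the sense of Olshausen and Field.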
Robustly Leveraging Prior Knowledge in Text Classification | cs.CL cs.AI cs.IR cs.LG | Prior knowledge has been shown to be very useful for addressing many natural
language processing tasks. Many approaches have been proposed to formalise a
variety of knowledge; however, whether a proposed approach is robust or
sensitive to the knowledge supplied to the model has rarely been discussed. In
this paper, we
propose three regularization terms on top of generalized expectation criteria,
and conduct extensive experiments to justify the robustness of the proposed
methods. Experimental results demonstrate that our proposed methods obtain
remarkable improvements and are much more robust than baselines.
| Biao Liu, Minlie Huang | null | 1503.00841 | null | null |
Normalization based K means Clustering Algorithm | cs.LG cs.DB | K-means is an effective clustering technique used to separate similar data
into groups based on initial centroids of clusters. In this paper,
a Normalization-based K-means clustering algorithm (N-K means) is proposed. The
proposed N-K means algorithm applies normalization to the available data prior
to clustering, and it calculates initial centroids based on weights.
Experimental results demonstrate the advantage of the proposed N-K means
algorithm over the existing K-means algorithm in terms of complexity and
overall performance.
| Deepali Virmani, Shweta Taneja, Geetika Malhotra | null | 1503.00900 | null | null |
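A minimal sketch of the two ingredients the abstract names, min-max normalization followed by k-means; since the paper's weight-based centroid seeding is not specified here, sklearn's k-means++ initialization stands in for it:

    import numpy as np
    from sklearn.cluster import KMeans

    def nk_means(X, k):
        # Min-max normalize each feature to [0, 1] before clustering.
        lo, hi = X.min(axis=0), X.max(axis=0)
        Xn = (X - lo) / np.where(hi > lo, hi - lo, 1.0)
        return KMeans(n_clusters=k, init="k-means++",
                      n_init=10, random_state=0).fit_predict(Xn)

Normalizing first keeps features with large numeric ranges from dominating the Euclidean distances that k-means minimizes.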
Projection onto the capped simplex | cs.LG | We provide a simple and efficient algorithm for computing the Euclidean
projection of a point onto the capped simplex---a simplex with an additional
uniform bound on each coordinate---together with an elementary proof. Both the
MATLAB and C++ implementations of the proposed algorithm can be downloaded at
https://eng.ucmerced.edu/people/wwang5.
| Weiran Wang, Canyi Lu | null | 1503.01002 | null | null |
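A hedged alternative to the exact routine: by the KKT conditions the projection onto {0 <= x_i <= cap, sum(x) = s} has the form x_i = clip(v_i - lam, 0, cap) for a scalar lam, and since the sum of the clipped values is nonincreasing in lam, simple bisection recovers it (assuming n * cap >= s so the set is nonempty); the paper's own sort-based algorithm and its proof are not reproduced here:

    import numpy as np

    def project_capped_simplex(v, cap=1.0, s=1.0, iters=60):
        # Bisect on the multiplier lam until sum(clip(v - lam, 0, cap)) = s.
        lo, hi = v.min() - cap, v.max()
        for _ in range(iters):
            lam = 0.5 * (lo + hi)
            lo, hi = (lam, hi) if np.clip(v - lam, 0, cap).sum() > s else (lo, lam)
        return np.clip(v - 0.5 * (lo + hi), 0.0, cap)

With cap = 1 and s = 1 this reduces to the ordinary probability-simplex projection, since the cap never binds there.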