categories | doi | id | year | venue | link | updated | published | title | abstract | authors |
string | string | string | float64 | string | string | string | string | string | string | list |
---|---|---|---|---|---|---|---|---|---|---|
cs.CV cs.AI cs.LG | null | 1503.01444 | null | null | http://arxiv.org/pdf/1503.01444v2 | 2015-08-13T12:51:08Z | 2015-03-04T20:14:35Z | Partial Sum Minimization of Singular Values in Robust PCA: Algorithm and
Applications | Robust Principal Component Analysis (RPCA) via rank minimization is a
powerful tool for recovering underlying low-rank structure of clean data
corrupted with sparse noise/outliers. In many low-level vision problems, not
only is it known that the underlying structure of clean data is low-rank, but
the exact rank of the clean data is also known. Yet, when applying conventional
rank minimization for those problems, the objective function is formulated in a
way that does not fully utilize a priori target rank information about the
problems. This observation motivates us to investigate whether there is a
better alternative solution when using rank minimization. In this paper,
instead of minimizing the nuclear norm, we propose to minimize the partial sum
of singular values, which implicitly encourages the target rank constraint. Our
experimental analyses show that, when the number of samples is deficient, our
approach leads to a higher success rate than conventional rank minimization,
while the solutions obtained by the two approaches are almost identical when
the number of samples is more than sufficient. We apply our approach to various
low-level vision problems, e.g. high dynamic range imaging, motion edge
detection, photometric stereo, image alignment and recovery, and show that our
results outperform those obtained by the conventional nuclear norm rank
minimization method.
| [
"['Tae-Hyun Oh' 'Yu-Wing Tai' 'Jean-Charles Bazin' 'Hyeongwoo Kim'\n 'In So Kweon']",
"Tae-Hyun Oh, Yu-Wing Tai, Jean-Charles Bazin, Hyeongwoo Kim, In So\n Kweon"
]
|
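The entry above proposes minimizing a partial sum of singular values instead of the full nuclear norm. A minimal NumPy sketch of a partial singular value thresholding step consistent with that idea (the function name, threshold parameter, and toy data are illustrative assumptions, not the authors' exact algorithm):

```python
import numpy as np

def partial_svt(X, target_rank, tau):
    # Soft-threshold only the singular values beyond `target_rank`;
    # the leading `target_rank` values are kept intact, implicitly
    # encouraging a solution with the known target rank.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thr = s.copy()
    s_thr[target_rank:] = np.maximum(s_thr[target_rank:] - tau, 0.0)
    return (U * s_thr) @ Vt

# Toy usage: a rank-2 matrix corrupted by sparse outliers.
rng = np.random.default_rng(0)
low_rank = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
sparse = (rng.random((50, 40)) < 0.05) * 5.0
recovered = partial_svt(low_rank + sparse, target_rank=2, tau=1.0)
```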
stat.ML cs.LG cs.NE q-bio.BM | null | 1503.01445 | null | null | http://arxiv.org/pdf/1503.01445v1 | 2015-03-04T20:18:55Z | 2015-03-04T20:18:55Z | Toxicity Prediction using Deep Learning | Every day we are exposed to various chemicals via food additives, cleaning and
cosmetic products and medicines -- and some of them might be toxic. However,
testing the toxicity of all existing compounds by biological experiments is
neither financially nor logistically feasible. Therefore, the government
agencies NIH, EPA and FDA launched the Tox21 Data Challenge within the
"Toxicology in the 21st Century" (Tox21) initiative. The goal of this challenge
was to assess the performance of computational methods in predicting the
toxicity of chemical compounds. State of the art toxicity prediction methods
build upon specifically-designed chemical descriptors developed over decades.
Though Deep Learning is new to the field and was never applied to toxicity
prediction before, it clearly outperformed all other participating methods. In
this application paper we show that deep nets automatically learn features
resembling well-established toxicophores. In total, our Deep Learning approach
won both of the panel-challenges (nuclear receptors and stress response) as
well as the overall Grand Challenge, and thereby sets a new standard in tox
prediction.
| [
"['Thomas Unterthiner' 'Andreas Mayr' 'Günter Klambauer' 'Sepp Hochreiter']",
"Thomas Unterthiner, Andreas Mayr, G\\\"unter Klambauer, Sepp Hochreiter"
]
|
stat.ML cs.AI cs.CV cs.LG | null | 1503.01521 | null | null | http://arxiv.org/pdf/1503.01521v3 | 2015-10-06T21:42:55Z | 2015-03-05T02:57:19Z | Jointly Learning Multiple Measures of Similarities from Triplet
Comparisons | Similarity between objects is multi-faceted and it can be easier for human
annotators to measure it when the focus is on a specific aspect. We consider
the problem of mapping objects into view-specific embeddings where the distance
between them is consistent with the similarity comparisons of the form "from
the t-th view, object A is more similar to B than to C". Our framework jointly
learns view-specific embeddings exploiting correlations between views.
Experiments on a number of datasets, including one of multi-view crowdsourced
comparison on bird images, show the proposed method achieves lower triplet
generalization error when compared to both learning embeddings independently
for each view and all views pooled into one view. Our method can also be used
to learn multiple measures of similarity over input features taking class
labels into account and compares favorably to existing approaches for
multi-task metric learning on the ISOLET dataset.
| [
"['Liwen Zhang' 'Subhransu Maji' 'Ryota Tomioka']",
"Liwen Zhang, Subhransu Maji, Ryota Tomioka"
]
|
cs.DS cs.LG | null | 1503.01578 | null | null | http://arxiv.org/pdf/1503.01578v2 | 2015-06-05T20:47:35Z | 2015-03-05T08:54:51Z | Scalable Iterative Algorithm for Robust Subspace Clustering | Subspace clustering (SC) is a popular method for dimensionality reduction of
high-dimensional data, where it generalizes Principal Component Analysis (PCA).
Recently, several methods have been proposed to enhance the robustness of PCA
and SC, but most of them are computationally very expensive, in particular for
high-dimensional, large-scale data. In this paper, we develop much faster
iterative algorithms for SC, incorporating robustness using a {\em non-squared}
$\ell_2$-norm objective. The known implementations for optimizing the objective
would be costly due to the alternating optimization of two separate objectives:
optimal cluster-membership assignment and robust subspace selection, while
substituting a faster surrogate for either step can cause failure in
convergence. To address the issue, we use a simplified procedure requiring only
efficient matrix-vector multiplications for the subspace update instead of
solving an expensive eigenvector problem at each iteration, and we also remove
the nested robust PCA loops. We prove that the proposed algorithm monotonically
converges to a local minimum with approximation guarantees, e.g., it achieves
2-approximation for the robust PCA objective. In our experiments, the proposed
algorithm is shown to converge an order of magnitude faster than known
algorithms optimizing the same objective, and it outperforms prior subspace
clustering methods in accuracy and running time on the MNIST dataset.
| [
"['Sanghyuk Chun' 'Yung-Kyun Noh' 'Jinwoo Shin']",
"Sanghyuk Chun, Yung-Kyun Noh, Jinwoo Shin"
]
|
cs.LG stat.ML | null | 1503.01596 | null | null | http://arxiv.org/pdf/1503.01596v2 | 2015-03-10T02:28:41Z | 2015-03-05T10:17:16Z | Large-Scale Distributed Bayesian Matrix Factorization using Stochastic
Gradient MCMC | Despite having various attractive qualities such as high prediction accuracy
and the ability to quantify uncertainty and avoid over-fitting, Bayesian Matrix
Factorization has not been widely adopted because of the prohibitive cost of
inference. In this paper, we propose a scalable distributed Bayesian matrix
factorization algorithm using stochastic gradient MCMC. Our algorithm, based on
Distributed Stochastic Gradient Langevin Dynamics, can not only match the
prediction accuracy of standard MCMC methods like Gibbs sampling, but at the
same time is as fast and simple as stochastic gradient descent. In our
experiments, we show that our algorithm can achieve the same level of
prediction accuracy as Gibbs sampling an order of magnitude faster. We also
show that our method reduces the prediction error as fast as distributed
stochastic gradient descent, achieving a 4.1% improvement in RMSE for the
Netflix dataset and a 1.8% improvement for the Yahoo music dataset.
| [
"['Sungjin Ahn' 'Anoop Korattikara' 'Nathan Liu' 'Suju Rajan' 'Max Welling']",
"Sungjin Ahn, Anoop Korattikara, Nathan Liu, Suju Rajan, Max Welling"
]
|
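The entry above builds on stochastic gradient Langevin dynamics (SGLD). A minimal sketch of a single, non-distributed SGLD update of the kind such methods use; the argument names and scaling are generic assumptions rather than the paper's distributed variant:

```python
import numpy as np

def sgld_step(theta, grad_log_prior, grad_log_lik_batch, n_total, n_batch,
              step_size, rng):
    # Minibatch estimate of the log-posterior gradient, with the likelihood
    # part rescaled by n_total / n_batch, plus injected Gaussian noise whose
    # variance equals the step size, so the iterates approximately sample
    # the posterior instead of collapsing to a point estimate.
    grad = grad_log_prior(theta) + (n_total / n_batch) * grad_log_lik_batch(theta)
    noise = rng.normal(scale=np.sqrt(step_size), size=theta.shape)
    return theta + 0.5 * step_size * grad + noise
```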
stat.ML cs.LG | null | 1503.01673 | null | null | http://arxiv.org/pdf/1503.01673v3 | 2016-05-13T15:31:03Z | 2015-03-05T15:56:08Z | High Dimensional Bayesian Optimisation and Bandits via Additive Models | Bayesian Optimisation (BO) is a technique used in optimising a
$D$-dimensional function which is typically expensive to evaluate. While there
have been many successes for BO in low dimensions, scaling it to high
dimensions has been notoriously difficult. The existing literature on the topic
assumes very restrictive settings. In this paper, we identify two key challenges
in this endeavour. We tackle these challenges by assuming an additive structure
for the function. This setting is substantially more expressive and contains a
richer class of functions than previous work. We prove that, for additive
functions, the regret has only linear dependence on $D$ even though the function
depends on all $D$ dimensions. We also demonstrate several other statistical
and computational benefits in our framework. Via synthetic examples, a
scientific simulation and a face detection problem we demonstrate that our
method outperforms naive BO on additive functions and on several examples where
the function is not additive.
| [
"Kirthevasan Kandasamy, Jeff Schneider, Barnabas Poczos",
"['Kirthevasan Kandasamy' 'Jeff Schneider' 'Barnabas Poczos']"
]
|
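The additive assumption described above, a function that decomposes into low-dimensional components over disjoint groups of coordinates, corresponds to an additive kernel. A small illustrative sketch (the group structure and lengthscale are assumptions for the example):

```python
import numpy as np

def additive_rbf_kernel(x, y, groups, lengthscale=1.0):
    # Sum of RBF kernels, each acting only on one low-dimensional group
    # of coordinates, mirroring f(x) = sum_j f_j(x_{A_j}).
    x, y = np.asarray(x, float), np.asarray(y, float)
    k = 0.0
    for g in groups:
        d = x[g] - y[g]
        k += np.exp(-np.dot(d, d) / (2.0 * lengthscale ** 2))
    return k

# Example: a 6-dimensional space split into three 2-dimensional groups.
print(additive_rbf_kernel(np.zeros(6), np.ones(6), groups=[[0, 1], [2, 3], [4, 5]]))
```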
stat.ML cs.LG stat.CO | null | 1503.01737 | null | null | http://arxiv.org/pdf/1503.01737v1 | 2015-03-05T19:29:03Z | 2015-03-05T19:29:03Z | Min-Max Kernels | The min-max kernel is a generalization of the popular resemblance kernel
(which is designed for binary data). In this paper, we demonstrate, through an
extensive classification study using kernel machines, that the min-max kernel
often provides an effective measure of similarity for nonnegative data. As the
min-max kernel is nonlinear and might be difficult to use for industrial
applications with massive data, we show that the min-max kernel can be
linearized via hashing techniques. This allows practitioners to apply the
min-max kernel to large-scale applications using well-matured linear algorithms such as
linear SVM or logistic regression.
The previous remarkable work on consistent weighted sampling (CWS) produces
samples in the form of ($i^*, t^*$) where the $i^*$ records the location (and
in fact also the weights) information analogous to the samples produced by
classical minwise hashing on binary data. Because the $t^*$ is theoretically
unbounded, it was not immediately clear how to effectively implement CWS for
building large-scale linear classifiers. In this paper, we provide a simple
solution by discarding $t^*$ (which we refer to as the "0-bit" scheme). Via an
extensive empirical study, we show that this 0-bit scheme does not lose
essential information. We then apply the "0-bit" CWS for building linear
classifiers to approximate min-max kernel classifiers, as extensively validated
on a wide range of publicly available classification datasets. We expect this
work will generate interest among data mining practitioners who would like to
efficiently utilize the nonlinear information of non-binary and nonnegative
data.
| [
"Ping Li",
"['Ping Li']"
]
|
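For reference, the min-max kernel discussed above has a simple closed form for nonnegative vectors; a minimal sketch (the convention for two all-zero vectors is an assumption):

```python
import numpy as np

def min_max_kernel(x, y):
    # K(x, y) = sum_i min(x_i, y_i) / sum_i max(x_i, y_i) for x, y >= 0.
    x, y = np.asarray(x, float), np.asarray(y, float)
    denom = np.maximum(x, y).sum()
    return np.minimum(x, y).sum() / denom if denom > 0 else 1.0

print(min_max_kernel([1.0, 0.0, 2.0], [0.5, 1.0, 2.0]))  # 2.5 / 4.0 = 0.625
```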
cs.LO cs.GT cs.LG cs.SY | null | 1503.01793 | null | null | http://arxiv.org/pdf/1503.01793v1 | 2015-03-05T21:23:45Z | 2015-03-05T21:23:45Z | Correct-by-synthesis reinforcement learning with temporal logic
constraints | We consider a problem on the synthesis of reactive controllers that optimize
some a priori unknown performance criterion while interacting with an
uncontrolled environment such that the system satisfies a given temporal logic
specification. We decouple the problem into two subproblems. First, we extract
a (maximally) permissive strategy for the system, which encodes multiple
(possibly all) ways in which the system can react to the adversarial
environment and satisfy the specifications. Then, we quantify the a priori
unknown performance criterion as a (still unknown) reward function and compute
an optimal strategy for the system within the operating envelope allowed by the
permissive strategy by using the so-called maximin-Q learning algorithm. We
establish both correctness (with respect to the temporal logic specifications)
and optimality (with respect to the a priori unknown performance criterion) of
this two-step technique for a fragment of temporal logic specifications. For
specifications beyond this fragment, correctness can still be preserved, but
the learned strategy may be sub-optimal. We present an algorithm for the overall
problem, and demonstrate its use and computational requirements on a set of
robot motion planning examples.
| [
"['Min Wen' 'Ruediger Ehlers' 'Ufuk Topcu']",
"Min Wen, Ruediger Ehlers, Ufuk Topcu"
]
|
cs.LG cs.CV | null | 1503.01800 | null | null | http://arxiv.org/pdf/1503.01800v2 | 2015-03-30T00:55:02Z | 2015-03-05T22:03:26Z | EmoNets: Multimodal deep learning approaches for emotion recognition in
video | The task of the emotion recognition in the wild (EmotiW) Challenge is to
assign one of seven emotions to short video clips extracted from Hollywood
style movies. The videos depict acted-out emotions under realistic conditions
with a large degree of variation in attributes such as pose and illumination,
making it worthwhile to explore approaches which consider combinations of
features from multiple modalities for label assignment. In this paper we
present our approach to learning several specialist models using deep learning
techniques, each focusing on one modality. Among these are a convolutional
neural network, focusing on capturing visual information in detected faces, a
deep belief net focusing on the representation of the audio stream, a K-Means
based "bag-of-mouths" model, which extracts visual features around the mouth
region and a relational autoencoder, which addresses spatio-temporal aspects of
videos. We explore multiple methods for the combination of cues from these
modalities into one common classifier. This achieves a considerably greater
accuracy than predictions from our strongest single-modality classifier. Our
method was the winning submission in the 2013 EmotiW challenge and achieved a
test set accuracy of 47.67% on the 2014 dataset.
| [
"['Samira Ebrahimi Kahou' 'Xavier Bouthillier' 'Pascal Lamblin'\n 'Caglar Gulcehre' 'Vincent Michalski' 'Kishore Konda' 'Sébastien Jean'\n 'Pierre Froumenty' 'Yann Dauphin' 'Nicolas Boulanger-Lewandowski'\n 'Raul Chandias Ferrari' 'Mehdi Mirza' 'David Warde-Farley'\n 'Aaron Courville' 'Pascal Vincent' 'Roland Memisevic' 'Christopher Pal'\n 'Yoshua Bengio']",
"Samira Ebrahimi Kahou, Xavier Bouthillier, Pascal Lamblin, Caglar\n Gulcehre, Vincent Michalski, Kishore Konda, S\\'ebastien Jean, Pierre\n Froumenty, Yann Dauphin, Nicolas Boulanger-Lewandowski, Raul Chandias\n Ferrari, Mehdi Mirza, David Warde-Farley, Aaron Courville, Pascal Vincent,\n Roland Memisevic, Christopher Pal, Yoshua Bengio"
]
|
cs.LG stat.ML | null | 1503.01811 | null | null | http://arxiv.org/pdf/1503.01811v3 | 2015-06-18T20:26:54Z | 2015-03-05T22:56:07Z | Optimally Combining Classifiers Using Unlabeled Data | We develop a worst-case analysis of aggregation of classifier ensembles for
binary classification. The task of predicting to minimize error is formulated
as a game played over a given set of unlabeled data (a transductive setting),
where prior label information is encoded as constraints on the game. The
minimax solution of this game identifies cases where a weighted combination of
the classifiers can perform significantly better than any single classifier.
| [
"Akshay Balsubramani, Yoav Freund",
"['Akshay Balsubramani' 'Yoav Freund']"
]
|
cs.RO cs.AI cs.CV cs.LG | null | 1503.01820 | null | null | http://arxiv.org/pdf/1503.01820v1 | 2015-03-06T00:05:12Z | 2015-03-06T00:05:12Z | Latent Hierarchical Model for Activity Recognition | We present a novel hierarchical model for human activity recognition. In
contrast to approaches that successively recognize actions and activities, our
approach jointly models actions and activities in a unified framework, and
their labels are simultaneously predicted. The model is embedded with a latent
layer that is able to capture a richer class of contextual information in both
state-state and observation-state pairs. Although loops are present in the
model, the model has an overall linear-chain structure, where the exact
inference is tractable. Therefore, the model is very efficient in both
inference and learning. The parameters of the graphical model are learned with
a Structured Support Vector Machine (Structured-SVM). A data-driven approach is
used to initialize the latent variables; therefore, no manual labeling for the
latent states is required. The experimental results from using two benchmark
datasets show that our model outperforms the state-of-the-art approach, and our
model is computationally more efficient.
| [
"Ninghang Hu, Gwenn Englebienne, Zhongyu Lou, and Ben Kr\\\"ose",
"['Ninghang Hu' 'Gwenn Englebienne' 'Zhongyu Lou' 'Ben Kröse']"
]
|
cs.LG cs.NE | null | 1503.01824 | null | null | http://arxiv.org/pdf/1503.01824v1 | 2015-03-06T00:53:40Z | 2015-03-06T00:53:40Z | Deep Clustered Convolutional Kernels | Deep neural networks have recently achieved state of the art performance
thanks to new training algorithms for rapid parameter estimation and new
regularization methods to reduce overfitting. However, in practice the network
architecture has to be manually set by domain experts, generally by a costly
trial and error procedure, which often accounts for a large portion of the
final system performance. We view this as a limitation and propose a novel
training algorithm that automatically optimizes network architecture, by
progressively increasing model complexity and then eliminating model redundancy
by selectively removing parameters at training time. For convolutional neural
networks, our method relies on iterative split/merge clustering of
convolutional kernels interleaved by stochastic gradient descent. We present a
training algorithm and experimental results on three different vision tasks,
showing improved performance compared to similarly sized hand-crafted
architectures.
| [
"Minyoung Kim, Luca Rigazio",
"['Minyoung Kim' 'Luca Rigazio']"
]
|
cs.CL cs.LG cs.NE | null | 1503.01838 | null | null | http://arxiv.org/pdf/1503.01838v5 | 2015-06-08T09:04:14Z | 2015-03-06T03:04:54Z | Encoding Source Language with Convolutional Neural Network for Machine
Translation | The recently proposed neural network joint model (NNJM) (Devlin et al., 2014)
augments the n-gram target language model with a heuristically chosen source
context window, achieving state-of-the-art performance in SMT. In this paper,
we give a more systematic treatment by summarizing the relevant source
information through a convolutional architecture guided by the target
information. With different guiding signals during decoding, our specifically
designed convolution+gating architectures can pinpoint the parts of a source
sentence that are relevant to predicting a target word, and fuse them with the
context of the entire source sentence to form a unified representation. This
representation, together with the target language words, is fed to a deep neural
network (DNN) to form a stronger NNJM. Experiments on two NIST Chinese-English
translation tasks show that the proposed model can achieve significant
improvements over the previous NNJM of up to +1.08 BLEU points on average.
| [
"['Fandong Meng' 'Zhengdong Lu' 'Mingxuan Wang' 'Hang Li' 'Wenbin Jiang'\n 'Qun Liu']",
"Fandong Meng and Zhengdong Lu and Mingxuan Wang and Hang Li and Wenbin\n Jiang and Qun Liu"
]
|
cs.LG | 10.1016/j.eswa.2016.02.026 | 1503.01883 | null | null | http://arxiv.org/abs/1503.01883v1 | 2015-03-06T09:10:34Z | 2015-03-06T09:10:34Z | Ranking and significance of variable-length similarity-based time series
motifs | The detection of very similar patterns in a time series, commonly called
motifs, has received continuous and increasing attention from diverse
scientific communities. In particular, recent approaches for discovering
similar motifs of different lengths have been proposed. In this work, we show
that such variable-length similarity-based motifs cannot be directly compared,
and hence ranked, by their normalized dissimilarities. Specifically, we find
that length-normalized motif dissimilarities still have intrinsic dependencies
on the motif length, and that lowest dissimilarities are particularly affected
by this dependency. Moreover, we find that such dependencies are generally
non-linear and change with the considered data set and dissimilarity measure.
Based on these findings, we propose a solution to rank those motifs and measure
their significance. This solution relies on a compact but accurate model of the
dissimilarity space, using a beta distribution with three parameters that
depend on the motif length in a non-linear way. We believe the incomparability
of variable-length dissimilarities could go beyond the field of time series,
and that modeling strategies similar to the one used here could be of help in a
broader context.
| [
"['Joan Serrà' 'Isabel Serra' 'Álvaro Corral' 'Josep Lluis Arcos']",
"Joan Serr\\`a, Isabel Serra, \\'Alvaro Corral and Josep Lluis Arcos"
]
|
cs.LG cs.AI | null | 1503.01910 | null | null | http://arxiv.org/pdf/1503.01910v1 | 2015-03-06T11:02:41Z | 2015-03-06T11:02:41Z | Sequential Relevance Maximization with Binary Feedback | Motivated by online settings where users can provide explicit feedback about
the relevance of products that are sequentially presented to them, we look at
the recommendation process as a problem of dynamically optimizing this
relevance feedback. Such an algorithm optimizes the fine tradeoff between
presenting the products that are most likely to be relevant, and learning the
preferences of the user so that more relevant recommendations can be made in
the future.
We assume a standard predictive model inspired by collaborative filtering, in
which a user is sampled from a distribution over a set of possible types. For
every product category, each type has an associated relevance feedback that is
assumed to be binary: the category is either relevant or irrelevant. Assuming
that the user stays for each additional recommendation opportunity with
probability $\beta$ independent of the past, the problem is to find a policy
that maximizes the expected number of recommendations that are deemed relevant
in a session.
We analyze this problem and prove key structural properties of the optimal
policy. Based on these properties, we first present an algorithm that strikes a
balance between recursion and dynamic programming to compute this policy. We
further propose and analyze two heuristic policies: a `farsighted' greedy
policy that attains at least $1-\beta$ factor of the optimal payoff, and a
naive greedy policy that attains at least $\frac{1-\beta}{1+\beta}$ factor of
the optimal payoff in the worst case. Extensive simulations show that these
heuristics are very close to optimal in practice.
| [
"Vijay Kamble, Nadia Fawaz, Fernando Silveira",
"['Vijay Kamble' 'Nadia Fawaz' 'Fernando Silveira']"
]
|
stat.ML cs.LG q-bio.QM | null | 1503.01916 | null | null | http://arxiv.org/pdf/1503.01916v1 | 2015-03-06T11:16:58Z | 2015-03-06T11:16:58Z | Hamiltonian ABC | Approximate Bayesian computation (ABC) is a powerful and elegant framework
for performing inference in simulation-based models. However, due to the
difficulty in scaling likelihood estimates, ABC remains useful for relatively
low-dimensional problems. We introduce Hamiltonian ABC (HABC), a set of
likelihood-free algorithms that apply recent advances in scaling Bayesian
learning using Hamiltonian Monte Carlo (HMC) and stochastic gradients. We find
that a small number of forward simulations can effectively approximate the ABC
gradient, allowing Hamiltonian dynamics to efficiently traverse parameter
spaces. We also describe a new simple yet general approach of incorporating
random seeds into the state of the Markov chain, further reducing the random
walk behavior of HABC. We demonstrate HABC on several typical ABC problems, and
show that HABC samples comparably to regular Bayesian inference using true
gradients on a high-dimensional problem from machine learning.
| [
"Edward Meeds, Robert Leenders, and Max Welling",
"['Edward Meeds' 'Robert Leenders' 'Max Welling']"
]
|
cs.LG cs.NE stat.ML | null | 1503.02031 | null | null | http://arxiv.org/pdf/1503.02031v1 | 2015-03-06T18:39:53Z | 2015-03-06T18:39:53Z | To Drop or Not to Drop: Robustness, Consistency and Differential Privacy
Properties of Dropout | Training deep belief networks (DBNs) requires optimizing a non-convex
function with an extremely large number of parameters. Naturally, existing
gradient descent (GD) based methods are prone to arbitrarily poor local minima.
In this paper, we rigorously show that such local minima can be avoided (up to
an approximation error) by using the dropout technique, a widely used heuristic
in this domain. In particular, we show that by randomly dropping a few nodes of
a one-hidden layer neural network, the training objective function, up to a
certain approximation error, decreases by a multiplicative factor.
On the flip side, we show that for training convex empirical risk minimizers
(ERM), dropout in fact acts as a "stabilizer" or regularizer. That is, a simple
dropout based GD method for convex ERMs is stable in the face of arbitrary
changes to any one of the training points. Using the above assertion, we show
that dropout provides fast rates for generalization error in learning (convex)
generalized linear models (GLM). Moreover, using the above mentioned stability
properties of dropout, we design dropout based differentially private
algorithms for solving ERMs. The learned GLM thus, preserves privacy of each of
the individual training points while providing accurate predictions for new
test points. Finally, we empirically validate our stability assertions for
dropout in the context of convex ERMs and show that surprisingly, dropout
significantly outperforms (in terms of prediction accuracy) the L2
regularization based methods for several benchmark datasets.
| [
"['Prateek Jain' 'Vivek Kulkarni' 'Abhradeep Thakurta' 'Oliver Williams']",
"Prateek Jain, Vivek Kulkarni, Abhradeep Thakurta, Oliver Williams"
]
|
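The entry above analyzes dropout; a minimal sketch of the standard (inverted) dropout operation it refers to, with generic parameter names chosen for illustration:

```python
import numpy as np

def dropout(activations, drop_prob, rng, train=True):
    # Zero each unit with probability `drop_prob` during training and
    # rescale the survivors, so no rescaling is needed at test time.
    if not train or drop_prob == 0.0:
        return activations
    mask = rng.random(activations.shape) >= drop_prob
    return activations * mask / (1.0 - drop_prob)

rng = np.random.default_rng(0)
print(dropout(np.ones((2, 4)), drop_prob=0.5, rng=rng))
```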
cs.LG math.OC stat.ML | null | 1503.02101 | null | null | http://arxiv.org/pdf/1503.02101v1 | 2015-03-06T22:07:05Z | 2015-03-06T22:07:05Z | Escaping From Saddle Points --- Online Stochastic Gradient for Tensor
Decomposition | We analyze stochastic gradient descent for optimizing non-convex functions.
In many cases for non-convex functions the goal is to find a reasonable local
minimum, and the main concern is that gradient updates are trapped in saddle
points. In this paper we identify a strict saddle property for non-convex
problems that allows for efficient optimization. Using this property we show that
stochastic gradient descent converges to a local minimum in a polynomial number
of iterations. To the best of our knowledge this is the first work that gives
global convergence guarantees for stochastic gradient descent on non-convex
functions with exponentially many local minima and saddle points. Our analysis
can be applied to orthogonal tensor decomposition, which is widely used in
learning a rich class of latent variable models. We propose a new optimization
formulation for the tensor decomposition problem that has strict saddle
property. As a result we get the first online algorithm for orthogonal tensor
decomposition with global convergence guarantee.
| [
"Rong Ge, Furong Huang, Chi Jin, Yang Yuan",
"['Rong Ge' 'Furong Huang' 'Chi Jin' 'Yang Yuan']"
]
|
cs.LG cs.CL cs.NE | null | 1503.02108 | null | null | http://arxiv.org/pdf/1503.02108v2 | 2015-08-12T04:53:53Z | 2015-03-06T22:48:29Z | Maximum a Posteriori Adaptation of Network Parameters in Deep Models | We present a Bayesian approach to adapting parameters of a well-trained
context-dependent, deep-neural-network, hidden Markov model (CD-DNN-HMM) to
improve automatic speech recognition performance. Given an abundance of DNN
parameters but with only a limited amount of data, the effectiveness of the
adapted DNN model can often be compromised. We formulate maximum a posteriori
(MAP) adaptation of parameters of a specially designed CD-DNN-HMM with
augmented linear hidden networks connected to the output tied states, or
senones, and compare it to the feature-space MAP linear regression previously
proposed. Experimental evidence on the 20,000-word open-vocabulary Wall Street
Journal task demonstrate the feasibility of the proposed framework. In
supervised adaptation, the proposed MAP adaptation approach provides more than
10% relative error reduction and consistently outperforms the conventional
transformation based methods. Furthermore, we present an initial attempt to
generate hierarchical priors to improve adaptation efficiency and effectiveness
with limited adaptation data by exploiting similarities among senones.
| [
"['Zhen Huang' 'Sabato Marco Siniscalchi' 'I-Fan Chen' 'Jiadong Wu'\n 'Chin-Hui Lee']",
"Zhen Huang, Sabato Marco Siniscalchi, I-Fan Chen, Jiadong Wu, and\n Chin-Hui Lee"
]
|
cs.LG cs.AI stat.ML | null | 1503.02128 | null | null | http://arxiv.org/pdf/1503.02128v2 | 2015-06-18T02:52:51Z | 2015-03-07T03:34:48Z | Exact Hybrid Covariance Thresholding for Joint Graphical Lasso | This paper considers the problem of estimating multiple related Gaussian
graphical models from a $p$-dimensional dataset consisting of different
classes. Our work is based upon the formulation of this problem as group
graphical lasso. This paper proposes a novel hybrid covariance thresholding
algorithm that can effectively identify zero entries in the precision matrices
and split a large joint graphical lasso problem into small subproblems. Our
hybrid covariance thresholding method is superior to existing uniform
thresholding methods in that our method can split the precision matrix of each
individual class using different partition schemes and thus split group
graphical lasso into much smaller subproblems, each of which can be solved very
fast. In addition, this paper establishes necessary and sufficient conditions
for our hybrid covariance thresholding algorithm. The superior performance of
our thresholding method is thoroughly analyzed and illustrated by a few
experiments on simulated data and real gene expression data.
| [
"['Qingming Tang' 'Chao Yang' 'Jian Peng' 'Jinbo Xu']",
"Qingming Tang, Chao Yang, Jian Peng and Jinbo Xu"
]
|
cs.LG cs.AI stat.ML | null | 1503.02129 | null | null | http://arxiv.org/pdf/1503.02129v3 | 2015-06-18T03:37:44Z | 2015-03-07T03:35:26Z | Learning Scale-Free Networks by Dynamic Node-Specific Degree Prior | Learning the network structure underlying data is an important problem in
machine learning. This paper introduces a novel prior to study the inference of
scale-free networks, which are widely used to model social and biological
networks. The prior not only favors a desirable global node degree
distribution, but also takes into consideration the relative strength of all
the possible edges adjacent to the same node and the estimated degree of each
individual node.
To fulfill this, ranking is incorporated into the prior, which makes the
problem challenging to solve. We employ an ADMM (alternating direction method
of multipliers) framework to solve the Gaussian Graphical model regularized by
this prior. Our experiments on both synthetic and real data show that our prior
not only yields a scale-free network, but also produces many more correctly
predicted edges than the others such as the scale-free inducing prior, the
hub-inducing prior and the $l_1$ norm.
| [
"['Qingming Tang' 'Siqi Sun' 'Jinbo Xu']",
"Qingming Tang, Siqi Sun, and Jinbo Xu"
]
|
cs.LG | null | 1503.02143 | null | null | http://arxiv.org/pdf/1503.02143v2 | 2023-06-13T14:20:25Z | 2015-03-07T08:39:15Z | Model selection of polynomial kernel regression | Polynomial kernel regression is one of the standard and state-of-the-art
learning strategies. However, as is well known, the choices of the degree of
polynomial kernel and the regularization parameter are still open in the realm
of model selection. The first aim of this paper is to develop a strategy to
select these parameters. On one hand, based on the worst-case learning rate
analysis, we show that the regularization term in polynomial kernel regression
is not necessary. In other words, the regularization parameter can decrease
arbitrarily fast when the degree of the polynomial kernel is suitably tuned. On
the other hand, taking account of the implementation of the algorithm, the
regularization term is required. In summary, the effect of the regularization
term in polynomial kernel regression is only to circumvent the "ill-conditioning"
of the kernel matrix. Based on this, the second purpose of this paper is to
propose a new model selection strategy, and then design an efficient learning
algorithm. Both theoretical and experimental analysis show that the new
strategy outperforms the previous one. Theoretically, we prove that the new
learning strategy is almost optimal if the regression function is smooth.
Experimentally, it is shown that the new strategy can significantly reduce the
computational burden without loss of generalization capability.
| [
"['Shaobo Lin' 'Xingping Sun' 'Zongben Xu' 'Jinshan Zeng']",
"Shaobo Lin, Xingping Sun, Zongben Xu, Jinshan Zeng"
]
|
cs.LG cs.IT math.IT | null | 1503.02144 | null | null | http://arxiv.org/pdf/1503.02144v1 | 2015-03-07T09:03:37Z | 2015-03-07T09:03:37Z | Sparse Bayesian Dictionary Learning with a Gaussian Hierarchical Model | We consider a dictionary learning problem whose objective is to design a
dictionary such that the signals admits a sparse or an approximate sparse
representation over the learned dictionary. Such a problem finds a variety of
applications such as image denoising, feature extraction, etc. In this paper,
we propose a new hierarchical Bayesian model for dictionary learning, in which
a Gaussian-inverse Gamma hierarchical prior is used to promote the sparsity of
the representation. Suitable priors are also placed on the dictionary and the
noise variance such that they can be reasonably inferred from the data. Based
on the hierarchical model, a variational Bayesian method and a Gibbs sampling
method are developed for Bayesian inference. The proposed methods have the
advantage that they do not require the knowledge of the noise variance \emph{a
priori}. Numerical results show that the proposed methods are able to learn the
dictionary with an accuracy better than existing methods, particularly for the
case where there is a limited number of training signals.
| [
"Linxiao Yang, Jun Fang, Hong Cheng, and Hongbin Li",
"['Linxiao Yang' 'Jun Fang' 'Hong Cheng' 'Hongbin Li']"
]
|
cs.IT cs.LG math.IT | null | 1503.02164 | null | null | http://arxiv.org/pdf/1503.02164v1 | 2015-03-07T12:06:48Z | 2015-03-07T12:06:48Z | A Nonconvex Approach for Structured Sparse Learning | Sparse learning is an important topic in many areas such as machine learning,
statistical estimation, signal processing, etc. Recently, there has been growing
interest in structured sparse learning. In this paper we focus on the
$\ell_q$-analysis optimization problem for structured sparse learning ($0< q
\leq 1$). Compared to previous work, we establish weaker conditions for exact
recovery in noiseless case and a tighter non-asymptotic upper bound of estimate
error in noisy case. We further prove that the nonconvex $\ell_q$-analysis
optimization can do recovery with a lower sample complexity and in a wider
range of cosparsity than its convex counterpart. In addition, we develop an
iteratively reweighted method to solve the optimization problem under the
variational framework. Theoretical analysis shows that our method is capable of
pursuing a local minimum close to the global minimum. Also, empirical results of
preliminary computational experiments illustrate that our nonconvex method
outperforms both its convex counterpart and other state-of-the-art methods.
| [
"['Shubao Zhang' 'Hui Qian' 'Zhihua Zhang']",
"Shubao Zhang and Hui Qian and Zhihua Zhang"
]
|
cs.LG | null | 1503.02193 | null | null | http://arxiv.org/pdf/1503.02193v2 | 2015-08-24T19:56:12Z | 2015-03-07T17:36:08Z | Label optimal regret bounds for online local learning | We resolve an open question from (Christiano, 2014b) posed in COLT'14
regarding the optimal dependency of the regret achievable for online local
learning on the size of the label set. In this framework the algorithm is shown
a pair of items at each step, chosen from a set of $n$ items. The learner then
predicts a label for each item, from a label set of size $L$ and receives a
real valued payoff. This is a natural framework which captures many interesting
scenarios such as collaborative filtering, online gambling, and online max cut
among others. (Christiano, 2014a) designed an efficient online learning
algorithm for this problem achieving a regret of $O(\sqrt{nL^3T})$, where $T$
is the number of rounds. Information theoretically, one can achieve a regret of
$O(\sqrt{n \log L T})$. One of the main open questions left in this framework
concerns closing the above gap.
In this work, we provide a complete answer to the question above via two main
results. We show, via a tighter analysis, that the semi-definite programming
based algorithm of (Christiano, 2014a), in fact achieves a regret of
$O(\sqrt{nLT})$. Second, we show a matching computational lower bound. Namely,
we show that a polynomial time algorithm for online local learning with lower
regret would imply a polynomial time algorithm for the planted clique problem
which is widely believed to be hard. We prove a similar hardness result under a
related conjecture concerning planted dense subgraphs that we put forth. Unlike
planted clique, the planted dense subgraph problem does not have any known
quasi-polynomial time algorithms.
Computational lower bounds for online learning are relatively rare, and we
hope that the ideas developed in this work will lead to lower bounds for other
online learning scenarios as well.
| [
"['Pranjal Awasthi' 'Moses Charikar' 'Kevin A. Lai' 'Andrej Risteski']",
"Pranjal Awasthi, Moses Charikar, Kevin A. Lai, Andrej Risteski"
]
|
stat.ML cs.LG math.OC | null | 1503.02216 | null | null | http://arxiv.org/pdf/1503.02216v1 | 2015-03-07T21:38:07Z | 2015-03-07T21:38:07Z | Higher order Matching Pursuit for Low Rank Tensor Learning | Low rank tensor learning, such as tensor completion and multilinear multitask
learning, has received much attention in recent years. In this paper, we
propose higher order matching pursuit for low rank tensor learning problems
with a convex or a nonconvex cost function, which is a generalization of the
matching pursuit type methods. At each iteration, the main cost of the proposed
methods is only to compute a rank-one tensor, which can be done efficiently,
making the proposed methods scalable to large-scale problems. Moreover, the
resulting rank-one tensors require little storage, which can help to
break the curse of dimensionality. The linear convergence rate of the proposed
methods is established in various circumstances. Along with the main methods,
we also provide a method of low computational complexity for approximately
computing the rank-one tensors, with provable approximation ratio, which helps
to improve the efficiency of the main methods and to analyze the convergence
rate. Experimental results on synthetic as well as real datasets verify the
efficiency and effectiveness of the proposed methods.
| [
"['Yuning Yang' 'Siamak Mehrkanoon' 'Johan A. K. Suykens']",
"Yuning Yang, Siamak Mehrkanoon and Johan A.K. Suykens"
]
|
cs.CE cs.LG | null | 1503.02328 | null | null | http://arxiv.org/pdf/1503.02328v1 | 2015-03-08T21:45:07Z | 2015-03-08T21:45:07Z | Financial Market Prediction | Given financial data from popular sites like Yahoo and the London Exchange,
the presented paper attempts to model and predict stocks that can be considered
"good investments". Stocks are characterized by 125 features ranging from gross
domestic product to EBITDA, and are labeled by discrepancies between stock and
market price returns. An artificial neural network (Self-Organizing Map) is
fitted to train on more than a million data points to predict "good
investments" given testing stocks from 2013 and after.
| [
"['Mike Wu']",
"Mike Wu"
]
|
stat.ME cs.IT cs.LG math.IT | null | 1503.02346 | null | null | http://arxiv.org/pdf/1503.02346v2 | 2015-11-11T17:11:29Z | 2015-03-08T23:53:04Z | One Scan 1-Bit Compressed Sensing | Based on $\alpha$-stable random projections with small $\alpha$, we develop a
simple algorithm for compressed sensing (sparse signal recovery) by utilizing
only the signs (i.e., 1-bit) of the measurements. Using only 1-bit information
of the measurements results in substantial cost reduction in collection,
storage, communication, and decoding for compressed sensing. The proposed
algorithm is efficient in that the decoding procedure requires only one scan of
the coordinates. Our analysis can precisely show that, for a $K$-sparse signal
of length $N$, $12.3K\log N/\delta$ measurements (where $\delta$ is the
confidence) would be sufficient for recovering the support and the signs of the
signal. While the method is very robust against typical measurement noises, we
also provide the analysis of the scheme under random flipping of the signs of
the measurements.
Compared to the well-known work on 1-bit marginal regression (which
can also be viewed as a one-scan method), the proposed algorithm requires
orders of magnitude fewer measurements. Compared to 1-bit Iterative Hard
Thresholding (IHT) (which is not a one-scan algorithm), our method is still
significantly more accurate. Furthermore, the proposed method is reasonably
robust against random sign flipping while IHT is known to be very sensitive to
this type of noise.
| [
"Ping Li",
"['Ping Li']"
]
|
cs.CV cs.LG | null | 1503.02351 | null | null | http://arxiv.org/pdf/1503.02351v1 | 2015-03-09T01:08:00Z | 2015-03-09T01:08:00Z | Fully Connected Deep Structured Networks | Convolutional neural networks with many layers have recently been shown to
achieve excellent results on many high-level tasks such as image
classification, object detection and more recently also semantic segmentation.
Particularly for semantic segmentation, a two-stage procedure is often
employed: convolutional networks are first trained to provide good local
pixel-wise features for the second step, which is traditionally a more global
graphical model. In this work we unify this two-stage process into a single
joint training algorithm. We demonstrate our method on the semantic image
segmentation task and show encouraging results on the challenging PASCAL VOC
2012 dataset.
| [
"['Alexander G. Schwing' 'Raquel Urtasun']",
"Alexander G. Schwing and Raquel Urtasun"
]
|
cs.CL cs.LG cs.NE | null | 1503.02357 | null | null | http://arxiv.org/pdf/1503.02357v2 | 2015-06-24T01:07:40Z | 2015-03-09T02:16:19Z | Context-Dependent Translation Selection Using Convolutional Neural
Network | We propose a novel method for translation selection in statistical machine
translation, in which a convolutional neural network is employed to judge the
similarity between a phrase pair in two languages. The specifically designed
convolutional architecture encodes not only the semantic similarity of the
translation pair, but also the context containing the phrase in the source
language. Therefore, our approach is able to capture context-dependent semantic
similarities of translation pairs. We adopt a curriculum learning strategy to
train the model: we classify the training examples into easy, medium, and
difficult categories, and gradually build the ability of representing phrase
and sentence level context by using training examples from easy to difficult.
Experimental results show that our approach significantly outperforms the
baseline system by up to 1.4 BLEU points.
| [
"['Zhaopeng Tu' 'Baotian Hu' 'Zhengdong Lu' 'Hang Li']",
"Zhaopeng Tu, Baotian Hu, Zhengdong Lu, and Hang Li"
]
|
cs.LG stat.ML | 10.1109/TSP.2015.2481875 | 1503.02398 | null | null | http://arxiv.org/abs/1503.02398v5 | 2015-09-11T19:04:19Z | 2015-03-09T08:53:33Z | Learning Co-Sparse Analysis Operators with Separable Structures | In the co-sparse analysis model a set of filters is applied to a signal out
of the signal class of interest yielding sparse filter responses. As such, it
may serve as a prior in inverse problems, or for structural analysis of signals
that are known to belong to the signal class. The more the model is adapted to
the class, the more reliable it is for these purposes. The task of learning
such operators for a given class is therefore a crucial problem. In many
applications, it is also required that the filter responses are obtained in a
timely manner, which can be achieved by filters with a separable structure. Not
only can operators of this sort be efficiently used for computing the filter
responses, but they also have the advantage that fewer training samples are
required to obtain a reliable estimate of the operator. The first contribution
of this work is to give theoretical evidence for this claim by providing an
upper bound for the sample complexity of the learning process. The second is a
stochastic gradient descent (SGD) method designed to learn an analysis operator
with separable structures, which includes a novel and efficient step size
selection rule. Numerical experiments are provided that link the sample
complexity to the convergence speed of the SGD algorithm.
| [
"Matthias Seibert, Julian W\\\"ormann, R\\'emi Gribonval, Martin\n Kleinsteuber",
"['Matthias Seibert' 'Julian Wörmann' 'Rémi Gribonval'\n 'Martin Kleinsteuber']"
]
|
cs.LG | null | 1503.02406 | null | null | http://arxiv.org/pdf/1503.02406v1 | 2015-03-09T09:39:41Z | 2015-03-09T09:39:41Z | Deep Learning and the Information Bottleneck Principle | Deep Neural Networks (DNNs) are analyzed via the theoretical framework of the
information bottleneck (IB) principle. We first show that any DNN can be
quantified by the mutual information between the layers and the input and
output variables. Using this representation we can calculate the optimal
information theoretic limits of the DNN and obtain finite sample generalization
bounds. The advantage of getting closer to the theoretical limit is
quantifiable both by the generalization bound and by the network's simplicity.
We argue that the optimal architecture, i.e., the number of layers and the
features/connections at each layer, is related to the bifurcation points of
the information bottleneck tradeoff, namely, the relevant compression of the input
layer with respect to the output layer. The hierarchical representations at the
layered network naturally correspond to the structural phase transitions along
the information curve. We believe that this new insight can lead to new
optimality bounds and deep learning algorithms.
| [
"Naftali Tishby and Noga Zaslavsky",
"['Naftali Tishby' 'Noga Zaslavsky']"
]
|
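For reference, the information bottleneck tradeoff mentioned in the entry above is commonly written as a Lagrangian over the encoder distribution (standard IB notation, not taken from this entry):

```latex
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y)
```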
cs.LG cs.CL | null | 1503.02417 | null | null | http://arxiv.org/pdf/1503.02417v1 | 2015-03-09T10:35:10Z | 2015-03-09T10:35:10Z | Structured Prediction of Sequences and Trees using Infinite Contexts | Linguistic structures exhibit a rich array of global phenomena, however
commonly used Markov models are unable to adequately describe these phenomena
due to their strong locality assumptions. We propose a novel hierarchical model
for structured prediction over sequences and trees which exploits global
context by conditioning each generation decision on an unbounded context of
prior decisions. This builds on the success of Markov models but without
imposing a fixed bound in order to better represent global phenomena. To
facilitate learning of this large and unbounded model, we use a hierarchical
Pitman-Yor process prior which provides a recursive form of smoothing. We
propose prediction algorithms based on A* and Markov Chain Monte Carlo
sampling. Empirical results demonstrate the potential of our model compared to
baseline finite-context Markov models on part-of-speech tagging and syntactic
parsing.
| [
"Ehsan Shareghi, Gholamreza Haffari, Trevor Cohn, Ann Nicholson",
"['Ehsan Shareghi' 'Gholamreza Haffari' 'Trevor Cohn' 'Ann Nicholson']"
]
|
cs.CL cs.LG cs.NE | null | 1503.02427 | null | null | http://arxiv.org/pdf/1503.02427v6 | 2015-06-12T08:26:01Z | 2015-03-09T11:11:15Z | Syntax-based Deep Matching of Short Texts | Many tasks in natural language processing, ranging from machine translation
to question answering, can be reduced to the problem of matching two sentences
or more generally two short texts. We propose a new approach to the problem,
called Deep Match Tree (DeepMatch$_{tree}$), under a general setting. The
approach consists of two components, 1) a mining algorithm to discover patterns
for matching two short-texts, defined in the product space of dependency trees,
and 2) a deep neural network for matching short texts using the mined patterns,
as well as a learning algorithm to build the network having a sparse structure.
We test our algorithm on the problem of matching a tweet and a response in
social media, a hard matching problem proposed in [Wang et al., 2013], and show
that DeepMatch$_{tree}$ can outperform a number of competitor models including
one without using dependency trees and one based on word-embedding, all with
large margins.
| [
"['Mingxuan Wang' 'Zhengdong Lu' 'Hang Li' 'Qun Liu']",
"Mingxuan Wang and Zhengdong Lu and Hang Li and Qun Liu"
]
|
cs.CL cs.AI cs.LG | null | 1503.02510 | null | null | http://arxiv.org/pdf/1503.02510v2 | 2015-04-17T23:54:37Z | 2015-03-09T15:13:38Z | Compositional Distributional Semantics with Long Short Term Memory | We are proposing an extension of the recursive neural network that makes use
of a variant of the long short-term memory architecture. The extension allows
information low in parse trees to be stored in a memory register (the `memory
cell') and used much later higher up in the parse tree. This provides a
solution to the vanishing gradient problem and allows the network to capture
long range dependencies. Experimental results show that our composition
outperformed the traditional neural-network composition on the Stanford
Sentiment Treebank.
| [
"Phong Le and Willem Zuidema",
"['Phong Le' 'Willem Zuidema']"
]
|
stat.ML cs.LG cs.NE | null | 1503.02531 | null | null | http://arxiv.org/pdf/1503.02531v1 | 2015-03-09T15:44:49Z | 2015-03-09T15:44:49Z | Distilling the Knowledge in a Neural Network | A very simple way to improve the performance of almost any machine learning
algorithm is to train many different models on the same data and then to
average their predictions. Unfortunately, making predictions using a whole
ensemble of models is cumbersome and may be too computationally expensive to
allow deployment to a large number of users, especially if the individual
models are large neural nets. Caruana and his collaborators have shown that it
is possible to compress the knowledge in an ensemble into a single model which
is much easier to deploy and we develop this approach further using a different
compression technique. We achieve some surprising results on MNIST and we show
that we can significantly improve the acoustic model of a heavily used
commercial system by distilling the knowledge in an ensemble of models into a
single model. We also introduce a new type of ensemble composed of one or more
full models and many specialist models which learn to distinguish fine-grained
classes that the full models confuse. Unlike a mixture of experts, these
specialist models can be trained rapidly and in parallel.
| [
"Geoffrey Hinton, Oriol Vinyals, Jeff Dean",
"['Geoffrey Hinton' 'Oriol Vinyals' 'Jeff Dean']"
]
|
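A minimal NumPy sketch of the kind of distillation objective described in the entry above: cross-entropy against the teacher's temperature-softened outputs blended with cross-entropy on the hard labels (the temperature, weighting, and T**2 scaling follow common practice and are assumptions for the example, not the authors' exact recipe):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=3.0, alpha=0.5):
    # Soft part: cross-entropy between teacher and student distributions at
    # temperature T (scaled by T**2 to keep gradient magnitudes comparable).
    soft = -(softmax(teacher_logits, T) *
             np.log(softmax(student_logits, T) + 1e-12)).sum(axis=-1).mean() * T ** 2
    # Hard part: ordinary cross-entropy with the true labels.
    log_p = np.log(softmax(student_logits) + 1e-12)
    hard = -log_p[np.arange(len(labels)), labels].mean()
    return alpha * soft + (1.0 - alpha) * hard
```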
stat.ML cs.LG | null | 1503.02551 | null | null | http://arxiv.org/pdf/1503.02551v2 | 2015-06-09T09:17:38Z | 2015-03-09T16:30:17Z | Kernel-Based Just-In-Time Learning for Passing Expectation Propagation
Messages | We propose an efficient nonparametric strategy for learning a message
operator in expectation propagation (EP), which takes as input the set of
incoming messages to a factor node, and produces an outgoing message as output.
This learned operator replaces the multivariate integral required in classical
EP, which may not have an analytic expression. We use kernel-based regression,
which is trained on a set of probability distributions representing the
incoming messages, and the associated outgoing messages. The kernel approach
has two main advantages: first, it is fast, as it is implemented using a novel
two-layer random feature representation of the input message distributions;
second, it has principled uncertainty estimates, and can be cheaply updated
online, meaning it can request and incorporate new training data when it
encounters inputs on which it is uncertain. In experiments, our approach is
able to solve learning problems where a single message operator is required for
multiple, substantially different data sets (logistic regression for a variety
of classification problems), where it is essential to accurately assess
uncertainty and to efficiently and robustly update the message operator.
| [
"['Wittawat Jitkrittum' 'Arthur Gretton' 'Nicolas Heess' 'S. M. Ali Eslami'\n 'Balaji Lakshminarayanan' 'Dino Sejdinovic' 'Zoltán Szabó']",
"Wittawat Jitkrittum, Arthur Gretton, Nicolas Heess, S. M. Ali Eslami,\n Balaji Lakshminarayanan, Dino Sejdinovic, Zolt\\'an Szab\\'o"
]
|
cs.LG cs.AI cs.SD | 10.1007/s00034-016-0310-y | 1503.02578 | null | null | http://arxiv.org/abs/1503.02578v2 | 2016-10-05T12:05:10Z | 2015-03-09T17:40:08Z | Modeling State-Conditional Observation Distribution using Weighted
Stereo Samples for Factorial Speech Processing Models | This paper investigates the effectiveness of factorial speech processing
models in noise-robust automatic speech recognition tasks. For this purpose,
the paper proposes an idealistic approach for modeling state-conditional
observation distribution of factorial models based on weighted stereo samples.
This approach extends the previous single-pass retraining for ideal
model compensation to support multiple audio sources.
Non-stationary noise can be considered as one of these audio sources with
multiple states. Experiments of this paper over the set A of the Aurora 2
dataset show that recognition performance can be improved by this
consideration. The improvement is significant in low signal to noise energy
conditions, up to 4% absolute word recognition accuracy. In addition to the
power of the proposed method in accurate representation of state-conditional
observation distribution, it has an important advantage over previous methods
by providing the opportunity to independently select feature spaces for both
source and corrupted features. This opens a new window for seeking better
feature spaces appropriate for noisy speech, independent from clean speech
features.
| [
"Mahdi Khademian, Mohammad Mehdi Homayounpour",
"['Mahdi Khademian' 'Mohammad Mehdi Homayounpour']"
]
|
stat.ML cs.LG math.AG | 10.1109/JSTSP.2016.2537145 | 1503.02596 | null | null | http://arxiv.org/abs/1503.02596v3 | 2016-10-11T13:54:25Z | 2015-03-09T18:12:58Z | A Characterization of Deterministic Sampling Patterns for Low-Rank
Matrix Completion | Low-rank matrix completion (LRMC) problems arise in a wide variety of
applications. Previous theory mainly provides conditions for completion under
missing-at-random samplings. This paper studies deterministic conditions for
completion. An incomplete $d \times N$ matrix is finitely rank-$r$ completable
if there are at most finitely many rank-$r$ matrices that agree with all its
observed entries. Finite completability is the tipping point in LRMC, as a few
additional samples of a finitely completable matrix guarantee its unique
completability. The main contribution of this paper is a deterministic sampling
condition for finite completability. We use this to also derive deterministic
sampling conditions for unique completability that can be efficiently verified.
We also show that under uniform random sampling schemes, these conditions are
satisfied with high probability if $O(\max\{r,\log d\})$ entries per column are
observed. These findings have several implications on LRMC regarding lower
bounds, sample and computational complexity, the role of coherence, adaptive
settings and the validation of any completion algorithm. We complement our
theoretical results with experiments that support our findings and motivate
future analysis of uncharted sampling regimes.
| [
"Daniel L. Pimentel-Alarc\\'on, Nigel Boston, Robert D. Nowak",
"['Daniel L. Pimentel-Alarcón' 'Nigel Boston' 'Robert D. Nowak']"
]
|
stat.ML cs.LG | null | 1503.02761 | null | null | http://arxiv.org/pdf/1503.02761v2 | 2015-03-13T01:36:18Z | 2015-03-10T03:27:34Z | An Adaptive Online HDP-HMM for Segmentation and Classification of
Sequential Data | In recent years, the desire and need to understand sequential data have
been increasing, with particular interest in sequential contexts such as
patient monitoring, understanding daily activities, video surveillance, stock
market and the like. Along with the constant flow of data, it is critical to
classify and segment the observations on-the-fly, without being limited to a
rigid number of classes. In addition, the model needs to be capable of updating
its parameters to comply with possible evolutions. This interesting problem,
however, is not adequately addressed in the literature since many studies focus
on offline classification over a pre-defined class set. In this paper, we
propose a principled solution to this gap by introducing an adaptive online
system based on Markov switching models with hierarchical Dirichlet process
priors. This infinite adaptive online approach is capable of segmenting and
classifying the sequential data over an unlimited number of classes, while meeting
the memory and delay constraints of streaming contexts. The model is further
enhanced by introducing a learning rate, responsible for balancing the extent
to which the model sustains its previous learning (parameters) or adapts to the
new streaming observations. Experimental results on several variants of
stationary and evolving synthetic data and two video datasets, TUM Assistive
Kitchen and collated Weizmann, show remarkable performance in segmentation and
classification, particularly for evolutionary sequences with changing
distributions and/or containing new, unseen classes.
| [
"['Ava Bargi' 'Richard Yi Da Xu' 'Massimo Piccardi']",
"Ava Bargi, Richard Yi Da Xu, Massimo Piccardi"
]
|
cs.LG cs.NA | null | 1503.02828 | null | null | http://arxiv.org/pdf/1503.02828v2 | 2015-03-18T02:20:44Z | 2015-03-10T09:42:17Z | Scalable Nuclear-norm Minimization by Subspace Pursuit Proximal
Riemannian Gradient | Nuclear-norm regularization plays a vital role in many learning tasks, such
as low-rank matrix recovery (MR), and low-rank representation (LRR). Solving
this problem directly can be computationally expensive due to the unknown rank
of variables or large-rank singular value decompositions (SVDs). To address
this, we propose a proximal Riemannian gradient (PRG) scheme which can
efficiently solve trace-norm regularized problems defined on real-algebraic
variety $\mathcal{M}_{\le r}$ of real matrices of rank at most $r$. Based on PRG, we further
present a simple and novel subspace pursuit (SP) paradigm for general
trace-norm regularized problems without the explicit rank constraint $\mathcal{M}_{\le r}$.
The proposed paradigm is very scalable by avoiding large-rank SVDs. Empirical
studies on several tasks, such as matrix completion and LRR based subspace
clustering, demonstrate the superiority of the proposed paradigms over existing
methods.
| [
"Mingkui Tan and Shijie Xiao and Junbin Gao and Dong Xu and Anton Van\n Den Hengel and Qinfeng Shi",
"['Mingkui Tan' 'Shijie Xiao' 'Junbin Gao' 'Dong Xu' 'Anton Van Den Hengel'\n 'Qinfeng Shi']"
]
|
cs.NE cs.LG | 10.1109/ICASSP.2015.7178129 | 1503.02852 | null | null | http://arxiv.org/abs/1503.02852v1 | 2015-03-10T10:27:55Z | 2015-03-10T10:27:55Z | Single stream parallelization of generalized LSTM-like RNNs on a GPU | Recurrent neural networks (RNNs) have shown outstanding performance on
processing sequence data. However, they suffer from long training time, which
demands parallel implementations of the training procedure. Parallelization of
the training algorithms for RNNs is very challenging because internal
recurrent paths form dependencies between two different time frames. In this
paper, we first propose a generalized graph-based RNN structure that covers the
most popular long short-term memory (LSTM) network. Then, we present a
parallelization approach that automatically explores parallelisms of arbitrary
RNNs by analyzing the graph structure. The experimental results show that the
proposed approach achieves significant speed-up even with a single training stream, and
further accelerates the training when combined with multiple parallel training
streams.
| [
"Kyuyeon Hwang and Wonyong Sung",
"['Kyuyeon Hwang' 'Wonyong Sung']"
]
|
cs.LG | null | 1503.02946 | null | null | http://arxiv.org/pdf/1503.02946v2 | 2015-03-15T15:38:07Z | 2015-03-10T15:09:25Z | apsis - Framework for Automated Optimization of Machine Learning Hyper
Parameters | The apsis toolkit presented in this paper provides a flexible framework for
hyperparameter optimization and includes both random search and a Bayesian
optimizer. It is implemented in Python and its architecture features
adaptability to any desired machine learning code. It can easily be used with
common Python ML frameworks such as scikit-learn. Published under the MIT
License, other researchers are encouraged to check out the code, contribute,
or raise suggestions. The code can be found at
github.com/FrederikDiehl/apsis.
| [
"['Frederik Diehl' 'Andreas Jauch']",
"Frederik Diehl, Andreas Jauch"
]
|
stat.ML cond-mat.dis-nn cs.LG | 10.7566/JPSJ.84.054801 | 1503.03132 | null | null | http://arxiv.org/abs/1503.03132v1 | 2015-03-11T00:21:51Z | 2015-03-11T00:21:51Z | L_1-regularized Boltzmann machine learning using majorizer minimization | We propose an inference method to estimate sparse interactions and biases
according to Boltzmann machine learning. The basis of this method is $L_1$
regularization, which is often used in compressed sensing, a technique for
reconstructing sparse input signals from undersampled outputs. $L_1$
regularization impedes the simple application of the gradient method, which
optimizes the cost function that leads to accurate estimations, owing to the
cost function's lack of smoothness. In this study, we utilize the majorizer
minimization method, which is a well-known technique implemented in
optimization problems, to avoid the non-smoothness of the cost function. By
using the majorizer minimization method, we elucidate essentially relevant
biases and interactions from given data with seemingly strongly-correlated
components.
| [
"Masayuki Ohzeki",
"['Masayuki Ohzeki']"
]
|
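The record above relies on a majorize-minimize treatment of the non-smooth $L_1$ penalty. As a minimal sketch of that general idea (not the paper's exact Boltzmann-machine update), the quadratic majorizer of a smooth loss plus an $L_1$ term is minimized by a gradient step followed by soft-thresholding; the function names and the toy lasso-style objective below are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def mm_l1_step(w, grad, lipschitz, lam):
    """One majorize-minimize step for f(w) + lam*||w||_1.

    The smooth part f is majorized by a quadratic with curvature `lipschitz`
    (an upper bound on the Hessian), so minimizing the majorizer reduces to a
    gradient step followed by soft-thresholding (an ISTA-style update)."""
    return soft_threshold(w - grad / lipschitz, lam / lipschitz)

# toy usage: lasso-like problem 0.5*||Xw - y||^2 + lam*||w||_1
rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 10)), rng.normal(size=50)
L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the smooth gradient
w = np.zeros(10)
for _ in range(200):
    w = mm_l1_step(w, X.T @ (X @ w - y), L, lam=1.0)
```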
cs.LG stat.ML | 10.1016/j.neunet.2020.08.013 | 1503.03148 | null | null | http://arxiv.org/abs/1503.03148v1 | 2015-03-11T02:10:26Z | 2015-03-11T02:10:26Z | A Neurodynamical System for finding a Minimal VC Dimension Classifier | The recently proposed Minimal Complexity Machine (MCM) finds a hyperplane
classifier by minimizing an exact bound on the Vapnik-Chervonenkis (VC)
dimension. The VC dimension measures the capacity of a learning machine, and a
smaller VC dimension leads to improved generalization. On many benchmark
datasets, the MCM generalizes better than SVMs and uses far fewer support
vectors than the number used by SVMs. In this paper, we describe a neural
network based on a linear dynamical system, that converges to the MCM solution.
The proposed MCM dynamical system is conducive to an analogue circuit
implementation on a chip or simulation using Ordinary Differential Equation
(ODE) solvers. Numerical experiments on benchmark datasets from the UCI
repository show that the proposed approach is scalable and accurate, as we
obtain improved accuracies and fewer support vectors (up to 74.3%
reduction) with the MCM dynamical system.
| [
"Jayadeva, Sumit Soman, Amit Bhaya",
"['Jayadeva' 'Sumit Soman' 'Amit Bhaya']"
]
|
cs.CV cs.LG | null | 1503.03163 | null | null | http://arxiv.org/pdf/1503.03163v1 | 2015-03-11T03:31:53Z | 2015-03-11T03:31:53Z | Learning Classifiers from Synthetic Data Using a Multichannel
Autoencoder | We propose a method for using synthetic data to help learning classifiers.
Synthetic data, even if generated based on real data, normally results in a
shift from the distribution of real data in feature space. To bridge the gap
between the real and synthetic data, and jointly learn from synthetic and real
data, this paper proposes a Multichannel Autoencoder (MCAE). We show that by
using MCAE, it is possible to learn a better feature representation for
classification. To evaluate the proposed approach, we conduct experiments on
two types of datasets. Experimental results on two datasets validate the
efficiency of our MCAE model and our methodology of generating synthetic data.
| [
"['Xi Zhang' 'Yanwei Fu' 'Andi Zang' 'Leonid Sigal' 'Gady Agam']",
"Xi Zhang, Yanwei Fu, Andi Zang, Leonid Sigal, Gady Agam"
]
|
cs.CV cs.GR cs.LG cs.NE | null | 1503.03167 | null | null | http://arxiv.org/pdf/1503.03167v4 | 2015-06-22T02:10:00Z | 2015-03-11T04:08:42Z | Deep Convolutional Inverse Graphics Network | This paper presents the Deep Convolution Inverse Graphics Network (DC-IGN), a
model that learns an interpretable representation of images. This
representation is disentangled with respect to transformations such as
out-of-plane rotations and lighting variations. The DC-IGN model is composed of
multiple layers of convolution and de-convolution operators and is trained
using the Stochastic Gradient Variational Bayes (SGVB) algorithm. We propose a
training procedure to encourage neurons in the graphics code layer to represent
a specific transformation (e.g. pose or light). Given a single input image, our
model can generate new images of the same object with variations in pose and
lighting. We present qualitative and quantitative results of the model's
efficacy at learning a 3D rendering engine.
| [
"['Tejas D. Kulkarni' 'Will Whitney' 'Pushmeet Kohli' 'Joshua B. Tenenbaum']",
"Tejas D. Kulkarni, Will Whitney, Pushmeet Kohli, Joshua B. Tenenbaum"
]
|
cs.LG | null | 1503.03238 | null | null | http://arxiv.org/pdf/1503.03238v1 | 2015-03-11T09:38:49Z | 2015-03-11T09:38:49Z | Scalable Discovery of Time-Series Shapelets | Time-series classification is an important problem for the data mining
community due to the wide range of application domains involving time-series
data. A recent paradigm, called shapelets, represents patterns that are highly
predictive for the target variable. Shapelets are discovered by measuring the
prediction accuracy of a set of potential (shapelet) candidates. The candidates
typically consist of all the segments of a dataset, therefore, the discovery of
shapelets is computationally expensive. This paper proposes a novel method that
avoids measuring the prediction accuracy of similar candidates in Euclidean
distance space, through an online clustering pruning technique. In addition,
our algorithm incorporates a supervised shapelet selection that filters out
only those candidates that improve classification accuracy. Empirical evidence
on 45 datasets from the UCR collection demonstrates that our method is 3-4
orders of magnitude faster than the fastest existing shapelet-discovery
method, while providing better prediction accuracy.
| [
"Josif Grabocka, Martin Wistuba, Lars Schmidt-Thieme",
"['Josif Grabocka' 'Martin Wistuba' 'Lars Schmidt-Thieme']"
]
|
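The shapelet discovery described above repeatedly evaluates one primitive: the distance of a candidate subsequence to a whole series, taken as the minimum Euclidean distance over all equal-length segments. A minimal sketch of that primitive follows; the online clustering pruning and supervised selection from the paper are not shown, and the function name and toy data are illustrative.

```python
import numpy as np

def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between `shapelet` and any equal-length
    segment of `series` -- the feature shapelet methods feed to a classifier."""
    m = len(shapelet)
    dists = [np.linalg.norm(series[i:i + m] - shapelet)
             for i in range(len(series) - m + 1)]
    return min(dists)

# toy usage: a candidate taken from one series, scored against another
rng = np.random.default_rng(0)
a, b = rng.normal(size=100), rng.normal(size=100)
candidate = a[20:35]
print(shapelet_distance(b, candidate))
```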
cs.CL cs.LG cs.NE | null | 1503.03244 | null | null | http://arxiv.org/pdf/1503.03244v1 | 2015-03-11T09:46:36Z | 2015-03-11T09:46:36Z | Convolutional Neural Network Architectures for Matching Natural Language
Sentences | Semantic matching is of central importance to many natural language tasks
\cite{bordes2014semantic,RetrievalQA}. A successful matching algorithm needs to
adequately model the internal structures of language objects and the
interaction between them. As a step toward this goal, we propose convolutional
neural network models for matching two sentences, by adapting the convolutional
strategy in vision and speech. The proposed models not only nicely represent
the hierarchical structures of sentences with their layer-by-layer composition
and pooling, but also capture the rich matching patterns at different levels.
Our models are rather generic, requiring no prior knowledge on language, and
can hence be applied to matching tasks of different nature and in different
languages. The empirical study on a variety of matching tasks demonstrates the
efficacy of the proposed model and its superiority to competitor models.
| [
"Baotian Hu, Zhengdong Lu, Hang Li, Qingcai Chen",
"['Baotian Hu' 'Zhengdong Lu' 'Hang Li' 'Qingcai Chen']"
]
|
stat.ML cs.LG cs.NA stat.AP | null | 1503.03355 | null | null | http://arxiv.org/pdf/1503.03355v1 | 2015-03-11T14:34:46Z | 2015-03-11T14:34:46Z | Automatic Unsupervised Tensor Mining with Quality Assessment | A popular tool for unsupervised modelling and mining multi-aspect data is
tensor decomposition. In an exploratory setting, where no labels or ground
truth are available, how can we automatically decide how many components to
extract? How can we assess the quality of our results, so that a domain expert
can factor this quality measure in the interpretation of our results? In this
paper, we introduce AutoTen, a novel automatic unsupervised tensor mining
algorithm with minimal user intervention, which leverages and improves upon
heuristics that assess the result quality. We extensively evaluate AutoTen's
performance on synthetic data, outperforming existing baselines on this very
hard problem. Finally, we apply AutoTen on a variety of real datasets,
providing insights and discoveries. We view this work as a step towards a fully
automated, unsupervised tensor mining tool that can be easily adopted by
practitioners in academia and industry.
| [
"['Evangelos E. Papalexakis']",
"Evangelos E. Papalexakis"
]
|
cs.LG cs.NE stat.ML | null | 1503.03438 | null | null | http://arxiv.org/pdf/1503.03438v3 | 2015-12-12T19:04:02Z | 2015-03-11T18:24:13Z | A mathematical motivation for complex-valued convolutional networks | A complex-valued convolutional network (convnet) implements the repeated
application of the following composition of three operations, recursively
applying the composition to an input vector of nonnegative real numbers: (1)
convolution with complex-valued vectors followed by (2) taking the absolute
value of every entry of the resulting vectors followed by (3) local averaging.
For processing real-valued random vectors, complex-valued convnets can be
viewed as "data-driven multiscale windowed power spectra," "data-driven
multiscale windowed absolute spectra," "data-driven multiwavelet absolute
values," or (in their most general configuration) "data-driven nonlinear
multiwavelet packets." Indeed, complex-valued convnets can calculate multiscale
windowed spectra when the convnet filters are windowed complex-valued
exponentials. Standard real-valued convnets, using rectified linear units
(ReLUs), sigmoidal (for example, logistic or tanh) nonlinearities, max.
pooling, etc., do not obviously exhibit the same exact correspondence with
data-driven wavelets (whereas for complex-valued convnets, the correspondence
is much more than just a vague analogy). Courtesy of the exact correspondence,
the remarkably rich and rigorous body of mathematical analysis for wavelets
applies directly to (complex-valued) convnets.
| [
"['Joan Bruna' 'Soumith Chintala' 'Yann LeCun' 'Serkan Piantino'\n 'Arthur Szlam' 'Mark Tygert']",
"Joan Bruna, Soumith Chintala, Yann LeCun, Serkan Piantino, Arthur\n Szlam, and Mark Tygert"
]
|
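As a minimal sketch of the three-operation composition described above (complex-valued convolution, entrywise absolute value, local averaging), the snippet below applies one such stage to a 1D signal. Using windowed complex exponentials as filters, so that the stage computes a windowed absolute spectrum, is an illustrative choice, and all names are hypothetical.

```python
import numpy as np

def complex_convnet_stage(x, filters, pool=4):
    """One stage: convolve with complex filters, take |.|, then local average."""
    outputs = []
    for h in filters:
        y = np.abs(np.convolve(x, h, mode="valid"))      # steps (1)-(2)
        n = (len(y) // pool) * pool                       # step (3): local averaging
        outputs.append(y[:n].reshape(-1, pool).mean(axis=1))
    return np.stack(outputs)

# windowed complex exponentials as example filters (a "windowed spectrum")
taps = 16
window = np.hanning(taps)
freqs = [0.05, 0.1, 0.2]
filters = [window * np.exp(2j * np.pi * f * np.arange(taps)) for f in freqs]

x = np.random.default_rng(0).normal(size=256)
features = complex_convnet_stage(x, filters)
```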
cs.LG | null | 1503.03488 | null | null | http://arxiv.org/pdf/1503.03488v2 | 2016-02-10T22:28:16Z | 2015-03-07T22:45:54Z | Estimating the Mean Number of K-Means Clusters to Form | Utilizing the sample size of a dataset, the random cluster model is employed
in order to derive an estimate of the mean number of K-Means clusters to form
during classification of a dataset.
| [
"['Robert A. Murphy']",
"Robert A. Murphy"
]
|
cs.LG cs.AI cs.CV | null | 1503.03506 | null | null | http://arxiv.org/pdf/1503.03506v1 | 2015-03-11T21:09:28Z | 2015-03-11T21:09:28Z | Diverse Landmark Sampling from Determinantal Point Processes for
Scalable Manifold Learning | High computational costs of manifold learning prohibit its application for
large point sets. A common strategy to overcome this problem is to perform
dimensionality reduction on selected landmarks and to successively embed the
entire dataset with the Nystr\"om method. The two main challenges that arise
are: (i) the landmarks selected in non-Euclidean geometries must result in a
low reconstruction error, (ii) the graph constructed from sparsely sampled
landmarks must approximate the manifold well. We propose the sampling of
landmarks from determinantal distributions on non-Euclidean spaces. Since
current determinantal sampling algorithms have the same complexity as those for
manifold learning, we present an efficient approximation running in linear
time. Further, we recover the local geometry after the sparsification by
assigning each landmark a local covariance matrix, estimated from the original
point set. The resulting neighborhood selection based on the Bhattacharyya
distance improves the embedding of sparsely sampled manifolds. Our experiments
show a significant performance improvement compared to state-of-the-art
landmark selection techniques.
| [
"Christian Wachinger and Polina Golland",
"['Christian Wachinger' 'Polina Golland']"
]
|
cs.LG math.OC stat.ML | null | 1503.03517 | null | null | http://arxiv.org/pdf/1503.03517v1 | 2015-03-11T22:05:50Z | 2015-03-11T22:05:50Z | Switching to Learn | A network of agents attempt to learn some unknown state of the world drawn by
nature from a finite set. Agents observe private signals conditioned on the
true state, and form beliefs about the unknown state accordingly. Each agent
may face an identification problem in the sense that she cannot distinguish the
truth in isolation. However, by communicating with each other, agents are able
to benefit from side observations to learn the truth collectively. Unlike many
distributed algorithms which rely on all-time communication protocols, we
propose an efficient method by switching between Bayesian and non-Bayesian
regimes. In this model, agents exchange information only when their private
signals are not informative enough; thence, by switching between the two
regimes, agents efficiently learn the truth using only a few rounds of
communications. The proposed algorithm preserves learnability while incurring a
lower communication cost. We also verify our theoretical findings by simulation
examples.
| [
"Shahin Shahrampour, Mohammad Amin Rahimian, Ali Jadbabaie",
"['Shahin Shahrampour' 'Mohammad Amin Rahimian' 'Ali Jadbabaie']"
]
|
cs.NE cs.CV cs.LG | null | 1503.03562 | null | null | http://arxiv.org/pdf/1503.03562v3 | 2015-03-22T21:47:56Z | 2015-03-12T02:24:31Z | Training Binary Multilayer Neural Networks for Image Classification
using Expectation Backpropagation | Compared to Multilayer Neural Networks with real weights, Binary Multilayer
Neural Networks (BMNNs) can be implemented more efficiently on dedicated
hardware. BMNNs have been demonstrated to be effective on binary classification
tasks with Expectation BackPropagation (EBP) algorithm on high dimensional text
datasets. In this paper, we investigate the capability of BMNNs using the EBP
algorithm on multiclass image classification tasks. The performances of binary
neural networks with multiple hidden layers and different numbers of hidden
units are examined on MNIST. We also explore the effectiveness of image spatial
filters and the dropout technique in BMNNs. Experimental results on MNIST
dataset show that EBP can obtain 2.12% test error with binary weights and 1.66%
test error with real weights, which is comparable to the results of standard
BackPropagation algorithm on fully connected MNNs.
| [
"Zhiyong Cheng, Daniel Soudry, Zexi Mao, Zhenzhong Lan",
"['Zhiyong Cheng' 'Daniel Soudry' 'Zexi Mao' 'Zhenzhong Lan']"
]
|
cs.LG | 10.1145/2736277.2741093 | 1503.03578 | null | null | http://arxiv.org/abs/1503.03578v1 | 2015-03-12T04:07:32Z | 2015-03-12T04:07:32Z | LINE: Large-scale Information Network Embedding | This paper studies the problem of embedding very large information networks
into low-dimensional vector spaces, which is useful in many tasks such as
visualization, node classification, and link prediction. Most existing graph
embedding methods do not scale for real world information networks which
usually contain millions of nodes. In this paper, we propose a novel network
embedding method called the "LINE," which is suitable for arbitrary types of
information networks: undirected, directed, and/or weighted. The method
optimizes a carefully designed objective function that preserves both the local
and global network structures. An edge-sampling algorithm is proposed that
addresses the limitation of the classical stochastic gradient descent and
improves both the effectiveness and the efficiency of the inference. Empirical
experiments prove the effectiveness of the LINE on a variety of real-world
information networks, including language networks, social networks, and
citation networks. The algorithm is very efficient, able to learn the
embedding of a network with millions of vertices and billions of edges in a few
hours on a typical single machine. The source code of the LINE is available
online.
| [
"Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, Qiaozhu Mei",
"['Jian Tang' 'Meng Qu' 'Mingzhe Wang' 'Ming Zhang' 'Jun Yan' 'Qiaozhu Mei']"
]
|
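A minimal sketch of the first-order variant of the idea above: sample an edge, push the endpoint embeddings together through a sigmoid of their dot product, and push a few negatively sampled nodes apart. This is not the released LINE implementation; uniform edge/node sampling (rather than degree-weighted alias sampling) and all names and hyperparameters are simplifying assumptions.

```python
import numpy as np

def line_first_order(edges, n_nodes, dim=8, lr=0.025, neg=5, n_samples=50_000, seed=0):
    """First-order LINE sketch: maximize sigmoid(u_i . u_j) over sampled edges,
    with `neg` negative nodes per edge (simplified uniform sampling)."""
    rng = np.random.default_rng(seed)
    emb = rng.normal(scale=0.1, size=(n_nodes, dim))
    edges = np.asarray(edges)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(n_samples):
        i, j = edges[rng.integers(len(edges))]
        targets = [(j, 1.0)] + [(rng.integers(n_nodes), 0.0) for _ in range(neg)]
        grad_i = np.zeros(dim)
        for k, label in targets:
            g = sigmoid(emb[i] @ emb[k]) - label   # gradient of the logistic loss
            grad_i += g * emb[k]
            emb[k] -= lr * g * emb[i]
        emb[i] -= lr * grad_i
    return emb

emb = line_first_order([(0, 1), (1, 2), (2, 0), (3, 4)], n_nodes=5)
```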
cs.LG cond-mat.dis-nn q-bio.NC stat.ML | null | 1503.03585 | null | null | http://arxiv.org/pdf/1503.03585v8 | 2015-11-18T21:50:51Z | 2015-03-12T04:51:37Z | Deep Unsupervised Learning using Nonequilibrium Thermodynamics | A central problem in machine learning involves modeling complex data-sets
using highly flexible families of probability distributions in which learning,
sampling, inference, and evaluation are still analytically or computationally
tractable. Here, we develop an approach that simultaneously achieves both
flexibility and tractability. The essential idea, inspired by non-equilibrium
statistical physics, is to systematically and slowly destroy structure in a
data distribution through an iterative forward diffusion process. We then learn
a reverse diffusion process that restores structure in data, yielding a highly
flexible and tractable generative model of the data. This approach allows us to
rapidly learn, sample from, and evaluate probabilities in deep generative
models with thousands of layers or time steps, as well as to compute
conditional and posterior probabilities under the learned model. We
additionally release an open source reference implementation of the algorithm.
| [
"Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, Surya\n Ganguli",
"['Jascha Sohl-Dickstein' 'Eric A. Weiss' 'Niru Maheswaranathan'\n 'Surya Ganguli']"
]
|
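A minimal sketch of the forward, structure-destroying half of the approach above: repeatedly apply a Gaussian transition kernel $q(x_t \mid x_{t-1}) = \mathcal{N}(\sqrt{1-\beta_t}\,x_{t-1}, \beta_t I)$ until the data distribution is washed out. The learned reverse process is omitted, and the noise schedule below is an illustrative assumption.

```python
import numpy as np

def forward_diffusion(x0, betas, rng):
    """Iteratively add Gaussian noise: q(x_t | x_{t-1}) = N(sqrt(1-b_t) x_{t-1}, b_t I).
    Returns the whole trajectory; x_T approaches an isotropic Gaussian."""
    xs = [x0]
    for b in betas:
        xs.append(np.sqrt(1.0 - b) * xs[-1] + np.sqrt(b) * rng.normal(size=x0.shape))
    return xs

rng = np.random.default_rng(0)
x0 = rng.normal(loc=3.0, scale=0.5, size=(1000, 2))   # structured "data"
betas = np.linspace(1e-4, 0.05, 200)                  # illustrative noise schedule
trajectory = forward_diffusion(x0, betas, rng)
# training would fit a reverse model p(x_{t-1} | x_t) to invert each step
```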
cs.LG cs.CC | null | 1503.03594 | null | null | http://arxiv.org/pdf/1503.03594v1 | 2015-03-12T05:38:19Z | 2015-03-12T05:38:19Z | Efficient Learning of Linear Separators under Bounded Noise | We study the learnability of linear separators in $\Re^d$ in the presence of
bounded (a.k.a Massart) noise. This is a realistic generalization of the random
classification noise model, where the adversary can flip each example $x$ with
probability $\eta(x) \leq \eta$. We provide the first polynomial time algorithm
that can learn linear separators to arbitrarily small excess error in this
noise model under the uniform distribution over the unit ball in $\Re^d$, for
some constant value of $\eta$. While widely studied in the statistical learning
theory community in the context of getting faster convergence rates,
computationally efficient algorithms in this model had remained elusive. Our
work provides the first evidence that one can indeed design algorithms
achieving arbitrarily small excess error in polynomial time under this
realistic noise model and thus opens up a new and exciting line of research.
We additionally provide lower bounds showing that popular algorithms such as
hinge loss minimization and averaging cannot lead to arbitrarily small excess
error under Massart noise, even under the uniform distribution. Our work
instead, makes use of a margin based technique developed in the context of
active learning. As a result, our algorithm is also an active learning
algorithm with label complexity that is only logarithmic in the desired excess
error $\epsilon$.
| [
"Pranjal Awasthi, Maria-Florina Balcan, Nika Haghtalab, Ruth Urner",
"['Pranjal Awasthi' 'Maria-Florina Balcan' 'Nika Haghtalab' 'Ruth Urner']"
]
|
stat.ML cs.IT cs.LG math.IT math.PR math.ST stat.TH | null | 1503.03613 | null | null | http://arxiv.org/pdf/1503.03613v1 | 2015-03-12T07:27:24Z | 2015-03-12T07:27:24Z | On the Impossibility of Learning the Missing Mass | This paper shows that one cannot learn the probability of rare events without
imposing further structural assumptions. The event of interest is that of
obtaining an outcome outside the coverage of an i.i.d. sample from a discrete
distribution. The probability of this event is referred to as the "missing
mass". The impossibility result can then be stated as: the missing mass is not
distribution-free PAC-learnable in relative error. The proof is
semi-constructive and relies on a coupling argument using a dithered geometric
distribution. This result formalizes the folklore that in order to predict rare
events, one necessarily needs distributions with "heavy tails".
| [
"['Elchanan Mossel' 'Mesrob I. Ohannessian']",
"Elchanan Mossel and Mesrob I. Ohannessian"
]
|
stat.ML cs.IR cs.LG | null | 1503.03701 | null | null | http://arxiv.org/pdf/1503.03701v4 | 2016-06-08T15:05:38Z | 2015-03-12T12:59:25Z | Hierarchical learning of grids of microtopics | The counting grid is a grid of microtopics, sparse word/feature
distributions. The generative model associated with the grid does not use these
microtopics individually. Rather, it groups them in overlapping rectangular
windows and uses these grouped microtopics as either mixture or admixture
components. This paper builds upon the basic counting grid model and it shows
that hierarchical reasoning helps avoid bad local minima, produces better
classification accuracy and, most interestingly, allows for extraction of large
numbers of coherent microtopics even from small datasets. We evaluate this in
terms of consistency, diversity and clarity of the indexed content, as well as
in a user study on word intrusion tasks. We demonstrate that these models work
well as a technique for embedding raw images and discuss interesting parallels
between hierarchical CG models and other deep architectures.
| [
"Nebojsa Jojic and Alessandro Perina and Dongwoo Kim",
"['Nebojsa Jojic' 'Alessandro Perina' 'Dongwoo Kim']"
]
|
cs.LG math.OC | null | 1503.03712 | null | null | http://arxiv.org/pdf/1503.03712v2 | 2015-07-08T05:14:22Z | 2015-03-12T13:39:28Z | On Graduated Optimization for Stochastic Non-Convex Problems | The graduated optimization approach, also known as the continuation method,
is a popular heuristic to solving non-convex problems that has received renewed
interest over the last decade. Despite its popularity, very little is known in
terms of theoretical convergence analysis. In this paper we describe a new
first-order algorithm based on graduated optimization and analyze its
performance. We characterize a parameterized family of non-convex functions
for which this algorithm provably converges to a global optimum. In particular,
we prove that the algorithm converges to an $\epsilon$-approximate solution
within $O(1/\epsilon^2)$ gradient-based steps. We extend our algorithm and
analysis to the setting of stochastic non-convex optimization with noisy
gradient feedback, attaining the same convergence rate. Additionally, we
discuss the setting of zero-order optimization, and devise a variant of our
algorithm which converges at a rate of $O(d^2/\epsilon^4)$.
| [
"['Elad Hazan' 'Kfir Y. Levy' 'Shai Shalev-Shwartz']",
"Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz"
]
|
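A minimal sketch of the graduated-optimization idea discussed above: optimize a Gaussian-smoothed version of the objective and shrink the smoothing radius over time. The sample-based (zero-order) gradient estimate, the schedule of radii, and the toy objective are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def graduated_optimization(f, x0, radii=(2.0, 1.0, 0.5, 0.1), steps=200,
                           lr=0.05, n_mc=20, seed=0):
    """Minimize a smoothed version of f, gradually reducing the smoothing radius.
    The gradient of E[f(x + r*u)], u ~ N(0, I), is estimated from samples."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for r in radii:                                   # coarse-to-fine schedule
        for _ in range(steps):
            u = rng.normal(size=(n_mc,) + x.shape)
            # zero-order estimate: grad ~ E[(f(x+ru) - f(x)) u] / r
            diff = (f(x + r * u) - f(x)).reshape(-1, *([1] * x.ndim))
            x -= lr * (diff * u).mean(axis=0) / r
    return x

# toy non-convex objective with many local minima around a global basin
f = lambda z: np.sum(z**2, axis=-1) + 0.5 * np.sum(np.sin(8 * z), axis=-1)
print(graduated_optimization(f, x0=np.array([2.5, -1.7])))
```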
stat.ML cs.LG | null | 1503.03893 | null | null | http://arxiv.org/pdf/1503.03893v1 | 2015-03-12T21:19:13Z | 2015-03-12T21:19:13Z | Compact Nonlinear Maps and Circulant Extensions | Kernel approximation via nonlinear random feature maps is widely used in
speeding up kernel machines. There are two main challenges for the conventional
kernel approximation methods. First, before performing kernel approximation, a
good kernel has to be chosen. Picking a good kernel is a very challenging
problem in itself. Second, high-dimensional maps are often required in order to
achieve good performance. This leads to high computational cost in both
generating the nonlinear maps, and in the subsequent learning and prediction
process. In this work, we propose to optimize the nonlinear maps directly with
respect to the classification objective in a data-dependent fashion. The
proposed approach achieves kernel approximation and kernel learning in a joint
framework. This leads to much more compact maps without hurting the
performance. As a by-product, the same framework can also be used to achieve
more compact kernel maps to approximate a known kernel. We also introduce
Circulant Nonlinear Maps, which uses a circulant-structured projection matrix
to speed up the nonlinear maps for high-dimensional data.
| [
"Felix X. Yu, Sanjiv Kumar, Henry Rowley, Shih-Fu Chang",
"['Felix X. Yu' 'Sanjiv Kumar' 'Henry Rowley' 'Shih-Fu Chang']"
]
|
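The data-independent baseline this work improves on is the random Fourier feature map of Rahimi and Recht for the Gaussian kernel; the paper's contribution is to optimize such maps against the classification objective and to impose circulant structure. A minimal sketch of the baseline map is below (the learned and circulant variants are not shown, and all names are illustrative).

```python
import numpy as np

def random_fourier_features(X, dim=256, gamma=1.0, seed=0):
    """Approximate the RBF kernel k(x,y)=exp(-gamma*||x-y||^2) so that
    z(x).z(y) ~ k(x,y), using random cosine features."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, dim))  # spectral sampling
    b = rng.uniform(0, 2 * np.pi, size=dim)
    return np.sqrt(2.0 / dim) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(5, 3))
Z = random_fourier_features(X, dim=2048, gamma=0.5)
approx_kernel = Z @ Z.T        # compare against the exact RBF kernel matrix
```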
cs.LG cs.IT cs.NA math.IT stat.ML | null | 1503.03903 | null | null | http://arxiv.org/pdf/1503.03903v1 | 2015-03-12T22:16:55Z | 2015-03-12T22:16:55Z | Approximating Sparse PCA from Incomplete Data | We study how well one can recover sparse principal components of a data
matrix using a sketch formed from a few of its elements. We show that for a
wide class of optimization problems, if the sketch is close (in the spectral
norm) to the original data matrix, then one can recover a near optimal solution
to the optimization problem by using the sketch. In particular, we use this
approach to obtain sparse principal components and show that for $m$ data
points in $n$ dimensions, $O(\epsilon^{-2}\tilde k\max\{m,n\})$
elements gives an $\epsilon$-additive approximation to the sparse PCA
problem ($\tilde k$ is the stable rank of the data matrix). We demonstrate
our algorithms extensively on image, text, biological and financial data. The
results show that not only are we able to recover the sparse PCAs from the
incomplete data, but by using our sparse sketch, the running time drops by a
factor of five or more.
| [
"['Abhisek Kundu' 'Petros Drineas' 'Malik Magdon-Ismail']",
"Abhisek Kundu, Petros Drineas, Malik Magdon-Ismail"
]
|
cs.AI cs.LG physics.data-an stat.ML | 10.1007/s00354-016-0306-y | 1503.03964 | null | null | http://arxiv.org/abs/1503.03964v1 | 2015-03-13T06:53:01Z | 2015-03-13T06:53:01Z | Interactive Restless Multi-armed Bandit Game and Swarm Intelligence
Effect | We obtain the conditions for the emergence of the swarm intelligence effect
in an interactive game of restless multi-armed bandit (rMAB). A player competes
with multiple agents. Each bandit has a payoff that changes with a probability
$p_{c}$ per round. The agents and player choose one of three options: (1)
Exploit (a good bandit), (2) Innovate (asocial learning for a good bandit among
$n_{I}$ randomly chosen bandits), and (3) Observe (social learning for a good
bandit). Each agent has two parameters $(c,p_{obs})$ to specify the decision:
(i) $c$, the threshold value for Exploit, and (ii) $p_{obs}$, the probability
for Observe in learning. The parameters $(c,p_{obs})$ are uniformly
distributed. We determine the optimal strategies for the player using complete
knowledge about the rMAB. We show whether or not social or asocial learning is
more optimal in the $(p_{c},n_{I})$ space and define the swarm intelligence
effect. We conduct a laboratory experiment (67 subjects) and observe the swarm
intelligence effect only if $(p_{c},n_{I})$ are chosen so that social learning
is far more optimal than asocial learning.
| [
"['Shunsuke Yoshida' 'Masato Hisakado' 'Shintaro Mori']",
"Shunsuke Yoshida, Masato Hisakado and Shintaro Mori"
]
|
cs.NE cs.LG | 10.1109/TNNLS.2016.2582924 | 1503.04069 | null | null | http://arxiv.org/abs/1503.04069v2 | 2017-10-04T11:40:31Z | 2015-03-13T14:01:38Z | LSTM: A Search Space Odyssey | Several variants of the Long Short-Term Memory (LSTM) architecture for
recurrent neural networks have been proposed since its inception in 1995. In
recent years, these networks have become the state-of-the-art models for a
variety of machine learning problems. This has led to a renewed interest in
understanding the role and utility of various computational components of
typical LSTM variants. In this paper, we present the first large-scale analysis
of eight LSTM variants on three representative tasks: speech recognition,
handwriting recognition, and polyphonic music modeling. The hyperparameters of
all LSTM variants for each task were optimized separately using random search,
and their importance was assessed using the powerful fANOVA framework. In
total, we summarize the results of 5400 experimental runs ($\approx 15$ years
of CPU time), which makes our study the largest of its kind on LSTM networks.
Our results show that none of the variants can improve upon the standard LSTM
architecture significantly, and demonstrate the forget gate and the output
activation function to be its most critical components. We further observe that
the studied hyperparameters are virtually independent and derive guidelines for
their efficient adjustment.
| [
"['Klaus Greff' 'Rupesh Kumar Srivastava' 'Jan Koutník'\n 'Bas R. Steunebrink' 'Jürgen Schmidhuber']",
"Klaus Greff, Rupesh Kumar Srivastava, Jan Koutn\\'ik, Bas R.\n Steunebrink, J\\\"urgen Schmidhuber"
]
|
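For reference, a minimal numpy forward pass of a plain LSTM cell with input, forget, and output gates and a tanh output activation (the components the study finds most critical). Peephole connections, which the paper's vanilla LSTM includes, are omitted here, and all shapes and names are illustrative.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell.
    W, U, b stack the input, forget, cell-candidate and output gate parameters."""
    sigm = lambda z: 1.0 / (1.0 + np.exp(-z))
    z = W @ x + U @ h_prev + b                 # shape (4*hidden,)
    i, f, g, o = np.split(z, 4)
    i, f, o = sigm(i), sigm(f), sigm(o)        # input, forget, output gates
    g = np.tanh(g)                             # candidate cell value
    c = f * c_prev + i * g                     # cell state update
    h = o * np.tanh(c)                         # output activation on the cell
    return h, c

hidden, n_in = 8, 4
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * hidden, n_in))
U = rng.normal(size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
h, c = np.zeros(hidden), np.zeros(hidden)
for x in rng.normal(size=(10, n_in)):          # run over a short sequence
    h, c = lstm_step(x, h, c, W, U, b)
```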
cs.LG | null | 1503.04269 | null | null | http://arxiv.org/pdf/1503.04269v2 | 2015-04-21T02:21:57Z | 2015-03-14T04:44:20Z | An Emphatic Approach to the Problem of Off-policy Temporal-Difference
Learning | In this paper we introduce the idea of improving the performance of
parametric temporal-difference (TD) learning algorithms by selectively
emphasizing or de-emphasizing their updates on different time steps. In
particular, we show that varying the emphasis of linear TD($\lambda$)'s updates
in a particular way causes its expected update to become stable under
off-policy training. The only prior model-free TD methods to achieve this with
per-step computation linear in the number of function approximation parameters
are the gradient-TD family of methods including TDC, GTD($\lambda$), and
GQ($\lambda$). Compared to these methods, our _emphatic TD($\lambda$)_ is
simpler and easier to use; it has only one learned parameter vector and one
step-size parameter. Our treatment includes general state-dependent discounting
and bootstrapping functions, and a way of specifying varying degrees of
interest in accurately valuing different states.
| [
"['Richard S. Sutton' 'A. Rupam Mahmood' 'Martha White']",
"Richard S. Sutton, A. Rupam Mahmood, Martha White"
]
|
stat.ML cs.LG | null | 1503.04337 | null | null | http://arxiv.org/pdf/1503.04337v3 | 2015-08-11T17:16:01Z | 2015-03-14T19:43:30Z | Communication-efficient sparse regression: a one-shot approach | We devise a one-shot approach to distributed sparse regression in the
high-dimensional setting. The key idea is to average "debiased" or
"desparsified" lasso estimators. We show the approach converges at the same
rate as the lasso as long as the dataset is not split across too many machines.
We also extend the approach to generalized linear models.
| [
"['Jason D. Lee' 'Yuekai Sun' 'Qiang Liu' 'Jonathan E. Taylor']",
"Jason D. Lee, Yuekai Sun, Qiang Liu, Jonathan E. Taylor"
]
|
quant-ph cs.CV cs.LG | null | 1503.04400 | null | null | http://arxiv.org/pdf/1503.04400v1 | 2015-03-15T09:04:22Z | 2015-03-15T09:04:22Z | Separable and non-separable data representation for pattern
discrimination | We provide a complete work-flow, based on the language of quantum information
theory, suitable for processing data for the purpose of pattern recognition.
The main advantage of the introduced scheme is that it can be easily
implemented and applied to process real-world data using modest computation
resources. At the same time it can be used to investigate the difference in the
pattern recognition resulting from the utilization of the tensor product
structure of the space of quantum states. We illustrate this difference by
providing a simple example based on the classification of 2D data.
| [
"Jaros{\\l}aw Adam Miszczak",
"['Jarosław Adam Miszczak']"
]
|
cs.LG cs.SI stat.ML | null | 1503.04567 | null | null | http://arxiv.org/pdf/1503.04567v2 | 2015-04-22T21:29:55Z | 2015-03-16T08:27:54Z | Learning Mixed Membership Community Models in Social Tagging Networks
through Tensor Methods | Community detection in graphs has been extensively studied both in theory and
in applications. However, detecting communities in hypergraphs is more
challenging. In this paper, we propose a tensor decomposition approach for
guaranteed learning of communities in a special class of hypergraphs modeling
social tagging systems or folksonomies. A folksonomy is a tripartite 3-uniform
hypergraph consisting of (user, tag, resource) hyperedges. We posit a
probabilistic mixed membership community model, and prove that the tensor
method consistently learns the communities under efficient sample complexity
and separation requirements.
| [
"['Anima Anandkumar' 'Hanie Sedghi']",
"Anima Anandkumar and Hanie Sedghi"
]
|
cs.NE cs.CV cs.LG | null | 1503.04596 | null | null | http://arxiv.org/pdf/1503.04596v3 | 2015-08-15T13:02:08Z | 2015-03-16T10:41:30Z | Enhanced Image Classification With a Fast-Learning Shallow Convolutional
Neural Network | We present a neural network architecture and training method designed to
enable very rapid training and low implementation complexity. Due to its
training speed and very few tunable parameters, the method has strong potential
for applications requiring frequent retraining or online training. The approach
is characterized by (a) convolutional filters based on biologically inspired
visual processing filters, (b) randomly-valued classifier-stage input weights,
(c) use of least squares regression to train the classifier output weights in a
single batch, and (d) linear classifier-stage output units. We demonstrate the
efficacy of the method by applying it to image classification. Our results
match existing state-of-the-art results on the MNIST (0.37% error) and
NORB-small (2.2% error) image classification databases, but with very fast
training times compared to standard deep network approaches. The network's
performance on the Google Street View House Number (SVHN) (4% error) database
is also competitive with state-of-the-art methods.
| [
"['Mark D. McDonnell' 'Tony Vladusich']",
"Mark D. McDonnell and Tony Vladusich"
]
|
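A minimal sketch of the recipe in (a)-(d) above: fixed convolutional filters, a simple nonlinearity, and a linear readout trained in a single batch by regularized least squares. Random filters stand in for the biologically inspired ones, the toy images and labels are placeholders, and `sliding_window_view` assumes NumPy >= 1.20.

```python
import numpy as np

def conv_features(X, filters):
    """Valid 2D convolutions with fixed filters, ReLU, flattened per image."""
    feats = []
    for img in X:
        maps = []
        for f in filters:
            k = f.shape[0]
            windows = np.lib.stride_tricks.sliding_window_view(img, (k, k))
            maps.append(np.maximum(np.tensordot(windows, f, axes=2), 0.0).ravel())
        feats.append(np.concatenate(maps))
    return np.array(feats)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12, 12))                    # toy "images"
y = (X.mean(axis=(1, 2)) > 0).astype(int)             # toy labels
filters = rng.normal(size=(8, 3, 3))                  # fixed random filters
Phi = conv_features(X, filters)
Y = np.eye(2)[y]                                      # one-hot targets
lam = 1e-2                                            # single-batch ridge readout
W = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ Y)
pred = (Phi @ W).argmax(axis=1)
```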
cs.LG cs.DS | null | 1503.04843 | null | null | http://arxiv.org/pdf/1503.04843v2 | 2015-11-10T02:01:05Z | 2015-03-16T20:48:42Z | More General Queries and Less Generalization Error in Adaptive Data
Analysis | Adaptivity is an important feature of data analysis---typically the choice of
questions asked about a dataset depends on previous interactions with the same
dataset. However, generalization error is typically bounded in a non-adaptive
model, where all questions are specified before the dataset is drawn. Recent
work by Dwork et al. (STOC '15) and Hardt and Ullman (FOCS '14) initiated the
formal study of this problem, and gave the first upper and lower bounds on the
achievable generalization error for adaptive data analysis.
Specifically, suppose there is an unknown distribution $\mathcal{P}$ and a
set of $n$ independent samples $x$ is drawn from $\mathcal{P}$. We seek an
algorithm that, given $x$ as input, "accurately" answers a sequence of
adaptively chosen "queries" about the unknown distribution $\mathcal{P}$. How
many samples $n$ must we draw from the distribution, as a function of the type
of queries, the number of queries, and the desired level of accuracy?
In this work we make two new contributions towards resolving this question:
*We give upper bounds on the number of samples $n$ that are needed to answer
statistical queries that improve over the bounds of Dwork et al.
*We prove the first upper bounds on the number of samples required to answer
more general families of queries. These include arbitrary low-sensitivity
queries and the important class of convex risk minimization queries.
As in Dwork et al., our algorithms are based on a connection between
differential privacy and generalization error, but we feel that our analysis is
simpler and more modular, which may be useful for studying these questions in
the future.
| [
"Raef Bassily and Adam Smith and Thomas Steinke and Jonathan Ullman",
"['Raef Bassily' 'Adam Smith' 'Thomas Steinke' 'Jonathan Ullman']"
]
|
cs.CL cs.LG cs.NE | null | 1503.04881 | null | null | http://arxiv.org/pdf/1503.04881v1 | 2015-03-16T23:59:02Z | 2015-03-16T23:59:02Z | Long Short-Term Memory Over Tree Structures | The chain-structured long short-term memory (LSTM) has been shown to be effective
in a wide range of problems such as speech recognition and machine translation.
In this paper, we propose to extend it to tree structures, in which a memory
cell can reflect the history memories of multiple child cells or multiple
descendant cells in a recursive process. We call the model S-LSTM, which
provides a principled way of considering long-distance interaction over
hierarchies, e.g., language or image parse structures. We leverage the models
for semantic composition to understand the meaning of text, a fundamental
problem in natural language understanding, and show that it outperforms a
state-of-the-art recursive model by replacing its composition layers with the
S-LSTM memory blocks. We also show that utilizing the given structures is
helpful in achieving a performance better than that without considering the
structures.
| [
"Xiaodan Zhu, Parinaz Sobhani, Hongyu Guo",
"['Xiaodan Zhu' 'Parinaz Sobhani' 'Hongyu Guo']"
]
|
cs.NI cs.LG | 10.1109/TCOMM.2015.2415777 | 1503.04964 | null | null | http://arxiv.org/abs/1503.04964v1 | 2015-03-17T09:32:29Z | 2015-03-17T09:32:29Z | Energy Sharing for Multiple Sensor Nodes with Finite Buffers | We consider the problem of finding optimal energy sharing policies that
maximize the network performance of a system comprising of multiple sensor
nodes and a single energy harvesting (EH) source. Sensor nodes periodically
sense the random field and generate data, which is stored in the corresponding
data queues. The EH source harnesses energy from ambient energy sources and the
generated energy is stored in an energy buffer. Sensor nodes receive energy for
data transmission from the EH source. The EH source has to efficiently share
the stored energy among the nodes in order to minimize the long-run average
delay in data transmission. We formulate the problem of energy sharing between
the nodes in the framework of average cost infinite-horizon Markov decision
processes (MDPs). We develop efficient energy sharing algorithms, namely
Q-learning algorithm with exploration mechanisms based on the $\epsilon$-greedy
method as well as upper confidence bound (UCB). We extend these algorithms by
incorporating state and action space aggregation to tackle state-action space
explosion in the MDP. We also develop a cross entropy based method that
incorporates policy parameterization in order to find near optimal energy
sharing policies. Through simulations, we show that our algorithms yield energy
sharing policies that outperform the heuristic greedy method.
| [
"['Sindhu Padakandla' 'Prabuchandran K. J' 'Shalabh Bhatnagar']",
"Sindhu Padakandla, Prabuchandran K.J and Shalabh Bhatnagar"
]
|
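A minimal sketch of the tabular Q-learning with $\epsilon$-greedy exploration that the algorithms above build on. The energy-sharing MDP itself, the UCB-based exploration, state/action aggregation, and the cross-entropy method are not shown, and the `env.reset()`/`env.step()` interface is a hypothetical stand-in.

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration.
    Assumes `env.reset()` -> state and `env.step(a)` -> (next_state, reward, done)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = rng.integers(n_actions) if rng.random() < eps else Q[s].argmax()
            s_next, r, done = env.step(a)
            # standard one-step Q-learning backup
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q
```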
cs.LG | null | 1503.04996 | null | null | http://arxiv.org/pdf/1503.04996v1 | 2015-03-17T11:01:37Z | 2015-03-17T11:01:37Z | On Extreme Pruning of Random Forest Ensembles for Real-time Predictive
Applications | Random Forest (RF) is an ensemble supervised machine learning technique that
was developed by Breiman over a decade ago. Compared with other ensemble
techniques, it has proved its accuracy and superiority. Many researchers,
however, believe that there is still room for enhancing and improving its
performance accuracy. This explains why, over the past decade, there have been
many extensions of RF where each extension employed a variety of techniques and
strategies to improve certain aspect(s) of RF. Since it has been proven
empirically that ensembles tend to yield better results when there is a
significant diversity among the constituent models, the objective of this paper
is twofold. First, it investigates how data clustering (a well known diversity
technique) can be applied to identify groups of similar decision trees in an RF
in order to eliminate redundant trees by selecting a representative from each
group (cluster). Second, these likely diverse representatives are then used to
produce an extension of RF termed CLUB-DRF that is much smaller in size than
RF, and yet performs at least as good as RF, and mostly exhibits higher
performance in terms of accuracy. The latter refers to a known technique called
ensemble pruning. Experimental results on 15 real datasets from the UCI
repository prove the superiority of our proposed extension over the traditional
RF. Most of our experiments achieved at least 95% or above pruning level while
retaining or outperforming the RF accuracy.
| [
"['Khaled Fawagreh' 'Mohamad Medhat Gaber' 'Eyad Elyan']",
"Khaled Fawagreh, Mohamad Medhat Gaber, Eyad Elyan"
]
|
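A minimal sketch of the pruning idea described above: represent each tree of a fitted forest by its prediction vector on held-out data, cluster those vectors, and keep one representative tree per cluster. The use of k-means and the "most accurate tree per cluster" rule are illustrative assumptions, not necessarily the paper's exact choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# describe each tree by its predictions on a validation set
P = np.array([t.predict(X_val) for t in rf.estimators_])

# group similar trees and keep the most accurate tree from each cluster
k = 20
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(P)
pruned = []
for c in range(k):
    idx = np.flatnonzero(labels == c)
    accs = [(P[i] == y_val).mean() for i in idx]
    pruned.append(rf.estimators_[idx[int(np.argmax(accs))]])

# majority vote of the pruned ensemble
votes = np.array([t.predict(X_val) for t in pruned])
pred = (votes.mean(axis=0) > 0.5).astype(int)
print("pruned ensemble accuracy:", (pred == y_val).mean())
```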
cs.LG | null | 1503.05018 | null | null | http://arxiv.org/pdf/1503.05018v1 | 2015-03-17T12:41:30Z | 2015-03-17T12:41:30Z | Ultra-Fast Shapelets for Time Series Classification | Time series shapelets are discriminative subsequences and their similarity to
a time series can be used for time series classification. Since the discovery
of time series shapelets is costly in terms of time, the applicability on long
or multivariate time series is difficult. In this work we propose Ultra-Fast
Shapelets that uses a number of random shapelets. It is shown that Ultra-Fast
Shapelets yield the same prediction quality as current state-of-the-art
shapelet-based time series classifiers that carefully select the shapelets,
while being up to three orders of magnitude faster. Since this method allows
ultra-fast shapelet discovery, using shapelets for long multivariate time
series classification becomes feasible.
A method for using shapelets for multivariate time series is proposed and
Ultra-Fast Shapelets is proven to be successful in comparison to
state-of-the-art multivariate time series classifiers on 15 multivariate time
series datasets from various domains. Finally, time series derivatives that
have proven to be useful for other time series classifiers are investigated for
the shapelet-based classifiers. It is shown that they have a positive impact
and that they are easy to integrate with a simple preprocessing step, without
the need of adapting the shapelet discovery algorithm.
| [
"['Martin Wistuba' 'Josif Grabocka' 'Lars Schmidt-Thieme']",
"Martin Wistuba, Josif Grabocka, Lars Schmidt-Thieme"
]
|
cs.LG stat.ML | null | 1503.05087 | null | null | http://arxiv.org/pdf/1503.05087v2 | 2016-08-31T22:00:25Z | 2015-03-17T15:26:15Z | Importance weighting without importance weights: An efficient algorithm
for combinatorial semi-bandits | We propose a sample-efficient alternative for importance weighting for
situations where one only has sample access to the probability distribution
that generates the observations. Our new method, called Geometric Resampling
(GR), is described and analyzed in the context of online combinatorial
optimization under semi-bandit feedback, where a learner sequentially selects
its actions from a combinatorial decision set so as to minimize its cumulative
loss. In particular, we show that the well-known Follow-the-Perturbed-Leader
(FPL) prediction method coupled with Geometric Resampling yields the first
computationally efficient reduction from offline to online optimization in this
setting. We provide a thorough theoretical analysis for the resulting
algorithm, showing that its performance is on par with previous, inefficient
solutions. Our main contribution is showing that, despite the relatively large
variance induced by the GR procedure, our performance guarantees hold with high
probability rather than only in expectation. As a side result, we also improve
the best known regret bounds for FPL in online combinatorial optimization with
full feedback, closing the perceived performance gap between FPL and
exponential weights in this setting.
| [
"Gergely Neu and G\\'abor Bart\\'ok",
"['Gergely Neu' 'Gábor Bartók']"
]
|
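The core of Geometric Resampling is estimating the importance weight $1/p(a)$ for the played action without knowing $p$: redraw actions from the same sampling routine until the played action reappears, capping the number of redraws. A minimal single-action sketch is below; the full FPL-based combinatorial algorithm is not shown and the names are illustrative.

```python
import numpy as np

def geometric_resampling_weight(draw_action, played_action, cap, rng):
    """Estimate 1/P(played_action) by counting redraws until it reappears.
    The count is geometric with mean 1/p, so it is a bounded, nearly unbiased
    surrogate for the importance weight; `cap` trades bias for variance."""
    for k in range(1, cap + 1):
        if draw_action(rng) == played_action:
            return k
    return cap

# toy check: actions drawn with probabilities p; the estimate's mean ~ 1/p
p = np.array([0.5, 0.3, 0.2])
draw = lambda rng: rng.choice(len(p), p=p)
rng = np.random.default_rng(0)
est = np.mean([geometric_resampling_weight(draw, 2, cap=100, rng=rng)
               for _ in range(5000)])
print(est, 1 / p[2])
```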
q-bio.QM cs.AI cs.LG q-bio.GN | 10.1371/journal.pone.0141287 | 1503.05140 | null | null | http://arxiv.org/abs/1503.05140v2 | 2016-05-26T20:17:51Z | 2015-03-17T17:55:22Z | ProtVec: A Continuous Distributed Representation of Biological Sequences | We introduce a new representation and feature extraction method for
biological sequences. Named bio-vectors (BioVec) to refer to biological
sequences in general with protein-vectors (ProtVec) for proteins (amino-acid
sequences) and gene-vectors (GeneVec) for gene sequences, this representation
can be widely used in applications of deep learning in proteomics and genomics.
In the present paper, we focus on protein-vectors that can be utilized in a
wide array of bioinformatics investigations such as family classification,
protein visualization, structure prediction, disordered protein identification,
and protein-protein interaction prediction. In this method, we adopt artificial
neural network approaches and represent a protein sequence with a single dense
n-dimensional vector. To evaluate this method, we apply it in classification of
324,018 protein sequences obtained from Swiss-Prot belonging to 7,027 protein
families, where an average family classification accuracy of 93%+-0.06% is
obtained, outperforming existing family classification methods. In addition, we
use ProtVec representation to predict disordered proteins from structured
proteins. Two databases of disordered sequences are used: the DisProt database
as well as a database featuring the disordered regions of nucleoporins rich
with phenylalanine-glycine repeats (FG-Nups). Using support vector machine
classifiers, FG-Nup sequences are distinguished from structured protein
sequences found in Protein Data Bank (PDB) with a 99.8% accuracy, and
unstructured DisProt sequences are differentiated from structured DisProt
sequences with 100.0% accuracy. These results indicate that by only providing
sequence data for various proteins into this model, accurate information about
protein structure can be determined.
| [
"Ehsaneddin Asgari and Mohammad R.K. Mofrad",
"['Ehsaneddin Asgari' 'Mohammad R. K. Mofrad']"
]
|
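A minimal sketch of the representation above: split each amino-acid sequence into 3-gram "biological words" (one sentence per reading frame) and train a skip-gram word-embedding model on the resulting corpus; a protein vector is then a sum of its 3-gram vectors. The toy sequences below are placeholders for Swiss-Prot, the hyperparameters are illustrative, and the gensim >= 4 API is assumed.

```python
import numpy as np
from gensim.models import Word2Vec

def to_ngram_sentences(seq, n=3):
    """Split a sequence into n non-overlapping n-gram 'sentences' (one per frame)."""
    return [[seq[i:i + n] for i in range(start, len(seq) - n + 1, n)]
            for start in range(n)]

# toy corpus of amino-acid sequences (the paper used Swiss-Prot)
proteins = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "MSSHEGGKKKALKQPKKQAKEMDEEEKAFKQKQ"]
sentences = [s for p in proteins for s in to_ngram_sentences(p)]

model = Word2Vec(sentences, vector_size=16, window=5, min_count=1, sg=1, seed=0)

def protvec(seq, model, n=3):
    """Embed a protein as the sum of its n-gram vectors (simplified)."""
    grams = [seq[i:i + n] for i in range(len(seq) - n + 1)]
    vecs = [model.wv[g] for g in grams if g in model.wv]
    return np.sum(vecs, axis=0)

v = protvec(proteins[0], model)
```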
cs.LG | null | 1503.05187 | null | null | http://arxiv.org/pdf/1503.05187v1 | 2015-03-17T11:05:31Z | 2015-03-17T11:05:31Z | An Outlier Detection-based Tree Selection Approach to Extreme Pruning of
Random Forests | Random Forest (RF) is an ensemble classification technique that was developed
by Breiman over a decade ago. Compared with other ensemble techniques, it has
proved its accuracy and superiority. Many researchers, however, believe that
there is still room for enhancing and improving its performance in terms of
predictive accuracy. This explains why, over the past decade, there have been
many extensions of RF where each extension employed a variety of techniques and
strategies to improve certain aspect(s) of RF. Since it has been proven
empirically that ensembles tend to yield better results when there is a
significant diversity among the constituent models, the objective of this paper
is twofold. First, it investigates how an unsupervised learning technique,
namely, Local Outlier Factor (LOF) can be used to identify diverse trees in the
RF. Second, trees with the highest LOF scores are then used to produce an
extension of RF termed LOFB-DRF that is much smaller in size than RF, and yet
performs at least as good as RF, but mostly exhibits higher performance in
terms of accuracy. The latter refers to a known technique called ensemble
pruning. Experimental results on 10 real datasets prove the superiority of our
proposed extension over the traditional RF. Unprecedented pruning levels
reaching 99% have been achieved while boosting the predictive accuracy
of the ensemble. The notably high pruning level makes the technique a good
candidate for real-time applications.
| [
"['Khaled Fawagreh' 'Mohamad Medhat Gaber' 'Eyad Elyan']",
"Khaled Fawagreh, Mohamad Medhat Gaber, Eyad Elyan"
]
|
cs.DC cs.LG cs.NA | null | 1503.05214 | null | null | http://arxiv.org/pdf/1503.05214v2 | 2015-05-13T12:05:02Z | 2015-03-17T20:38:15Z | Analysis of PCA Algorithms in Distributed Environments | Classical machine learning algorithms often face scalability bottlenecks when
they are applied to large-scale data. Such algorithms were designed to work
with small data that is assumed to fit in the memory of one machine. In this
report, we analyze different methods for computing an important machine learning
algorithm, namely Principal Component Analysis (PCA), and we comment on its
limitations in supporting large datasets. The methods are analyzed and compared
across two important metrics: time complexity and communication complexity. We
consider the worst-case scenarios for both metrics, and we identify the
software libraries that implement each method. The analysis in this report
helps researchers and engineers in (i) understanding the main bottlenecks for
scalability in different PCA algorithms, (ii) choosing the most appropriate
method and software library for a given application and data set
characteristics, and (iii) designing new scalable PCA algorithms.
| [
"['Tarek Elgamal' 'Mohamed Hefeeda']",
"Tarek Elgamal, Mohamed Hefeeda"
]
|
cs.LG cs.AI | null | 1503.05296 | null | null | http://arxiv.org/pdf/1503.05296v1 | 2015-03-18T07:56:12Z | 2015-03-18T07:56:12Z | Efficient Machine Learning for Big Data: A Review | With the emerging technologies and all associated devices, it is predicted
that massive amounts of data will be created in the next few years; in fact, as
much as 90% of current data were created in the last couple of years, a trend
that will continue for the foreseeable future. Sustainable computing studies
the process by which computer engineer/scientist designs computers and
associated subsystems efficiently and effectively with minimal impact on the
environment. However, current intelligent machine-learning systems are
performance driven; the focus is on the predictive/classification accuracy,
based on known properties learned from the training samples. For instance, most
machine-learning-based nonparametric models are known to require high
computational cost in order to find the global optima. With the learning task
in a large dataset, the number of hidden nodes within the network will
therefore increase significantly, which eventually leads to an exponential rise
in computational complexity. This paper thus reviews the theoretical and
experimental data-modeling literature, in large-scale data-intensive fields,
relating to: (1) model efficiency, including computational requirements in
learning, and data-intensive areas structure and design, and introduces (2) new
algorithmic approaches with the least memory requirements and processing to
minimize computational cost, while maintaining/improving its
predictive/classification accuracy and stability.
| [
"O. Y. Al-Jarrah, P. D. Yoo, S Muhaidat, G. K. Karagiannidis, and K.\n Taha",
"['O. Y. Al-Jarrah' 'P. D. Yoo' 'S Muhaidat' 'G. K. Karagiannidis'\n 'K. Taha']"
]
|
cs.LG cs.NE cs.SD stat.ML | null | 1503.05471 | null | null | http://arxiv.org/pdf/1503.05471v1 | 2015-03-18T16:28:18Z | 2015-03-18T16:28:18Z | Shared latent subspace modelling within Gaussian-Binary Restricted
Boltzmann Machines for NIST i-Vector Challenge 2014 | This paper presents a novel approach to speaker subspace modelling based on
Gaussian-Binary Restricted Boltzmann Machines (GRBM). The proposed model is
based on the idea of shared factors as in the Probabilistic Linear Discriminant
Analysis (PLDA). GRBM hidden layer is divided into speaker and channel factors,
herein the speaker factor is shared over all vectors of the speaker. Then
Maximum Likelihood Parameter Estimation (MLE) for proposed model is introduced.
Various new scoring techniques for speaker verification using GRBM are
proposed. The results for NIST i-vector Challenge 2014 dataset are presented.
| [
"Danila Doroshin, Alexander Yamshinin, Nikolay Lubimov, Marina\n Nastasenko, Mikhail Kotov, Maxim Tkachenko",
"['Danila Doroshin' 'Alexander Yamshinin' 'Nikolay Lubimov'\n 'Marina Nastasenko' 'Mikhail Kotov' 'Maxim Tkachenko']"
]
|
null | null | 1503.05479 | null | null | http://arxiv.org/pdf/1503.05479v2 | 2015-10-27T01:44:23Z | 2015-03-18T16:45:04Z | Interpolating Convex and Non-Convex Tensor Decompositions via the
Subspace Norm | We consider the problem of recovering a low-rank tensor from its noisy observation. Previous work has shown a recovery guarantee with signal to noise ratio $O(n^{\lceil K/2 \rceil /2})$ for recovering a $K$th order rank one tensor of size $n\times \cdots \times n$ by recursive unfolding. In this paper, we first improve this bound to $O(n^{K/4})$ by a much simpler approach, but with a more careful analysis. Then we propose a new norm called the subspace norm, which is based on the Kronecker products of factors obtained by the proposed simple estimator. The imposed Kronecker structure allows us to show a nearly ideal $O(\sqrt{n}+\sqrt{H^{K-1}})$ bound, in which the parameter $H$ controls the blend from the non-convex estimator to mode-wise nuclear norm minimization. Furthermore, we empirically demonstrate that the subspace norm achieves the nearly ideal denoising performance even with $H=O(1)$. | [
"['Qinqing Zheng' 'Ryota Tomioka']"
]
|
stat.ML cs.LG math.ST stat.AP stat.TH | null | 1503.05526 | null | null | http://arxiv.org/pdf/1503.05526v1 | 2015-03-18T18:30:34Z | 2015-03-18T18:30:34Z | Interpretable Aircraft Engine Diagnostic via Expert Indicator
Aggregation | Detecting early signs of failures (anomalies) in complex systems is one of
the main goals of preventive maintenance. In particular, it makes it possible to avoid
actual failures by (re)scheduling maintenance operations in a way that
optimizes maintenance costs. Aircraft engine health monitoring is one
representative example of a field in which anomaly detection is crucial.
Manufacturers collect large amounts of engine-related data during flights, which
are used, among other applications, to detect anomalies. This article
introduces and studies a generic methodology that allows one to build automatic
early signs of anomaly detection in a way that builds upon human expertise and
that remains understandable by human operators who make the final maintenance
decision. The main idea of the method is to generate a very large number of
binary indicators based on parametric anomaly scores designed by experts,
complemented by simple aggregations of those scores. A feature selection method
is used to keep only the most discriminant indicators which are used as inputs
of a Naive Bayes classifier. This gives an interpretable classifier based on
interpretable anomaly detectors whose parameters have been optimized indirectly
by the selection process. The proposed methodology is evaluated on simulated
data designed to reproduce some of the anomaly types observed in real world
engines.
| [
"Tsirizo Rabenoro (SAMM), J\\'er\\^ome Lacaille, Marie Cottrell (SAMM),\n Fabrice Rossi (SAMM)",
"['Tsirizo Rabenoro' 'Jérôme Lacaille' 'Marie Cottrell' 'Fabrice Rossi']"
]
|
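The indicator-plus-Naive-Bayes pipeline described in the abstract above can be illustrated with a small, hedged sketch: continuous expert anomaly scores are binarized at a few thresholds, a feature-selection step keeps the most discriminant indicators, and a Bernoulli Naive Bayes classifier is fit on them. The synthetic data, the thresholds, and the choice of mutual information as the selection criterion are assumptions for illustration only.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline

# Assume `scores` (n_samples x n_expert_scores) are continuous anomaly scores
# designed by experts, and `y` are binary labels (1 = early sign of anomaly).
rng = np.random.default_rng(0)
scores = rng.normal(size=(500, 20))
y = (scores[:, 0] + 0.5 * scores[:, 3] > 1.0).astype(int)

# Turn each continuous score into several binary indicators via thresholds.
thresholds = np.quantile(scores, [0.8, 0.9, 0.95], axis=0)            # (3, 20)
indicators = (scores[:, None, :] > thresholds[None, :, :]).reshape(len(scores), -1)

# Keep only the most discriminant indicators, then fit an interpretable Naive Bayes.
model = make_pipeline(SelectKBest(mutual_info_classif, k=10), BernoulliNB())
model.fit(indicators, y)
print(model.score(indicators, y))
```

The interpretability comes from the fact that each selected feature is a human-readable threshold on an expert-designed score.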
cs.LG | null | 1503.05571 | null | null | http://arxiv.org/pdf/1503.05571v2 | 2015-03-23T16:44:52Z | 2015-03-18T20:06:07Z | GSNs : Generative Stochastic Networks | We introduce a novel training principle for probabilistic models that is an
alternative to maximum likelihood. The proposed Generative Stochastic Networks
(GSN) framework is based on learning the transition operator of a Markov chain
whose stationary distribution estimates the data distribution. Because the
transition distribution is a conditional distribution generally involving a
small move, it has fewer dominant modes, being unimodal in the limit of small
moves. Thus, it is easier to learn, more like learning to perform supervised
function approximation, with gradients that can be obtained by
back-propagation. The theorems provided here generalize recent work on the
probabilistic interpretation of denoising auto-encoders and provide an
interesting justification for dependency networks and generalized
pseudolikelihood (along with defining an appropriate joint distribution and
sampling mechanism, even when the conditionals are not consistent). We study
how GSNs can be used with missing inputs and can be used to sample subsets of
variables given the rest. Successful experiments are conducted, validating
these theoretical results, on two image datasets and with a particular
architecture that mimics the Deep Boltzmann Machine Gibbs sampler but allows
training to proceed with backprop, without the need for layerwise pretraining.
| [
"Guillaume Alain, Yoshua Bengio, Li Yao, Jason Yosinski, Eric\n Thibodeau-Laufer, Saizheng Zhang, Pascal Vincent",
"['Guillaume Alain' 'Yoshua Bengio' 'Li Yao' 'Jason Yosinski'\n 'Eric Thibodeau-Laufer' 'Saizheng Zhang' 'Pascal Vincent']"
]
|
cs.CL cs.LG | null | 1503.05615 | null | null | http://arxiv.org/pdf/1503.05615v2 | 2015-05-07T22:12:11Z | 2015-03-18T23:33:17Z | Learning to Search for Dependencies | We demonstrate that a dependency parser can be built using a credit
assignment compiler which removes the burden of worrying about low-level
machine learning details from the parser implementation. The result is a simple
parser that applies robustly to many languages and provides statistical and
computational performance comparable to the best transition-based
parsing approaches to date, while avoiding various downsides including randomization,
extra feature requirements, and custom learning algorithms.
| [
"['Kai-Wei Chang' 'He He' 'Hal Daumé III' 'John Langford']",
"Kai-Wei Chang, He He, Hal Daum\\'e III, John Langford"
]
|
cs.LG cs.NE stat.ML | null | 1503.05671 | null | null | http://arxiv.org/pdf/1503.05671v7 | 2020-06-08T01:28:58Z | 2015-03-19T08:30:24Z | Optimizing Neural Networks with Kronecker-factored Approximate Curvature | We propose an efficient method for approximating natural gradient descent in
neural networks which we call Kronecker-Factored Approximate Curvature (K-FAC).
K-FAC is based on an efficiently invertible approximation of a neural network's
Fisher information matrix which is neither diagonal nor low-rank, and in some
cases is completely non-sparse. It is derived by approximating various large
blocks of the Fisher (corresponding to entire layers) as being the Kronecker
product of two much smaller matrices. While only several times more expensive
to compute than the plain stochastic gradient, the updates produced by K-FAC
make much more progress optimizing the objective, which results in an algorithm
that can be much faster than stochastic gradient descent with momentum in
practice. And unlike some previously proposed approximate
natural-gradient/Newton methods which use high-quality non-diagonal curvature
matrices (such as Hessian-free optimization), K-FAC works very well in highly
stochastic optimization regimes. This is because the cost of storing and
inverting K-FAC's approximation to the curvature matrix does not depend on the
amount of data used to estimate it, which is a feature typically associated
only with diagonal or low-rank approximations to the curvature matrix.
| [
"['James Martens' 'Roger Grosse']",
"James Martens, Roger Grosse"
]
|
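To make the Kronecker-factored idea above concrete, here is a rough single-layer sketch: the Fisher block of a fully connected layer is approximated as the Kronecker product of the input second-moment matrix A and the output-gradient second-moment matrix G, so a damped natural-gradient step can be applied as G^{-1} dW A^{-1} without ever forming the full block. The toy dimensions, random data, and damping constant are assumptions; the full K-FAC algorithm adds layer-wise block structure, damping adaptation, and efficient factored inverses.

```python
import numpy as np

# Toy layer: outputs = W @ a, with inputs a (in_dim) and backpropagated
# output gradients g = dL/d(outputs) (out_dim), collected over a batch.
rng = np.random.default_rng(0)
in_dim, out_dim, batch = 10, 5, 256
A_batch = rng.normal(size=(batch, in_dim))     # layer inputs a_i
G_batch = rng.normal(size=(batch, out_dim))    # output gradients g_i
dW = G_batch.T @ A_batch / batch               # Euclidean gradient, shape (out, in)

# Kronecker factors of the Fisher block: F ~ A (x) G with
# A = E[a a^T] and G = E[g g^T].
A = A_batch.T @ A_batch / batch
G = G_batch.T @ G_batch / batch

# Damped inverses (the damping constant lam is an assumed hyperparameter).
lam = 1e-2
A_inv = np.linalg.inv(A + lam * np.eye(in_dim))
G_inv = np.linalg.inv(G + lam * np.eye(out_dim))

# Approximate natural-gradient update: (A (x) G)^{-1} vec(dW) == vec(G^{-1} dW A^{-1}).
natural_grad = G_inv @ dW @ A_inv
```

Storing and inverting A and G costs only O(in_dim^2 + out_dim^2) memory per layer, independent of the amount of data used to estimate them.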
stat.ML cs.LG cs.NE | null | 1503.05724 | null | null | http://arxiv.org/pdf/1503.05724v3 | 2016-03-29T14:45:03Z | 2015-03-19T11:48:14Z | A Neural Transfer Function for a Smooth and Differentiable Transition
Between Additive and Multiplicative Interactions | Existing approaches to combine both additive and multiplicative neural units
either use a fixed assignment of operations or require discrete optimization to
determine what function a neuron should perform. This leads either to an
inefficient distribution of computational resources or an extensive increase in
the computational complexity of the training procedure.
We present a novel, parameterizable transfer function based on the
mathematical concept of non-integer functional iteration that allows the
operation each neuron performs to be smoothly and, most importantly,
differentiably adjusted between addition and multiplication. This allows the
decision between addition and multiplication to be integrated into the standard
backpropagation training procedure.
| [
"['Sebastian Urban' 'Patrick van der Smagt']",
"Sebastian Urban, Patrick van der Smagt"
]
|
cs.DC cs.LG cs.MS cs.NE stat.ML | null | 1503.05743 | null | null | http://arxiv.org/pdf/1503.05743v1 | 2015-03-19T12:41:29Z | 2015-03-19T12:41:29Z | Implementation of a Practical Distributed Calculation System with
Browsers and JavaScript, and Application to Distributed Deep Learning | Deep learning can achieve outstanding results in various fields. However, it
requires such significant computational power that graphics processing units
(GPUs) and/or numerous computers are often needed for the practical
application. We have developed a new distributed calculation framework called
"Sashimi" that allows any computer to be used as a distribution node only by
accessing a website. We have also developed a new JavaScript neural network
framework called "Sukiyaki" that uses general purpose GPUs with web browsers.
Sukiyaki performs 30 times faster than a conventional JavaScript library for
deep convolutional neural networks (deep CNNs) learning. The combination of
Sashimi and Sukiyaki, as well as new distribution algorithms, demonstrates the
distributed deep learning of deep CNNs only with web browsers on various
devices. The libraries that comprise the proposed methods are available under
MIT license at http://mil-tokyo.github.io/.
| [
"Ken Miura and Tatsuya Harada",
"['Ken Miura' 'Tatsuya Harada']"
]
|
cs.CV cs.LG | null | 1503.05782 | null | null | http://arxiv.org/pdf/1503.05782v1 | 2015-03-19T14:31:56Z | 2015-03-19T14:31:56Z | Learning Hypergraph-regularized Attribute Predictors | We present a novel attribute learning framework named Hypergraph-based
Attribute Predictor (HAP). In HAP, a hypergraph is leveraged to depict the
attribute relations in the data. Then the attribute prediction problem is
cast as a regularized hypergraph cut problem in which HAP jointly learns a
collection of attribute projections from the feature space to a hypergraph
embedding space aligned with the attribute space. The learned projections
directly act as attribute classifiers (linear and kernelized). This formulation
leads to a very efficient approach. By considering our model as a multi-graph
cut task, our framework can flexibly incorporate other available information,
in particular class label. We apply our approach to attribute prediction,
Zero-shot and $N$-shot learning tasks. The results on AWA, USAA and CUB
databases demonstrate the value of our methods in comparison with the
state-of-the-art approaches.
| [
"['Sheng Huang' 'Mohamed Elhoseiny' 'Ahmed Elgammal' 'Dan Yang']",
"Sheng Huang and Mohamed Elhoseiny and Ahmed Elgammal and Dan Yang"
]
|
cs.NE cs.CE cs.LG | 10.1109/TSMCC.2012.2220963 | 1503.05831 | null | null | http://arxiv.org/abs/1503.05831v1 | 2015-03-19T16:30:21Z | 2015-03-19T16:30:21Z | Neural Network-Based Active Learning in Multivariate Calibration | In chemometrics, data from infrared or near-infrared (NIR) spectroscopy are
often used to identify a compound or to analyze the composition of a material.
This involves the calibration of models that predict the concentration
of material constituents from the measured NIR spectrum. An interesting aspect
of multivariate calibration is to achieve a particular accuracy level with a
minimum number of training samples, as this reduces the number of laboratory
tests and thus the cost of model building. In these chemometric models, the
input refers to a proper representation of the spectra and the output to the
concentrations of the sample constituents. The search for a most informative
new calibration sample thus has to be performed in the output space of the
model, rather than in the input space as in conventional modeling problems. In
this paper, we propose to solve the corresponding inversion problem by
utilizing the disagreements of an ensemble of neural networks to represent the
prediction error in the unexplored component space. The next calibration sample
is then chosen at a composition where the individual models of the ensemble
disagree most. The results obtained for a realistic chemometric calibration
example show that the proposed active learning can achieve a given calibration
accuracy with less training samples than random sampling.
| [
"['A. Ukil' 'J. Bernasconi']",
"A. Ukil, J. Bernasconi"
]
|
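The core selection step described above, querying where an ensemble of networks disagrees most, can be sketched as follows. For simplicity the sketch scores a pool of candidate input samples directly rather than searching the output (composition) space as the paper does; the data, ensemble size, and network sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy calibration data: inputs are spectrum-like features, outputs are concentrations.
X_train = rng.uniform(size=(30, 8))
y_train = X_train @ rng.uniform(size=8)
X_pool = rng.uniform(size=(500, 8))            # candidate samples not yet measured

# Train a small ensemble on bootstrap resamples of the labelled data.
ensemble = []
for seed in range(5):
    idx = rng.integers(0, len(X_train), len(X_train))
    m = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=seed)
    m.fit(X_train[idx], y_train[idx])
    ensemble.append(m)

# Query the candidate where the ensemble members disagree most.
preds = np.stack([m.predict(X_pool) for m in ensemble])   # (n_models, n_pool)
disagreement = preds.std(axis=0)
next_sample = X_pool[np.argmax(disagreement)]
```

The disagreement acts as a cheap surrogate for the prediction error in the unexplored region, so each new laboratory measurement is spent where the model is least certain.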
cs.SD cs.LG cs.NE | null | 1503.05849 | null | null | http://arxiv.org/pdf/1503.05849v1 | 2015-03-19T17:24:16Z | 2015-03-19T17:24:16Z | Deep Transform: Time-Domain Audio Error Correction via Probabilistic
Re-Synthesis | In the process of recording, storage and transmission of time-domain audio
signals, errors may be introduced that are difficult to correct in an
unsupervised way. Here, we train a convolutional deep neural network to
re-synthesize input time-domain speech signals at its output layer. We then use
this abstract transformation, which we call a deep transform (DT), to perform
probabilistic re-synthesis on further speech (of the same speaker) which has
been degraded. Using the convolutive DT, we demonstrate the recovery of speech
audio that has been subject to extreme degradation. This approach may be useful
for correction of errors in communications devices.
| [
"Andrew J.R. Simpson",
"['Andrew J. R. Simpson']"
]
|
cs.LG | null | 1503.05938 | null | null | http://arxiv.org/pdf/1503.05938v1 | 2015-03-19T20:30:46Z | 2015-03-19T20:30:46Z | On Invariance and Selectivity in Representation Learning | We discuss data representation which can be learned automatically from data,
are invariant to transformations, and at the same time selective, in the sense
that two points have the same representation only if one is a
transformation of the other. The mathematical results here sharpen some of the
key claims of i-theory -- a recent theory of feedforward processing in sensory
cortex.
| [
"['Fabio Anselmi' 'Lorenzo Rosasco' 'Tomaso Poggio']",
"Fabio Anselmi, Lorenzo Rosasco, Tomaso Poggio"
]
|
cs.LG cs.IR | null | 1503.05951 | null | null | http://arxiv.org/pdf/1503.05951v1 | 2015-03-19T21:34:33Z | 2015-03-19T21:34:33Z | Rank Subspace Learning for Compact Hash Codes | The era of Big Data has spawned unprecedented interests in developing hashing
algorithms for efficient storage and fast nearest neighbor search. Most
existing work learns hash functions that are numeric quantizations of feature
values in a projected feature space. In this work, we propose a novel hash
learning framework that encodes features' rank orders instead of numeric values
in a number of optimal low-dimensional ranking subspaces. We formulate the
ranking subspace learning problem as the optimization of a piece-wise linear
convex-concave function and present two versions of our algorithm: one with
independent optimization of each hash bit and the other exploiting a sequential
learning framework. Our work is a generalization of the Winner-Take-All (WTA)
hash family and naturally enjoys all the numeric stability benefits of rank
correlation measures while being optimized to achieve high precision at very
short code length. We compare with several state-of-the-art hashing algorithms
in both supervised and unsupervised domain, showing superior performance in a
number of data sets.
| [
"['Kai Li' 'Guojun Qi' 'Jun Ye' 'Kien A. Hua']",
"Kai Li, Guojun Qi, Jun Ye, Kien A. Hua"
]
|
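Since the abstract above positions the method as a generalization of the Winner-Take-All (WTA) hash family, a minimal sketch of the basic WTA hash may help: each hash symbol is the argmax position among the first few coordinates of a random permutation of the input, so the code depends only on rank orders, not numeric values. The permutations, window size, and data are toy assumptions; the paper's learned ranking subspaces are not reproduced here.

```python
import numpy as np

def wta_hash(x, permutations, window):
    """Winner-Take-All hash: for each random permutation, record the index of the
    maximum among the first `window` permuted coordinates (a rank-order code)."""
    codes = []
    for perm in permutations:
        codes.append(int(np.argmax(x[perm[:window]])))
    return np.array(codes)

rng = np.random.default_rng(0)
dim, n_symbols, window = 64, 16, 4
perms = [rng.permutation(dim) for _ in range(n_symbols)]

a = rng.normal(size=dim)
b = a + 0.1 * rng.normal(size=dim)             # a slightly perturbed copy
# Rank-order codes of similar vectors agree on most symbols.
print(np.mean(wta_hash(a, perms, window) == wta_hash(b, perms, window)))
```

Because only the ordering of coordinates matters, the code is invariant to monotone rescaling of the features, which is the numeric-stability benefit the abstract refers to.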
cs.SD cs.LG cs.NE | null | 1503.06046 | null | null | http://arxiv.org/pdf/1503.06046v1 | 2015-03-20T12:00:44Z | 2015-03-20T12:00:44Z | Deep Transform: Cocktail Party Source Separation via Probabilistic
Re-Synthesis | In cocktail party listening scenarios, the human brain is able to separate
competing speech signals. However, the signal processing implemented by the
brain to perform cocktail party listening is not well understood. Here, we
trained two separate convolutive autoencoder deep neural networks (DNN) to
separate monaural and binaural mixtures of two concurrent speech streams. We
then used these DNNs as convolutive deep transform (CDT) devices to perform
probabilistic re-synthesis. The CDTs operated directly in the time-domain. Our
simulations demonstrate that very simple neural networks are capable of
exploiting monaural and binaural information available in a cocktail party
listening scenario.
| [
"Andrew J.R. Simpson",
"['Andrew J. R. Simpson']"
]
|
cs.LG | null | 1503.06169 | null | null | http://arxiv.org/pdf/1503.06169v1 | 2015-03-20T17:21:12Z | 2015-03-20T17:21:12Z | Networked Stochastic Multi-Armed Bandits with Combinatorial Strategies | In this paper, we investigate a largely extended version of classical MAB
problem, called the networked combinatorial bandit problem. In particular, we
consider the setting of a decision maker over networked bandits as follows:
each time a combinatorial strategy, e.g., a group of arms, is chosen, the
decision maker receives a reward resulting from her strategy and also receives
a side bonus resulting from that strategy for each arm's neighbor. This is
motivated by many real applications such as on-line social networks where
friends can provide their feedback on shared content, therefore if we promote a
product to a user, we can also collect feedback from her friends on that
product. To this end, we consider two types of side bonus in this study: side
observation and side reward. Upon the number of arms pulled at each time slot,
we study two cases: single-play and combinatorial-play. Consequently, this
leaves us four scenarios to investigate in the presence of side bonus:
Single-play with Side Observation, Combinatorial-play with Side Observation,
Single-play with Side Reward, and Combinatorial-play with Side Reward. For each
case, we present and analyze a series of \emph{zero regret} policies where the
expected regret over time approaches zero as time goes to infinity. Extensive
simulations validate the effectiveness of our results.
| [
"['Shaojie Tang' 'Yaqin Zhou']",
"Shaojie Tang, Yaqin Zhou"
]
|
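As a rough illustration of the single-play-with-side-observation setting described above, the sketch below runs a UCB1-style index policy on a toy graph of Bernoulli arms: pulling an arm yields its reward and, as a side bonus, observations of its neighbors' rewards, all of which update the empirical means. The graph, reward model, and confidence-bonus form are assumptions for illustration, not the paper's exact policies.

```python
import numpy as np

rng = np.random.default_rng(0)
n_arms, horizon = 8, 5000
true_means = rng.uniform(0.1, 0.9, n_arms)

# Symmetric adjacency: pulling an arm also reveals its neighbours' rewards.
adj = rng.random((n_arms, n_arms)) < 0.3
adj = np.triu(adj, 1)
adj = adj | adj.T

counts = np.zeros(n_arms)   # number of observations (pulls + side observations)
sums = np.zeros(n_arms)     # accumulated observed rewards

for t in range(1, horizon + 1):
    means = np.divide(sums, counts, out=np.zeros(n_arms), where=counts > 0)
    bonus = np.sqrt(2 * np.log(t) / np.maximum(counts, 1e-9))
    index = np.where(counts > 0, means + bonus, np.inf)   # force initial exploration
    arm = int(np.argmax(index))
    # Observe the pulled arm's reward and, as side observations, its neighbours'.
    for i in [arm] + list(np.flatnonzero(adj[arm])):
        counts[i] += 1
        sums[i] += rng.binomial(1, true_means[i])

print("best arm:", true_means.argmax(), "most observed arm:", counts.argmax())
```

Side observations shrink the confidence bonuses of neighbouring arms without spending pulls on them, which is what drives the improved regret in this setting.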
cs.LG cs.AI stat.ME stat.ML | null | 1503.06239 | null | null | http://arxiv.org/pdf/1503.06239v1 | 2015-03-20T22:01:45Z | 2015-03-20T22:01:45Z | Block-Wise MAP Inference for Determinantal Point Processes with
Application to Change-Point Detection | Existing MAP inference algorithms for determinantal point processes (DPPs)
need to calculate determinants or conduct eigenvalue decomposition generally at
the scale of the full kernel, which presents a great challenge for real-world
applications. In this paper, we introduce a class of DPPs, called BwDPPs, that
are characterized by an almost block diagonal kernel matrix and thus can allow
efficient block-wise MAP inference. Furthermore, BwDPPs are successfully
applied to address the difficulty of selecting change-points in the problem of
change-point detection (CPD), which results in a new BwDPP-based CPD method,
named BwDppCpd. In BwDppCpd, a preliminary set of change-point candidates is
first created based on existing well-studied metrics. Then, these change-point
candidates are treated as DPP items, and DPP-based subset selection is
conducted to give the final estimate of the change-points that favours both
quality and diversity. The effectiveness of BwDppCpd is demonstrated through
extensive experiments on five real-world datasets.
| [
"['Jinye Zhang' 'Zhijian Ou']",
"Jinye Zhang, Zhijian Ou"
]
|
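For context on the MAP-inference step discussed above, here is a generic greedy baseline for DPP MAP inference: items (for example, change-point candidates) are added one at a time as long as they increase the log-determinant of the selected kernel submatrix. The toy kernel, which multiplies per-candidate quality by a similarity term that penalizes nearby picks, is an assumption; the block-wise BwDPP algorithm and the BwDppCpd pipeline are not reproduced.

```python
import numpy as np

def greedy_dpp_map(L, max_items):
    """Greedy MAP inference for a DPP: add the item with the largest positive
    marginal gain in log det(L_S) until no item improves the objective."""
    selected, cur_logdet = [], 0.0          # log det of the empty selection is 0
    for _ in range(max_items):
        best_gain, best_i, best_logdet = 0.0, None, None
        for i in range(len(L)):
            if i in selected:
                continue
            S = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(S, S)])
            if sign <= 0:
                continue
            gain = logdet - cur_logdet
            if gain > best_gain:
                best_gain, best_i, best_logdet = gain, i, logdet
        if best_i is None:                  # no item increases the determinant
            break
        selected.append(best_i)
        cur_logdet = best_logdet
    return selected

# Toy kernel over change-point candidates: quality on the diagonal,
# similarity (discouraging nearby picks) off the diagonal.
rng = np.random.default_rng(0)
pos = np.sort(rng.uniform(0, 100, 12))
quality = rng.uniform(0.5, 2.0, 12)
sim = np.exp(-np.abs(pos[:, None] - pos[None, :]) / 10.0)
L = np.outer(quality, quality) * sim
print(greedy_dpp_map(L, max_items=5))
```

The determinant objective rewards high-quality candidates while penalizing selections that are close to each other, which is the quality/diversity trade-off the abstract describes.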
stat.ML cs.LG | null | 1503.06250 | null | null | http://arxiv.org/pdf/1503.06250v1 | 2015-03-21T00:13:54Z | 2015-03-21T00:13:54Z | Fast Imbalanced Classification of Healthcare Data with Missing Values | In medical domain, data features often contain missing values. This can
create serious bias in the predictive modeling. Typical standard data mining
methods often produce poor performance measures. In this paper, we propose a
new method to simultaneously classify large datasets and reduce the effects of
missing values. The proposed method is based on a multilevel framework of the
cost-sensitive SVM and the expectation maximization (EM) imputation method for missing
values, which relies on iterated regression analyses. We compare classification
results of multilevel SVM-based algorithms on public benchmark datasets with
imbalanced classes and missing values as well as real data in health
applications, and show that our multilevel SVM-based method produces fast, more
accurate, and more robust classification results.
| [
"['Talayeh Razzaghi' 'Oleg Roderick' 'Ilya Safro' 'Nick Marko']",
"Talayeh Razzaghi and Oleg Roderick and Ilya Safro and Nick Marko"
]
|
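A much-simplified, single-level stand-in for the approach sketched above can be written with scikit-learn: regression-based iterative imputation of missing values followed by a class-weighted (cost-sensitive) SVM. The synthetic data, missingness rate, and hyperparameters are assumptions; the paper's multilevel framework is not reproduced.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy imbalanced data with missing values (~6% positives, 20% entries missing).
X = rng.normal(size=(600, 10))
y = (X[:, 0] + X[:, 1] > 2.2).astype(int)
mask = rng.random(X.shape) < 0.2
X[mask] = np.nan

# Regression-based iterative imputation followed by a cost-sensitive SVM:
# class_weight='balanced' up-weights errors on the minority class.
clf = make_pipeline(
    IterativeImputer(max_iter=10, random_state=0),
    SVC(kernel="rbf", class_weight="balanced"),
)
print(cross_val_score(clf, X, y, cv=3, scoring="balanced_accuracy").mean())
```

Balanced accuracy is used for scoring because plain accuracy is misleading when one class dominates, which is the bias the abstract warns about.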
cs.CV cs.AI cs.LG | null | 1503.06350 | null | null | http://arxiv.org/pdf/1503.06350v1 | 2015-03-21T20:54:39Z | 2015-03-21T20:54:39Z | Boosting Convolutional Features for Robust Object Proposals | Deep Convolutional Neural Networks (CNNs) have demonstrated excellent
performance in image classification, but still show room for improvement in
object-detection tasks with many categories, in particular for cluttered scenes
and occlusion. Modern detection algorithms like Regions with CNNs (Girshick et
al., 2014) rely on Selective Search (Uijlings et al., 2013) to propose regions
which with high probability represent objects, where in turn CNNs are deployed
for classification. Selective Search represents a family of sophisticated
algorithms that are engineered with multiple segmentation, appearance and
saliency cues, typically coming with a significant run-time overhead.
Furthermore, (Hosang et al., 2014) have shown that most methods suffer from low
reproducibility due to unstable superpixels, even for slight image
perturbations. Although CNNs are subsequently used for classification in
top-performing object-detection pipelines, current proposal methods are
agnostic to how these models parse objects and their rich learned
representations. As a result they may propose regions which may not resemble
high-level objects or totally miss some of them. To overcome these drawbacks we
propose a boosting approach which directly takes advantage of hierarchical CNN
features for detecting regions of interest fast. We demonstrate its performance
on ImageNet 2013 detection benchmark and compare it with state-of-the-art
methods.
| [
"['Nikolaos Karianakis' 'Thomas J. Fuchs' 'Stefano Soatto']",
"Nikolaos Karianakis, Thomas J. Fuchs and Stefano Soatto"
]
|
cs.IT cs.LG math.IT stat.ML | null | 1503.06379 | null | null | http://arxiv.org/pdf/1503.06379v3 | 2016-04-07T08:13:52Z | 2015-03-22T03:27:15Z | Relaxed Leverage Sampling for Low-rank Matrix Completion | We consider the problem of exact recovery of any $m\times n$ matrix of rank
$\varrho$ from a small number of observed entries via the standard nuclear norm
minimization framework. Such low-rank matrices have degrees of freedom
$(m+n)\varrho - \varrho^2$. We show that an arbitrary low-rank matrix can be
recovered exactly from $\Theta\left(((m+n)\varrho -
\varrho^2)\log^2(m+n)\right)$ randomly sampled entries, thus matching the lower
bound on the required number of entries (in terms of degrees of freedom), with
an additional factor of $O(\log^2(m+n))$. To achieve this bound on sample size
we observe each entry with probabilities proportional to the sum of
corresponding row and column leverage scores, minus their product. We show that
this relaxation in sampling probabilities (as opposed to sum of leverage scores
in Chen et al, 2014) can give us an $O(\varrho^2\log^2(m+n))$ additive
improvement on the (best known) sample size obtained by Chen et al, 2014, for
the nuclear norm minimization. Experiments on real data corroborate the
theoretical improvement on sample size. Further, exact recovery of $(a)$
incoherent matrices (with restricted leverage scores), and $(b)$ matrices with
only one of the row or column spaces to be incoherent, can be performed using
our relaxed leverage score sampling, via nuclear norm minimization, without
knowing the leverage scores a priori. In such settings also we can achieve
improvement on sample size.
| [
"Abhisek Kundu",
"['Abhisek Kundu']"
]
|
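The sampling rule described above, observing entry (i, j) with probability proportional to the sum of its row and column leverage scores minus their product, can be sketched as below for an exactly rank-r matrix. The oversampling constant `c` stands in for the log^2(m+n) factor from the theory and is an arbitrary choice here; the downstream nuclear-norm recovery solver is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 60, 40, 3
M = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))      # exact rank-r matrix

# Row/column leverage scores of the rank-r singular subspaces (each in [0, 1]).
U, _, Vt = np.linalg.svd(M, full_matrices=False)
row_lev = np.sum(U[:, :r] ** 2, axis=1)                    # ||U_{i,:}||^2
col_lev = np.sum(Vt[:r, :] ** 2, axis=0)                   # ||V_{j,:}||^2

# Relaxed leverage sampling: keep entry (i, j) with probability proportional to
# mu_i + nu_j - mu_i * nu_j, capped at 1 after an oversampling factor c
# (c stands in for the log^2(m+n) factor of the theory; a toy choice here).
c = 2.0
probs = np.minimum(1.0, c * (row_lev[:, None] + col_lev[None, :]
                             - row_lev[:, None] * col_lev[None, :]))
observed_mask = rng.random((m, n)) < probs
print("fraction of entries observed:", observed_mask.mean())
```

The "minus their product" term is what distinguishes this relaxation from sampling by the plain sum of leverage scores, and it is the source of the additive improvement in sample size claimed above.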
cs.DC cs.LG | null | 1503.06384 | null | null | http://arxiv.org/pdf/1503.06384v1 | 2015-03-22T05:00:08Z | 2015-03-22T05:00:08Z | Costing Generated Runtime Execution Plans for Large-Scale Machine
Learning Programs | Declarative large-scale machine learning (ML) aims at the specification of ML
algorithms in a high-level language and automatic generation of hybrid runtime
execution plans ranging from single node, in-memory computations to distributed
computations on MapReduce (MR) or similar frameworks like Spark. The
compilation of large-scale ML programs exhibits many opportunities for
automatic optimization. Advanced cost-based optimization techniques
require---as a fundamental precondition---an accurate cost model for evaluating
the impact of optimization decisions. In this paper, we share insights into a
simple and robust yet accurate technique for costing alternative runtime
execution plans of ML programs. Our cost model relies on generating and costing
runtime plans in order to automatically reflect all successive optimization
phases. Costing runtime plans also captures control flow structures such as
loops and branches, and a variety of cost factors like IO, latency, and
computation costs. Finally, we linearize all these cost factors into a single
measure of expected execution time. Within SystemML, this cost model is
leveraged by several advanced optimizers like resource optimization and global
data flow optimization. We share our lessons learned in order to provide
foundations for the optimization of ML programs.
| [
"Matthias Boehm",
"['Matthias Boehm']"
]
|
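As a toy illustration of linearizing IO, latency, and computation costs into a single expected execution time, the sketch below costs two alternative runtime plans for the same small program, one in-memory and one distributed. The operator list, hardware constants, and per-job latency are invented for illustration and are not SystemML's calibrated cost model.

```python
from dataclasses import dataclass

@dataclass
class Op:
    """One operator of a candidate runtime plan (toy model)."""
    read_bytes: float      # input IO
    write_bytes: float     # output IO
    flops: float           # computation
    is_distributed: bool   # distributed operators pay a per-job latency

# Toy hardware constants (assumptions, not calibrated values).
IO_BW = 100e6          # bytes/s effective disk/network bandwidth
FLOPS = 2e9            # floating-point operations/s per node
JOB_LATENCY = 15.0     # seconds of framework startup per distributed job

def plan_cost(ops):
    """Linearize IO, latency, and compute costs into expected execution time."""
    total = 0.0
    for op in ops:
        io_time = (op.read_bytes + op.write_bytes) / IO_BW
        compute_time = op.flops / FLOPS
        latency = JOB_LATENCY if op.is_distributed else 0.0
        total += io_time + compute_time + latency
    return total

# Compare a single-node in-memory plan vs. a distributed plan for the same program.
in_memory = [Op(8e8, 8e6, 4e9, False), Op(8e6, 8e6, 1e9, False)]
distributed = [Op(8e8, 8e6, 4e9, True), Op(8e6, 8e6, 1e9, True)]
print("in-memory plan:   ", plan_cost(in_memory), "s")
print("distributed plan: ", plan_cost(distributed), "s")
```

Costing whole runtime plans rather than individual logical operators is what lets such a model capture control flow and all preceding optimization phases in a single expected-time number.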