categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---|
cs.CL cs.LG | null | 1312.5542 | null | null | http://arxiv.org/pdf/1312.5542v3 | 2017-01-04T17:01:11Z | 2013-12-19T13:31:11Z | Word Embeddings through Hellinger PCA | Word embeddings resulting from neural language models have been shown to be
successful for a large variety of NLP tasks. However, such architectures can be
difficult and time-consuming to train. Instead, we propose to drastically
simplify the word embeddings computation through a Hellinger PCA of the word
co-occurrence matrix. We compare those new word embeddings with some well-known
embeddings on NER and movie review tasks and show that we can reach similar or
even better performance. Although deep learning is not really necessary for
generating good word embeddings, we show that it can provide an easy way to
adapt embeddings to specific tasks.
| [
"['Rémi Lebret' 'Ronan Collobert']",
"R\\'emi Lebret and Ronan Collobert"
] |
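A minimal Python sketch of the Hellinger PCA construction described above, assuming a toy whitespace-tokenized corpus and a small context window (both stand-ins, not from the paper):

```python
# Illustrative only; not the authors' implementation.
import numpy as np

corpus = ["the cat sat on the mat", "the dog sat on the log"]
tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
index = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within a +/-2 word window.
window = 2
C = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                C[index[w], index[sent[j]]] += 1.0

# Row-normalize to context distributions, apply the Hellinger (square-root)
# transform, then PCA via SVD of the centered matrix.
P = C / np.maximum(C.sum(axis=1, keepdims=True), 1e-12)
H = np.sqrt(P)
H -= H.mean(axis=0)
U, S, Vt = np.linalg.svd(H, full_matrices=False)
dim = 2
embeddings = U[:, :dim] * S[:dim]          # one row per vocabulary word
print(dict(zip(vocab, np.round(embeddings, 3))))
```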
cs.LG stat.ML | null | 1312.5578 | null | null | http://arxiv.org/pdf/1312.5578v4 | 2014-01-24T22:24:15Z | 2013-12-19T15:08:37Z | Multimodal Transitions for Generative Stochastic Networks | Generative Stochastic Networks (GSNs) have been recently introduced as an
alternative to traditional probabilistic modeling: instead of parametrizing the
data distribution directly, one parametrizes a transition operator for a Markov
chain whose stationary distribution is an estimator of the data generating
distribution. The result of training is therefore a machine that generates
samples through this Markov chain. However, the previously introduced GSN
consistency theorems suggest that in order to capture a wide class of
distributions, the transition operator in general should be multimodal,
something that has not been done before this paper. We introduce for the first
time multimodal transition distributions for GSNs, in particular using models
in the NADE family (Neural Autoregressive Density Estimator) as output
distributions of the transition operator. A NADE model is related to an RBM
(and can thus model multimodal distributions) but its likelihood (and
likelihood gradient) can be computed easily. The parameters of the NADE are
obtained as a learned function of the previous state of the learned Markov
chain. Experiments clearly illustrate the advantage of such multimodal
transition distributions over unimodal GSNs.
| [
"['Sherjil Ozair' 'Li Yao' 'Yoshua Bengio']",
"Sherjil Ozair, Li Yao and Yoshua Bengio"
] |
cs.LG | null | 1312.5602 | null | null | http://arxiv.org/pdf/1312.5602v1 | 2013-12-19T16:00:08Z | 2013-12-19T16:00:08Z | Playing Atari with Deep Reinforcement Learning | We present the first deep learning model to successfully learn control
policies directly from high-dimensional sensory input using reinforcement
learning. The model is a convolutional neural network, trained with a variant
of Q-learning, whose input is raw pixels and whose output is a value function
estimating future rewards. We apply our method to seven Atari 2600 games from
the Arcade Learning Environment, with no adjustment of the architecture or
learning algorithm. We find that it outperforms all previous approaches on six
of the games and surpasses a human expert on three of them.
| [
"['Volodymyr Mnih' 'Koray Kavukcuoglu' 'David Silver' 'Alex Graves'\n 'Ioannis Antonoglou' 'Daan Wierstra' 'Martin Riedmiller']",
"Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis\n Antonoglou, Daan Wierstra, Martin Riedmiller"
] |
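A toy tabular Q-learning loop illustrating the update rule behind the approach above; the paper itself trains a convolutional network over raw pixels, which this sketch does not attempt to reproduce (the chain MDP is an assumed stand-in):

```python
import numpy as np

n_states, n_actions, gamma, alpha = 5, 2, 0.99, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(s, a):
    # action 1 moves right, action 0 moves left; reward only at the last state
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward

for episode in range(500):
    s = 0
    for t in range(20):
        a = rng.integers(n_actions) if rng.random() < 0.1 else int(Q[s].argmax())
        s_next, r = step(s, a)
        # Q-learning target: r + gamma * max_a' Q(s', a')
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(np.round(Q, 2))  # greedy policy should prefer action 1 (move right)
```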
cs.CV cs.LG stat.ML | null | 1312.5604 | null | null | http://arxiv.org/pdf/1312.5604v2 | 2014-02-06T12:24:54Z | 2013-12-19T16:01:41Z | Learning Transformations for Classification Forests | This work introduces a transformation-based learner model for classification
forests. The weak learner at each split node plays a crucial role in a
classification tree. We propose to optimize the splitting objective by learning
a linear transformation on subspaces using the nuclear norm as the optimization
criterion. The learned linear transformation restores a low-rank structure for
data from the same class, and, at the same time, maximizes the separation
between different classes, thereby improving the performance of the split
function. Theoretical and experimental results support the proposed framework.
| [
"Qiang Qiu, Guillermo Sapiro",
"['Qiang Qiu' 'Guillermo Sapiro']"
] |
cs.LG | null | 1312.5650 | null | null | http://arxiv.org/pdf/1312.5650v3 | 2014-03-21T23:47:20Z | 2013-12-19T17:30:31Z | Zero-Shot Learning by Convex Combination of Semantic Embeddings | Several recent publications have proposed methods for mapping images into
continuous semantic embedding spaces. In some cases the embedding space is
trained jointly with the image transformation. In other cases the semantic
embedding space is established by an independent natural language processing
task, and then the image transformation into that space is learned in a second
stage. Proponents of these image embedding systems have stressed their
advantages over the traditional $n$-way classification framing of image
understanding, particularly in terms of the promise for zero-shot learning --
the ability to correctly annotate images of previously unseen object
categories. In this paper, we propose a simple method for constructing an image
embedding system from any existing $n$-way image classifier and a semantic word
embedding model, which contains the $n$ class labels in its vocabulary. Our
method maps images into the semantic embedding space via convex combination of
the class label embedding vectors, and requires no additional training. We show
that this simple and direct method confers many of the advantages associated
with more complex image embedding schemes, and indeed outperforms state of the
art methods on the ImageNet zero-shot learning task.
| [
"['Mohammad Norouzi' 'Tomas Mikolov' 'Samy Bengio' 'Yoram Singer'\n 'Jonathon Shlens' 'Andrea Frome' 'Greg S. Corrado' 'Jeffrey Dean']",
"Mohammad Norouzi and Tomas Mikolov and Samy Bengio and Yoram Singer\n and Jonathon Shlens and Andrea Frome and Greg S. Corrado and Jeffrey Dean"
] |
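A sketch of the convex-combination mapping described above, with random stand-in classifier probabilities and label embeddings in place of a real classifier and word-embedding model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_seen, n_unseen, d, T = 10, 5, 50, 3
seen_emb = rng.normal(size=(n_seen, d))      # embeddings of the n seen labels
unseen_emb = rng.normal(size=(n_unseen, d))  # embeddings of candidate unseen labels
probs = rng.dirichlet(np.ones(n_seen))       # classifier output p(y | image)

top = np.argsort(probs)[::-1][:T]            # T most probable seen classes
weights = probs[top] / probs[top].sum()
image_vec = weights @ seen_emb[top]          # convex combination, no extra training

# Nearest unseen label by cosine similarity in the embedding space.
scores = unseen_emb @ image_vec / (
    np.linalg.norm(unseen_emb, axis=1) * np.linalg.norm(image_vec))
print("predicted unseen class:", int(scores.argmax()))
```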
cs.LG | null | 1312.5663 | null | null | http://arxiv.org/pdf/1312.5663v2 | 2014-03-22T17:12:07Z | 2013-12-19T17:46:46Z | k-Sparse Autoencoders | Recently, it has been observed that when representations are learnt in a way
that encourages sparsity, improved performance is obtained on classification
tasks. These methods involve combinations of activation functions, sampling
steps and different kinds of penalties. To investigate the effectiveness of
sparsity by itself, we propose the k-sparse autoencoder, which is an
autoencoder with linear activation function, where in hidden layers only the k
highest activities are kept. When applied to the MNIST and NORB datasets, we
find that this method achieves better classification results than denoising
autoencoders, networks trained with dropout, and RBMs. k-sparse autoencoders
are simple to train and the encoding stage is very fast, making them
well-suited to large problem sizes, where conventional sparse coding algorithms
cannot be applied.
| [
"Alireza Makhzani, Brendan Frey",
"['Alireza Makhzani' 'Brendan Frey']"
] |
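A sketch of the k-sparse activation rule described above, assuming tied encoder/decoder weights and omitting training; weights are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, k = 20, 50, 5
W = rng.normal(scale=0.1, size=(n_in, n_hidden))
b = np.zeros(n_hidden)

def k_sparse_forward(x):
    h = x @ W + b                         # linear hidden activations
    idx = np.argsort(h)[:-k]              # indices of everything but the top k
    h_sparse = h.copy()
    h_sparse[idx] = 0.0                   # keep only the k highest activities
    return h_sparse @ W.T                 # tied-weight linear decoder

x = rng.normal(size=n_in)
x_hat = k_sparse_forward(x)
print("reconstruction error:", np.mean((x - x_hat) ** 2))
```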
cs.CV cs.LG | null | 1312.5697 | null | null | http://arxiv.org/pdf/1312.5697v2 | 2013-12-20T18:12:16Z | 2013-12-19T18:53:47Z | Using Web Co-occurrence Statistics for Improving Image Categorization | Object recognition and localization are important tasks in computer vision.
The focus of this work is the incorporation of contextual information in order
to improve object recognition and localization. For instance, one would not
expect to see an elephant in the middle of an ocean. We consider
a simple approach to encapsulate such common sense knowledge using
co-occurrence statistics from web documents. By merely counting the number of
times nouns (such as elephants, sharks, oceans, etc.) co-occur in web
documents, we obtain a good estimate of expected co-occurrences in visual data.
We then cast the problem of combining textual co-occurrence statistics with the
predictions of image-based classifiers as an optimization problem. The
resulting optimization problem serves as a surrogate for our inference
procedure. Despite its simplicity, this optimization problem is
effective in improving both recognition and localization accuracy. Concretely,
we observe significant improvements in recognition and localization rates for
both the ImageNet Detection 2012 and SUN 2012 datasets.
| [
"['Samy Bengio' 'Jeff Dean' 'Dumitru Erhan' 'Eugene Ie' 'Quoc Le'\n 'Andrew Rabinovich' 'Jonathon Shlens' 'Yoram Singer']",
"Samy Bengio, Jeff Dean, Dumitru Erhan, Eugene Ie, Quoc Le, Andrew\n Rabinovich, Jonathon Shlens, Yoram Singer"
] |
stat.ML cs.LG math.OC stat.AP | null | 1312.5734 | null | null | http://arxiv.org/pdf/1312.5734v1 | 2013-12-19T20:44:44Z | 2013-12-19T20:44:44Z | Time-varying Learning and Content Analytics via Sparse Factor Analysis | We propose SPARFA-Trace, a new machine learning-based framework for
time-varying learning and content analytics for education applications. We
develop a novel message passing-based, blind, approximate Kalman filter for
sparse factor analysis (SPARFA) that jointly (i) traces learner concept
knowledge over time, (ii) analyzes learner concept knowledge state transitions
(induced by interacting with learning resources, such as textbook sections,
lecture videos, etc, or the forgetting effect), and (iii) estimates the content
organization and intrinsic difficulty of the assessment questions. These
quantities are estimated solely from binary-valued (correct/incorrect) graded
learner response data and a summary of the specific actions each learner
performs (e.g., answering a question or studying a learning resource) at each
time instance. Experimental results on two online course datasets demonstrate
that SPARFA-Trace is capable of tracing each learner's concept knowledge
evolution over time, as well as analyzing the quality and content organization
of learning resources, the question-concept associations, and the question
intrinsic difficulties. Moreover, we show that SPARFA-Trace achieves comparable
or better performance in predicting unobserved learner responses than existing
collaborative filtering and knowledge tracing approaches for personalized
education.
| [
"Andrew S. Lan, Christoph Studer and Richard G. Baraniuk",
"['Andrew S. Lan' 'Christoph Studer' 'Richard G. Baraniuk']"
] |
astro-ph.IM astro-ph.CO cs.LG stat.ML | 10.1093/mnras/stt2456 | 1312.5753 | null | null | http://arxiv.org/abs/1312.5753v1 | 2013-12-18T20:18:33Z | 2013-12-18T20:18:33Z | SOMz: photometric redshift PDFs with self organizing maps and random
atlas | In this paper we explore the applicability of the unsupervised machine
learning technique of Self Organizing Maps (SOM) to estimate galaxy photometric
redshift probability density functions (PDFs). This technique takes a
spectroscopic training set, and maps the photometric attributes, but not the
redshifts, to a two dimensional surface by using a process of competitive
learning where neurons compete to more closely resemble the training data
multidimensional space. The key feature of a SOM is that it retains the
topology of the input set, revealing correlations between the attributes that
are not easily identified. We test three different 2D topological mappings:
rectangular, hexagonal, and spherical, by using data from the DEEP2 survey. We
also explore different implementations and boundary conditions on the map and
also introduce the idea of a random atlas where a large number of different
maps are created and their individual predictions are aggregated to produce a
more robust photometric redshift PDF. We also introduce a new metric, the
$I$-score, which efficiently incorporates different metrics, making it easier
to compare different results (from different parameters or different
photometric redshift codes). We find that by using a spherical topology mapping
we obtain a better representation of the underlying multidimensional topology,
which provides more accurate results that are comparable to other,
state-of-the-art machine learning algorithms. Our results illustrate that
unsupervised approaches have great potential for many astronomical problems,
and in particular for the computation of photometric redshifts.
| [
"M. Carrasco Kind and R. J. Brunner (Department of Astronomy,\n University of Illinois at Urbana-Champaign)",
"['M. Carrasco Kind' 'R. J. Brunner']"
] |
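A minimal self-organizing map training loop (rectangular topology) of the kind aggregated into the random atlas above; grid size, decay schedules, and the synthetic data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 6, 6, 5
weights = rng.random((grid_h, grid_w, dim))
grid_y, grid_x = np.mgrid[0:grid_h, 0:grid_w]
X = rng.random((500, dim))                     # stand-in for photometric attributes

for t, x in enumerate(X):
    lr = 0.5 * (1 - t / len(X))                # decaying learning rate
    radius = 3.0 * (1 - t / len(X)) + 0.5      # decaying neighbourhood radius
    d = np.linalg.norm(weights - x, axis=2)
    by, bx = np.unravel_index(d.argmin(), d.shape)        # best-matching unit
    grid_dist2 = (grid_y - by) ** 2 + (grid_x - bx) ** 2
    h = np.exp(-grid_dist2 / (2 * radius ** 2))           # neighbourhood function
    weights += lr * h[..., None] * (x - weights)

print(weights.shape)  # (6, 6, 5): a 2D map that preserves input-space topology
```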
stat.ML cs.LG | null | 1312.5766 | null | null | http://arxiv.org/pdf/1312.5766v2 | 2013-12-30T06:20:05Z | 2013-12-19T22:05:11Z | Structure-Aware Dynamic Scheduler for Parallel Machine Learning | Training large machine learning (ML) models with many variables or parameters
can take a long time if one employs sequential procedures even with stochastic
updates. A natural solution is to turn to distributed computing on a cluster;
however, naive, unstructured parallelization of ML algorithms does not usually
lead to a proportional speedup and can even result in divergence, because
dependencies between model elements can attenuate the computational gains from
parallelization and compromise correctness of inference. Recent efforts toward
this issue have benefited from exploiting the static, a priori block structures
residing in ML algorithms. In this paper, we take this path further by
exploring the dynamic block structures and workloads present during ML
program execution, which offers new opportunities for improving convergence,
correctness, and load balancing in distributed ML. We propose and showcase a
general-purpose scheduler, STRADS, for coordinating distributed updates in ML
algorithms, which harnesses the aforementioned opportunities in a systematic
way. We provide theoretical guarantees for our scheduler, and demonstrate its
efficacy versus static block structures on Lasso and Matrix Factorization.
| [
"Seunghak Lee, Jin Kyu Kim, Qirong Ho, Garth A. Gibson, Eric P. Xing",
"['Seunghak Lee' 'Jin Kyu Kim' 'Qirong Ho' 'Garth A. Gibson' 'Eric P. Xing']"
] |
cs.LG stat.ML | null | 1312.5770 | null | null | http://arxiv.org/pdf/1312.5770v3 | 2014-02-05T03:37:30Z | 2013-12-19T22:15:40Z | Consistency of Causal Inference under the Additive Noise Model | We analyze a family of methods for statistical causal inference from samples
under the so-called Additive Noise Model. While most work on the subject has
concentrated on establishing the soundness of the Additive Noise Model, the
statistical consistency of the resulting inference methods has received little
attention. We derive general conditions under which the given family of
inference methods consistently infers the causal direction in a nonparametric
setting.
| [
"Samory Kpotufe, Eleni Sgouritsa, Dominik Janzing, and Bernhard\n Sch\\\"olkopf",
"['Samory Kpotufe' 'Eleni Sgouritsa' 'Dominik Janzing' 'Bernhard Schölkopf']"
] |
cs.LG cs.CV cs.NE | null | 1312.5783 | null | null | http://arxiv.org/pdf/1312.5783v1 | 2013-12-20T00:21:36Z | 2013-12-20T00:21:36Z | Unsupervised Feature Learning by Deep Sparse Coding | In this paper, we propose a new unsupervised feature learning framework,
namely Deep Sparse Coding (DeepSC), that extends sparse coding to a multi-layer
architecture for visual object recognition tasks. The main innovation of the
framework is that it connects the sparse-encoders from different layers by a
sparse-to-dense module. The sparse-to-dense module is a composition of a local
spatial pooling step and a low-dimensional embedding process, which takes
advantage of the spatial smoothness information in the image. As a result, the
new method is able to learn several levels of sparse representation of the
image which capture features at a variety of abstraction levels and
simultaneously preserve the spatial smoothness between the neighboring image
patches. Combining the feature representations from multiple layers, DeepSC
achieves the state-of-the-art performance on multiple object recognition tasks.
| [
"Yunlong He, Koray Kavukcuoglu, Yun Wang, Arthur Szlam, Yanjun Qi",
"['Yunlong He' 'Koray Kavukcuoglu' 'Yun Wang' 'Arthur Szlam' 'Yanjun Qi']"
] |
cs.LG cs.NE | null | 1312.5813 | null | null | http://arxiv.org/pdf/1312.5813v2 | 2014-06-09T08:39:37Z | 2013-12-20T05:22:20Z | Unsupervised Pretraining Encourages Moderate-Sparseness | It is well known that direct training of deep neural networks will generally
lead to poor results. A major advance in recent years is the invention of
various pretraining methods to initialize network parameters and it was shown
that such methods lead to good prediction performance. However, the reason for
the success of pretraining has not been fully understood, although it was
argued that regularization and better optimization play certain roles. This
paper provides another explanation for the effectiveness of pretraining, where
we show pretraining leads to a sparseness of hidden unit activation in the
resulting neural networks. The main reason is that the pretraining models can
be interpreted as adaptive sparse coding. Compared to deep neural networks
with sigmoid activations, our experimental results on MNIST and Birdsong further
support this sparseness observation.
| [
"['Jun Li' 'Wei Luo' 'Jian Yang' 'Xiaotong Yuan']",
"Jun Li, Wei Luo, Jian Yang, Xiaotong Yuan"
] |
cs.NE cs.CV cs.LG stat.ML | null | 1312.5845 | null | null | http://arxiv.org/pdf/1312.5845v7 | 2015-02-16T09:37:18Z | 2013-12-20T08:24:48Z | Competitive Learning with Feedforward Supervisory Signal for Pre-trained
Multilayered Networks | We propose a novel learning method for multilayered neural networks which
uses feedforward supervisory signal and associates classification of a new
input with that of pre-trained input. The proposed method effectively uses rich
input information in the earlier layer for robust learning and revising internal
representation in a multilayer neural network.
| [
"Takashi Shinozaki and Yasushi Naruse",
"['Takashi Shinozaki' 'Yasushi Naruse']"
] |
cs.NE cs.LG stat.ML | null | 1312.5847 | null | null | http://arxiv.org/pdf/1312.5847v3 | 2014-02-19T16:00:08Z | 2013-12-20T08:30:55Z | Deep learning for neuroimaging: a validation study | Deep learning methods have recently made notable advances in the tasks of
classification and representation learning. These tasks are important for brain
imaging and neuroscience discovery, making the methods attractive for porting
to a neuroimager's toolbox. Success of these methods is, in part, explained by
the flexibility of deep learning models. However, this flexibility makes the
process of porting to new areas a difficult parameter optimization problem. In
this work we demonstrate our results (and feasible parameter ranges) in
application of deep learning methods to structural and functional brain imaging
data. We also describe a novel constraint-based approach to visualizing high
dimensional data. We use it to analyze the effect of parameter choices on data
transformations. Our results show that deep learning methods are able to learn
physiologically important representations and detect latent relations in
neuroimaging data.
| [
"['Sergey M. Plis' 'Devon R. Hjelm' 'Ruslan Salakhutdinov'\n 'Vince D. Calhoun']",
"Sergey M. Plis and Devon R. Hjelm and Ruslan Salakhutdinov and Vince\n D. Calhoun"
] |
cs.CV cs.LG cs.NE | null | 1312.5851 | null | null | http://arxiv.org/pdf/1312.5851v5 | 2014-03-06T23:27:18Z | 2013-12-20T08:42:21Z | Fast Training of Convolutional Networks through FFTs | Convolutional networks are one of the most widely employed architectures in
computer vision and machine learning. In order to leverage their ability to
learn complex functions, large amounts of data are required for training.
Training a large convolutional network to produce state-of-the-art results can
take weeks, even when using modern GPUs. Producing labels using a trained
network can also be costly when dealing with web-scale datasets. In this work,
we present a simple algorithm which accelerates training and inference by a
significant factor, and can yield improvements of over an order of magnitude
compared to existing state-of-the-art implementations. This is done by
computing convolutions as pointwise products in the Fourier domain while
reusing the same transformed feature map many times. The algorithm is
implemented on a GPU architecture and addresses a number of related challenges.
| [
"['Michael Mathieu' 'Mikael Henaff' 'Yann LeCun']",
"Michael Mathieu, Mikael Henaff, Yann LeCun"
] |
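A small illustration of the core identity exploited above, computing a 2D convolution as a pointwise product in the Fourier domain and checking it against a direct implementation (sizes are arbitrary assumptions; a real implementation batches this on a GPU and reuses transformed feature maps):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(16, 16))
kernel = rng.normal(size=(5, 5))

# Direct (full) 2D convolution for reference.
def conv2d_full(x, k):
    H, W = x.shape; h, w = k.shape
    out = np.zeros((H + h - 1, W + w - 1))
    for i in range(h):
        for j in range(w):
            out[i:i + H, j:j + W] += k[i, j] * x
    return out

# FFT-based convolution: zero-pad both to the full output size, multiply spectra.
size = (image.shape[0] + kernel.shape[0] - 1, image.shape[1] + kernel.shape[1] - 1)
fft_out = np.fft.irfft2(np.fft.rfft2(image, size) * np.fft.rfft2(kernel, size), size)

print(np.allclose(conv2d_full(image, kernel), fft_out))  # True up to round-off
```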
cs.LG cs.NE | null | 1312.5853 | null | null | http://arxiv.org/pdf/1312.5853v4 | 2014-02-18T21:35:13Z | 2013-12-20T08:45:07Z | Multi-GPU Training of ConvNets | In this work we evaluate different approaches to parallelize computation of
convolutional neural networks across several GPUs.
| [
"['Omry Yadan' 'Keith Adams' 'Yaniv Taigman' \"Marc'Aurelio Ranzato\"]",
"Omry Yadan, Keith Adams, Yaniv Taigman, Marc'Aurelio Ranzato"
] |
stat.ML cs.LG | null | 1312.5857 | null | null | http://arxiv.org/pdf/1312.5857v5 | 2014-11-25T22:26:12Z | 2013-12-20T08:59:36Z | A Generative Product-of-Filters Model of Audio | We propose the product-of-filters (PoF) model, a generative model that
decomposes audio spectra as sparse linear combinations of "filters" in the
log-spectral domain. PoF makes similar assumptions to those used in the classic
homomorphic filtering approach to signal processing, but replaces hand-designed
decompositions built of basic signal processing operations with a learned
decomposition based on statistical inference. This paper formulates the PoF
model and derives a mean-field method for posterior inference and a variational
EM algorithm to estimate the model's free parameters. We demonstrate PoF's
potential for audio processing on a bandwidth expansion task, and show that PoF
can serve as an effective unsupervised feature extractor for a speaker
identification task.
| [
"['Dawen Liang' 'Matthew D. Hoffman' 'Gautham J. Mysore']",
"Dawen Liang, Matthew D. Hoffman, Gautham J. Mysore"
] |
cs.LG | null | 1312.5869 | null | null | http://arxiv.org/pdf/1312.5869v2 | 2014-02-18T17:25:43Z | 2013-12-20T10:16:13Z | Principled Non-Linear Feature Selection | Recent non-linear feature selection approaches employing greedy optimisation
of Centred Kernel Target Alignment (KTA) exhibit strong results in terms of
generalisation accuracy and sparsity. However, they are computationally
prohibitive for large datasets. We propose randSel, a randomised feature
selection algorithm, with attractive scaling properties. Our theoretical
analysis of randSel provides strong probabilistic guarantees for correct
identification of relevant features. RandSel's characteristics make it an ideal
candidate for identifying informative learned representations. We have conducted
experiments to establish the performance of this approach, and present
encouraging results, including a 3rd position result in the recent ICML black
box learning challenge as well as competitive results for signal peptide
prediction, an important problem in bioinformatics.
| [
"Dimitrios Athanasakis, John Shawe-Taylor, Delmiro Fernandez-Reyes",
"['Dimitrios Athanasakis' 'John Shawe-Taylor' 'Delmiro Fernandez-Reyes']"
] |
stat.ML cs.LG | null | 1312.5921 | null | null | http://arxiv.org/pdf/1312.5921v2 | 2014-02-18T09:44:23Z | 2013-12-20T12:42:15Z | Group-sparse Embeddings in Collective Matrix Factorization | Collective matrix factorization (CMF) is a technique for simultaneously learning low-rank representations based
on a collection of matrices with shared entities. A typical example is the
joint modeling of user-item, item-property, and user-feature matrices in a
recommender system. The key idea in CMF is that the embeddings are shared
across the matrices, which enables transferring information between them. The
existing solutions, however, break down when the individual matrices have
low-rank structure not shared with others. In this work we present a novel CMF
solution that allows each of the matrices to have a separate low-rank structure
that is independent of the other matrices, as well as structures that are
shared only by a subset of them. We compare MAP and variational Bayesian
solutions based on alternating optimization algorithms and show that the model
automatically infers the nature of each factor using group-wise sparsity. Our
approach supports in a principled way continuous, binary and count observations
and is efficient for sparse matrices involving missing data. We illustrate the
solution on a number of examples, focusing in particular on an interesting
use-case of augmented multi-view learning.
| [
"['Arto Klami' 'Guillaume Bouchard' 'Abhishek Tripathi']",
"Arto Klami, Guillaume Bouchard and Abhishek Tripathi"
] |
cs.LG | 10.1007/978-3-319-31750-2_24 | 1312.5946 | null | null | http://arxiv.org/abs/1312.5946v3 | 2017-05-30T07:44:37Z | 2013-12-20T14:08:48Z | Adaptive Seeding for Gaussian Mixture Models | We present new initialization methods for the expectation-maximization
algorithm for multivariate Gaussian mixture models. Our methods are adaptations
of the well-known $K$-means++ initialization and the Gonzalez algorithm. We
thereby aim to close the gap between simple random seedings (e.g. uniform) and
complex methods that crucially depend on the right choice of hyperparameters.
Our extensive experiments, on artificial as well as real-world data sets,
indicate the usefulness of our methods compared to common techniques that e.g.
apply the original $K$-means++ or Gonzalez algorithm directly.
| [
"Johannes Bl\\\"omer and Kathrin Bujna",
"['Johannes Blömer' 'Kathrin Bujna']"
] |
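A sketch of $K$-means++-style seeding of the kind adapted above for initializing EM on a Gaussian mixture; the synthetic data and the helper name `kmeanspp_seeds` are assumptions for illustration:

```python
import numpy as np

def kmeanspp_seeds(X, k, rng):
    # First mean uniformly at random, then each next mean with probability
    # proportional to the squared distance to the closest mean chosen so far.
    means = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - m) ** 2, axis=1) for m in means], axis=0)
        probs = d2 / d2.sum()
        means.append(X[rng.choice(len(X), p=probs)])
    return np.array(means)

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(c, 0.3, size=(100, 2)) for c in ((0, 0), (3, 3), (0, 4))])
mu0 = kmeanspp_seeds(X, k=3, rng=rng)
print(np.round(mu0, 2))  # initial means handed to the EM algorithm
```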
cs.CL cs.LG | null | 1312.5985 | null | null | http://arxiv.org/pdf/1312.5985v2 | 2014-02-18T15:27:24Z | 2013-12-20T15:21:15Z | Learning Type-Driven Tensor-Based Meaning Representations | This paper investigates the learning of 3rd-order tensors representing the
semantics of transitive verbs. The meaning representations are part of a
type-driven tensor-based semantic framework, from the newly emerging field of
compositional distributional semantics. Standard techniques from the neural
networks literature are used to learn the tensors, which are tested on a
selectional preference-style task with a simple 2-dimensional sentence space.
Promising results are obtained against a competitive corpus-based baseline. We
argue that extending this work beyond transitive verbs, and to
higher-dimensional sentence spaces, is an interesting and challenging problem
for the machine learning community to consider.
| [
"Tamara Polajnar and Luana Fagarasan and Stephen Clark",
"['Tamara Polajnar' 'Luana Fagarasan' 'Stephen Clark']"
] |
cs.NE cs.LG stat.ML | null | 1312.6002 | null | null | http://arxiv.org/pdf/1312.6002v3 | 2014-02-14T09:47:11Z | 2013-12-20T16:13:54Z | Stochastic Gradient Estimate Variance in Contrastive Divergence and
Persistent Contrastive Divergence | Contrastive Divergence (CD) and Persistent Contrastive Divergence (PCD) are
popular methods for training the weights of Restricted Boltzmann Machines.
However, both methods use an approximate method for sampling from the model
distribution. As a side effect, these approximations yield significantly
different biases and variances for stochastic gradient estimates of individual
data points. It is well known that CD yields a biased gradient estimate. In
this paper we however show empirically that CD has a lower stochastic gradient
estimate variance than exact sampling, while the mean of subsequent PCD
estimates has a higher variance than exact sampling. The results give one
explanation to the finding that CD can be used with smaller minibatches or
higher learning rates than PCD.
| [
"Mathias Berglund, Tapani Raiko",
"['Mathias Berglund' 'Tapani Raiko']"
] |
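A sketch of a single CD-1 stochastic gradient estimate for a binary RBM, the quantity whose variance is analyzed above; biases are omitted and all sizes are illustrative assumptions (PCD would keep the Gibbs chain alive between updates instead of restarting it at the data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 6, 4
W = rng.normal(scale=0.1, size=(n_vis, n_hid))
v0 = rng.integers(0, 2, size=n_vis).astype(float)    # one data point

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
sample = lambda p: (rng.random(p.shape) < p).astype(float)

# Positive phase: hidden probabilities given the data.
ph0 = sigmoid(v0 @ W)
h0 = sample(ph0)
# Negative phase: one step of Gibbs sampling (the "1" in CD-1).
pv1 = sigmoid(h0 @ W.T)
v1 = sample(pv1)
ph1 = sigmoid(v1 @ W)

grad_W = np.outer(v0, ph0) - np.outer(v1, ph1)        # CD-1 gradient estimate
W += 0.05 * grad_W                                     # one SGD step
print(np.round(grad_W, 3))
```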
cs.NE cs.LG stat.ML | null | 1312.6026 | null | null | http://arxiv.org/pdf/1312.6026v5 | 2014-04-24T15:17:07Z | 2013-12-20T16:39:39Z | How to Construct Deep Recurrent Neural Networks | In this paper, we explore different ways to extend a recurrent neural network
(RNN) to a \textit{deep} RNN. We start by arguing that the concept of depth in
an RNN is not as clear as it is in feedforward neural networks. By carefully
analyzing and understanding the architecture of an RNN, however, we find three
points of an RNN which may be made deeper: (1) input-to-hidden function, (2)
hidden-to-hidden transition and (3) hidden-to-output function. Based on this
observation, we propose two novel architectures of a deep RNN which are
orthogonal to an earlier attempt of stacking multiple recurrent layers to build
a deep RNN (Schmidhuber, 1992; El Hihi and Bengio, 1996). We provide an
alternative interpretation of these deep RNNs using a novel framework based on
neural operators. The proposed deep RNNs are empirically evaluated on the tasks
of polyphonic music prediction and language modeling. The experimental result
supports our claim that the proposed deep RNNs benefit from the depth and
outperform the conventional, shallow RNNs.
| [
"['Razvan Pascanu' 'Caglar Gulcehre' 'Kyunghyun Cho' 'Yoshua Bengio']",
"Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Yoshua Bengio"
] |
cs.LG | null | 1312.6042 | null | null | http://arxiv.org/pdf/1312.6042v4 | 2014-06-17T10:24:51Z | 2013-12-20T17:03:50Z | Learning States Representations in POMDP | We propose to deal with sequential processes where only partial observations
are available by learning a latent representation space on which policies may
be accurately learned.
| [
"Gabriella Contardo and Ludovic Denoyer and Thierry Artieres and\n Patrick Gallinari",
"['Gabriella Contardo' 'Ludovic Denoyer' 'Thierry Artieres'\n 'Patrick Gallinari']"
] |
cs.LG | null | 1312.6055 | null | null | http://arxiv.org/pdf/1312.6055v3 | 2014-02-25T18:16:54Z | 2013-12-20T17:44:06Z | Unit Tests for Stochastic Optimization | Optimization by stochastic gradient descent is an important component of many
large-scale machine learning algorithms. A wide variety of such optimization
algorithms have been devised; however, it is unclear whether these algorithms
are robust and widely applicable across many different optimization landscapes.
In this paper we develop a collection of unit tests for stochastic
optimization. Each unit test rapidly evaluates an optimization algorithm on a
small-scale, isolated, and well-understood difficulty, rather than in
real-world scenarios where many such issues are entangled. Passing these unit
tests is not sufficient, but absolutely necessary for any algorithms with
claims to generality or robustness. We give initial quantitative and
qualitative results on numerous established algorithms. The testing framework
is open-source, extensible, and easy to apply to new algorithms.
| [
"Tom Schaul, Ioannis Antonoglou, David Silver",
"['Tom Schaul' 'Ioannis Antonoglou' 'David Silver']"
] |
cs.LG | null | 1312.6062 | null | null | http://arxiv.org/pdf/1312.6062v2 | 2014-04-09T07:42:24Z | 2013-12-20T18:14:44Z | Stopping Criteria in Contrastive Divergence: Alternatives to the
Reconstruction Error | Restricted Boltzmann Machines (RBMs) are general unsupervised learning
devices to ascertain generative models of data distributions. RBMs are often
trained using the Contrastive Divergence learning algorithm (CD), an
approximation to the gradient of the data log-likelihood. A simple
reconstruction error is often used to decide whether the approximation provided
by the CD algorithm is good enough, though several authors (Schulz et al.,
2010; Fischer & Igel, 2010) have raised doubts concerning the feasibility of
this procedure. However, not many alternatives to the reconstruction error have
been used in the literature. In this manuscript we investigate simple
alternatives to the reconstruction error in order to detect as soon as possible
the decrease in the log-likelihood during learning.
| [
"David Buchaca, Enrique Romero, Ferran Mazzanti, Jordi Delgado",
"['David Buchaca' 'Enrique Romero' 'Ferran Mazzanti' 'Jordi Delgado']"
] |
cs.LG | null | 1312.6086 | null | null | http://arxiv.org/pdf/1312.6086v1 | 2013-12-20T19:33:26Z | 2013-12-20T19:33:26Z | The return of AdaBoost.MH: multi-class Hamming trees | Within the framework of AdaBoost.MH, we propose to train vector-valued
decision trees to optimize the multi-class edge without reducing the
multi-class problem to $K$ binary one-against-all classifications. The key
element of the method is a vector-valued decision stump, factorized into an
input-independent vector of length $K$ and a label-independent scalar classifier.
At inner tree nodes, the label-dependent vector is discarded and the binary
classifier can be used for partitioning the input space into two regions. The
algorithm retains the conceptual elegance, power, and computational efficiency
of binary AdaBoost. In experiments it is on par with support vector machines
and with the best existing multi-class boosting algorithm AOSOLogitBoost, and
it is significantly better than other known implementations of AdaBoost.MH.
| [
"Bal\\'azs K\\'egl",
"['Balázs Kégl']"
] |
cs.LG cs.NE | null | 1312.6098 | null | null | http://arxiv.org/pdf/1312.6098v5 | 2014-02-14T17:52:12Z | 2013-12-20T20:22:31Z | On the number of response regions of deep feed forward networks with
piece-wise linear activations | This paper explores the complexity of deep feedforward networks with linear
pre-synaptic couplings and rectified linear activations. This is a contribution
to the growing body of work contrasting the representational power of deep and
shallow network architectures. In particular, we offer a framework for
comparing deep and shallow models that belong to the family of piecewise linear
functions based on computational geometry. We look at a deep rectifier
multi-layer perceptron (MLP) with linear output units and compare it with a
single layer version of the model. In the asymptotic regime, when the number of
inputs stays constant, if the shallow model has $kn$ hidden units and $n_0$
inputs, then the number of linear regions is $O(k^{n_0}n^{n_0})$. For a $k$
layer model with $n$ hidden units on each layer it is $\Omega(\left\lfloor
{n}/{n_0}\right\rfloor^{k-1}n^{n_0})$. The number
$\left\lfloor{n}/{n_0}\right\rfloor^{k-1}$ grows faster than $k^{n_0}$ when $n$
tends to infinity or when $k$ tends to infinity and $n \geq 2n_0$.
Additionally, even when $k$ is small, if we restrict $n$ to be $2n_0$, we can
show that a deep model has considerably more linear regions than a shallow one.
We consider this as a first step towards understanding the complexity of these
models and specifically towards providing suitable mathematical tools for
future analysis.
| [
"['Razvan Pascanu' 'Guido Montufar' 'Yoshua Bengio']",
"Razvan Pascanu and Guido Montufar and Yoshua Bengio"
] |
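To make the quoted bounds concrete, a short computation plugging example values of $n_0$, $n$, and $k$ into the shallow bound $k^{n_0}n^{n_0}$ and the deep bound $\lfloor n/n_0\rfloor^{k-1}n^{n_0}$ (illustrative numbers only; these are bounds, not exact counts):

```python
n0 = 2            # number of inputs
n = 8             # hidden units per layer
for k in (2, 4, 8):
    shallow = (k ** n0) * (n ** n0)
    deep = (n // n0) ** (k - 1) * (n ** n0)
    print(f"k={k}: shallow bound ~ {shallow:>10,d}   deep bound ~ {deep:>14,d}")
```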
cs.NE cs.LG q-bio.NC | null | 1312.6108 | null | null | http://arxiv.org/pdf/1312.6108v3 | 2014-02-17T16:41:30Z | 2013-12-20T20:47:28Z | Modeling correlations in spontaneous activity of visual cortex with
centered Gaussian-binary deep Boltzmann machines | Spontaneous cortical activity -- the ongoing cortical activities in absence
of intentional sensory input -- is considered to play a vital role in many
aspects of both normal brain functions and mental dysfunctions. We present a
centered Gaussian-binary Deep Boltzmann Machine (GDBM) for modeling the
activity in early cortical visual areas and relate the random sampling in GDBMs
to the spontaneous cortical activity. After training the proposed model on
natural image patches, we show that the samples collected from the model's
probability distribution encompass similar activity patterns as found in the
spontaneous activity. Specifically, filters having the same orientation
preference tend to be active together during random sampling. Our work
demonstrates that the centered GDBM is a meaningful modeling approach for basic
receptive field properties and the emergence of spontaneous activity patterns
in early cortical visual areas. In addition, we show empirically that centered
GDBMs do not suffer from the training difficulties that standard GDBMs do and
can be properly trained without layer-wise pretraining.
| [
"Nan Wang, Dirk Jancke, Laurenz Wiskott",
"['Nan Wang' 'Dirk Jancke' 'Laurenz Wiskott']"
] |
stat.ML cs.LG | null | 1312.6114 | null | null | http://arxiv.org/pdf/1312.6114v11 | 2022-12-10T21:04:00Z | 2013-12-20T20:58:10Z | Auto-Encoding Variational Bayes | How can we perform efficient inference and learning in directed probabilistic
models, in the presence of continuous latent variables with intractable
posterior distributions, and large datasets? We introduce a stochastic
variational inference and learning algorithm that scales to large datasets and,
under some mild differentiability conditions, even works in the intractable
case. Our contributions are two-fold. First, we show that a reparameterization
of the variational lower bound yields a lower bound estimator that can be
straightforwardly optimized using standard stochastic gradient methods. Second,
we show that for i.i.d. datasets with continuous latent variables per
datapoint, posterior inference can be made especially efficient by fitting an
approximate inference model (also called a recognition model) to the
intractable posterior using the proposed lower bound estimator. Theoretical
advantages are reflected in experimental results.
| [
"Diederik P Kingma, Max Welling"
] |
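A sketch of the reparameterization trick and the single-sample lower-bound estimate described above, with a stand-in decoder term in place of a real recognition/generative network:

```python
import numpy as np

rng = np.random.default_rng(0)
d_z = 3
mu = rng.normal(size=d_z)                  # encoder output: posterior mean
log_var = rng.normal(scale=0.1, size=d_z)  # encoder output: posterior log-variance

# Reparameterization: z ~ N(mu, sigma^2) written as z = mu + sigma * eps,
# eps ~ N(0, I), so the Monte Carlo estimate is differentiable w.r.t. mu, sigma.
eps = rng.standard_normal(d_z)
z = mu + np.exp(0.5 * log_var) * eps

# Analytic KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

# Stand-in reconstruction log-likelihood log p(x|z); a real model would use a
# decoder network here.
log_px_given_z = -0.5 * np.sum((z - 1.0) ** 2)

elbo_estimate = log_px_given_z - kl        # single-sample lower-bound estimate
print(round(float(elbo_estimate), 3))
```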
stat.ML cs.LG cs.NE q-bio.NC | null | 1312.6115 | null | null | http://arxiv.org/pdf/1312.6115v5 | 2014-03-22T20:25:27Z | 2013-12-20T20:59:11Z | Neuronal Synchrony in Complex-Valued Deep Networks | Deep learning has recently led to great successes in tasks such as image
recognition (e.g. Krizhevsky et al., 2012). However, deep networks are still
outmatched by the power and versatility of the brain, perhaps in part due to
the richer neuronal computations available to cortical circuits. The challenge
is to identify which neuronal mechanisms are relevant, and to find suitable
abstractions to model them. Here, we show how aspects of spike timing, long
hypothesized to play a crucial role in cortical information processing, could
be incorporated into deep networks to build richer, versatile representations.
We introduce a neural network formulation based on complex-valued neuronal
units that is not only biologically meaningful but also amenable to a variety
of deep learning frameworks. Here, units are attributed both a firing rate and
a phase, the latter indicating properties of spike timing. We show how this
formulation qualitatively captures several aspects thought to be related to
neuronal synchrony, including gating of information processing and dynamic
binding of distributed object representations. Focusing on the latter, we
demonstrate the potential of the approach in several simple experiments. Thus,
neuronal synchrony could be a flexible mechanism that fulfills multiple
functional roles in deep networks.
| [
"David P. Reichert, Thomas Serre",
"['David P. Reichert' 'Thomas Serre']"
] |
stat.ML cs.LG cs.NE | null | 1312.6116 | null | null | http://arxiv.org/pdf/1312.6116v2 | 2014-02-19T11:13:48Z | 2013-12-20T20:59:15Z | Improving Deep Neural Networks with Probabilistic Maxout Units | We present a probabilistic variant of the recently introduced maxout unit.
The success of deep neural networks utilizing maxout can partly be attributed
to favorable performance under dropout, when compared to rectified linear
units. It however also depends on the fact that each maxout unit performs a
pooling operation over a group of linear transformations and is thus partially
invariant to changes in its input. Starting from this observation we ask the
question: Can the desirable properties of maxout units be preserved while
improving their invariance properties? We argue that our probabilistic maxout
(probout) units successfully achieve this balance. We quantitatively verify
this claim and report classification performance matching or exceeding the
current state of the art on three challenging image classification benchmarks
(CIFAR-10, CIFAR-100 and SVHN).
| [
"Jost Tobias Springenberg, Martin Riedmiller",
"['Jost Tobias Springenberg' 'Martin Riedmiller']"
] |
cs.LG | null | 1312.6117 | null | null | http://arxiv.org/pdf/1312.6117v2 | 2014-11-13T05:52:05Z | 2013-12-19T21:45:10Z | Comparison of three methods of clustering: k-means, spectral clustering and
hierarchical clustering | We compare three kinds of clustering, derive their cost and loss
functions, and calculate them. The error rate of a clustering method, and how to
calculate the error percentage, has always been an important factor for
evaluating clustering methods, so this paper introduces one way to calculate the
error rate of clustering methods. Clustering algorithms can be divided into
several categories, including partitioning algorithms, hierarchical algorithms
and density-based algorithms. Generally speaking, clustering algorithms should
be compared by their scalability, their ability to work with different attribute
types, the shapes of clusters they can form, the amount of prior knowledge
needed to set the input parameters, their handling of noise and outliers, and
their insensitivity to the order and dimensionality of the input data. K-means
is one of the simplest approaches to clustering, and clustering itself is an
unsupervised problem.
| [
"['Kamran Kowsari']",
"Kamran Kowsari"
] |
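One common way to compute the kind of clustering error rate the abstract above refers to: match predicted cluster labels to the true labels by the best permutation and count mismatches (small $k$ only, since all permutations are tried; the example labels are made up):

```python
from itertools import permutations
import numpy as np

def clustering_error(true_labels, pred_labels, k):
    true_labels, pred_labels = np.asarray(true_labels), np.asarray(pred_labels)
    best_correct = 0
    for perm in permutations(range(k)):
        mapped = np.array([perm[c] for c in pred_labels])  # relabel clusters
        best_correct = max(best_correct, int(np.sum(mapped == true_labels)))
    return 1.0 - best_correct / len(true_labels)

true = [0, 0, 0, 1, 1, 1, 2, 2, 2]
pred = [1, 1, 0, 2, 2, 2, 0, 0, 0]   # same partition, different names + 1 error
print(clustering_error(true, pred, k=3))  # 0.111...
```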
cs.NE cond-mat.dis-nn cs.CV cs.LG q-bio.NC stat.ML | null | 1312.6120 | null | null | http://arxiv.org/pdf/1312.6120v3 | 2014-02-19T17:26:57Z | 2013-12-20T20:24:00Z | Exact solutions to the nonlinear dynamics of learning in deep linear
neural networks | Despite the widespread practical success of deep learning methods, our
theoretical understanding of the dynamics of learning in deep neural networks
remains quite sparse. We attempt to bridge the gap between the theory and
practice of deep learning by systematically analyzing learning dynamics for the
restricted case of deep linear neural networks. Despite the linearity of their
input-output map, such networks have nonlinear gradient descent dynamics on
weights that change with the addition of each new hidden layer. We show that
deep linear networks exhibit nonlinear learning phenomena similar to those seen
in simulations of nonlinear networks, including long plateaus followed by rapid
transitions to lower error solutions, and faster convergence from greedy
unsupervised pretraining initial conditions than from random initial
conditions. We provide an analytical description of these phenomena by finding
new exact solutions to the nonlinear dynamics of deep learning. Our theoretical
analysis also reveals the surprising finding that as the depth of a network
approaches infinity, learning speed can nevertheless remain finite: for a
special class of initial conditions on the weights, very deep networks incur
only a finite, depth independent, delay in learning speed relative to shallow
networks. We show that, under certain conditions on the training data,
unsupervised pretraining can find this special class of initial conditions,
while scaled random Gaussian initializations cannot. We further exhibit a new
class of random orthogonal initial conditions on weights that, like
unsupervised pre-training, enjoys depth independent learning times. We further
show that these initial conditions also lead to faithful propagation of
gradients even in deep nonlinear networks, as long as they operate in a special
regime known as the edge of chaos.
| [
"Andrew M. Saxe, James L. McClelland, Surya Ganguli",
"['Andrew M. Saxe' 'James L. McClelland' 'Surya Ganguli']"
] |
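A sketch of the random orthogonal weight initialization discussed above, built from the QR decomposition of a Gaussian random matrix (the helper name `orthogonal_init` is an assumption); the rows or columns of the result are orthonormal:

```python
import numpy as np

def orthogonal_init(n_out, n_in, rng):
    a = rng.standard_normal((max(n_out, n_in), min(n_out, n_in)))
    q, r = np.linalg.qr(a)
    q *= np.sign(np.diag(r))           # fix signs so the distribution is uniform
    return q.T if n_out < n_in else q  # shape (n_out, n_in)

rng = np.random.default_rng(0)
W = orthogonal_init(4, 4, rng)
print(np.allclose(W.T @ W, np.eye(4)))  # True
```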
cs.LG cs.NE | null | 1312.6157 | null | null | http://arxiv.org/pdf/1312.6157v2 | 2014-01-02T17:06:25Z | 2013-12-20T21:52:08Z | Distinction between features extracted using deep belief networks | Data representation is an important pre-processing step in many machine
learning algorithms. There are a number of methods used for this task such as
Deep Belief Networks (DBNs) and Discrete Fourier Transforms (DFTs). Since some
of the features extracted using automated feature extraction methods may not
always be related to a specific machine learning task, in this paper we propose
two methods in order to make a distinction between extracted features based on
their relevancy to the task. We applied these two methods to a Deep Belief
Network trained for a face recognition task.
| [
"['Mohammad Pezeshki' 'Sajjad Gholami' 'Ahmad Nickabadi']",
"Mohammad Pezeshki, Sajjad Gholami, Ahmad Nickabadi"
] |
cs.LG cs.CV cs.NE | null | 1312.6158 | null | null | http://arxiv.org/pdf/1312.6158v2 | 2014-01-02T17:04:35Z | 2013-12-20T21:56:38Z | Deep Belief Networks for Image Denoising | Deep Belief Networks which are hierarchical generative models are effective
tools for feature representation and extraction. Furthermore, DBNs can be used
in numerous aspects of Machine Learning such as image denoising. In this paper,
we propose a novel method for image denoising which relies on the DBNs' ability
in feature representation. This work is based upon learning of the noise
behavior. Generally, features which are extracted using DBNs are presented as
the values of the last-layer nodes. We train a DBN in such a way that the
network fully distinguishes between nodes representing noise and nodes
representing image content in the last layer of the DBN, i.e. the nodes in the
last layer of the trained DBN are divided into two distinct groups. After
detecting the nodes which represent the noise, we are able to make the noise
nodes inactive
and reconstruct a noiseless image. In section 4 we explore the results of
applying this method on the MNIST dataset of handwritten digits which is
corrupted with additive white Gaussian noise (AWGN). A reduction of 65.9% in
average mean square error (MSE) was achieved when the proposed method was used
for the reconstruction of the noisy images.
| [
"Mohammad Ali Keyvanrad, Mohammad Pezeshki, and Mohammad Ali\n Homayounpour",
"['Mohammad Ali Keyvanrad' 'Mohammad Pezeshki' 'Mohammad Ali Homayounpour']"
] |
cs.LG cs.CL | null | 1312.6168 | null | null | http://arxiv.org/pdf/1312.6168v3 | 2014-02-18T11:22:30Z | 2013-12-20T22:44:26Z | Factorial Hidden Markov Models for Learning Representations of Natural
Language | Most representation learning algorithms for language and image processing are
local, in that they identify features for a data point based on surrounding
points. Yet in language processing, the correct meaning of a word often depends
on its global context. As a step toward incorporating global context into
representation learning, we develop a representation learning algorithm that
incorporates joint prediction into its technique for producing features for a
word. We develop efficient variational methods for learning Factorial Hidden
Markov Models from large texts, and use variational distributions to produce
features for each word that are sensitive to the entire input sequence, not
just to a local context window. Experiments on part-of-speech tagging and
chunking indicate that the features are competitive with or better than
existing state-of-the-art representation learning methods.
| [
"['Anjan Nepal' 'Alexander Yates']",
"Anjan Nepal and Alexander Yates"
] |
cs.LG cs.SI physics.soc-ph | null | 1312.6169 | null | null | http://arxiv.org/pdf/1312.6169v2 | 2014-02-02T20:36:57Z | 2013-12-20T22:49:01Z | Learning Information Spread in Content Networks | We introduce a model for predicting the diffusion of content information on
social media. While propagation is usually modeled on discrete graph structures,
we introduce here a continuous diffusion model, where nodes in a diffusion
cascade are projected onto a latent space with the property that their
proximity in this space reflects the temporal diffusion process. We focus on
the task of predicting contaminated users for an initial information
source and provide preliminary results on different datasets.
| [
"['Cédric Lagnier' 'Simon Bourigault' 'Sylvain Lamprier' 'Ludovic Denoyer'\n 'Patrick Gallinari']",
"C\\'edric Lagnier, Simon Bourigault, Sylvain Lamprier, Ludovic Denoyer\n and Patrick Gallinari"
] |
cs.NE cs.CV cs.LG | null | 1312.6171 | null | null | http://arxiv.org/pdf/1312.6171v2 | 2014-01-10T23:19:26Z | 2013-12-20T23:07:25Z | Learning Paired-associate Images with An Unsupervised Deep Learning
Architecture | This paper presents an unsupervised multi-modal learning system that learns
associative representation from two input modalities, or channels, such that
input on one channel will correctly generate the associated response at the
other and vice versa. In this way, the system develops a kind of supervised
classification model meant to simulate aspects of human associative memory. The
system uses a deep learning architecture (DLA) composed of two input/output
channels formed from stacked Restricted Boltzmann Machines (RBM) and an
associative memory network that combines the two channels. The DLA is trained
on pairs of MNIST handwritten digit images to develop hierarchical features and
associative representations that are able to reconstruct one image given its
paired-associate. Experiments show that the multi-modal learning system
generates models that are as accurate as back-propagation networks but with the
advantage of a bi-directional network and unsupervised learning from either
paired or non-paired training examples.
| [
"Ti Wang and Daniel L. Silver",
"['Ti Wang' 'Daniel L. Silver']"
] |
cs.LG cs.MM | null | 1312.6180 | null | null | http://arxiv.org/pdf/1312.6180v1 | 2013-12-21T00:32:24Z | 2013-12-21T00:32:24Z | Manifold regularized kernel logistic regression for web image annotation | With the rapid advance of Internet technology and smart devices, users often
need to manage large amounts of multimedia information using smart devices,
such as personal image and video accessing and browsing. These requirements
heavily rely on the success of image (video) annotation, and thus large scale
image annotation through innovative machine learning methods has attracted
intensive attention in recent years. One representative work is support vector
machine (SVM). Although it works well in binary classification, SVM has a
non-smooth loss function and cannot naturally cover the multi-class case. In this
paper, we propose manifold regularized kernel logistic regression (KLR) for web
image annotation. Compared to SVM, KLR has the following advantages: (1) the
KLR has a smooth loss function; (2) the KLR produces an explicit estimate of
the probability instead of class label; and (3) the KLR can naturally be
generalized to the multi-class case. We carefully conduct experiments on MIR
FLICKR dataset and demonstrate the effectiveness of manifold regularized kernel
logistic regression for image annotation.
| [
"W. Liu, H. Liu, D.Tao, Y. Wang, K. Lu",
"['W. Liu' 'H. Liu' 'D. Tao' 'Y. Wang' 'K. Lu']"
] |
cs.MS cs.LG cs.NA stat.ML | null | 1312.6182 | null | null | http://arxiv.org/pdf/1312.6182v1 | 2013-12-21T00:38:02Z | 2013-12-21T00:38:02Z | Large-Scale Paralleled Sparse Principal Component Analysis | Principal component analysis (PCA) is a statistical technique commonly used
in multivariate data analysis. However, PCA can be difficult to interpret and
explain since the principal components (PCs) are linear combinations of the
original variables. Sparse PCA (SPCA) aims to balance statistical fidelity and
interpretability by approximating sparse PCs whose projections capture the
maximal variance of original data. In this paper we present an efficient and
paralleled method of SPCA using graphics processing units (GPUs), which can
process large blocks of data in parallel. Specifically, we construct parallel
implementations of the four optimization formulations of the generalized power
method of SPCA (GP-SPCA), one of the most efficient and effective SPCA
approaches, on a GPU. The parallel GPU implementation of GP-SPCA (using CUBLAS)
is up to eleven times faster than the corresponding CPU implementation (using
CBLAS), and up to 107 times faster than a MatLab implementation. Extensive
comparative experiments in several real-world datasets confirm that SPCA offers
a practical advantage.
| [
"W. Liu, H. Zhang, D. Tao, Y. Wang, K. Lu",
"['W. Liu' 'H. Zhang' 'D. Tao' 'Y. Wang' 'K. Lu']"
] |
cs.LG cs.NE | null | 1312.6184 | null | null | http://arxiv.org/pdf/1312.6184v7 | 2014-10-11T00:19:10Z | 2013-12-21T00:47:43Z | Do Deep Nets Really Need to be Deep? | Currently, deep neural networks are the state of the art on problems such as
speech recognition and computer vision. In this extended abstract, we show that
shallow feed-forward networks can learn the complex functions previously
learned by deep nets and achieve accuracies previously only achievable with
deep models. Moreover, in some cases the shallow neural nets can learn these
deep functions using a total number of parameters similar to the original deep
model. We evaluate our method on the TIMIT phoneme recognition task and are
able to train shallow fully-connected nets that perform similarly to complex,
well-engineered, deep convolutional architectures. Our success in training
shallow neural nets to mimic deeper models suggests that there probably exist
better algorithms for training shallow feed-forward nets than those currently
available.
| [
"Lei Jimmy Ba, Rich Caruana",
"['Lei Jimmy Ba' 'Rich Caruana']"
] |
cs.CV cs.DC cs.LG cs.NE | null | 1312.6186 | null | null | http://arxiv.org/pdf/1312.6186v1 | 2013-12-21T00:56:56Z | 2013-12-21T00:56:56Z | GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network
Training | The ability to train large-scale neural networks has resulted in
state-of-the-art performance in many areas of computer vision. These results
have largely come from computational breakthroughs of two forms: model
parallelism, e.g. GPU accelerated training, which has seen quick adoption in
computer vision circles, and data parallelism, e.g. A-SGD, whose large scale
has been used mostly in industry. We report early experiments with a system
that makes use of both model parallelism and data parallelism, we call GPU
A-SGD. We show using GPU A-SGD it is possible to speed up training of large
convolutional neural networks useful for computer vision. We believe GPU A-SGD
will make it possible to train larger networks on larger training sets in a
reasonable amount of time.
| [
"Thomas Paine, Hailin Jin, Jianchao Yang, Zhe Lin, Thomas Huang",
"['Thomas Paine' 'Hailin Jin' 'Jianchao Yang' 'Zhe Lin' 'Thomas Huang']"
] |
cs.LG | null | 1312.6190 | null | null | http://arxiv.org/pdf/1312.6190v2 | 2014-05-28T16:35:17Z | 2013-12-21T01:50:08Z | Adaptive Feature Ranking for Unsupervised Transfer Learning | Transfer Learning is concerned with the application of knowledge gained from
solving a problem to a different but related problem domain. In this paper, we
propose a method and efficient algorithm for ranking and selecting
representations from a Restricted Boltzmann Machine trained on a source domain
to be transferred onto a target domain. Experiments carried out using the
MNIST, ICDAR and TiCC image datasets show that the proposed adaptive feature
ranking and transfer learning method offers statistically significant
improvements on the training of RBMs. Our method is general in that the
knowledge chosen by the ranking function does not depend on its relation to any
specific target domain, and it works with unsupervised learning and
knowledge-based transfer.
| [
"Son N. Tran, Artur d'Avila Garcez",
"['Son N. Tran' \"Artur d'Avila Garcez\"]"
] |
cs.CL cs.LG | null | 1312.6192 | null | null | http://arxiv.org/pdf/1312.6192v4 | 2014-02-15T20:59:04Z | 2013-12-21T02:29:42Z | Can recursive neural tensor networks learn logical reasoning? | Recursive neural network models and their accompanying vector representations
for words have seen success in an array of increasingly semantically
sophisticated tasks, but almost nothing is known about their ability to
accurately capture the aspects of linguistic meaning that are necessary for
interpretation or reasoning. To evaluate this, I train a recursive model on a
new corpus of constructed examples of logical reasoning in short sentences,
like the inference of "some animal walks" from "some dog walks" or "some cat
walks," given that dogs and cats are animals. This model learns representations
that generalize well to new types of reasoning pattern in all but a few cases,
a result which is promising for the ability of learned representation models to
capture logical reasoning.
| [
"Samuel R. Bowman",
"['Samuel R. Bowman']"
] |
stat.ML cs.LG cs.NE | null | 1312.6197 | null | null | http://arxiv.org/pdf/1312.6197v2 | 2014-01-02T12:26:53Z | 2013-12-21T03:19:33Z | An empirical analysis of dropout in piecewise linear networks | The recently introduced dropout training criterion for neural networks has
been the subject of much attention due to its simplicity and remarkable
effectiveness as a regularizer, as well as its interpretation as a training
procedure for an exponentially large ensemble of networks that share
parameters. In this work we empirically investigate several questions related
to the efficacy of dropout, specifically as it concerns networks employing the
popular rectified linear activation function. We investigate the quality of the
test time weight-scaling inference procedure by evaluating the geometric
average exactly in small models, as well as compare the performance of the
geometric mean to the arithmetic mean more commonly employed by ensemble
techniques. We explore the effect of tied weights on the ensemble
interpretation by training ensembles of masked networks without tied weights.
Finally, we investigate an alternative criterion based on a biased estimator of
the maximum likelihood ensemble gradient.
| [
"['David Warde-Farley' 'Ian J. Goodfellow' 'Aaron Courville'\n 'Yoshua Bengio']",
"David Warde-Farley, Ian J. Goodfellow, Aaron Courville and Yoshua\n Bengio"
] |
cs.CV cs.LG cs.NE | null | 1312.6199 | null | null | http://arxiv.org/pdf/1312.6199v4 | 2014-02-19T16:33:14Z | 2013-12-21T03:36:08Z | Intriguing properties of neural networks | Deep neural networks are highly expressive models that have recently achieved
state of the art performance on speech and visual recognition tasks. While
their expressiveness is the reason they succeed, it also causes them to learn
uninterpretable solutions that could have counter-intuitive properties. In this
paper we report two such properties.
First, we find that there is no distinction between individual high level
units and random linear combinations of high level units, according to various
methods of unit analysis. This suggests that it is the space, rather than the
individual units, that contains the semantic information in the high layers
of neural networks.
Second, we find that deep neural networks learn input-output mappings that
are fairly discontinuous to a significant extent. We can cause the network to
misclassify an image by applying a certain imperceptible perturbation, which is
found by maximizing the network's prediction error. In addition, the specific
nature of these perturbations is not a random artifact of learning: the same
perturbation can cause a different network, that was trained on a different
subset of the dataset, to misclassify the same input.
| [
"['Christian Szegedy' 'Wojciech Zaremba' 'Ilya Sutskever' 'Joan Bruna'\n 'Dumitru Erhan' 'Ian Goodfellow' 'Rob Fergus']",
"Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna,\n Dumitru Erhan, Ian Goodfellow, Rob Fergus"
] |
cs.LG cs.CV cs.NE | null | 1312.6203 | null | null | http://arxiv.org/pdf/1312.6203v3 | 2014-05-21T16:27:09Z | 2013-12-21T04:25:53Z | Spectral Networks and Locally Connected Networks on Graphs | Convolutional Neural Networks are extremely efficient architectures in image
and audio recognition tasks, thanks to their ability to exploit the local
translational invariance of signal classes over their domain. In this paper we
consider possible generalizations of CNNs to signals defined on more general
domains without the action of a translation group. In particular, we propose
two constructions, one based upon a hierarchical clustering of the domain, and
another based on the spectrum of the graph Laplacian. We show through
experiments that for low-dimensional graphs it is possible to learn
convolutional layers with a number of parameters independent of the input size,
resulting in efficient deep architectures.
| [
"['Joan Bruna' 'Wojciech Zaremba' 'Arthur Szlam' 'Yann LeCun']",
"Joan Bruna, Wojciech Zaremba, Arthur Szlam and Yann LeCun"
] |
cs.CV cs.LG cs.NE | null | 1312.6204 | null | null | http://arxiv.org/pdf/1312.6204v2 | 2014-02-18T02:57:42Z | 2013-12-21T04:32:51Z | One-Shot Adaptation of Supervised Deep Convolutional Models | Dataset bias remains a significant barrier towards solving real world
computer vision tasks. Though deep convolutional networks have proven to be a
competitive approach for image classification, a question remains: have these
models solved the dataset bias problem? In general, training or
fine-tuning a state-of-the-art deep model on a new domain requires a
significant amount of data, which for many applications is simply not
available. Transfer of models directly to new domains without adaptation has
historically led to poor recognition performance. In this paper, we pose the
following question: is a single image dataset, much larger than previously
explored for adaptation, comprehensive enough to learn general deep models that
may be effectively applied to new image domains? In other words, are deep CNNs
trained on large amounts of labeled data as susceptible to dataset bias as
previous methods have been shown to be? We show that a generic supervised deep
CNN model trained on a large dataset reduces, but does not remove, dataset
bias. Furthermore, we propose several methods for adaptation with deep models
that are able to operate with little (one example per category) or no labeled
domain specific data. Our experiments show that adaptation of deep models on
benchmark visual domain adaptation datasets can provide a significant
performance boost.
| [
"Judy Hoffman, Eric Tzeng, Jeff Donahue, Yangqing Jia, Kate Saenko,\n Trevor Darrell",
"['Judy Hoffman' 'Eric Tzeng' 'Jeff Donahue' 'Yangqing Jia' 'Kate Saenko'\n 'Trevor Darrell']"
] |
stat.ML cs.LG | null | 1312.6205 | null | null | http://arxiv.org/pdf/1312.6205v2 | 2014-01-02T07:50:44Z | 2013-12-21T04:53:56Z | Relaxations for inference in restricted Boltzmann machines | We propose a relaxation-based approximate inference algorithm that samples
near-MAP configurations of a binary pairwise Markov random field. We experiment
on MAP inference tasks in several restricted Boltzmann machines. We also use
our underlying sampler to estimate the log-partition function of restricted
Boltzmann machines and compare against other sampling-based methods.
| [
"Sida I. Wang, Roy Frostig, Percy Liang, Christopher D. Manning",
"['Sida I. Wang' 'Roy Frostig' 'Percy Liang' 'Christopher D. Manning']"
] |
stat.ML cs.LG cs.NE | null | 1312.6211 | null | null | http://arxiv.org/pdf/1312.6211v3 | 2015-03-04T01:43:31Z | 2013-12-21T06:31:41Z | An Empirical Investigation of Catastrophic Forgetting in Gradient-Based
Neural Networks | Catastrophic forgetting is a problem faced by many machine learning models
and algorithms. When trained on one task, then trained on a second task, many
machine learning models "forget" how to perform the first task. This is widely
believed to be a serious problem for neural networks. Here, we investigate the
extent to which the catastrophic forgetting problem occurs for modern neural
networks, comparing both established and recent gradient-based training
algorithms and activation functions. We also examine the effect of the
relationship between the first task and the second task on catastrophic
forgetting. We find that it is always best to train using the dropout
algorithm--the dropout algorithm is consistently best at adapting to the new
task, remembering the old task, and has the best tradeoff curve between these
two extremes. We find that different tasks and relationships between tasks
result in very different rankings of activation function performance. This
suggests the choice of activation function should always be cross-validated.
| [
"['Ian J. Goodfellow' 'Mehdi Mirza' 'Da Xiao' 'Aaron Courville'\n 'Yoshua Bengio']",
"Ian J. Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, Yoshua\n Bengio"
] |
cs.LG cs.AI cs.DS | null | 1312.6214 | null | null | http://arxiv.org/pdf/1312.6214v3 | 2014-05-25T11:57:08Z | 2013-12-21T06:51:50Z | Volumetric Spanners: an Efficient Exploration Basis for Learning | Numerous machine learning problems require an exploration basis - a mechanism
to explore the action space. We define a novel geometric notion of exploration
basis with low variance, called volumetric spanners, and give efficient
algorithms to construct such a basis.
We show how efficient volumetric spanners give rise to the first efficient
and optimal regret algorithm for bandit linear optimization over general convex
sets. Previously such results were known only for specific convex sets, or
under special conditions such as the existence of an efficient self-concordant
barrier for the underlying set.
| [
"['Elad Hazan' 'Zohar Karnin' 'Raghu Mehka']",
"Elad Hazan and Zohar Karnin and Raghu Mehka"
] |
cs.DC cs.LG stat.ML | null | 1312.6273 | null | null | http://arxiv.org/pdf/1312.6273v1 | 2013-12-21T16:51:26Z | 2013-12-21T16:51:26Z | Parallel architectures for fuzzy triadic similarity learning | In a context of document co-clustering, we define a new similarity measure
which iteratively computes similarity while combining fuzzy sets in a
three-partite graph. The fuzzy triadic similarity (FT-Sim) model can deal with
uncertainty offers by the fuzzy sets. Moreover, with the development of the Web
and the high availability of storage spaces, more and more documents become
accessible. Documents can be provided from multiple sites and make similarity
computation an expensive processing. This problem motivated us to use parallel
computing. In this paper, we introduce parallel architectures which are able to
treat large and multi-source data sets by a sequential, a merging or a
splitting-based process. Then, we proceed to a local and a central (or global)
computing using the basic FT-Sim measure. The idea behind these architectures
is to reduce both time and space complexities thanks to parallel computation.
| [
"Sonia Alouane-Ksouri, Minyar Sassi-Hidri, Kamel Barkaoui",
"['Sonia Alouane-Ksouri' 'Minyar Sassi-Hidri' 'Kamel Barkaoui']"
] |
cs.LG | null | 1312.6282 | null | null | http://arxiv.org/pdf/1312.6282v1 | 2013-12-21T18:10:59Z | 2013-12-21T18:10:59Z | Dimension-free Concentration Bounds on Hankel Matrices for Spectral
Learning | Learning probabilistic models over strings is an important issue for many
applications. Spectral methods propose elegant solutions to the problem of
inferring weighted automata from finite samples of variable-length strings
drawn from an unknown target distribution. These methods rely on a singular
value decomposition of a matrix $H_S$, called the Hankel matrix, that records
the frequencies of (some of) the observed strings. The accuracy of the learned
distribution depends both on the quantity of information embedded in $H_S$ and
on the distance between $H_S$ and its mean $H_r$. Existing concentration bounds
seem to indicate that the concentration over $H_r$ gets looser with the size of
$H_r$, suggesting a trade-off between the quantity of information used and the
size of $H_r$. We propose new dimension-free concentration bounds for
several variants of Hankel matrices. Experiments demonstrate that these bounds
are tight and that they significantly improve existing bounds. These results
suggest that the concentration rate of the Hankel matrix around its mean does
not constitute an argument for limiting its size.
| [
"['François Denis' 'Mattias Gybels' 'Amaury Habrard']",
"Fran\\c{c}ois Denis, Mattias Gybels and Amaury Habrard"
] |
cs.CV cs.LG stat.ML | null | 1312.6430 | null | null | http://arxiv.org/pdf/1312.6430v2 | 2014-07-15T02:51:13Z | 2013-12-22T22:10:42Z | Growing Regression Forests by Classification: Applications to Object
Pose Estimation | In this work, we propose a novel node splitting method for regression trees
and incorporate it into the regression forest framework. Unlike traditional
binary splitting, where the splitting rule is selected from a predefined set of
binary splitting rules via trial-and-error, the proposed node splitting method
first finds clusters of the training data which at least locally minimize the
empirical loss without considering the input space. Then splitting rules which
preserve the found clusters as much as possible are determined by casting the
problem into a classification problem. Consequently, our new node splitting
method enjoys more freedom in choosing the splitting rules, resulting in more
efficient tree structures. In addition to the Euclidean target space, we
present a variant which can naturally deal with a circular target space by the
proper use of circular statistics. We apply the regression forest employing our
node splitting to head pose estimation (Euclidean target space) and car
direction estimation (circular target space) and demonstrate that the proposed
method significantly outperforms state-of-the-art methods (38.5% and 22.5%
error reduction respectively).
| [
"['Kota Hara' 'Rama Chellappa']",
"Kota Hara and Rama Chellappa"
] |
cs.LG cs.NE | null | 1312.6461 | null | null | http://arxiv.org/pdf/1312.6461v3 | 2014-02-19T20:02:05Z | 2013-12-23T03:23:04Z | Nonparametric Weight Initialization of Neural Networks via Integral
Representation | A new initialization method for hidden parameters in a neural network is
proposed. Derived from the integral representation of the neural network, a
nonparametric probability distribution of hidden parameters is introduced. In
this proposal, hidden parameters are initialized by samples drawn from this
distribution, and output parameters are fitted by ordinary linear regression.
Numerical experiments show that backpropagation with the proposed initialization
converges faster than with uniformly random initialization. It is also shown
that, in some cases, the proposed method achieves sufficient accuracy by itself,
without backpropagation.
| [
"Sho Sonoda, Noboru Murata",
"['Sho Sonoda' 'Noboru Murata']"
] |
cs.CV cs.LG | null | 1312.6594 | null | null | http://arxiv.org/pdf/1312.6594v3 | 2014-02-11T17:07:21Z | 2013-12-20T16:36:40Z | Sequentially Generated Instance-Dependent Image Representations for
Classification | In this paper, we investigate a new framework for image classification that
adaptively generates spatial representations. Our strategy is based on a
sequential process that learns to explore the different regions of any image in
order to infer its category. In particular, the choice of regions is specific
to each image, directed by the actual content of previously selected
regions. The capacity of the system to handle incomplete image information, as
well as its adaptive region selection, allows the system to perform well in
budgeted classification tasks by exploiting a dynamically generated
representation of each image. We demonstrate the system's abilities in a series
of image-based exploration and classification tasks that highlight its learned
exploration and inference abilities.
| [
"Gabriel Dulac-Arnold and Ludovic Denoyer and Nicolas Thome and\n Matthieu Cord and Patrick Gallinari",
"['Gabriel Dulac-Arnold' 'Ludovic Denoyer' 'Nicolas Thome' 'Matthieu Cord'\n 'Patrick Gallinari']"
] |
cs.LG cs.IR | null | 1312.6597 | null | null | http://arxiv.org/pdf/1312.6597v2 | 2014-01-24T23:09:17Z | 2013-12-23T16:52:56Z | Co-Multistage of Multiple Classifiers for Imbalanced Multiclass Learning | In this work, we propose two stochastic architectural models (CMC and CMC-M)
with two layers of classifiers applicable to datasets with one and multiple
skewed classes. This distinction becomes important when the datasets have a
large number of classes. Therefore, we present a novel solution to imbalanced
multiclass learning with several skewed majority classes, which improves
minority classes identification. This fact is particularly important for text
classification tasks, such as event detection. Our models combined with
pre-processing sampling techniques improved the classification results on six
well-known datasets. Finally, we have also introduced a new metric, SG-Mean, to
overcome the multiplication-by-zero limitation of G-Mean.
| [
"Luis Marujo, Anatole Gershman, Jaime Carbonell, David Martins de\n Matos, Jo\\~ao P. Neto",
"['Luis Marujo' 'Anatole Gershman' 'Jaime Carbonell'\n 'David Martins de Matos' 'João P. Neto']"
] |
math.PR cs.LG stat.ML | 10.1007/s10472-015-9470-x | 1312.6607 | null | null | http://arxiv.org/abs/1312.6607v1 | 2013-12-23T17:11:59Z | 2013-12-23T17:11:59Z | Using Latent Binary Variables for Online Reconstruction of Large Scale
Systems | We propose a probabilistic graphical model realizing a minimal encoding of
real variables dependencies based on possibly incomplete observation and an
empirical cumulative distribution function per variable. The target application
is a large-scale, partially observed system, such as a traffic network, where
a small proportion of real-valued variables are observed, and the other
variables have to be predicted. Our design objective is therefore to have good
scalability in a real-time setting. Instead of attempting to encode the
dependencies of the system directly in the description space, we propose a way
to encode them in a latent space of binary variables, reflecting a rough
perception of the observable (congested/non-congested for a traffic road). The
method relies in part on message passing algorithms, i.e. belief propagation,
but the core of the work concerns the definition of meaningful latent variables
associated to the variables of interest and their pairwise dependencies.
Numerical experiments demonstrate the applicability of the method in practice.
| [
"['Victorin Martin' 'Jean-Marc Lasgouttes' 'Cyril Furtlehner']",
"Victorin Martin, Jean-Marc Lasgouttes, Cyril Furtlehner"
] |
cs.DS cs.LG quant-ph | null | 1312.6652 | null | null | http://arxiv.org/pdf/1312.6652v1 | 2013-12-23T19:30:46Z | 2013-12-23T19:30:46Z | Rounding Sum-of-Squares Relaxations | We present a general approach to rounding semidefinite programming
relaxations obtained by the Sum-of-Squares method (Lasserre hierarchy). Our
approach is based on using the connection between these relaxations and the
Sum-of-Squares proof system to transform a *combining algorithm* -- an
algorithm that maps a distribution over solutions into a (possibly weaker)
solution -- into a *rounding algorithm* that maps a solution of the relaxation
to a solution of the original problem.
Using this approach, we obtain algorithms that yield improved results for
natural variants of three well-known problems:
1) We give a quasipolynomial-time algorithm that approximates the maximum of
a low degree multivariate polynomial with non-negative coefficients over the
Euclidean unit sphere. Beyond being of interest in its own right, this is
related to an open question in quantum information theory, and our techniques
have already led to improved results in this area (Brand\~{a}o and Harrow, STOC
'13).
2) We give a polynomial-time algorithm that, given a d-dimensional subspace
of R^n that (almost) contains the characteristic function of a set of size n/k,
finds a vector $v$ in the subspace satisfying $|v|_4^4 > c(k/d^{1/3}) |v|_2^4$,
where $|v|_p = (E_i v_i^p)^{1/p}$. Aside from being a natural relaxation, this
is also motivated by a connection to the Small Set Expansion problem shown by
Barak et al. (STOC 2012) and our results yield a certain improvement for that
problem.
3) We use this notion of L_4 vs. L_2 sparsity to obtain a polynomial-time
algorithm with substantially improved guarantees for recovering a planted
$\mu$-sparse vector $v$ in a random d-dimensional subspace of R^n. If $v$ has
$\mu n$ nonzero coordinates, we can recover it with high probability whenever
$\mu < O(\min(1,n/d^2))$, improving for $d < n^{2/3}$ over prior methods which
intrinsically required $\mu < O(1/\sqrt{d})$.
| [
"Boaz Barak, Jonathan Kelner, David Steurer",
"['Boaz Barak' 'Jonathan Kelner' 'David Steurer']"
] |
physics.data-an cs.LG math.ST q-bio.QM stat.ML stat.TH | 10.1103/PhysRevE.90.011301 | 1312.6661 | null | null | http://arxiv.org/abs/1312.6661v3 | 2014-04-18T21:29:41Z | 2013-12-23T20:13:35Z | Rapid and deterministic estimation of probability densities using
scale-free field theories | The question of how best to estimate a continuous probability density from
finite data is an intriguing open problem at the interface of statistics and
physics. Previous work has argued that this problem can be addressed in a
natural way using methods from statistical field theory. Here I describe new
results that allow this field-theoretic approach to be rapidly and
deterministically computed in low dimensions, making it practical for use in
day-to-day data analysis. Importantly, this approach does not impose a
privileged length scale for smoothness of the inferred probability density, but
rather learns a natural length scale from the data due to the tradeoff between
goodness-of-fit and an Occam factor. Open source software implementing this
method in one and two dimensions is provided.
| [
"['Justin B. Kinney']",
"Justin B. Kinney"
] |
cs.LG | 10.1007/s10618-014-0364-z | 1312.6712 | null | null | http://arxiv.org/abs/1312.6712v1 | 2013-12-23T22:15:59Z | 2013-12-23T22:15:59Z | Invariant Factorization Of Time-Series | Time-series classification is an important domain of machine learning and a
plethora of methods have been developed for the task. In comparison to existing
approaches, this study presents a novel method which decomposes a time-series
dataset into latent patterns and membership weights of local segments to those
patterns. The process is formalized as a constrained objective function and a
tailored stochastic coordinate descent optimization is applied. The time-series
are projected to a new feature representation consisting of the sums of the
membership weights, which captures frequencies of local patterns. Features from
various sliding window sizes are concatenated in order to encapsulate the
interaction of patterns of different sizes. Finally, a large-scale
experimental comparison against 6 state-of-the-art baselines on 43 real-life
datasets is conducted. The proposed method outperforms all the baselines with
statistically significant margins in terms of prediction accuracy.
| [
"['Josif Grabocka' 'Lars Schmidt-Thieme']",
"Josif Grabocka, Lars Schmidt-Thieme"
] |
cs.DS cs.LG | null | 1312.6724 | null | null | http://arxiv.org/pdf/1312.6724v3 | 2015-03-19T23:45:54Z | 2013-12-24T00:16:37Z | Local algorithms for interactive clustering | We study the design of interactive clustering algorithms for data sets
satisfying natural stability assumptions. Our algorithms start with any initial
clustering and only make local changes in each step; both are desirable
features in many applications. We show that in this constrained setting one can
still design provably efficient algorithms that produce accurate clusterings.
We also show that our algorithms perform well on real-world data.
| [
"['Pranjal Awasthi' 'Maria-Florina Balcan' 'Konstantin Voevodski']",
"Pranjal Awasthi and Maria-Florina Balcan and Konstantin Voevodski"
] |
cs.LG | null | 1312.6807 | null | null | http://arxiv.org/pdf/1312.6807v1 | 2013-12-24T12:24:30Z | 2013-12-24T12:24:30Z | Iterative Nearest Neighborhood Oversampling in Semisupervised Learning
from Imbalanced Data | Transductive graph-based semi-supervised learning methods usually build an
undirected graph utilizing both labeled and unlabeled samples as vertices.
Those methods propagate label information of labeled samples to neighbors
through their edges in order to get the predicted labels of unlabeled samples.
Most popular semi-supervised learning approaches are sensitive to the initial
label distribution that arises in imbalanced labeled datasets. The class
boundary will be severely skewed by the majority classes in an imbalanced
classification. In
this paper, we proposed a simple and effective approach to alleviate the
unfavorable influence of imbalance problem by iteratively selecting a few
unlabeled samples and adding them into the minority classes to form a balanced
labeled dataset for the learning methods afterwards. The experiments on UCI
datasets and the MNIST handwritten digits dataset showed that the proposed
approach outperforms other existing state-of-the-art methods.
| [
"['Fengqi Li' 'Chuang Yu' 'Nanhai Yang' 'Feng Xia' 'Guangming Li'\n 'Fatemeh Kaveh-Yazdy']",
"Fengqi Li, Chuang Yu, Nanhai Yang, Feng Xia, Guangming Li, Fatemeh\n Kaveh-Yazdy"
] |
cs.DS cs.LG stat.ML | null | 1312.6820 | null | null | http://arxiv.org/pdf/1312.6820v1 | 2013-12-24T14:19:43Z | 2013-12-24T14:19:43Z | A Fast Greedy Algorithm for Generalized Column Subset Selection | This paper defines a generalized column subset selection problem which is
concerned with the selection of a few columns from a source matrix A that best
approximate the span of a target matrix B. The paper then proposes a fast
greedy algorithm for solving this problem and draws connections to different
problems that can be efficiently solved using the proposed algorithm.
| [
"['Ahmed K. Farahat' 'Ali Ghodsi' 'Mohamed S. Kamel']",
"Ahmed K. Farahat, Ali Ghodsi, Mohamed S. Kamel"
] |
cs.DS cs.LG | null | 1312.6838 | null | null | http://arxiv.org/pdf/1312.6838v1 | 2013-12-24T15:10:23Z | 2013-12-24T15:10:23Z | Greedy Column Subset Selection for Large-scale Data Sets | In today's information systems, the availability of massive amounts of data
necessitates the development of fast and accurate algorithms to summarize these
data and represent them in a succinct format. One crucial problem in big data
analytics is the selection of representative instances from large and
massively-distributed data, which is formally known as the Column Subset
Selection (CSS) problem. The solution to this problem enables data analysts to
understand the insights of the data and explore its hidden structure. The
selected instances can also be used for data preprocessing tasks such as
learning a low-dimensional embedding of the data points or computing a low-rank
approximation of the corresponding matrix. This paper presents a fast and
accurate greedy algorithm for large-scale column subset selection. The
algorithm minimizes an objective function which measures the reconstruction
error of the data matrix based on the subset of selected columns. The paper
first presents a centralized greedy algorithm for column subset selection which
depends on a novel recursive formula for calculating the reconstruction error
of the data matrix. The paper then presents a MapReduce algorithm which selects
a few representative columns from a matrix whose columns are massively
distributed across several commodity machines. The algorithm first learns a
concise representation of all columns using random projection, and it then
solves a generalized column subset selection problem on each machine, in which a
subset of columns is selected from the sub-matrix on that machine such that
the reconstruction error of the concise representation is minimized. The paper
demonstrates the effectiveness and efficiency of the proposed algorithm through
an empirical evaluation on benchmark data sets.
| [
"Ahmed K. Farahat, Ahmed Elgohary, Ali Ghodsi, Mohamed S. Kamel",
"['Ahmed K. Farahat' 'Ahmed Elgohary' 'Ali Ghodsi' 'Mohamed S. Kamel']"
] |
cs.CL cs.CV cs.LG | null | 1312.6849 | null | null | http://arxiv.org/pdf/1312.6849v2 | 2015-03-30T09:17:46Z | 2013-12-24T16:36:16Z | Speech Recognition Front End Without Information Loss | Speech representation and modelling in high-dimensional spaces of acoustic
waveforms, or a linear transformation thereof, is investigated with the aim of
improving the robustness of automatic speech recognition to additive noise. The
motivation behind this approach is twofold: (i) the information in acoustic
waveforms that is usually removed in the process of extracting low-dimensional
features might aid robust recognition by virtue of structured redundancy
analogous to channel coding, (ii) linear feature domains allow for exact noise
adaptation, as opposed to representations that involve non-linear processing
which makes noise adaptation challenging. Thus, we develop a generative
framework for phoneme modelling in high-dimensional linear feature domains, and
use it in phoneme classification and recognition tasks. Results show that
classification and recognition in this framework perform better than analogous
PLP and MFCC classifiers below 18 dB SNR. A combination of the high-dimensional
and MFCC features at the likelihood level performs uniformly better than either
of the individual representations across all noise levels.
| [
"Matthew Ager and Zoran Cvetkovic and Peter Sollich",
"['Matthew Ager' 'Zoran Cvetkovic' 'Peter Sollich']"
] |
cs.NA cs.LG | null | 1312.6872 | null | null | http://arxiv.org/pdf/1312.6872v1 | 2013-12-17T15:12:48Z | 2013-12-17T15:12:48Z | Matrix recovery using Split Bregman | In this paper we address the problem of recovering a matrix, with inherent
low-rank structure, from its lower dimensional projections. This problem is
frequently encountered in a wide range of areas including pattern recognition,
wireless sensor networks, control systems, recommender systems, image/video
reconstruction, etc. Both in theory and in practice, the optimal way to solve
the low-rank matrix recovery problem is via nuclear norm minimization. In this
paper, we propose a Split Bregman algorithm for nuclear norm minimization. The
use of the Bregman technique improves the convergence speed of our algorithm and
gives a higher success rate. Also, the accuracy of reconstruction is much
better even for cases where only a small number of linear measurements is available.
Our claim is supported by empirical results obtained using our algorithm and
its comparison to other existing methods for matrix recovery. The algorithms
are compared on the basis of NMSE, execution time and success rate for varying
ranks and sampling ratios.
| [
"['Anupriya Gogna' 'Ankita Shukla' 'Angshul Majumdar']",
"Anupriya Gogna, Ankita Shukla and Angshul Majumdar"
] |
cs.CV cs.LG cs.NE | null | 1312.6885 | null | null | http://arxiv.org/pdf/1312.6885v1 | 2013-12-24T20:38:18Z | 2013-12-24T20:38:18Z | Deep learning for class-generic object detection | We investigate the use of deep neural networks for the novel task of class
generic object detection. We show that neural networks originally designed for
image recognition can be trained to detect objects within images, regardless of
their class, including objects for which no bounding box labels have been
provided. In addition, we show that bounding box labels yield a 1% performance
increase on the ImageNet recognition challenge.
| [
"['Brody Huval' 'Adam Coates' 'Andrew Ng']",
"Brody Huval, Adam Coates, Andrew Ng"
] |
stat.ML cs.LG | 10.1016/j.neucom.2013.04.003 | 1312.6956 | null | null | http://arxiv.org/abs/1312.6956v1 | 2013-12-25T11:08:32Z | 2013-12-25T11:08:32Z | Joint segmentation of multivariate time series with hidden process
regression for human activity recognition | The problem of human activity recognition is central for understanding and
predicting the human behavior, in particular in a prospective of assistive
services to humans, such as health monitoring, well being, security, etc. There
is therefore a growing need to build accurate models which can take into
account the variability of the human activities over time (dynamic models)
rather than static ones which can have some limitations in such a dynamic
context. In this paper, the problem of activity recognition is analyzed through
the segmentation of the multidimensional time series of the acceleration data
measured in the 3-d space using body-worn accelerometers. The proposed model
for automatic temporal segmentation is a specific statistical latent process
model which assumes that the observed acceleration sequence is governed by
a sequence of hidden (unobserved) activities. More specifically, the proposed
approach is based on a specific multiple regression model incorporating a
hidden discrete logistic process which governs the switching from one activity
to another over time. The model is learned in an unsupervised context by
maximizing the observed-data log-likelihood via a dedicated
expectation-maximization (EM) algorithm. We applied it on a real-world
automatic human activity recognition problem and its performance was assessed
by performing comparisons with alternative approaches, including well-known
supervised static classifiers and the standard hidden Markov model (HMM). The
obtained results are very encouraging and show that the proposed approach is
quite competitive even though it works in an entirely unsupervised way and does
not require a feature extraction preprocessing step.
| [
"Faicel Chamroukhi, Samer Mohammed, Dorra Trabelsi, Latifa Oukhellou,\n Yacine Amirat",
"['Faicel Chamroukhi' 'Samer Mohammed' 'Dorra Trabelsi' 'Latifa Oukhellou'\n 'Yacine Amirat']"
] |
cs.IR cs.CL cs.LG | null | 1312.6962 | null | null | http://arxiv.org/pdf/1312.6962v1 | 2013-12-25T12:38:17Z | 2013-12-25T12:38:17Z | Subjectivity Classification using Machine Learning Techniques for Mining
Feature-Opinion Pairs from Web Opinion Sources | Due to flourish of the Web 2.0, web opinion sources are rapidly emerging
containing precious information useful for both customers and manufactures.
Recently, feature based opinion mining techniques are gaining momentum in which
customer reviews are processed automatically for mining product features and
user opinions expressed over them. However, customer reviews may contain both
opinionated and factual sentences. Distillations of factual contents improve
mining performance by preventing noisy and irrelevant extraction. In this
paper, combination of both supervised machine learning and rule-based
approaches are proposed for mining feasible feature-opinion pairs from
subjective review sentences. In the first phase of the proposed approach, a
supervised machine learning technique is applied for classifying subjective and
objective sentences from customer reviews. In the next phase, a rule based
method is implemented which applies linguistic and semantic analysis of texts
to mine feasible feature-opinion pairs from subjective sentences retained after
the first phase. The effectiveness of the proposed methods is established
through experimentation over customer reviews on different electronic products.
| [
"['Ahmad Kamal']",
"Ahmad Kamal"
] |
stat.ML cs.CV cs.LG | 10.1109/TASE.2013.2256349 | 1312.6965 | null | null | http://arxiv.org/abs/1312.6965v1 | 2013-12-25T13:03:12Z | 2013-12-25T13:03:12Z | An Unsupervised Approach for Automatic Activity Recognition based on
Hidden Markov Model Regression | Using supervised machine learning approaches to recognize human activities
from on-body wearable accelerometers generally requires a large amount of
labelled data. When ground truth information is not available, too expensive,
time consuming or difficult to collect, one has to rely on unsupervised
approaches. This paper presents a new unsupervised approach for human activity
recognition from raw acceleration data measured using inertial wearable
sensors. The proposed method is based upon joint segmentation of
multidimensional time series using a Hidden Markov Model (HMM) in a multiple
regression context. The model is learned in an unsupervised framework using the
Expectation-Maximization (EM) algorithm where no activity labels are needed.
The proposed method takes into account the sequential appearance of the data.
It is therefore adapted for the temporal acceleration data to accurately detect
the activities. It allows both segmentation and classification of the human
activities. Experimental results are provided to demonstrate the efficiency of
the proposed approach with respect to standard supervised and unsupervised
classification approaches.
| [
"['Dorra Trabelsi' 'Samer Mohammed' 'Faicel Chamroukhi' 'Latifa Oukhellou'\n 'Yacine Amirat']",
"Dorra Trabelsi, Samer Mohammed, Faicel Chamroukhi, Latifa Oukhellou,\n Yacine Amirat"
] |
stat.ME cs.LG math.ST stat.ML stat.TH | 10.1016/j.neucom.2012.10.030 | 1312.6966 | null | null | http://arxiv.org/abs/1312.6966v1 | 2013-12-25T13:08:47Z | 2013-12-25T13:08:47Z | Model-based functional mixture discriminant analysis with hidden process
regression for curve classification | In this paper, we study the modeling and the classification of functional
data presenting regime changes over time. We propose a new model-based
functional mixture discriminant analysis approach based on a specific hidden
process regression model that governs the regime changes over time. Our
approach is particularly adapted to handle the problem of complex-shaped
classes of curves, where each class is potentially composed of several
sub-classes, and to deal with the regime changes within each homogeneous
sub-class. The proposed model explicitly integrates the heterogeneity of each
class of curves via a mixture model formulation, and the regime changes within
each sub-class through a hidden logistic process. Each class of complex-shaped
curves is modeled by a finite number of homogeneous clusters, each of them
being decomposed into several regimes. The model parameters of each class are
learned by maximizing the observed-data log-likelihood by using a dedicated
expectation-maximization (EM) algorithm. Comparisons are performed with
alternative curve classification approaches, including functional linear
discriminant analysis and functional mixture discriminant analysis with
polynomial regression mixtures and spline regression mixtures. Results obtained
on simulated data and real data show that the proposed approach outperforms the
alternative approaches in terms of discrimination, and significantly improves
the curves approximation.
| [
"['Faicel Chamroukhi' 'Hervé Glotin' 'Allou Samé']",
"Faicel Chamroukhi, Herv\\'e Glotin, Allou Sam\\'e"
] |
stat.ME cs.LG math.ST stat.ML stat.TH | 10.1007/s11634-011-0096-5 | 1312.6967 | null | null | http://arxiv.org/abs/1312.6967v1 | 2013-12-25T13:11:04Z | 2013-12-25T13:11:04Z | Model-based clustering and segmentation of time series with changes in
regime | Mixture model-based clustering, usually applied to multidimensional data, has
become a popular approach in many data analysis problems, both for its good
statistical properties and for the simplicity of implementation of the
Expectation-Maximization (EM) algorithm. Within the context of a railway
application, this paper introduces a novel mixture model for dealing with time
series that are subject to changes in regime. The proposed approach consists in
modeling each cluster by a regression model in which the polynomial
coefficients vary according to a discrete hidden process. In particular, this
approach makes use of logistic functions to model the (smooth or abrupt)
transitions between regimes. The model parameters are estimated by the maximum
likelihood method solved by an Expectation-Maximization algorithm. The proposed
approach can also be regarded as a clustering approach which operates by
finding groups of time series having common changes in regime. In addition to
providing a time series partition, it therefore provides a time series
segmentation. The problem of selecting the optimal numbers of clusters and
segments is solved by means of the Bayesian Information Criterion (BIC). The
proposed approach is shown to be efficient using a variety of simulated time
series and real-world time series of electrical power consumption from rail
switching operations.
| [
"['Allou Samé' 'Faicel Chamroukhi' 'Gérard Govaert' 'Patrice Aknin']",
"Allou Sam\\'e, Faicel Chamroukhi, G\\'erard Govaert, Patrice Aknin"
] |
stat.ME cs.LG stat.ML | 10.1016/j.neucom.2009.12.023 | 1312.6968 | null | null | http://arxiv.org/abs/1312.6968v1 | 2013-12-25T13:13:09Z | 2013-12-25T13:13:09Z | A hidden process regression model for functional data description.
Application to curve discrimination | A new approach for functional data description is proposed in this paper. It
consists of a regression model with a discrete hidden logistic process which is
adapted for modeling curves with abrupt or smooth regime changes. The model
parameters are estimated in a maximum likelihood framework through a dedicated
Expectation Maximization (EM) algorithm. From the proposed generative model, a
curve discrimination rule is derived using the Maximum A Posteriori rule. The
proposed model is evaluated using simulated curves and real world curves
acquired during railway switch operations, by performing comparisons with the
piecewise regression approach in terms of curve modeling and classification.
| [
"Faicel Chamroukhi, Allou Sam\\'e, G\\'erard Govaert, Patrice Aknin",
"['Faicel Chamroukhi' 'Allou Samé' 'Gérard Govaert' 'Patrice Aknin']"
] |
stat.ME cs.LG math.ST stat.ML stat.TH | 10.1016/j.neunet.2009.06.040 | 1312.6969 | null | null | http://arxiv.org/abs/1312.6969v1 | 2013-12-25T13:13:55Z | 2013-12-25T13:13:55Z | Time series modeling by a regression approach based on a latent process | Time series are used in many domains including finance, engineering,
economics and bioinformatics generally to represent the change of a measurement
over time. Modeling techniques may then be used to give a synthetic
representation of such data. A new approach for time series modeling is
proposed in this paper. It consists of a regression model incorporating a
discrete hidden logistic process allowing for activating smoothly or abruptly
different polynomial regression models. The model parameters are estimated by
the maximum likelihood method performed by a dedicated Expectation Maximization
(EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative
Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process
parameters. To evaluate the proposed approach, an experimental study on
simulated data and real world data was performed using two alternative
approaches: a heteroskedastic piecewise regression model using a global
optimization algorithm based on dynamic programming, and a Hidden Markov
Regression Model whose parameters are estimated by the Baum-Welch algorithm.
Finally, in the context of the remote monitoring of components of the French
railway infrastructure, and more particularly the switch mechanism, the
proposed approach has been applied to modeling and classifying time series
representing the condition measurements acquired during switch operations.
| [
"Faicel Chamroukhi, Allou Sam\\'e, G\\'erard Govaert, Patrice Aknin",
"['Faicel Chamroukhi' 'Allou Samé' 'Gérard Govaert' 'Patrice Aknin']"
] |
stat.ME cs.LG math.ST stat.ML stat.TH | null | 1312.6974 | null | null | http://arxiv.org/pdf/1312.6974v2 | 2014-04-30T23:23:20Z | 2013-12-25T13:54:05Z | Piecewise regression mixture for simultaneous functional data clustering
and optimal segmentation | This paper introduces a novel mixture model-based approach for simultaneous
clustering and optimal segmentation of functional data which are curves
presenting regime changes. The proposed model consists in a finite mixture of
piecewise polynomial regression models. Each piecewise polynomial regression
model is associated with a cluster, and within each cluster, each piecewise
polynomial component is associated with a regime (i.e., a segment). We derive
two approaches for learning the model parameters. The former is an estimation
approach and consists in maximizing the observed-data likelihood via a
dedicated expectation-maximization (EM) algorithm. A fuzzy partition of the
curves in K clusters is then obtained at convergence by maximizing the
posterior cluster probabilities. The latter however is a classification
approach and optimizes a specific classification likelihood criterion through a
dedicated classification expectation-maximization (CEM) algorithm. The optimal
curve segmentation is performed by using dynamic programming. In the
classification approach, both the curve clustering and the optimal segmentation
are performed simultaneously as the CEM learning proceeds. We show that the
classification approach is the probabilistic version that generalizes the
deterministic K-means-like algorithm proposed in H\'ebrail et al. (2010). The
proposed approach is evaluated using simulated curves and real-world curves.
Comparisons with alternatives including regression mixture models and the
K-means like algorithm for piecewise regression demonstrate the effectiveness
of the proposed approach.
| [
"['Faicel Chamroukhi']",
"Faicel Chamroukhi"
] |
math.ST cs.LG stat.ME stat.ML stat.TH | null | 1312.6978 | null | null | http://arxiv.org/pdf/1312.6978v1 | 2013-12-25T14:21:48Z | 2013-12-25T14:21:48Z | Mod\`ele \`a processus latent et algorithme EM pour la r\'egression non
lin\'eaire | A non linear regression approach which consists of a specific regression
model incorporating a latent process, allowing various polynomial regression
models to be activated preferentially and smoothly, is introduced in this
paper. The model parameters are estimated by maximum likelihood performed via a
dedicated expectation-maximization (EM) algorithm. An experimental study using
simulated and real data sets reveals good performances of the proposed
approach.
| [
"Faicel Chamroukhi, Allou Sam\\'e, G\\'erard Govaert, Patrice Aknin",
"['Faicel Chamroukhi' 'Allou Samé' 'Gérard Govaert' 'Patrice Aknin']"
] |
stat.ME cs.LG stat.ML | null | 1312.6994 | null | null | http://arxiv.org/pdf/1312.6994v1 | 2013-12-25T18:07:41Z | 2013-12-25T18:07:41Z | A regression model with a hidden logistic process for signal
parametrization | A new approach for signal parametrization, which consists of a specific
regression model incorporating a discrete hidden logistic process, is proposed.
The model parameters are estimated by the maximum likelihood method performed
by a dedicated Expectation Maximization (EM) algorithm. The parameters of the
hidden logistic process, in the inner loop of the EM algorithm, are estimated
using a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm. An
experimental study using simulated and real data reveals good performances of
the proposed approach.
| [
"Faicel Chamroukhi, Allou Sam\\'e, G\\'erard Govaert, Patrice Aknin",
"['Faicel Chamroukhi' 'Allou Samé' 'Gérard Govaert' 'Patrice Aknin']"
] |
cs.LG cs.AI stat.ML | 10.1016/j.pmcj.2014.05.006 | 1312.6995 | null | null | http://arxiv.org/abs/1312.6995v3 | 2014-07-23T13:39:53Z | 2013-12-25T18:08:44Z | Towards Using Unlabeled Data in a Sparse-coding Framework for Human
Activity Recognition | We propose a sparse-coding framework for activity recognition in ubiquitous
and mobile computing that alleviates two fundamental problems of current
supervised learning approaches. (i) It automatically derives a compact, sparse
and meaningful feature representation of sensor data that does not rely on
prior expert knowledge and generalizes extremely well across domain boundaries.
(ii) It exploits unlabeled sample data for bootstrapping effective activity
recognizers, i.e., substantially reduces the amount of ground truth annotation
required for model estimation. Such unlabeled data is trivial to obtain, e.g.,
through contemporary smartphones carried by users as they go about their
everyday activities.
Based on the self-taught learning paradigm we automatically derive an
over-complete set of basis vectors from unlabeled data that captures inherent
patterns present within activity data. Through projecting raw sensor data onto
the feature space defined by such over-complete sets of basis vectors effective
feature extraction is pursued. Given these learned feature representations,
classification backends are then trained using small amounts of labeled
training data.
We study the new approach in detail using two datasets which differ in terms
of the recognition tasks and sensor modalities. Primarily, we focus on the
transportation mode analysis task, a popular task in mobile-phone-based
sensing. The sparse-coding framework significantly outperforms the
state-of-the-art in supervised learning approaches. Furthermore, we demonstrate
the great practical potential of the new approach by successfully evaluating
its generalization capabilities across both domain and sensor modalities by
considering the popular Opportunity dataset. Our feature learning approach
outperforms state-of-the-art approaches to analyzing activities in daily
living.
| [
"Sourav Bhattacharya and Petteri Nurmi and Nils Hammerla and Thomas\n Pl\\\"otz",
"['Sourav Bhattacharya' 'Petteri Nurmi' 'Nils Hammerla' 'Thomas Plötz']"
] |
stat.ME cs.LG math.ST stat.ML stat.TH | null | 1312.7001 | null | null | http://arxiv.org/pdf/1312.7001v1 | 2013-12-25T18:48:12Z | 2013-12-25T18:48:12Z | A regression model with a hidden logistic process for feature extraction
from time series | A new approach for feature extraction from time series is proposed in this
paper. This approach consists of a specific regression model incorporating a
discrete hidden logistic process. The model parameters are estimated by the
maximum likelihood method performed by a dedicated Expectation Maximization
(EM) algorithm. The parameters of the hidden logistic process, in the inner
loop of the EM algorithm, are estimated using a multi-class Iterative
Reweighted Least-Squares (IRLS) algorithm. A piecewise regression algorithm and
its iterative variant have also been considered for comparisons. An
experimental study using simulated and real data reveals good performances of
the proposed approach.
| [
"['Faicel Chamroukhi' 'Allou Samé' 'Gérard Govaert' 'Patrice Aknin']",
"Faicel Chamroukhi, Allou Sam\\'e, G\\'erard Govaert and Patrice Aknin"
] |
stat.ML cs.LG stat.AP | null | 1312.7003 | null | null | http://arxiv.org/pdf/1312.7003v1 | 2013-12-25T18:55:59Z | 2013-12-25T18:55:59Z | Supervised learning of a regression model based on latent process.
Application to the estimation of fuel cell life time | This paper describes a pattern recognition approach aiming to estimate fuel
cell duration time from electrochemical impedance spectroscopy measurements. It
consists in first extracting features from both real and imaginary parts of the
impedance spectrum. A parametric model is considered in the case of the real
part, whereas a regression model with latent variables is used in the latter
case. Then, a linear regression model using different subsets of the extracted
features is used for the estimation of fuel cell duration time. The
performance of the proposed approach is evaluated on an experimental data set to
show its feasibility. This could lead to interesting perspectives for
predictive maintenance policies for fuel cells.
| [
"Ra\\\"issa Onanena, Faicel Chamroukhi, Latifa Oukhellou, Denis Candusso,\n Patrice Aknin, Daniel Hissel",
"['Raïssa Onanena' 'Faicel Chamroukhi' 'Latifa Oukhellou' 'Denis Candusso'\n 'Patrice Aknin' 'Daniel Hissel']"
] |
stat.ML cs.IT cs.LG math.IT | null | 1312.7006 | null | null | http://arxiv.org/pdf/1312.7006v2 | 2015-02-13T10:04:51Z | 2013-12-25T19:23:22Z | A Convex Formulation for Mixed Regression with Two Components: Minimax
Optimal Rates | We consider the mixed regression problem with two components, under
adversarial and stochastic noise. We give a convex optimization formulation
that provably recovers the true solution, and provide upper bounds on the
recovery errors for both arbitrary noise and stochastic noise settings. We also
give matching minimax lower bounds (up to log factors), showing that under
certain assumptions, our algorithm is information-theoretically optimal. Our
results represent the first tractable algorithm guaranteeing successful
recovery with tight bounds on recovery errors and sample complexity.
| [
"Yudong Chen, Xinyang Yi, Constantine Caramanis",
"['Yudong Chen' 'Xinyang Yi' 'Constantine Caramanis']"
] |
stat.ME cs.LG stat.ML | null | 1312.7007 | null | null | http://arxiv.org/pdf/1312.7007v1 | 2013-12-25T19:23:39Z | 2013-12-25T19:23:39Z | Functional Mixture Discriminant Analysis with hidden process regression
for curve classification | We present a new mixture model-based discriminant analysis approach for
functional data using a specific hidden process regression model. The approach
allows for fitting flexible curve-models to each class of complex-shaped curves
presenting regime changes. The model parameters are learned by maximizing the
observed-data log-likelihood for each class by using a dedicated
expectation-maximization (EM) algorithm. Comparisons on simulated data with
alternative approaches show that the proposed approach provides better results.
| [
"['Faicel Chamroukhi' 'Heré Glotin' 'Céline Rabouy']",
"Faicel Chamroukhi, Her\\'e Glotin, C\\'eline Rabouy"
] |
stat.ME cs.LG stat.ML | 10.1109/IJCNN.2012.6252818 | 1312.7018 | null | null | http://arxiv.org/abs/1312.7018v1 | 2013-12-25T20:35:20Z | 2013-12-25T20:35:20Z | Mixture model-based functional discriminant analysis for curve
classification | Statistical approaches for Functional Data Analysis concern the paradigm for
which the individuals are functions or curves rather than finite dimensional
vectors. In this paper, we particularly focus on the modeling and the
classification of functional data which are temporal curves presenting regime
changes over time. More specifically, we propose a new mixture model-based
discriminant analysis approach for functional data using a specific hidden
process regression model. Our approach is particularly adapted to both handle
the problem of complex-shaped classes of curves, where each class is composed
of several sub-classes, and to deal with the regime changes within each
homogeneous sub-class. The model explicitly integrates the heterogeneity of
each class of curves via a mixture model formulation, and the regime changes
within each sub-class through a hidden logistic process. The approach allows
therefore for fitting flexible curve-models to each class of complex-shaped
curves presenting regime changes through an unsupervised learning scheme, to
automatically summarize it into a finite number of homogeneous clusters, each
of which is decomposed into several regimes. The model parameters are learned by
maximizing the observed-data log-likelihood for each class by using a dedicated
expectation-maximization (EM) algorithm. Comparisons on simulated data and real
data with alternative approaches, including functional linear discriminant
analysis and functional mixture discriminant analysis with polynomial
regression mixtures and spline regression mixtures, show that the proposed
approach provides better results regarding the discrimination results and
significantly improves the curves approximation.
| [
"['Faicel Chamroukhi' 'Hervé Glotin']",
"Faicel Chamroukhi, Herv\\'e Glotin"
] |
stat.ME cs.LG stat.ML | null | 1312.7022 | null | null | http://arxiv.org/pdf/1312.7022v1 | 2013-12-25T21:04:08Z | 2013-12-25T21:04:08Z | Robust EM algorithm for model-based curve clustering | Model-based clustering approaches concern the paradigm of exploratory data
analysis relying on the finite mixture model to automatically find a latent
structure governing observed data. They are one of the most popular and
successful approaches in cluster analysis. The mixture density estimation is
generally performed by maximizing the observed-data log-likelihood by using the
expectation-maximization (EM) algorithm. However, it is well-known that the EM
algorithm initialization is crucial. In addition, the standard EM algorithm
requires the number of clusters to be known a priori. Some solutions have been
provided in [31, 12] for model-based clustering with Gaussian mixture models
for multivariate data. In this paper we focus on model-based curve clustering
approaches, when the data are curves rather than vectorial data, based on
regression mixtures. We propose a new robust EM algorithm for clustering
curves. We extend the model-based clustering approach presented in [31] for
Gaussian mixture models, to the case of curve clustering by regression
mixtures, including polynomial regression mixtures as well as spline or
B-spline regression mixtures. Our approach handles both the problem of
initialization and that of choosing the optimal number of clusters as the EM
learning proceeds, rather than in a two-fold scheme. This is achieved by
optimizing a penalized log-likelihood criterion. A simulation study confirms
the potential benefit of the proposed algorithm in terms of robustness
regarding initialization and finding the actual number of clusters.
| [
"['Faicel Chamroukhi']",
"Faicel Chamroukhi"
] |
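The penalized, robust EM of the record above is not reproduced here; the following is a minimal sketch of a plain EM for a polynomial regression mixture, the model family the paper extends. The number of clusters, polynomial degree, and all names are illustrative assumptions.

```python
import numpy as np

def fit_poly_regression_mixture(Y, t, K=3, degree=3, n_iter=100, seed=0):
    """EM for a mixture of polynomial regressions over curves.

    Y : (n, T) array of n curves sampled on the common time grid t (length T).
    Returns mixing weights, regression coefficients, noise variances, responsibilities.
    """
    rng = np.random.default_rng(seed)
    n, T = Y.shape
    X = np.vander(t, degree + 1, increasing=True)          # (T, d) design matrix
    d = X.shape[1]

    resp = rng.dirichlet(np.ones(K), size=n)               # random initial responsibilities
    pi = np.full(K, 1.0 / K)
    beta = np.zeros((K, d))
    sigma2 = np.ones(K)

    for _ in range(n_iter):
        # M-step: weighted least squares per component (one weight per curve).
        for k in range(K):
            w = resp[:, k]
            A = w.sum() * (X.T @ X)                        # sum_i w_i X^T X
            b = X.T @ (Y.T @ w)                            # sum_i w_i X^T y_i
            beta[k] = np.linalg.solve(A + 1e-8 * np.eye(d), b)
            resid = Y - X @ beta[k]
            sigma2[k] = max((w @ (resid ** 2).sum(axis=1)) / (w.sum() * T), 1e-8)
            pi[k] = w.mean()

        # E-step: per-curve posterior membership probabilities.
        logr = np.zeros((n, K))
        for k in range(K):
            resid = Y - X @ beta[k]
            logr[:, k] = (np.log(pi[k])
                          - 0.5 * T * np.log(2 * np.pi * sigma2[k])
                          - 0.5 * (resid ** 2).sum(axis=1) / sigma2[k])
        logr -= logr.max(axis=1, keepdims=True)
        resp = np.exp(logr)
        resp /= resp.sum(axis=1, keepdims=True)

    return pi, beta, sigma2, resp

# Toy usage: two noisy curve shapes, clustered by highest responsibility.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)
Y = np.vstack([np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=(20, 50)),
               (2 * t - 1) ** 2 + 0.1 * rng.normal(size=(20, 50))])
pi, beta, sigma2, resp = fit_poly_regression_mixture(Y, t, K=2)
labels = resp.argmax(axis=1)
```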
stat.ML cs.LG stat.ME | 10.1109/IJCNN.2011.6033590 | 1312.7024 | null | null | http://arxiv.org/abs/1312.7024v1 | 2013-12-25T21:25:41Z | 2013-12-25T21:25:41Z | Model-based clustering with Hidden Markov Model regression for time
series with regime changes | This paper introduces a novel model-based clustering approach for clustering
time series which present changes in regime. It consists of a mixture of
polynomial regressions governed by hidden Markov chains. The underlying hidden
process for each cluster activates successively several polynomial regimes
during time. The parameter estimation is performed by the maximum likelihood
method through a dedicated Expectation-Maximization (EM) algorithm. The
proposed approach is evaluated using simulated time series and real-world time
series arising from a railway diagnosis application. Comparisons with existing
approaches for time series clustering, including the standard EM for Gaussian
mixtures, $K$-means clustering, the standard mixture of regression models and
mixture of Hidden Markov Models, demonstrate the effectiveness of the proposed
approach.
| [
"['Faicel Chamroukhi' 'Allou Samé' 'Patrice Aknin' 'Gérard Govaert']",
"Faicel Chamroukhi, Allou Sam\\'e, Patrice Aknin, G\\'erard Govaert"
] |
cs.CL cs.LG stat.ML | null | 1312.7077 | null | null | http://arxiv.org/pdf/1312.7077v2 | 2014-10-03T08:28:03Z | 2013-12-26T09:45:02Z | Language Modeling with Power Low Rank Ensembles | We present power low rank ensembles (PLRE), a flexible framework for n-gram
language modeling where ensembles of low rank matrices and tensors are used to
obtain smoothed probability estimates of words in context. Our method can be
understood as a generalization of n-gram modeling to non-integer n, and
includes standard techniques such as absolute discounting and Kneser-Ney
smoothing as special cases. PLRE training is efficient and our approach
outperforms state-of-the-art modified Kneser-Ney baselines in terms of
perplexity on large corpora as well as on BLEU score in a downstream machine
translation task.
| [
"['Ankur P. Parikh' 'Avneesh Saluja' 'Chris Dyer' 'Eric P. Xing']",
"Ankur P. Parikh, Avneesh Saluja, Chris Dyer, Eric P. Xing"
] |
stat.ML cs.CV cs.LG | null | 1312.7167 | null | null | http://arxiv.org/pdf/1312.7167v1 | 2013-12-27T01:10:00Z | 2013-12-27T01:10:00Z | Near-separable Non-negative Matrix Factorization with $\ell_1$- and
Bregman Loss Functions | Recently, a family of tractable NMF algorithms has been proposed under the
assumption that the data matrix satisfies a separability condition (Donoho &
Stodden, 2003; Arora et al., 2012). Geometrically, this condition reformulates
the NMF problem as that of finding the extreme rays of the conical hull of a
finite set of vectors. In this paper, we develop several extensions of the
conical hull procedures of Kumar et al. (2013) for robust ($\ell_1$)
approximations and Bregman divergences. Our methods inherit all the advantages
of Kumar et al. (2013) including scalability and noise-tolerance. We show that
on foreground-background separation problems in computer vision, robust
near-separable NMFs match the performance of Robust PCA, considered state of
the art on these problems, with an order of magnitude faster training time. We
also demonstrate applications in exemplar selection settings.
| [
"['Abhishek Kumar' 'Vikas Sindhwani']",
"Abhishek Kumar, Vikas Sindhwani"
] |
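The XRAY/conical-hull extensions described in the record above are not reproduced here; as a rough illustration of the separability assumption itself, the sketch below uses the classical successive projection algorithm (SPA) to pick anchor columns and then fits non-negative weights by NNLS. Function and variable names are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def spa_anchors(X, r):
    """Successive Projection Algorithm: greedily pick r columns of X that act as
    extreme rays of the conical hull, under the separability assumption."""
    R = X.astype(float).copy()
    anchors = []
    for _ in range(r):
        j = int(np.argmax((R ** 2).sum(axis=0)))   # column with largest residual norm
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)                 # project out the chosen direction
        anchors.append(j)
    return anchors

def separable_nmf(X, r):
    anchors = spa_anchors(X, r)
    W = X[:, anchors]
    H = np.zeros((r, X.shape[1]))
    for j in range(X.shape[1]):                    # non-negative weights, column by column
        H[:, j], _ = nnls(W, X[:, j])
    return W, H, anchors

# Toy usage: X = W H with an identity block in H, so X is exactly separable.
rng = np.random.default_rng(0)
W_true = rng.random((30, 4))
H_true = np.hstack([np.eye(4), rng.dirichlet(np.ones(4), size=46).T])
X = W_true @ H_true
W, H, anchors = separable_nmf(X, 4)
print(sorted(anchors), np.linalg.norm(X - W @ H))  # anchors 0..3, near-zero error
```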
cs.LG cs.IT math.IT | null | 1312.7179 | null | null | http://arxiv.org/pdf/1312.7179v1 | 2013-12-27T03:21:34Z | 2013-12-27T03:21:34Z | Sub-Classifier Construction for Error Correcting Output Code Using
Minimum Weight Perfect Matching | Multi-class classification is essential for real-world problems, and one of the
promising techniques for multi-class classification is the Error Correcting Output
Code. We propose a method for constructing the Error Correcting Output Code to
obtain a suitable combination of positive and negative classes encoded to
represent binary classifiers. The minimum weight perfect matching algorithm is
applied to find the optimal pairs of subset of classes by using the
generalization performance as a weighting criterion. Based on our method, each
subset of classes with positive and negative labels is appropriately combined
for learning the binary classifiers. Experimental results show that our
technique gives significantly higher performance compared to traditional
methods including the dense random code and the sparse random code both in
terms of accuracy and classification time. Moreover, our method requires a
significantly smaller number of binary classifiers while maintaining accuracy
compared to One-Versus-One.
| [
"['Patoomsiri Songsiri' 'Thimaporn Phetkaew' 'Ryutaro Ichise'\n 'Boonserm Kijsirikul']",
"Patoomsiri Songsiri, Thimaporn Phetkaew, Ryutaro Ichise and Boonserm\n Kijsirikul"
] |
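A minimal sketch of the pairing step the abstract describes: classes are paired by a minimum weight perfect matching over a complete graph, computed here with NetworkX's max_weight_matching on negated weights. The pairwise weights below are random placeholders standing in for the generalization-performance criterion used in the paper, and the paper's iterative combination of class subsets is not reproduced.

```python
import numpy as np
import networkx as nx

def pair_classes_by_matching(weights):
    """weights[i, j]: cost of pairing classes i and j in one binary sub-classifier
    (lower = preferred). Returns a minimum weight perfect matching as sorted pairs."""
    n = weights.shape[0]
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            # max_weight_matching maximises, so negate the cost; maxcardinality=True
            # forces a perfect matching on the complete graph.
            G.add_edge(i, j, weight=-weights[i, j])
    matching = nx.max_weight_matching(G, maxcardinality=True)
    return sorted(tuple(sorted(e)) for e in matching)

def matching_to_code_columns(pairs, n_classes):
    """Each matched pair becomes one ECOC-style column: +1 / -1 for the paired
    classes, 0 for classes not used by that binary classifier."""
    M = np.zeros((n_classes, len(pairs)), dtype=int)
    for col, (pos, neg) in enumerate(pairs):
        M[pos, col] = 1
        M[neg, col] = -1
    return M

# Toy usage with 6 classes and placeholder symmetric pairwise costs.
rng = np.random.default_rng(0)
C = rng.random((6, 6))
C = (C + C.T) / 2
pairs = pair_classes_by_matching(C)
print(pairs)
print(matching_to_code_columns(pairs, 6))
```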
cs.LG cs.SI stat.ML | null | 1312.7258 | null | null | http://arxiv.org/pdf/1312.7258v2 | 2014-03-18T01:25:28Z | 2013-12-27T13:21:51Z | Active Discovery of Network Roles for Predicting the Classes of Network
Nodes | Nodes in real world networks often have class labels, or underlying
attributes, that are related to the way in which they connect to other nodes.
Sometimes this relationship is simple; for instance, nodes of the same class
may be more likely to be connected. In other cases, however, this is not true,
and the way that nodes link in a network exhibits a different, more complex
relationship to their attributes. Here, we consider networks in which we know
how the nodes are connected, but we do not know the class labels of the nodes
or how class labels relate to the network links. We wish to identify the best
subset of nodes to label in order to learn this relationship between node
attributes and network links. We can then use this discovered relationship to
accurately predict the class labels of the rest of the network nodes.
We present a model that identifies groups of nodes with similar link
patterns, which we call network roles, using a generative blockmodel. The model
then predicts labels by learning the mapping from network roles to class labels
using a maximum margin classifier. We choose a subset of nodes to label
according to an iterative margin-based active learning strategy. By integrating
the discovery of network roles with the classifier optimisation, the active
learning process can adapt the network roles to better represent the network
for node classification. We demonstrate the model by exploring a selection of
real world networks, including a marine food web and a network of English
words. We show that, in contrast to other network classifiers, this model
achieves good classification accuracy for a range of networks with different
relationships between class labels and network links.
| [
"Leto Peel",
"['Leto Peel']"
] |
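The blockmodel-based role discovery is not reproduced here; the snippet below only sketches the margin-based active selection loop on top of a maximum-margin classifier, using scikit-learn's LinearSVC on placeholder features. In the paper, the inputs would be learned network-role representations rather than raw features, and the roles are refit as labels arrive.

```python
import numpy as np
from sklearn.svm import LinearSVC

def margin_active_learning(X, y, budget=20):
    """Iteratively query the label of the point the current classifier is least
    certain about (smallest absolute decision margin), then refit."""
    classes = np.unique(y)
    labelled = [int(np.flatnonzero(y == c)[0]) for c in classes]  # one seed per class
    for _ in range(budget):
        clf = LinearSVC(C=1.0).fit(X[labelled], y[labelled])
        margins = np.abs(clf.decision_function(X))
        margins[labelled] = np.inf                 # never re-query a labelled point
        labelled.append(int(np.argmin(margins)))   # oracle call: reveal this label
    clf = LinearSVC(C=1.0).fit(X[labelled], y[labelled])
    return clf, labelled

# Toy binary usage on synthetic two-cluster data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
clf, queried = margin_active_learning(X, y)
print(len(queried), clf.score(X, y))
```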
cs.SY cs.LG | null | 1312.7292 | null | null | http://arxiv.org/pdf/1312.7292v2 | 2014-03-23T13:44:48Z | 2013-12-27T16:13:07Z | Two Timescale Convergent Q-learning for Sleep--Scheduling in Wireless
Sensor Networks | In this paper, we consider an intrusion detection application for Wireless
Sensor Networks (WSNs). We study the problem of scheduling the sleep times of
the individual sensors to maximize the network lifetime while keeping the
tracking error to a minimum. We formulate this problem as a
partially-observable Markov decision process (POMDP) with continuous
state-action spaces, in a manner similar to (Fuemmeler and Veeravalli [2008]).
However, unlike their formulation, we consider infinite horizon discounted and
average cost objectives as performance criteria. For each criterion, we propose
a convergent on-policy Q-learning algorithm that operates on two timescales,
while employing function approximation to handle the curse of dimensionality
associated with the underlying POMDP. Our proposed algorithm incorporates a
policy gradient update using a one-simulation simultaneous perturbation
stochastic approximation (SPSA) estimate on the faster timescale, while the
Q-value parameter (arising from a linear function approximation for the
Q-values) is updated in an on-policy temporal difference (TD) algorithm-like
fashion on the slower timescale. The feature selection scheme employed in each
of our algorithms manages the energy and tracking components in a manner that
assists the search for the optimal sleep-scheduling policy. For the sake of
comparison, in both discounted and average settings, we also develop a function
approximation analogue of the Q-learning algorithm. This algorithm, unlike the
two-timescale variant, does not possess theoretical convergence guarantees.
Finally, we also adapt our algorithms to include a stochastic iterative
estimation scheme for the intruder's mobility model. Our simulation results on
a 2-dimensional network setting suggest that our algorithms result in better
tracking accuracy at the cost of only a few additional sensors, in comparison
to a recent prior work.
| [
"['Prashanth L. A.' 'Abhranil Chatterjee' 'Shalabh Bhatnagar']",
"Prashanth L.A., Abhranil Chatterjee and Shalabh Bhatnagar"
] |
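The full two-timescale sleep-scheduling algorithm requires a POMDP simulator and is not reproduced; the snippet below only illustrates the one-simulation SPSA gradient estimate that drives the faster-timescale policy update, applied to a simple noisy quadratic. Step sizes and the perturbation schedule are illustrative placeholders, not the paper's choices.

```python
import numpy as np

def one_simulation_spsa(J, theta0, n_iter=20000, a0=0.05, delta0=1.0, seed=0):
    """Minimise J with the one-simulation SPSA estimate
        g_i ~ J(theta + delta * Delta) / (delta * Delta_i),
    where Delta has independent +/-1 (Rademacher) entries. Only one noisy
    function evaluation is needed per iteration; the estimate is noisy but
    (approximately) unbiased for the gradient as delta shrinks."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for n in range(1, n_iter + 1):
        a = a0 / n                      # decreasing step size
        delta = delta0 / n ** 0.25      # decreasing perturbation size
        Delta = rng.choice([-1.0, 1.0], size=theta.shape)
        y = J(theta + delta * Delta)    # single simulation / evaluation
        theta = theta - a * (y / (delta * Delta))
    return theta

# Toy usage: a noisy quadratic with optimum at (1, -2); the iterate should
# drift toward the optimum, though the one-simulation estimate is high-variance.
rng = np.random.default_rng(1)
target = np.array([1.0, -2.0])
J = lambda th: float(np.sum((th - target) ** 2) + 0.01 * rng.normal())
print(one_simulation_spsa(J, np.zeros(2)))
```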
cs.CV cs.LG cs.NE | null | 1312.7302 | null | null | http://arxiv.org/pdf/1312.7302v6 | 2014-04-23T19:23:46Z | 2013-12-27T17:41:13Z | Learning Human Pose Estimation Features with Convolutional Networks | This paper introduces a new architecture for human pose estimation using a
multi-layer convolutional network architecture and a modified learning
technique that learns low-level features and higher-level weak spatial models.
Unconstrained human pose estimation is one of the hardest problems in computer
vision, and our new architecture and learning schema shows significant
improvement over the current state-of-the-art results. The main contribution of
this paper is showing, for the first time, that a specific variation of deep
learning is able to outperform all existing traditional architectures on this
task. The paper also discusses several lessons learned while researching
alternatives, most notably, that it is possible to learn strong low-level
feature detectors on features that might even just cover a few pixels in the
image. Higher-level spatial models somewhat improve the overall result, but to
a much lesser extent than expected. Many researchers previously argued that
kinematic structure and top-down information are crucial for this domain, but
with our purely bottom-up and weak spatial model, we could improve upon other
more complicated architectures that currently produce the best results. This
mirrors what many other researchers in speech recognition, object recognition,
and other domains have experienced.
| [
"Arjun Jain, Jonathan Tompson, Mykhaylo Andriluka, Graham W. Taylor,\n Christoph Bregler",
"['Arjun Jain' 'Jonathan Tompson' 'Mykhaylo Andriluka' 'Graham W. Taylor'\n 'Christoph Bregler']"
] |
stat.ML cs.LG | null | 1312.7308 | null | null | http://arxiv.org/pdf/1312.7308v1 | 2013-12-27T18:20:09Z | 2013-12-27T18:20:09Z | lil' UCB : An Optimal Exploration Algorithm for Multi-Armed Bandits | The paper proposes a novel upper confidence bound (UCB) procedure for
identifying the arm with the largest mean in a multi-armed bandit game in the
fixed confidence setting using a small number of total samples. The procedure
cannot be improved in the sense that the number of samples required to identify
the best arm is within a constant factor of a lower bound based on the law of
the iterated logarithm (LIL). Inspired by the LIL, we construct our confidence
bounds to explicitly account for the infinite time horizon of the algorithm. In
addition, by using a novel stopping time for the algorithm we avoid a union
bound over the arms that has been observed in other UCB-type algorithms. We
prove that the algorithm is optimal up to constants and also show through
simulations that it provides superior performance with respect to the
state-of-the-art.
| [
"['Kevin Jamieson' 'Matthew Malloy' 'Robert Nowak' 'Sébastien Bubeck']",
"Kevin Jamieson, Matthew Malloy, Robert Nowak, S\\'ebastien Bubeck"
] |
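A rough best-arm-identification sketch in the spirit of the record above: each arm keeps an empirical mean plus an iterated-logarithm-style confidence term, the arm with the largest upper bound is pulled, and sampling stops once one arm has been pulled much more than all the others combined. The constants below are simplified placeholders, not the exact values from the paper.

```python
import numpy as np

def lil_style_best_arm(pull, n_arms, delta=0.05, lam=9.0, eps=0.01, max_pulls=200000):
    """Simplified LIL-flavoured UCB for fixed-confidence best-arm identification.

    pull(i) returns one stochastic reward from arm i (assumed bounded / subgaussian).
    Stops when one arm's count exceeds 1 + lam * (total count of the other arms)."""
    counts = np.zeros(n_arms, dtype=int)
    sums = np.zeros(n_arms)
    for i in range(n_arms):                         # pull each arm once to initialise
        sums[i] += pull(i)
        counts[i] += 1

    for _ in range(max_pulls):
        t = counts.astype(float)
        means = sums / t
        # Iterated-logarithm style confidence width (simplified constants).
        width = (1 + eps) * np.sqrt(
            2 * (1 + eps) * np.log(np.log((1 + eps) * t + 2) / delta) / t)
        i = int(np.argmax(means + width))
        sums[i] += pull(i)
        counts[i] += 1
        if counts.max() >= 1 + lam * (counts.sum() - counts.max()):
            break
    return int(np.argmax(counts)), counts           # report the most-pulled arm

# Toy usage: three Bernoulli arms; the best arm is index 2.
rng = np.random.default_rng(1)
p = [0.4, 0.5, 0.7]
best, counts = lil_style_best_arm(lambda i: float(rng.random() < p[i]), n_arms=3)
print(best, counts)
```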
cs.CV cs.LG stat.ML | null | 1312.7335 | null | null | http://arxiv.org/pdf/1312.7335v2 | 2014-02-16T23:17:39Z | 2013-12-20T19:36:51Z | Correlation-based construction of neighborhood and edge features | Motivated by an abstract notion of low-level edge detector filters, we
propose a simple method of unsupervised feature construction based on pairwise
statistics of features. In the first step, we construct neighborhoods of
features by regrouping features that correlate. Then we use these subsets as
filters to produce new neighborhood features. Next, we connect neighborhood
features that correlate, and construct edge features by subtracting the
correlated neighborhood features of each other. To validate the usefulness of
the constructed features, we ran AdaBoost.MH on four multi-class classification
problems. Our most significant result is a test error of 0.94% on MNIST with an
algorithm which is essentially free of any image-specific priors. On CIFAR-10
our method is suboptimal compared to today's best deep learning techniques;
nevertheless, we show that the proposed method outperforms not only boosting on
the raw pixels, but also boosting on Haar filters.
| [
"Bal\\'azs K\\'egl",
"['Balázs Kégl']"
] |
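A minimal sketch of the construction described in the record above, under simplifying assumptions: features are grouped into "neighborhoods" by thresholded absolute correlation, a neighborhood feature is the mean of its members, and an "edge" feature is the difference of two correlated neighborhood features. Thresholds and grouping details are placeholders for the paper's procedure.

```python
import numpy as np

def build_neighborhood_and_edge_features(X, tau_feat=0.8, tau_nbhd=0.3):
    """X: (n_samples, n_features). Returns (neighborhood_features, edge_features)."""
    C = np.nan_to_num(np.corrcoef(X, rowvar=False))     # feature-feature correlations

    # Step 1: each feature seeds a neighborhood of features it correlates with;
    # keep only distinct neighborhoods with more than one member.
    neighborhoods = [np.flatnonzero(np.abs(C[j]) >= tau_feat) for j in range(X.shape[1])]
    keys = {tuple(nb) for nb in neighborhoods if len(nb) > 1}
    neighborhoods = [np.array(k) for k in sorted(keys)]

    # Step 2: a neighborhood feature is the average of its member features.
    N = (np.column_stack([X[:, nb].mean(axis=1) for nb in neighborhoods])
         if neighborhoods else np.empty((X.shape[0], 0)))

    # Step 3: edge features subtract pairs of correlated neighborhood features.
    edges = []
    if N.shape[1] > 1:
        CN = np.nan_to_num(np.corrcoef(N, rowvar=False))
        for a in range(N.shape[1]):
            for b in range(a + 1, N.shape[1]):
                if abs(CN[a, b]) >= tau_nbhd:
                    edges.append(N[:, a] - N[:, b])
    E = np.column_stack(edges) if edges else np.empty((X.shape[0], 0))
    return N, E

# Toy usage: two blocks of raw features, strongly correlated within a block and
# moderately correlated across blocks.
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
b1 = z + rng.normal(size=(500, 1))
b2 = z + rng.normal(size=(500, 1))
X = np.hstack([b1 + 0.1 * rng.normal(size=(500, 4)),
               b2 + 0.1 * rng.normal(size=(500, 4))])
N, E = build_neighborhood_and_edge_features(X)
print(N.shape, E.shape)   # expect 2 neighborhood features and 1 edge feature
```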
cs.LG | null | 1312.7381 | null | null | http://arxiv.org/pdf/1312.7381v2 | 2014-04-17T03:30:02Z | 2013-12-28T02:08:53Z | Rate-Distortion Auto-Encoders | A rekindled interest in auto-encoder algorithms has been spurred by
recent work on deep learning. Current efforts have been directed towards
effective training of auto-encoder architectures with a large number of coding
units. Here, we propose a learning algorithm for auto-encoders based on a
rate-distortion objective that minimizes the mutual information between the
inputs and the outputs of the auto-encoder subject to a fidelity constraint.
The goal is to learn a representation that is minimally committed to the input
data, but that is rich enough to reconstruct the inputs up to a certain level of
distortion. Minimizing the mutual information acts as a regularization term
whereas the fidelity constraint can be understood as a risk functional in the
conventional statistical learning setting. The proposed algorithm uses a
recently introduced measure of entropy based on infinitely divisible matrices
that avoids plug-in estimation of densities. Experiments using
over-complete bases show that the rate-distortion auto-encoders can learn a
regularized input-output mapping in an implicit manner.
| [
"['Luis G. Sanchez Giraldo' 'Jose C. Principe']",
"Luis G. Sanchez Giraldo and Jose C. Principe"
] |
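The full rate-distortion auto-encoder is not reproduced here; the snippet below only sketches the matrix-based (Gram-matrix) entropy functional the abstract alludes to, which is computed from eigenvalues of a normalized kernel matrix rather than from an estimated density. The Gaussian kernel, bandwidth, and order alpha are illustrative assumptions.

```python
import numpy as np

def matrix_based_entropy(X, sigma=1.0, alpha=1.01):
    """Matrix-based Renyi-type entropy of order alpha for a sample X of shape (n, d),
    computed from the eigenvalues of a trace-normalized Gaussian Gram matrix.
    No density estimate is involved; only pairwise kernel evaluations are used."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-sq / (2 * sigma ** 2))                    # Gaussian Gram matrix
    A = K / np.trace(K)                                   # eigenvalues now sum to 1
    lam = np.clip(np.linalg.eigvalsh(A), 1e-12, None)
    return (1.0 / (1.0 - alpha)) * np.log2(np.sum(lam ** alpha))

# A tightly concentrated sample should score lower entropy than a spread-out one.
rng = np.random.default_rng(0)
print(matrix_based_entropy(0.2 * rng.normal(size=(200, 2))))   # low
print(matrix_based_entropy(2.0 * rng.normal(size=(200, 2))))   # high
```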
stat.ML cs.CV cs.LG | null | 1312.7463 | null | null | http://arxiv.org/pdf/1312.7463v1 | 2013-12-28T19:18:44Z | 2013-12-28T19:18:44Z | Generalized Ambiguity Decomposition for Understanding Ensemble Diversity | Diversity or complementarity of experts in ensemble pattern recognition and
information processing systems is widely observed by researchers to be crucial
for achieving performance improvement upon fusion. Understanding this link
between ensemble diversity and fusion performance is thus an important research
question. However, prior works have theoretically characterized ensemble
diversity and have linked it with ensemble performance in very restricted
settings. We present a generalized ambiguity decomposition (GAD) theorem as a
broad framework for answering these questions. The GAD theorem applies to a
generic convex ensemble of experts for any arbitrary twice-differentiable loss
function. It shows that the ensemble performance approximately decomposes into
a difference of the average expert performance and the diversity of the
ensemble. It thus provides a theoretical explanation for the
empirically-observed benefit of fusing outputs from diverse classifiers and
regressors. It also provides a loss function-dependent, ensemble-dependent, and
data-dependent definition of diversity. We present extensions of this
decomposition to common regression and classification loss functions, and
report a simulation-based analysis of the diversity term and the accuracy of
the decomposition. We finally present experiments on standard pattern
recognition data sets which indicate the accuracy of the decomposition for
real-world classification and regression problems.
| [
"Kartik Audhkhasi, Abhinav Sethy, Bhuvana Ramabhadran and Shrikanth S.\n Narayanan",
"['Kartik Audhkhasi' 'Abhinav Sethy' 'Bhuvana Ramabhadran'\n 'Shrikanth S. Narayanan']"
] |
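The theorem in the record above covers arbitrary twice-differentiable losses; the quick numerical check below uses the squared-error special case, where the classical ambiguity decomposition is exact: the ensemble's squared error equals the weighted average of the individual errors minus the weighted spread of the experts around the ensemble. All data here are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, n_points = 5, 1000

y = rng.normal(size=n_points)                               # targets
F = y + rng.normal(scale=0.5, size=(n_experts, n_points))   # expert predictions
w = rng.dirichlet(np.ones(n_experts))                       # convex combination weights

f_bar = w @ F                                               # ensemble prediction
ensemble_err = np.mean((f_bar - y) ** 2)
avg_expert_err = np.mean(w @ (F - y) ** 2)                  # weighted average individual error
diversity = np.mean(w @ (F - f_bar) ** 2)                   # weighted spread around the ensemble

# For squared error the decomposition is exact:
#   ensemble_err == avg_expert_err - diversity
print(ensemble_err, avg_expert_err - diversity)
assert np.isclose(ensemble_err, avg_expert_err - diversity)
```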
stat.ME cs.LG | null | 1312.7567 | null | null | http://arxiv.org/pdf/1312.7567v1 | 2013-12-29T18:13:41Z | 2013-12-29T18:13:41Z | Nonparametric Inference For Density Modes | We derive nonparametric confidence intervals for the eigenvalues of the
Hessian at modes of a density estimate. This provides information about the
strength and shape of modes and can also be used as a significance test. We use
a data-splitting approach in which potential modes are identified using the
first half of the data and inference is done with the second half of the data.
To get valid confidence sets for the eigenvalues, we use a bootstrap based on
an elementary-symmetric-polynomial (ESP) transformation. This leads to valid
bootstrap confidence sets regardless of any multiplicities in the eigenvalues.
We also suggest a new method for bandwidth selection, namely, choosing the
bandwidth to maximize the number of significant modes. We show by example that
this method works well. Even when the true distribution is singular, and hence
does not have a density (in which case cross-validation chooses a zero
bandwidth), our method chooses a reasonable bandwidth.
| [
"Christopher Genovese, Marco Perone-Pacifico, Isabella Verdinelli and\n Larry Wasserman",
"['Christopher Genovese' 'Marco Perone-Pacifico' 'Isabella Verdinelli'\n 'Larry Wasserman']"
] |
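The elementary-symmetric-polynomial confidence sets are not reproduced here; the sketch below only illustrates the basic pipeline under the data-splitting idea: locate a mode of a Gaussian KDE by mean-shift on one half of the data, evaluate the KDE Hessian there, and naively bootstrap its eigenvalues on the other half. Bandwidth, bootstrap size, and the percentile intervals are placeholders.

```python
import numpy as np

def kde_hessian(x, data, h):
    """Hessian of a Gaussian kernel density estimate (bandwidth h) at point x."""
    n, d = data.shape
    diff = x - data                                        # (n, d)
    w = np.exp(-0.5 * (diff ** 2).sum(axis=1) / h ** 2)    # unnormalised kernel weights
    const = n * (2 * np.pi) ** (d / 2) * h ** (d + 2)
    return (np.einsum('n,ni,nj->ij', w, diff, diff) / h ** 2 - w.sum() * np.eye(d)) / const

def find_mode(data, h, x0, n_steps=200):
    """Mean-shift iterations: move x to the kernel-weighted average of the data."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        w = np.exp(-0.5 * ((x - data) ** 2).sum(axis=1) / h ** 2)
        x = (w[:, None] * data).sum(axis=0) / w.sum()
    return x

# Data splitting: find the mode on the first half, bootstrap on the second half.
rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 2))
half1, half2 = data[:500], data[500:]
h = 0.4
mode = find_mode(half1, h, x0=np.zeros(2))

eig_boot = []
for _ in range(200):
    sample = half2[rng.integers(0, len(half2), size=len(half2))]
    eig_boot.append(np.sort(np.linalg.eigvalsh(kde_hessian(mode, sample, h))))
eig_boot = np.array(eig_boot)
lo, hi = np.percentile(eig_boot, [2.5, 97.5], axis=0)
print("naive 95% intervals for the Hessian eigenvalues at the mode:")
print(list(zip(lo, hi)))
```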