categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
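The fields above can be loaded into a dataframe for inspection. A minimal sketch, assuming the records are stored as JSON lines in a hypothetical file named `papers.jsonl` (the filename and derived `published_ts` column are illustrative, not part of the dataset):

```python
import pandas as pd

# Hypothetical file name; each line is one JSON record with the fields listed above.
df = pd.read_json("papers.jsonl", lines=True)

# Enforce the schema shown above: strings for metadata, float64 for year, list for authors.
string_cols = ["categories", "doi", "id", "venue", "link",
               "updated", "published", "title", "abstract"]
df[string_cols] = df[string_cols].astype("string")
df["year"] = df["year"].astype("float64")

# Timestamps such as 2014-12-17T11:02:04Z parse directly.
df["published_ts"] = pd.to_datetime(df["published"], utc=True)

# Example query: all cs.LG papers, most recently published first.
mask = df["categories"].str.contains("cs.LG", regex=False)
recent = df[mask].sort_values("published_ts", ascending=False)
print(recent[["id", "title", "published"]].head())
```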
cs.CL cs.IR cs.LG cs.NE
null
1412.5335
null
null
http://arxiv.org/pdf/1412.5335v7
2015-05-27T06:40:09Z
2014-12-17T11:02:04Z
Ensemble of Generative and Discriminative Techniques for Sentiment Analysis of Movie Reviews
Sentiment analysis is a common task in natural language processing that aims to detect the polarity of a text document (typically a consumer review). In the simplest settings, we discriminate only between positive and negative sentiment, turning the task into a standard binary classification problem. We compare several machine learning approaches to this problem, and combine them to achieve the best possible results. We show how to use standard generative language models for this task, which are slightly complementary to the state-of-the-art techniques. We achieve strong results on a well-known dataset of IMDB movie reviews. Our results are easily reproducible, as we also publish the code needed to repeat the experiments. This should simplify further advances in the state of the art, as other researchers can combine their techniques with ours with little effort.
[ "['Grégoire Mesnil' 'Tomas Mikolov' \"Marc'Aurelio Ranzato\" 'Yoshua Bengio']", "Gr\\'egoire Mesnil, Tomas Mikolov, Marc'Aurelio Ranzato, Yoshua Bengio" ]
cs.NE cs.LG
null
1412.5474
null
null
http://arxiv.org/pdf/1412.5474v4
2015-11-20T05:50:23Z
2014-12-17T16:48:54Z
Flattened Convolutional Neural Networks for Feedforward Acceleration
We present flattened convolutional neural networks that are designed for fast feedforward execution. The redundancy of the parameters, especially the weights of the convolutional filters in convolutional neural networks, has been extensively studied, and different heuristics have been proposed to construct a low-rank basis of the filters after training. In this work, we train flattened networks that consist of a consecutive sequence of one-dimensional filters across all directions in 3D space to obtain performance comparable to conventional convolutional networks. We tested the flattened model on different datasets and found that the flattened layer can effectively substitute for the 3D filters without loss of accuracy. The flattened convolution pipelines provide around two times speed-up during the feedforward pass compared to the baseline model due to the significant reduction of learning parameters. Furthermore, the proposed method does not require manual tuning or post-processing once the model is trained.
[ "Jonghoon Jin, Aysegul Dundar, Eugenio Culurciello", "['Jonghoon Jin' 'Aysegul Dundar' 'Eugenio Culurciello']" ]
cs.CL cs.LG cs.NE
null
1412.5567
null
null
http://arxiv.org/pdf/1412.5567v2
2014-12-19T21:36:13Z
2014-12-17T20:39:45Z
Deep Speech: Scaling up end-to-end speech recognition
We present a state-of-the-art speech recognition system developed using end-to-end deep learning. Our architecture is significantly simpler than traditional speech systems, which rely on laboriously engineered processing pipelines; these traditional systems also tend to perform poorly when used in noisy environments. In contrast, our system does not need hand-designed components to model background noise, reverberation, or speaker variation, but instead directly learns a function that is robust to such effects. We do not need a phoneme dictionary, nor even the concept of a "phoneme." Key to our approach is a well-optimized RNN training system that uses multiple GPUs, as well as a set of novel data synthesis techniques that allow us to efficiently obtain a large amount of varied data for training. Our system, called Deep Speech, outperforms previously published results on the widely studied Switchboard Hub5'00, achieving 16.0% error on the full test set. Deep Speech also handles challenging noisy environments better than widely used, state-of-the-art commercial speech systems.
[ "['Awni Hannun' 'Carl Case' 'Jared Casper' 'Bryan Catanzaro' 'Greg Diamos'\n 'Erich Elsen' 'Ryan Prenger' 'Sanjeev Satheesh' 'Shubho Sengupta'\n 'Adam Coates' 'Andrew Y. Ng']", "Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos,\n Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates and\n Andrew Y. Ng" ]
cs.LG
null
1412.5617
null
null
http://arxiv.org/pdf/1412.5617v1
2014-12-17T21:15:06Z
2014-12-17T21:15:06Z
Learning from Data with Heterogeneous Noise using SGD
We consider learning from data of variable quality that may be obtained from different heterogeneous sources. Addressing learning from heterogeneous data in its full generality is a challenging problem. In this paper, we adopt instead a model in which data is observed through heterogeneous noise, where the noise level reflects the quality of the data source. We study how to use stochastic gradient algorithms to learn in this model. Our study is motivated by two concrete examples where this problem arises naturally: learning with local differential privacy based on data from multiple sources with different privacy requirements, and learning from data with labels of variable quality. The main contribution of this paper is to identify how heterogeneous noise impacts performance. We show that given two datasets with heterogeneous noise, the order in which to use them in standard SGD depends on the learning rate. We propose a method for changing the learning rate as a function of the heterogeneity, and prove new regret bounds for our method in two cases of interest. Experiments on real data show that our method performs better than using a single learning rate and using only the less noisy of the two datasets when the noise level is low to moderate.
[ "['Shuang Song' 'Kamalika Chaudhuri' 'Anand D. Sarwate']", "Shuang Song, Kamalika Chaudhuri, Anand D. Sarwate" ]
cs.CE cs.LG q-bio.QM
null
1412.5627
null
null
http://arxiv.org/pdf/1412.5627v1
2014-12-17T21:31:51Z
2014-12-17T21:31:51Z
Feature extraction from complex networks: A case of study in genomic sequences classification
This work presents a new approach for the classification of genomic sequences based on measurements from complex networks and information theory. For this, the nucleotides, dinucleotides and trinucleotides of a genomic sequence are considered. For each of them, the entropy, sum entropy and maximum entropy values are calculated. For each of them, a network is also generated, in which the nodes are the nucleotides, dinucleotides or trinucleotides and the edges are estimated by observing the respective adjacency among them in the genomic sequence. In this way, three networks are generated, from which measures of complex networks are extracted. These measures, together with measures of information theory, comprise a feature vector representing a genomic sequence. The feature vector is then used for classification by methods such as SVM, MultiLayer Perceptron, J48, IBK, Naive Bayes and Random Forest in order to evaluate the proposed approach. Coding sequences, intergenic sequences and TSS (Transcription Start Sites) were adopted as datasets, for which the best results were obtained by Random Forest with 91.2% accuracy, followed by J48 with 89.1% and SVM with 84.8%. These results indicate that the new feature extraction approach has value, reaching good levels of classification even when considering only the genomic sequences, i.e., without any other a priori knowledge about them.
[ "Bruno Mendes Moro Conque and Andr\\'e Yoshiaki Kashiwabara and\n Fabr\\'icio Martins Lopes", "['Bruno Mendes Moro Conque' 'André Yoshiaki Kashiwabara'\n 'Fabrício Martins Lopes']" ]
cs.CL cs.LG
null
1412.5659
null
null
http://arxiv.org/pdf/1412.5659v1
2014-12-17T22:41:14Z
2014-12-17T22:41:14Z
Effective sampling for large-scale automated writing evaluation systems
Automated writing evaluation (AWE) has been shown to be an effective mechanism for quickly providing feedback to students. It has already seen wide adoption in enterprise-scale applications and is starting to be adopted in large-scale contexts. Training an AWE model has historically required a single batch of several hundred writing examples and human scores for each of them. This requirement limits large-scale adoption of AWE since human-scoring essays is costly. Here we evaluate algorithms for ensuring that AWE models are consistently trained using the most informative essays. Our results show how to minimize training set sizes while maximizing predictive performance, thereby reducing cost without unduly sacrificing accuracy. We conclude with a discussion of how to integrate this approach into large-scale AWE systems.
[ "['Nicholas Dronen' 'Peter W. Foltz' 'Kyle Habermehl']", "Nicholas Dronen, Peter W. Foltz, Kyle Habermehl" ]
cs.CL cs.LG
null
1412.5673
null
null
http://arxiv.org/pdf/1412.5673v3
2015-04-28T14:14:44Z
2014-12-17T23:26:48Z
Entity-Augmented Distributional Semantics for Discourse Relations
Discourse relations bind smaller linguistic elements into coherent texts. However, automatically identifying discourse relations is difficult, because it requires understanding the semantics of the linked sentences. A more subtle challenge is that it is not enough to represent the meaning of each sentence of a discourse relation, because the relation may depend on links between lower-level elements, such as entity mentions. Our solution computes distributional meaning representations by composition up the syntactic parse tree. A key difference from previous work on compositional distributional semantics is that we also compute representations for entity mentions, using a novel downward compositional pass. Discourse relations are predicted not only from the distributional representations of the sentences, but also of their coreferent entity mentions. The resulting system obtains substantial improvements over the previous state-of-the-art in predicting implicit discourse relations in the Penn Discourse Treebank.
[ "['Yangfeng Ji' 'Jacob Eisenstein']", "Yangfeng Ji and Jacob Eisenstein" ]
cs.NE cs.LG
null
1412.5710
null
null
http://arxiv.org/pdf/1412.5710v1
2014-12-18T03:01:10Z
2014-12-18T03:01:10Z
Multiobjective Optimization of Classifiers by Means of 3-D Convex Hull Based Evolutionary Algorithm
Finding a good classifier is a multiobjective optimization problem with different error rates and the costs to be minimized. The receiver operating characteristic is widely used in the machine learning community to analyze the performance of parametric classifiers or sets of Pareto optimal classifiers. In order to directly compare two sets of classifiers the area (or volume) under the convex hull can be used as a scalar indicator for the performance of a set of classifiers in receiver operating characteristic space. Recently, the convex hull based multiobjective genetic programming algorithm was proposed and successfully applied to maximize the convex hull area for binary classification problems. The contribution of this paper is to extend this algorithm for dealing with higher dimensional problem formulations. In particular, we discuss problems where parsimony (or classifier complexity) is stated as a third objective and multi-class classification with three different true classification rates to be maximized. The design of the algorithm proposed in this paper is inspired by indicator-based evolutionary algorithms, where first a performance indicator for a solution set is established and then a selection operator is designed that complies with the performance indicator. In this case, the performance indicator will be the volume under the convex hull. The algorithm is tested and analyzed in a proof of concept study on different benchmarks that are designed for measuring its capability to capture relevant parts of a convex hull. Further benchmark and application studies on email classification and feature selection round up the analysis and assess robustness and usefulness of the new algorithm in real world settings.
[ "['Jiaqi Zhao' 'Vitor Basto Fernandes' 'Licheng Jiao' 'Iryna Yevseyeva'\n 'Asep Maulana' 'Rui Li' 'Thomas Bäck' 'Michael T. M. Emmerich']", "Jiaqi Zhao, Vitor Basto Fernandes, Licheng Jiao, Iryna Yevseyeva, Asep\n Maulana, Rui Li, Thomas B\\\"ack, and Michael T. M. Emmerich" ]
cs.DS cs.LG
null
1412.5721
null
null
http://arxiv.org/pdf/1412.5721v2
2015-02-23T17:30:23Z
2014-12-18T05:09:32Z
An Algorithm for Online K-Means Clustering
This paper shows that one can be competitive with the k-means objective while operating online. In this model, the algorithm receives vectors v_1,...,v_n one by one in an arbitrary order. For each vector the algorithm outputs a cluster identifier before receiving the next one. Our online algorithm generates ~O(k) clusters whose k-means cost is ~O(W*). Here, W* is the optimal k-means cost using k clusters and ~O suppresses poly-logarithmic factors. We also show that, experimentally, it is not much worse than k-means++ while operating in a strictly more constrained computational model.
[ "Edo Liberty, Ram Sriharsha, Maxim Sviridenko", "['Edo Liberty' 'Ram Sriharsha' 'Maxim Sviridenko']" ]
cs.LG
null
1412.5732
null
null
http://arxiv.org/pdf/1412.5732v2
2015-09-08T03:00:55Z
2014-12-18T06:37:50Z
Dynamic Structure Embedded Online Multiple-Output Regression for Stream Data
Online multiple-output regression is an important machine learning technique for modeling, predicting, and compressing multi-dimensional correlated data streams. In this paper, we propose a novel online multiple-output regression method, called MORES, for stream data. MORES can \emph{dynamically} learn the structure of the coefficients change in each update step to facilitate the model's continuous refinement. We observe that limited expressive ability of the regression model, especially in the preliminary stage of online update, often leads to the variables in the residual errors being dependent. In light of this point, MORES intends to \emph{dynamically} learn and leverage the structure of the residual errors to improve the prediction accuracy. Moreover, we define three statistical variables to \emph{exactly} represent all the seen samples for \emph{incrementally} calculating prediction loss in each online update round, which can avoid loading all the training data into memory for updating model, and also effectively prevent drastic fluctuation of the model in the presence of noise. Furthermore, we introduce a forgetting factor to set different weights on samples so as to track the data streams' evolving characteristics quickly from the latest samples. Experiments on one synthetic dataset and three real-world datasets validate the effectiveness of the proposed method. In addition, the update speed of MORES is at least 2000 samples processed per second on the three real-world datasets, more than 15 times faster than the state-of-the-art online learning algorithm.
[ "['Changsheng Li' 'Fan Wei' 'Weishan Dong' 'Qingshan Liu' 'Xiangfeng Wang'\n 'Xin Zhang']", "Changsheng Li and Fan Wei and Weishan Dong and Qingshan Liu and\n Xiangfeng Wang and Xin Zhang" ]
stat.ML cs.LG
null
1412.5744
null
null
http://arxiv.org/pdf/1412.5744v7
2017-04-19T21:17:10Z
2014-12-18T07:51:24Z
Stochastic Descent Analysis of Representation Learning Algorithms
Although stochastic approximation learning methods have been widely used in the machine learning literature for over 50 years, formal theoretical analyses of specific machine learning algorithms are less common because stochastic approximation theorems typically possess assumptions which are difficult to communicate and verify. This paper presents a new stochastic approximation theorem for state-dependent noise with easily verifiable assumptions applicable to the analysis and design of important deep learning algorithms including: adaptive learning, contrastive divergence learning, stochastic descent expectation maximization, and active learning.
[ "['Richard M. Golden']", "Richard M. Golden" ]
stat.ML cs.IT cs.LG cs.NE math.IT math.MG
null
1412.5896
null
null
http://arxiv.org/pdf/1412.5896v3
2015-06-03T14:38:57Z
2014-12-18T15:40:03Z
On the Stability of Deep Networks
In this work we study the properties of deep neural networks (DNN) with random weights. We formally prove that these networks perform a distance-preserving embedding of the data. Based on this we then draw conclusions on the size of the training data and the networks' structure. A longer version of this paper with more results and details can be found in (Giryes et al., 2015). In particular, we formally prove in the longer version that DNN with random Gaussian weights perform a distance-preserving embedding of the data, with a special treatment for in-class and out-of-class data.
[ "Raja Giryes and Guillermo Sapiro and Alex M. Bronstein", "['Raja Giryes' 'Guillermo Sapiro' 'Alex M. Bronstein']" ]
cs.LG cs.CV
null
1412.5902
null
null
http://arxiv.org/pdf/1412.5902v2
2018-01-25T14:57:28Z
2014-12-07T11:38:55Z
Nearest Descent, In-Tree, and Clustering
In this paper, we propose a physically inspired graph-theoretical clustering method, which first organizes the data points into an attractive graph, called the In-Tree, via a physically inspired rule, called Nearest Descent (ND). In particular, the rule of ND works to select the nearest node in the descending direction of potential as the parent node of each node, which is in essence different from the classical Gradient Descent or Steepest Descent. The constructed In-Tree proves to be a very good candidate for clustering due to its particular features and properties. In the In-Tree, the original clustering problem is reduced to a problem of removing a few undesired edges from this graph. Pleasingly, the undesired edges in the In-Tree are so distinguishable that they can be easily determined in either an automatic or an interactive way, which is in stark contrast to the cases in the widely used Minimal Spanning Tree and k-nearest-neighbor graph. The cluster number in the proposed method can be easily determined based on some intermediate plots, and the cluster assignment for each node is easily made by quickly searching its root node in each sub-graph (also an In-Tree). The proposed method is extensively evaluated on both synthetic and real-world datasets. Overall, the proposed clustering method is a density-based one, but it shows significant differences and advantages in comparison to the traditional ones. The proposed method is simple yet efficient and reliable, and is applicable to various datasets with diverse shapes and attributes and of any (even high) dimensionality.
[ "['Teng Qiu' 'Kaifu Yang' 'Chaoyi Li' 'Yongjie Li']", "Teng Qiu, Kaifu Yang, Chaoyi Li, Yongjie Li" ]
cs.LG
null
1412.5949
null
null
http://arxiv.org/pdf/1412.5949v1
2014-12-18T17:14:34Z
2014-12-18T17:14:34Z
Large Scale Distributed Distance Metric Learning
In large scale machine learning and data mining problems with high feature dimensionality, the Euclidean distance between data points can be uninformative, and Distance Metric Learning (DML) is often desired to learn a proper similarity measure (using side information such as example data pairs being similar or dissimilar). However, high dimensionality and large volume of pairwise constraints in modern big data can lead to prohibitive computational cost for both the original DML formulation in Xing et al. (2002) and later extensions. In this paper, we present a distributed algorithm for DML, and a large-scale implementation on a parameter server architecture. Our approach builds on a parallelizable reformulation of Xing et al. (2002), and an asynchronous stochastic gradient descent optimization procedure. To our knowledge, this is the first distributed solution to DML, and we show that, on a system with 256 CPU cores, our program is able to complete a DML task on a dataset with 1 million data points, 22-thousand features, and 200 million labeled data pairs, in 15 hours; and the learned metric shows great effectiveness in properly measuring distances.
[ "['Pengtao Xie' 'Eric Xing']", "Pengtao Xie and Eric Xing" ]
stat.ML cs.LG
null
1412.5967
null
null
http://arxiv.org/pdf/1412.5967v1
2014-12-18T17:46:41Z
2014-12-18T17:46:41Z
Tag-Aware Ordinal Sparse Factor Analysis for Learning and Content Analytics
Machine learning offers novel ways and means to design personalized learning systems wherein each student's educational experience is customized in real time depending on their background, learning goals, and performance to date. SPARse Factor Analysis (SPARFA) is a novel framework for machine learning-based learning analytics, which estimates a learner's knowledge of the concepts underlying a domain, and content analytics, which estimates the relationships among a collection of questions and those concepts. SPARFA jointly learns the associations among the questions and the concepts, learner concept knowledge profiles, and the underlying question difficulties, solely based on the correct/incorrect graded responses of a population of learners to a collection of questions. In this paper, we extend the SPARFA framework significantly to enable: (i) the analysis of graded responses on an ordinal scale (partial credit) rather than a binary scale (correct/incorrect); (ii) the exploitation of tags/labels for questions that partially describe the question-concept associations. The resulting Ordinal SPARFA-Tag framework greatly enhances the interpretability of the estimated concepts. We demonstrate using real educational data that Ordinal SPARFA-Tag outperforms both SPARFA and existing collaborative filtering techniques in predicting missing learner responses.
[ "['Andrew S. Lan' 'Christoph Studer' 'Andrew E. Waters'\n 'Richard G. Baraniuk']", "Andrew S. Lan, Christoph Studer, Andrew E. Waters, Richard G. Baraniuk" ]
stat.ML cs.LG
null
1412.5968
null
null
http://arxiv.org/pdf/1412.5968v1
2014-12-18T17:48:17Z
2014-12-18T17:48:17Z
Quantized Matrix Completion for Personalized Learning
The recently proposed SPARse Factor Analysis (SPARFA) framework for personalized learning performs factor analysis on ordinal or binary-valued (e.g., correct/incorrect) graded learner responses to questions. The underlying factors are termed "concepts" (or knowledge components) and are used for learning analytics (LA), the estimation of learner concept-knowledge profiles, and for content analytics (CA), the estimation of question-concept associations and question difficulties. While SPARFA is a powerful tool for LA and CA, it requires a number of algorithm parameters (including the number of concepts), which are difficult to determine in practice. In this paper, we propose SPARFA-Lite, a convex optimization-based method for LA that builds on matrix completion, which only requires a single algorithm parameter and enables us to automatically identify the required number of concepts. Using a variety of educational datasets, we demonstrate that SPARFA-Lite (i) achieves performance comparable to existing methods, including item response theory (IRT) and SPARFA, in predicting unobserved learner responses, and (ii) is computationally more efficient.
[ "Andrew S. Lan, Christoph Studer, Richard G. Baraniuk", "['Andrew S. Lan' 'Christoph Studer' 'Richard G. Baraniuk']" ]
cs.CV cs.LG
null
1412.6018
null
null
http://arxiv.org/pdf/1412.6018v1
2014-10-09T04:32:20Z
2014-10-09T04:32:20Z
Automatic Training Data Synthesis for Handwriting Recognition Using the Structural Crossing-Over Technique
The paper presents a novel technique called "Structural Crossing-Over" to synthesize qualified data for training machine learning-based handwriting recognition. The proposed technique can provide a greater variety of patterns of training data than existing approaches such as elastic distortion and tangent-based affine transformation. A couple of training characters are chosen, then they are analyzed by their similar and different structures, and finally are crossed over to generate new characters. The experiments compare the performance of the tangent-based affine transformation and the proposed approach in terms of the variety of generated characters and the percentage of recognition errors. The standard MNIST corpus, including 60,000 training characters and 10,000 test characters, is employed in the experiments. The proposed technique uses 1,000 characters to synthesize 60,000 characters, and then uses these data to train and test the benchmark handwriting recognition system that exploits Histogram of Oriented Gradients (HOG) as features and a Support Vector Machine (SVM) as the recognizer. The experimental result yields an error rate of 8.06%. It significantly outperforms the tangent-based affine transformation and the original MNIST training data, which yield 11.74% and 16.55% errors, respectively.
[ "['Sirisak Visessenee' 'Sanparith Marukatat' 'Rachada Kongkachandra']", "Sirisak Visessenee, Sanparith Marukatat, and Rachada Kongkachandra" ]
stat.ML cs.LG
null
1412.6039
null
null
http://arxiv.org/pdf/1412.6039v3
2015-02-22T18:13:29Z
2014-12-18T20:01:38Z
Generative Deep Deconvolutional Learning
A generative Bayesian model is developed for deep (multi-layer) convolutional dictionary learning. A novel probabilistic pooling operation is integrated into the deep model, yielding efficient bottom-up and top-down probabilistic learning. After learning the deep convolutional dictionary, testing is implemented via deconvolutional inference. To speed up this inference, a new statistical approach is proposed to project the top-layer dictionary elements to the data level. Following this, only one layer of deconvolution is required during testing. Experimental results demonstrate powerful capabilities of the model to learn multi-layer features from images. Excellent classification results are obtained on both the MNIST and Caltech 101 datasets.
[ "Yunchen Pu, Xin Yuan and Lawrence Carin", "['Yunchen Pu' 'Xin Yuan' 'Lawrence Carin']" ]
cs.LG cs.NE
null
1412.6093
null
null
http://arxiv.org/pdf/1412.6093v2
2014-12-23T18:44:33Z
2014-12-18T11:04:59Z
Learning Temporal Dependencies in Data Using a DBN-BLSTM
Since the advent of deep learning, it has been used to solve various problems using many different architectures. The application of such deep architectures to auditory data is also not uncommon. However, these architectures do not always adequately consider the temporal dependencies in data. We thus propose a new generic architecture called the Deep Belief Network - Bidirectional Long Short-Term Memory (DBN-BLSTM) network that models sequences by keeping track of the temporal information while enabling deep representations in the data. We demonstrate this new architecture by applying it to the task of music generation and obtain state-of-the-art results.
[ "Kratarth Goel and Raunaq Vohra", "['Kratarth Goel' 'Raunaq Vohra']" ]
cs.SY cs.LG math.OC stat.ML
null
1412.6095
null
null
http://arxiv.org/pdf/1412.6095v3
2015-05-15T18:41:08Z
2014-12-18T16:38:10Z
Theoretical and Numerical Analysis of Approximate Dynamic Programming with Approximation Errors
This study is aimed at answering the famous question of how the approximation errors at each iteration of Approximate Dynamic Programming (ADP) affect the quality of the final results considering the fact that errors at each iteration affect the next iteration. To this goal, convergence of Value Iteration scheme of ADP for deterministic nonlinear optimal control problems with undiscounted cost functions is investigated while considering the errors existing in approximating respective functions. The boundedness of the results around the optimal solution is obtained based on quantities which are known in a general optimal control problem and assumptions which are verifiable. Moreover, since the presence of the approximation errors leads to the deviation of the results from optimality, sufficient conditions for stability of the system operated by the result obtained after a finite number of value iterations, along with an estimation of its region of attraction, are derived in terms of a calculable upper bound of the control approximation error. Finally, the process of implementation of the method on an orbital maneuver problem is investigated through which the assumptions made in the theoretical developments are verified and the sufficient conditions are applied for guaranteeing stability and near optimality.
[ "Ali Heydari", "['Ali Heydari']" ]
cs.CV cs.LG cs.NE
null
1412.6115
null
null
http://arxiv.org/pdf/1412.6115v1
2014-12-18T21:09:01Z
2014-12-18T21:09:01Z
Compressing Deep Convolutional Networks using Vector Quantization
Deep convolutional neural networks (CNNs) have become the most promising method for object recognition, repeatedly demonstrating record-breaking results for image classification and object detection in recent years. However, a very deep CNN generally involves many layers with millions of parameters, making the storage of the network model extremely large. This prohibits the usage of deep CNNs on resource-limited hardware, especially cell phones or other embedded devices. In this paper, we tackle this model storage issue by investigating information-theoretical vector quantization methods for compressing the parameters of CNNs. In particular, we have found that, in terms of compressing the most storage-demanding densely connected layers, vector quantization methods have a clear gain over existing matrix factorization methods. Simply applying k-means clustering to the weights or conducting product quantization can lead to a very good balance between model size and recognition accuracy. For the 1000-category classification task in the ImageNet challenge, we are able to achieve 16-24 times compression of the network with only 1% loss of classification accuracy using the state-of-the-art CNN.
[ "['Yunchao Gong' 'Liu Liu' 'Ming Yang' 'Lubomir Bourdev']", "Yunchao Gong and Liu Liu and Ming Yang and Lubomir Bourdev" ]
cs.AI cs.LG nlin.AO physics.data-an
10.1088/1367-2630/17/8/083023
1412.6141
null
null
http://arxiv.org/abs/1412.6141v1
2014-10-30T08:23:13Z
2014-10-30T08:23:13Z
Efficient Decision-Making by Volume-Conserving Physical Object
We demonstrate that any physical object, as long as its volume is conserved when coupled with suitable operations, provides a sophisticated decision-making capability. We consider the problem of finding, as accurately and quickly as possible, the most profitable option from a set of options that gives stochastic rewards. These decisions are made as dictated by a physical object, which is moved in a manner similar to the fluctuations of a rigid body in a tug-of-war game. Our analytical calculations validate statistical reasons why our method exhibits higher efficiency than conventional algorithms.
[ "Song-Ju Kim, Masashi Aono, and Etsushi Nameda", "['Song-Ju Kim' 'Masashi Aono' 'Etsushi Nameda']" ]
cs.LG cs.AI stat.ML
null
1412.6177
null
null
http://arxiv.org/pdf/1412.6177v3
2015-03-31T19:22:18Z
2014-12-18T23:25:22Z
Example Selection For Dictionary Learning
In unsupervised learning, an unbiased uniform sampling strategy is typically used, in order that the learned features faithfully encode the statistical structure of the training data. In this work, we explore whether active example selection strategies - algorithms that select which examples to use, based on the current estimate of the features - can accelerate learning. Specifically, we investigate effects of heuristic and saliency-inspired selection algorithms on the dictionary learning task with sparse activations. We show that some selection algorithms do improve the speed of learning, and we speculate on why they might work.
[ "Tomoki Tsuchida and Garrison W. Cottrell", "['Tomoki Tsuchida' 'Garrison W. Cottrell']" ]
cs.LG cs.CR cs.NE
null
1412.6181
null
null
http://arxiv.org/pdf/1412.6181v2
2014-12-24T06:16:14Z
2014-12-18T23:38:54Z
Crypto-Nets: Neural Networks over Encrypted Data
The problem we address is the following: how can a user employ a predictive model that is held by a third party, without compromising private information? For example, a hospital may wish to use a cloud service to predict the readmission risk of a patient. However, due to regulations, the patient's medical files cannot be revealed. The goal is to make an inference using the model, without jeopardizing the accuracy of the prediction or the privacy of the data. To achieve high accuracy, we use neural networks, which have been shown to outperform other learning models for many tasks. To achieve the privacy requirements, we use homomorphic encryption in the following protocol: the data owner encrypts the data and sends the ciphertexts to the third party to obtain a prediction from a trained model. The model operates on these ciphertexts and sends back the encrypted prediction. In this protocol, not only does the data remain private; even the predicted values are available only to the data owner. Using homomorphic encryption and modifications to the activation functions and training algorithms of neural networks, we show that this protocol is possible and may be feasible. This method paves the way to building secure cloud-based neural network prediction services without invading users' privacy.
[ "Pengtao Xie and Misha Bilenko and Tom Finley and Ran Gilad-Bachrach\n and Kristin Lauter and Michael Naehrig", "['Pengtao Xie' 'Misha Bilenko' 'Tom Finley' 'Ran Gilad-Bachrach'\n 'Kristin Lauter' 'Michael Naehrig']" ]
cs.LG cs.CL
10.1142/S1793536914500125
1412.6211
null
null
http://arxiv.org/abs/1412.6211v1
2014-12-19T04:31:11Z
2014-12-19T04:31:11Z
Multiple Authors Detection: A Quantitative Analysis of Dream of the Red Chamber
Inspired by the authorship controversy of Dream of the Red Chamber and the application of machine learning in the study of literary stylometry, we develop a rigorous new method for the mathematical analysis of authorship by testing for a so-called chrono-divide in writing styles. Our method incorporates some of the latest advances in the study of authorship attribution, particularly techniques from support vector machines. By introducing the notion of relative frequency as a feature ranking metric, our method proves to be highly effective and robust. Applying our method to the Cheng-Gao version of Dream of the Red Chamber has led to convincing if not irrefutable evidence that the first 80 chapters and the last 40 chapters of the book were written by two different authors. Furthermore, our analysis has unexpectedly provided strong support to the hypothesis that Chapter 67 was not the work of Cao Xueqin either. We have also applied our method to the other three Great Classical Novels in Chinese. As expected, no chrono-divides were found. This provides further evidence of the robustness of our method.
[ "Xianfeng Hu, Yang Wang and Qiang Wu", "['Xianfeng Hu' 'Yang Wang' 'Qiang Wu']" ]
cs.NE cs.LG
null
1412.6249
null
null
http://arxiv.org/pdf/1412.6249v5
2015-04-16T13:09:33Z
2014-12-19T08:20:10Z
Purine: A bi-graph based deep learning framework
In this paper, we introduce a novel deep learning framework, termed Purine. In Purine, a deep network is expressed as a bipartite graph (bi-graph), which is composed of interconnected operators and data tensors. With the bi-graph abstraction, networks are easily solvable with an event-driven task dispatcher. We then demonstrate that different parallelism schemes over GPUs and/or CPUs on single or multiple PCs can be universally implemented by graph composition. This frees researchers from coding for the various parallelization schemes, and the same dispatcher can be used for solving different graphs. Scheduled by the task dispatcher, memory transfers are fully overlapped with other computations, which greatly reduces the communication overhead and helps us achieve approximately linear acceleration.
[ "['Min Lin' 'Shuo Li' 'Xuan Luo' 'Shuicheng Yan']", "Min Lin, Shuo Li, Xuan Luo, Shuicheng Yan" ]
cs.LG cs.NE
null
1412.6257
null
null
http://arxiv.org/pdf/1412.6257v1
2014-12-19T09:30:33Z
2014-12-19T09:30:33Z
Gradual training of deep denoising auto encoders
Stacked denoising auto encoders (DAEs) are well known to learn useful deep representations, which can be used to improve supervised training by initializing a deep network. We investigate a training scheme of a deep DAE, where DAE layers are gradually added and keep adapting as additional layers are added. We show that, in the regime of mid-sized datasets, this gradual training provides a small but consistent improvement over stacked training in both reconstruction quality and classification error on the MNIST and CIFAR datasets.
[ "['Alexander Kalmanovich' 'Gal Chechik']", "Alexander Kalmanovich and Gal Chechik" ]
cs.LG cs.AI stat.ML
null
1412.6285
null
null
http://arxiv.org/pdf/1412.6285v1
2014-12-19T10:50:14Z
2014-12-19T10:50:14Z
From dependency to causality: a machine learning approach
The relationship between statistical dependency and causality lies at the heart of all statistical approaches to causal inference. Recent results in the ChaLearn cause-effect pair challenge have shown that causal directionality can be inferred with good accuracy also in Markov indistinguishable configurations thanks to data driven approaches. This paper proposes a supervised machine learning approach to infer the existence of a directed causal link between two variables in multivariate settings with $n>2$ variables. The approach relies on the asymmetry of some conditional (in)dependence relations between the members of the Markov blankets of two variables causally connected. Our results show that supervised learning methods may be successfully used to extract causal information on the basis of asymmetric statistical descriptors also for $n>2$ variate distributions.
[ "['Gianluca Bontempi' 'Maxime Flauder']", "Gianluca Bontempi and Maxime Flauder" ]
cs.LG stat.ML
null
1412.6286
null
null
http://arxiv.org/pdf/1412.6286v3
2015-03-30T15:14:20Z
2014-12-19T11:01:21Z
Regression with Linear Factored Functions
Many applications that use empirically estimated functions face a curse of dimensionality, because the integrals over most function classes must be approximated by sampling. This paper introduces a novel regression algorithm that learns linear factored functions (LFF). This class of functions has structural properties that allow certain integrals to be solved analytically and point-wise products to be calculated. Applications like belief propagation and reinforcement learning can exploit these properties to break the curse and speed up computation. We derive a regularized greedy optimization scheme that learns factored basis functions during training. The novel regression algorithm performs competitively with Gaussian processes on benchmark tasks, and the learned LFF functions are very compact, with 4-9 factored basis functions on average.
[ "['Wendelin Böhmer' 'Klaus Obermayer']", "Wendelin B\\\"ohmer and Klaus Obermayer" ]
cs.CV cs.LG cs.NE
null
1412.6296
null
null
http://arxiv.org/pdf/1412.6296v2
2015-04-09T15:07:06Z
2014-12-19T11:34:37Z
Generative Modeling of Convolutional Neural Networks
The convolutional neural networks (CNNs) have proven to be a powerful tool for discriminative learning. Recently researchers have also started to show interest in the generative aspects of CNNs in order to gain a deeper understanding of what they have learned and how to further improve them. This paper investigates generative modeling of CNNs. The main contributions include: (1) We construct a generative model for the CNN in the form of exponential tilting of a reference distribution. (2) We propose a generative gradient for pre-training CNNs by a non-parametric importance sampling scheme, which is fundamentally different from the commonly used discriminative gradient, and yet has the same computational architecture and cost as the latter. (3) We propose a generative visualization method for the CNNs by sampling from an explicit parametric image distribution. The proposed visualization method can directly draw synthetic samples for any given node in a trained CNN by the Hamiltonian Monte Carlo (HMC) algorithm, without resorting to any extra hold-out images. Experiments on the challenging ImageNet benchmark show that the proposed generative gradient pre-training consistently helps improve the performances of CNNs, and the proposed generative visualization method generates meaningful and varied samples of synthetic images from a large-scale deep CNN.
[ "Jifeng Dai, Yang Lu, Ying-Nian Wu", "['Jifeng Dai' 'Yang Lu' 'Ying-Nian Wu']" ]
cs.LG stat.ML
null
1412.6388
null
null
http://arxiv.org/pdf/1412.6388v1
2014-12-19T15:44:02Z
2014-12-19T15:44:02Z
Distributed Decision Trees
The recently proposed budding tree is a decision tree algorithm in which every node is part internal node and part leaf. This allows representing every decision tree in a continuous parameter space, and therefore a budding tree can be jointly trained with backpropagation, like a neural network. Even though this continuity allows it to be used in hierarchical representation learning, the learned representations are local: activation makes a soft selection among all root-to-leaf paths in a tree. In this work we extend the budding tree and propose the distributed tree, where the children use different and independent splits and hence multiple paths in a tree can be traversed at the same time. This ability to combine multiple paths gives the power of a distributed representation, as in a traditional perceptron layer. We show that distributed trees perform comparably or better than budding and traditional hard trees on classification and regression tasks.
[ "['Ozan İrsoy' 'Ethem Alpaydın']", "Ozan \\.Irsoy, Ethem Alpayd{\\i}n" ]
cs.CL cs.LG stat.ML
null
1412.6418
null
null
http://arxiv.org/pdf/1412.6418v3
2015-04-16T10:24:27Z
2014-12-19T16:30:33Z
Inducing Semantic Representation from Text by Jointly Predicting and Factorizing Relations
In this work, we propose a new method to integrate two recent lines of work: unsupervised induction of shallow semantics (e.g., semantic roles) and factorization of relations in text and knowledge bases. Our model consists of two components: (1) an encoding component: a semantic role labeling model which predicts roles given a rich set of syntactic and lexical features; (2) a reconstruction component: a tensor factorization model which relies on roles to predict argument fillers. When the components are estimated jointly to minimize errors in argument reconstruction, the induced roles largely correspond to roles defined in annotated resources. Our method performs on par with most accurate role induction methods on English, even though, unlike these previous approaches, we do not incorporate any prior linguistic knowledge about the language.
[ "['Ivan Titov' 'Ehsan Khoddam']", "Ivan Titov and Ehsan Khoddam" ]
cs.LG cs.AI cs.RO
null
1412.6451
null
null
http://arxiv.org/pdf/1412.6451v1
2014-12-19T17:41:59Z
2014-12-19T17:41:59Z
Grounding Hierarchical Reinforcement Learning Models for Knowledge Transfer
Methods of deep machine learning enable the efficient reuse of low-level representations for generating more abstract high-level representations. Originally, deep learning has been applied passively (e.g., for classification purposes). Recently, it has been extended to estimate the value of actions for autonomous agents within the framework of reinforcement learning (RL). Explicit models of the environment can be learned to augment such a value function. Although "flat" connectionist methods have already been used for model-based RL, up to now, only model-free variants of RL have been equipped with methods from deep learning. We propose a variant of deep model-based RL that enables an agent to learn arbitrarily abstract hierarchical representations of its environment. In this paper, we present research on how such hierarchical representations can be grounded in sensorimotor interaction between an agent and its environment.
[ "['Mark Wernsdorfer' 'Ute Schmid']", "Mark Wernsdorfer, Ute Schmid" ]
cs.LG
null
1412.6452
null
null
http://arxiv.org/pdf/1412.6452v3
2015-03-31T11:10:43Z
2014-12-19T17:43:26Z
Algorithmic Robustness for Learning via $(\epsilon, \gamma, \tau)$-Good Similarity Functions
The notion of metric plays a key role in machine learning problems such as classification, clustering or ranking. However, it is worth noting that there is a severe lack of theoretical guarantees that can be expected on the generalization capacity of the classifier associated to a given metric. The theoretical framework of $(\epsilon, \gamma, \tau)$-good similarity functions (Balcan et al., 2008) has been one of the first attempts to draw a link between the properties of a similarity function and those of a linear classifier making use of it. In this paper, we extend and complete this theory by providing a new generalization bound for the associated classifier based on the algorithmic robustness framework.
[ "['Maria-Irina Nicolae' 'Marc Sebban' 'Amaury Habrard' 'Éric Gaussier'\n 'Massih-Reza Amini']", "Maria-Irina Nicolae, Marc Sebban, Amaury Habrard, \\'Eric Gaussier and\n Massih-Reza Amini" ]
cs.LG stat.ML
null
1412.6493
null
null
http://arxiv.org/pdf/1412.6493v1
2014-12-19T19:27:21Z
2014-12-19T19:27:21Z
A la Carte - Learning Fast Kernels
Kernel methods have great promise for learning rich statistical representations of large modern datasets. However, compared to neural networks, kernel methods have been perceived as lacking in scalability and flexibility. We introduce a family of fast, flexible, lightly parametrized and general purpose kernel learning methods, derived from Fastfood basis function expansions. We provide mechanisms to learn the properties of groups of spectral frequencies in these expansions, which require only O(m log d) time and O(m) memory, for m basis functions and d input dimensions. We show that the proposed methods can learn a wide class of kernels, outperforming the alternatives in accuracy, speed, and memory consumption.
[ "Zichao Yang and Alexander J. Smola and Le Song and Andrew Gordon\n Wilson", "['Zichao Yang' 'Alexander J. Smola' 'Le Song' 'Andrew Gordon Wilson']" ]
cs.LG cs.NE q-bio.NC
null
1412.6502
null
null
http://arxiv.org/pdf/1412.6502v6
2019-02-04T00:41:35Z
2014-12-19T20:00:38Z
Detecting Epileptic Seizures from EEG Data using Neural Networks
We explore the use of neural networks trained with dropout in predicting epileptic seizures from electroencephalographic data (scalp EEG). The input to the neural network is a 126 feature vector containing 9 features for each of the 14 EEG channels obtained over 1-second, non-overlapping windows. The models in our experiments achieved high sensitivity and specificity on patient records not used in the training process. This is demonstrated using leave-one-out-cross-validation across patient records, where we hold out one patient's record as the test set and use all other patients' records for training; repeating this procedure for all patients in the database.
[ "Siddharth Pramod, Adam Page, Tinoosh Mohsenin and Tim Oates", "['Siddharth Pramod' 'Adam Page' 'Tinoosh Mohsenin' 'Tim Oates']" ]
cs.LG stat.ML
null
1412.6506
null
null
http://arxiv.org/pdf/1412.6506v1
2014-12-19T20:06:02Z
2014-12-19T20:06:02Z
Cauchy Principal Component Analysis
Principal Component Analysis (PCA) has wide applications in machine learning, text mining and computer vision. Classical PCA based on a Gaussian noise model is fragile to noise of large magnitude. Laplace noise assumption based PCA methods cannot deal with dense noise effectively. In this paper, we propose Cauchy Principal Component Analysis (Cauchy PCA), a very simple yet effective PCA method which is robust to various types of noise. We utilize Cauchy distribution to model noise and derive Cauchy PCA under the maximum likelihood estimation (MLE) framework with low rank constraint. Our method can robustly estimate the low rank matrix regardless of whether noise is large or small, dense or sparse. We analyze the robustness of Cauchy PCA from a robust statistics view and present an efficient singular value projection optimization method. Experimental results on both simulated data and real applications demonstrate the robustness of Cauchy PCA to various noise patterns.
[ "['Pengtao Xie' 'Eric Xing']", "Pengtao Xie and Eric Xing" ]
cs.LG stat.ML
null
1412.6514
null
null
http://arxiv.org/pdf/1412.6514v2
2015-04-19T18:46:19Z
2014-12-19T20:18:36Z
Score Function Features for Discriminative Learning
Feature learning forms the cornerstone for tackling challenging learning problems in domains such as speech, computer vision and natural language processing. In this paper, we consider a novel class of matrix and tensor-valued features, which can be pre-trained using unlabeled samples. We present efficient algorithms for extracting discriminative information, given these pre-trained features and labeled samples for any related task. Our class of features is based on higher-order score functions, which capture local variations in the probability density function of the input. We establish a theoretical framework to characterize the nature of discriminative information that can be extracted from score-function features, when used in conjunction with labeled samples. We employ efficient spectral decomposition algorithms (on matrices and tensors) for extracting discriminative components. The advantage of employing tensor-valued features is that we can extract richer discriminative information in the form of overcomplete representations. Thus, we present a novel framework for employing generative models of the input for discriminative learning.
[ "Majid Janzamin and Hanie Sedghi and Anima Anandkumar", "['Majid Janzamin' 'Hanie Sedghi' 'Anima Anandkumar']" ]
cs.NE cs.LG stat.ML
null
1412.6544
null
null
http://arxiv.org/pdf/1412.6544v6
2015-05-21T21:44:31Z
2014-12-19T21:55:01Z
Qualitatively characterizing neural network optimization problems
Training neural networks involves solving large-scale non-convex optimization problems. This task has long been believed to be extremely difficult, with fear of local minima and other obstacles motivating a variety of schemes to improve optimization, such as unsupervised pretraining. However, modern neural networks are able to achieve negligible training error on complex tasks, using only direct training with stochastic gradient descent. We introduce a simple analysis technique to look for evidence that such networks are overcoming local optima. We find that, in fact, on a straight path from initialization to solution, a variety of state of the art neural networks never encounter any significant obstacles.
[ "['Ian J. Goodfellow' 'Oriol Vinyals' 'Andrew M. Saxe']", "Ian J. Goodfellow, Oriol Vinyals, and Andrew M. Saxe" ]
cs.LG
null
1412.6547
null
null
http://arxiv.org/pdf/1412.6547v7
2015-07-05T15:38:11Z
2014-12-19T22:09:35Z
Fast Label Embeddings via Randomized Linear Algebra
Many modern multiclass and multilabel problems are characterized by increasingly large output spaces. For these problems, label embeddings have been shown to be a useful primitive that can improve computational and statistical efficiency. In this work we utilize a correspondence between rank constrained estimation and low dimensional label embeddings that uncovers a fast label embedding algorithm which works in both the multiclass and multilabel settings. The result is a randomized algorithm whose running time is exponentially faster than naive algorithms. We demonstrate our techniques on two large-scale public datasets, from the Large Scale Hierarchical Text Challenge and the Open Directory Project, where we obtain state of the art results.
[ "Paul Mineiro and Nikos Karampatziakis", "['Paul Mineiro' 'Nikos Karampatziakis']" ]
cs.LG cs.NE
null
1412.6550
null
null
http://arxiv.org/pdf/1412.6550v4
2015-03-27T11:52:28Z
2014-12-19T22:40:51Z
FitNets: Hints for Thin Deep Nets
While depth tends to improve network performance, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times fewer parameters outperforms a larger, state-of-the-art teacher network.
[ "Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine\n Chassang, Carlo Gatta and Yoshua Bengio", "['Adriana Romero' 'Nicolas Ballas' 'Samira Ebrahimi Kahou'\n 'Antoine Chassang' 'Carlo Gatta' 'Yoshua Bengio']" ]
cs.CV cs.LG
null
1412.6553
null
null
http://arxiv.org/pdf/1412.6553v3
2015-04-24T11:40:54Z
2014-12-19T23:02:43Z
Speeding-up Convolutional Neural Networks Using Fine-tuned CP-Decomposition
We propose a simple two-step approach for speeding up convolution layers within large convolutional neural networks based on tensor decomposition and discriminative fine-tuning. Given a layer, we use non-linear least squares to compute a low-rank CP-decomposition of the 4D convolution kernel tensor into a sum of a small number of rank-one tensors. At the second step, this decomposition is used to replace the original convolutional layer with a sequence of four convolutional layers with small kernels. After such replacement, the entire network is fine-tuned on the training data using standard backpropagation. We evaluate this approach on two CNNs and show that it is competitive with previous approaches, leading to higher obtained CPU speedups at the cost of lower accuracy drops for the smaller of the two networks. Thus, for the 36-class character classification CNN, our approach obtains an 8.5x CPU speedup of the whole network with only a minor accuracy drop (1%, from 91% to 90%). For the standard ImageNet architecture (AlexNet), the approach speeds up the second convolution layer by a factor of 4x at the cost of a 1% increase of the overall top-5 classification error.
[ "['Vadim Lebedev' 'Yaroslav Ganin' 'Maksim Rakhuba' 'Ivan Oseledets'\n 'Victor Lempitsky']", "Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan Oseledets, Victor\n Lempitsky" ]
cs.NE cs.LG stat.ML
null
1412.6558
null
null
http://arxiv.org/pdf/1412.6558v3
2015-02-27T22:28:32Z
2014-12-19T23:24:53Z
Random Walk Initialization for Training Very Deep Feedforward Networks
Training very deep networks is an important open problem in machine learning. One of many difficulties is that the norm of the back-propagated error gradient can grow or decay exponentially. Here we show that training very deep feed-forward networks (FFNs) is not as difficult as previously thought. Unlike when back-propagation is applied to a recurrent network, application to an FFN amounts to multiplying the error gradient by a different random matrix at each layer. We show that the successive application of correctly scaled random matrices to an initial vector results in a random walk of the log of the norm of the resulting vectors, and we compute the scaling that makes this walk unbiased. The variance of the random walk grows only linearly with network depth and is inversely proportional to the size of each layer. Practically, this implies a gradient whose log-norm scales with the square root of the network depth and shows that the vanishing gradient problem can be mitigated by increasing the width of the layers. Mathematical analyses and experimental results using stochastic gradient descent to optimize tasks related to the MNIST and TIMIT datasets are provided to support these claims. Equations for the optimal matrix scaling are provided for the linear and ReLU cases.
[ "David Sussillo, L.F. Abbott", "['David Sussillo' 'L. F. Abbott']" ]
stat.ML cs.CV cs.LG cs.NE
null
1412.6563
null
null
http://arxiv.org/pdf/1412.6563v2
2015-04-13T21:35:29Z
2014-12-20T00:05:57Z
Self-informed neural network structure learning
We study the problem of large scale, multi-label visual recognition with a large number of possible classes. We propose a method for augmenting a trained neural network classifier with auxiliary capacity in a manner designed to significantly improve upon an already well-performing model, while minimally impacting its computational footprint. Using the predictions of the network itself as a descriptor for assessing visual similarity, we define a partitioning of the label space into groups of visually similar entities. We then augment the network with auxiliary hidden layer pathways with connectivity only to these groups of label units. We report a significant improvement in mean average precision on a large-scale object recognition task with the augmented model, while increasing the number of multiply-adds by less than 3%.
[ "['David Warde-Farley' 'Andrew Rabinovich' 'Dragomir Anguelov']", "David Warde-Farley, Andrew Rabinovich, Dragomir Anguelov" ]
cs.LG cs.NE
null
1412.6564
null
null
http://arxiv.org/pdf/1412.6564v2
2015-04-10T19:03:34Z
2014-12-20T00:31:30Z
Move Evaluation in Go Using Deep Convolutional Neural Networks
The game of Go is more challenging than other board games, due to the difficulty of constructing a position or move evaluation function. In this paper we investigate whether deep convolutional networks can be used to directly represent and learn this knowledge. We train a large 12-layer convolutional neural network by supervised learning from a database of human professional games. The network correctly predicts the expert move in 55% of positions, equalling the accuracy of a 6 dan human player. When the trained convolutional network was used directly to play games of Go, without any search, it beat the traditional search program GnuGo in 97% of games, and matched the performance of a state-of-the-art Monte-Carlo tree search that simulates a million positions per move.
[ "Chris J. Maddison, Aja Huang, Ilya Sutskever, David Silver", "['Chris J. Maddison' 'Aja Huang' 'Ilya Sutskever' 'David Silver']" ]
cs.CL cs.LG
null
1412.6568
null
null
http://arxiv.org/pdf/1412.6568v3
2015-04-15T13:10:07Z
2014-12-20T01:03:46Z
Improving zero-shot learning by mitigating the hubness problem
The zero-shot paradigm exploits vector-based word representations extracted from text corpora with unsupervised methods to learn general mapping functions from other feature spaces onto word space, where the words associated to the nearest neighbours of the mapped vectors are used as their linguistic labels. We show that the neighbourhoods of the mapped elements are strongly polluted by hubs, vectors that tend to be near a high proportion of items, pushing their correct labels down the neighbour list. After illustrating the problem empirically, we propose a simple method to correct it by taking the proximity distribution of potential neighbours across many mapped vectors into account. We show that this correction leads to consistent improvements in realistic zero-shot experiments in the cross-lingual, image labeling and image retrieval domains.
[ "Georgiana Dinu, Angeliki Lazaridou, Marco Baroni", "['Georgiana Dinu' 'Angeliki Lazaridou' 'Marco Baroni']" ]
stat.ML cs.LG
null
1412.6572
null
null
http://arxiv.org/pdf/1412.6572v3
2015-03-20T20:19:16Z
2014-12-20T01:17:12Z
Explaining and Harnessing Adversarial Examples
Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.
[ "['Ian J. Goodfellow' 'Jonathon Shlens' 'Christian Szegedy']", "Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy" ]
cs.LG cs.CL stat.ML
null
1412.6577
null
null
http://arxiv.org/pdf/1412.6577v3
2015-05-02T20:22:32Z
2014-12-20T01:53:22Z
Modeling Compositionality with Multiplicative Recurrent Neural Networks
We present the multiplicative recurrent neural network as a general model for compositional meaning in language, and evaluate it on the task of fine-grained sentiment analysis. We establish a connection to the previously investigated matrix-space models for compositionality, and show they are special cases of the multiplicative recurrent net. Our experiments show that these models perform comparably or better than Elman-type additive recurrent neural networks and outperform matrix-space models on a standard fine-grained sentiment analysis corpus. Furthermore, they yield comparable results to structural deep models on the recently published Stanford Sentiment Treebank without the need for generating parse trees.
[ "Ozan \\.Irsoy, Claire Cardie", "['Ozan İrsoy' 'Claire Cardie']" ]
stat.ML cs.LG cs.NE
null
1412.6581
null
null
http://arxiv.org/pdf/1412.6581v6
2015-06-15T12:35:11Z
2014-12-20T02:07:07Z
Variational Recurrent Auto-Encoders
In this paper we propose a model that combines the strengths of RNNs and SGVB: the Variational Recurrent Auto-Encoder (VRAE). Such a model can be used for efficient, large scale unsupervised learning on time series data, mapping the time series data to a latent vector representation. The model is generative, such that data can be generated from samples of the latent space. An important contribution of this work is that the model can make use of unlabeled data in order to facilitate supervised training of RNNs by initialising the weights and network state.
[ "Otto Fabius, Joost R. van Amersfoort", "['Otto Fabius' 'Joost R. van Amersfoort']" ]
cs.LG cs.CV cs.NE
null
1412.6583
null
null
http://arxiv.org/pdf/1412.6583v4
2015-06-17T06:47:48Z
2014-12-20T02:52:03Z
Discovering Hidden Factors of Variation in Deep Networks
Deep learning has enjoyed a great deal of success because of its ability to learn useful features for tasks such as classification. But there has been less exploration in learning the factors of variation apart from the classification signal. By augmenting autoencoders with simple regularization terms during training, we demonstrate that standard deep architectures can discover and explicitly represent factors of variation beyond those relevant for categorization. We introduce a cross-covariance penalty (XCov) as a method to disentangle factors like handwriting style for digits and subject identity in faces. We demonstrate this on the MNIST handwritten digit database, the Toronto Faces Database (TFD) and the Multi-PIE dataset by generating manipulated instances of the data. Furthermore, we demonstrate these deep networks can extrapolate `hidden' variation in the supervised signal.
[ "Brian Cheung, Jesse A. Livezey, Arjun K. Bansal, Bruno A. Olshausen", "['Brian Cheung' 'Jesse A. Livezey' 'Arjun K. Bansal' 'Bruno A. Olshausen']" ]
stat.ML cs.IT cs.LG math.IT stat.ME
10.1109/ACCESS.2015.2425304
1412.6586
null
null
http://arxiv.org/abs/1412.6586v3
2015-05-27T23:05:49Z
2014-12-20T03:02:32Z
A deep-structured fully-connected random field model for structured inference
There has been significant interest in the use of fully-connected graphical models and deep-structured graphical models for the purpose of structured inference. However, fully-connected and deep-structured graphical models have been largely explored independently, leaving the unification of these two concepts ripe for exploration. A fundamental challenge with unifying these two types of models is in dealing with computational complexity. In this study, we investigate the feasibility of unifying fully-connected and deep-structured models in a computationally tractable manner for the purpose of structured inference. To accomplish this, we introduce a deep-structured fully-connected random field (DFRF) model that integrates a series of intermediate sparse auto-encoding layers placed between state layers to significantly reduce computational complexity. The problem of image segmentation was used to illustrate the feasibility of using the DFRF for structured inference in a computationally tractable manner. Results in this study show that it is feasible to unify fully-connected and deep-structured models in a computationally tractable manner for solving structured inference problems such as image segmentation.
[ "['Alexander Wong' 'Mohammad Javad Shafiee' 'Parthipan Siva' 'Xiao Yu Wang']", "Alexander Wong, Mohammad Javad Shafiee, Parthipan Siva, and Xiao Yu\n Wang" ]
cs.CV cs.LG cs.NE
null
1412.6596
null
null
http://arxiv.org/pdf/1412.6596v3
2015-04-15T19:48:37Z
2014-12-20T04:11:33Z
Training Deep Neural Networks on Noisy Labels with Bootstrapping
Current state-of-the-art deep learning systems for visual object recognition and detection use purely supervised training with regularization such as dropout to avoid overfitting. The performance depends critically on the amount of labeled examples, and in current practice the labels are assumed to be unambiguous and accurate. However, this assumption often does not hold; e.g. in recognition, class labels may be missing; in detection, objects in the image may not be localized; and in general, the labeling may be subjective. In this work we propose a generic way to handle noisy and incomplete labeling by augmenting the prediction objective with a notion of consistency. We consider a prediction consistent if the same prediction is made given similar percepts, where the notion of similarity is between deep network features computed from the input data. In experiments we demonstrate that our approach yields substantial robustness to label noise on several datasets. On MNIST handwritten digits, we show that our model is robust to label corruption. On the Toronto Face Database, we show that our model handles well the case of subjective labels in emotion recognition, achieving state-of-the-art results, and can also benefit from unlabeled face images with no modification to our method. On the ILSVRC2014 detection challenge data, we show that our approach extends to very deep networks, high resolution images and structured outputs, and results in improved scalable detection.
[ "Scott Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru\n Erhan, Andrew Rabinovich", "['Scott Reed' 'Honglak Lee' 'Dragomir Anguelov' 'Christian Szegedy'\n 'Dumitru Erhan' 'Andrew Rabinovich']" ]
cs.CV cs.LG cs.NE
null
1412.6597
null
null
http://arxiv.org/pdf/1412.6597v4
2015-04-10T21:26:31Z
2014-12-20T04:20:55Z
An Analysis of Unsupervised Pre-training in Light of Recent Advances
Convolutional neural networks perform well on object recognition because of a number of recent advances: rectified linear units (ReLUs), data augmentation, dropout, and large labelled datasets. Unsupervised data has been proposed as another way to improve performance. Unfortunately, unsupervised pre-training is not used by state-of-the-art methods leading to the following question: Is unsupervised pre-training still useful given recent advances? If so, when? We answer this in three parts: we 1) develop an unsupervised method that incorporates ReLUs and recent unsupervised regularization techniques, 2) analyze the benefits of unsupervised pre-training compared to data augmentation and dropout on CIFAR-10 while varying the ratio of unsupervised to supervised samples, 3) verify our findings on STL-10. We discover unsupervised pre-training, as expected, helps when the ratio of unsupervised to supervised samples is high, and surprisingly, hurts when the ratio is low. We also use unsupervised pre-training with additional color augmentation to achieve near state-of-the-art performance on STL-10.
[ "Tom Le Paine, Pooya Khorrami, Wei Han, Thomas S. Huang", "['Tom Le Paine' 'Pooya Khorrami' 'Wei Han' 'Thomas S. Huang']" ]
cs.CV cs.LG
null
1412.6598
null
null
http://arxiv.org/pdf/1412.6598v2
2015-04-11T20:13:40Z
2014-12-20T04:25:34Z
Automatic Discovery and Optimization of Parts for Image Classification
Part-based representations have been shown to be very useful for image classification. Learning part-based models is often viewed as a two-stage problem. First, a collection of informative parts is discovered, using heuristics that promote part distinctiveness and diversity, and then classifiers are trained on the vector of part responses. In this paper we unify the two stages and learn the image classifiers and a set of shared parts jointly. We generate an initial pool of parts by randomly sampling part candidates and selecting a good subset using L1/L2 regularization. All steps are driven "directly" by the same objective namely the classification loss on a training set. This lets us do away with engineered heuristics. We also introduce the notion of "negative parts", intended as parts that are negatively correlated with one or more classes. Negative parts are complementary to the parts discovered by other methods, which look only for positive correlations.
[ "Sobhan Naderi Parizi, Andrea Vedaldi, Andrew Zisserman and Pedro\n Felzenszwalb", "['Sobhan Naderi Parizi' 'Andrea Vedaldi' 'Andrew Zisserman'\n 'Pedro Felzenszwalb']" ]
cs.LG
null
1412.6599
null
null
http://arxiv.org/pdf/1412.6599v3
2015-04-13T23:28:01Z
2014-12-20T04:36:28Z
Hot Swapping for Online Adaptation of Optimization Hyperparameters
We describe a general framework for online adaptation of optimization hyperparameters by `hot swapping' their values during learning. We investigate this approach in the context of adaptive learning rate selection using an explore-exploit strategy from the multi-armed bandit literature. Experiments on a benchmark neural network show that the hot swapping approach leads to consistently better solutions compared to well-known alternatives such as AdaDelta and stochastic gradient with exhaustive hyperparameter search.
[ "Kevin Bache, Dennis DeCoste, Padhraic Smyth", "['Kevin Bache' 'Dennis DeCoste' 'Padhraic Smyth']" ]
cs.LG cs.NE
null
1412.6601
null
null
http://arxiv.org/pdf/1412.6601v3
2015-09-19T23:30:51Z
2014-12-20T04:44:00Z
Using Neural Networks for Click Prediction of Sponsored Search
Sponsored search is a multi-billion dollar industry and makes up a major source of revenue for search engines (SE). Click-through-rate (CTR) estimation plays a crucial role in ads selection and greatly affects the SE revenue, advertiser traffic and user experience. We propose a novel architecture for solving the CTR prediction problem by combining artificial neural networks (ANN) with decision trees. First we compare ANN with respect to other popular machine learning models being used for this task. Then we go on to combine ANN with MatrixNet (a proprietary implementation of boosted trees) and evaluate the performance of the system as a whole. The results show that our approach provides a significant improvement over existing models.
[ "Afroze Ibrahim Baqapuri and Ilya Trofimov", "['Afroze Ibrahim Baqapuri' 'Ilya Trofimov']" ]
cs.LG cs.CV
null
1412.6604
null
null
http://arxiv.org/pdf/1412.6604v5
2016-05-04T14:01:42Z
2014-12-20T05:05:51Z
Video (language) modeling: a baseline for generative models of natural videos
We propose a strong baseline model for unsupervised feature learning using video data. By learning to predict missing frames or extrapolate future frames from an input video sequence, the model discovers both spatial and temporal correlations which are useful to represent complex deformations and motion patterns. The models we propose are largely borrowed from the language modeling literature, and adapted to the vision domain by quantizing the space of image patches into a large dictionary. We demonstrate the approach on both a filling and a generation task. For the first time, we show that, after training on natural videos, such a model can predict non-trivial motions over short video sequences.
[ "['MarcAurelio Ranzato' 'Arthur Szlam' 'Joan Bruna' 'Michael Mathieu'\n 'Ronan Collobert' 'Sumit Chopra']", "MarcAurelio Ranzato, Arthur Szlam, Joan Bruna, Michael Mathieu, Ronan\n Collobert, Sumit Chopra" ]
stat.ML cs.LG
null
1412.6606
null
null
http://arxiv.org/pdf/1412.6606v2
2015-02-25T19:13:05Z
2014-12-20T05:14:13Z
Competing with the Empirical Risk Minimizer in a Single Pass
In many estimation problems, e.g. linear and logistic regression, we wish to minimize an unknown objective given only unbiased samples of the objective function. Furthermore, we aim to achieve this using as few samples as possible. In the absence of computational constraints, the minimizer of a sample average of observed data -- commonly referred to as either the empirical risk minimizer (ERM) or the $M$-estimator -- is widely regarded as the estimation strategy of choice due to its desirable statistical convergence properties. Our goal in this work is to perform as well as the ERM, on every problem, while minimizing the use of computational resources such as running time and space usage. We provide a simple streaming algorithm which, under standard regularity assumptions on the underlying problem, enjoys the following properties: * The algorithm can be implemented in linear time with a single pass of the observed data, using space linear in the size of a single sample. * The algorithm achieves the same statistical rate of convergence as the empirical risk minimizer on every problem, even considering constant factors. * The algorithm's performance depends on the initial error at a rate that decreases super-polynomially. * The algorithm is easily parallelizable. Moreover, we quantify the (finite-sample) rate at which the algorithm becomes competitive with the ERM.
[ "Roy Frostig, Rong Ge, Sham M. Kakade, Aaron Sidford", "['Roy Frostig' 'Rong Ge' 'Sham M. Kakade' 'Aaron Sidford']" ]
cs.LG cs.NE
null
1412.6610
null
null
http://arxiv.org/pdf/1412.6610v5
2015-06-15T00:25:47Z
2014-12-20T05:46:05Z
Scoring and Classifying with Gated Auto-encoders
Auto-encoders are perhaps the best-known non-probabilistic methods for representation learning. They are conceptually simple and easy to train. Recent theoretical work has shed light on their ability to capture manifold structure, and drawn connections to density modelling. This has motivated researchers to seek ways of auto-encoder scoring, which has furthered their use in classification. Gated auto-encoders (GAEs) are an interesting and flexible extension of auto-encoders which can learn transformations among different images or pixel covariances within images. However, they have been much less studied, theoretically or empirically. In this work, we apply a dynamical systems view to GAEs, deriving a scoring function, and drawing connections to Restricted Boltzmann Machines. On a set of deep learning benchmarks, we also demonstrate their effectiveness for single and multi-label classification.
[ "Daniel Jiwoong Im and Graham W. Taylor", "['Daniel Jiwoong Im' 'Graham W. Taylor']" ]
cs.LG cs.AI cs.CV stat.ML
null
1412.6614
null
null
http://arxiv.org/pdf/1412.6614v4
2015-04-16T18:48:31Z
2014-12-20T06:52:25Z
In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning
We present experiments demonstrating that some other form of capacity control, different from network size, plays a central role in learning multilayer feed-forward networks. We argue, partially through analogy to matrix factorization, that this is an inductive bias that can help shed light on deep learning.
[ "['Behnam Neyshabur' 'Ryota Tomioka' 'Nathan Srebro']", "Behnam Neyshabur, Ryota Tomioka, Nathan Srebro" ]
stat.ML cs.LG
null
1412.6615
null
null
http://arxiv.org/pdf/1412.6615v4
2015-04-06T21:47:50Z
2014-12-20T06:57:12Z
Explorations on high dimensional landscapes
Finding minima of a real valued non-convex function over a high dimensional space is a major challenge in science. We provide evidence that some such functions that are defined on high dimensional domains have a narrow band of values whose pre-image contains the bulk of its critical points. This is in contrast with the low dimensional picture in which this band is wide. Our simulations agree with the previous theoretical work on spin glasses that proves the existence of such a band when the dimension of the domain tends to infinity. Furthermore our experiments on teacher-student networks with the MNIST dataset establish a similar phenomenon in deep networks. We finally observe that both the gradient descent and the stochastic gradient descent methods can reach this level within the same number of steps.
[ "Levent Sagun, V. Ugur Guney, Gerard Ben Arous, Yann LeCun", "['Levent Sagun' 'V. Ugur Guney' 'Gerard Ben Arous' 'Yann LeCun']" ]
cs.CL cs.LG
null
1412.6616
null
null
http://arxiv.org/pdf/1412.6616v2
2015-02-17T17:31:58Z
2014-12-20T07:07:29Z
Outperforming Word2Vec on Analogy Tasks with Random Projections
We present a distributed vector representation based on a simplification of the BEAGLE system, designed in the context of the Sigma cognitive architecture. Our method does not require gradient-based training of neural networks, matrix decompositions as with LSA, or convolutions as with BEAGLE. All that is involved is a sum of random vectors and their pointwise products. Despite the simplicity of this technique, it gives state-of-the-art results on analogy problems, in most cases better than Word2Vec. To explain this success, we interpret it as a dimension reduction via random projection.
[ "['Abram Demski' 'Volkan Ustun' 'Paul Rosenbloom' 'Cody Kommers']", "Abram Demski, Volkan Ustun, Paul Rosenbloom, Cody Kommers" ]
cs.LG
null
1412.6617
null
null
http://arxiv.org/pdf/1412.6617v6
2015-04-07T20:57:05Z
2014-12-20T07:08:37Z
Understanding Minimum Probability Flow for RBMs Under Various Kinds of Dynamics
Energy-based models are popular in machine learning due to the elegance of their formulation and their relationship to statistical physics. Among these, the Restricted Boltzmann Machine (RBM), and its staple training algorithm contrastive divergence (CD), have been the prototype for some recent advancements in the unsupervised training of deep neural networks. However, CD has limited theoretical motivation, and can in some cases produce undesirable behavior. Here, we investigate the performance of Minimum Probability Flow (MPF) learning for training RBMs. Unlike CD, with its focus on approximating an intractable partition function via Gibbs sampling, MPF proposes a tractable, consistent, objective function defined in terms of a Taylor expansion of the KL divergence with respect to sampling dynamics. Here we propose a more general form for the sampling dynamics in MPF, and explore the consequences of different choices for these dynamics for training RBMs. Experimental results show MPF outperforming CD for various RBM configurations.
[ "Daniel Jiwoong Im, Ethan Buchman, Graham W. Taylor", "['Daniel Jiwoong Im' 'Ethan Buchman' 'Graham W. Taylor']" ]
cs.CV cs.LG cs.NE
null
1412.6618
null
null
http://arxiv.org/pdf/1412.6618v3
2015-05-03T11:26:34Z
2014-12-20T07:08:54Z
Permutohedral Lattice CNNs
This paper presents a convolutional layer that is able to process sparse input features. As an example, for image recognition problems this allows an efficient filtering of signals that do not lie on a dense grid (like pixel position), but of more general features (such as color values). The presented algorithm makes use of the permutohedral lattice data structure. The permutohedral lattice was introduced to efficiently implement a bilateral filter, a commonly used image processing operation. Its use allows for a generalization of the convolution type found in current (spatial) convolutional network architectures.
[ "['Martin Kiefel' 'Varun Jampani' 'Peter V. Gehler']", "Martin Kiefel, Varun Jampani and Peter V. Gehler" ]
cs.LG cs.NE stat.ML
null
1412.6621
null
null
http://arxiv.org/pdf/1412.6621v3
2015-02-28T07:19:35Z
2014-12-20T07:28:46Z
Why does Deep Learning work? - A perspective from Group Theory
Why does Deep Learning work? What representations does it capture? How do higher-order representations emerge? We study these questions from the perspective of group theory, thereby opening a new approach towards a theory of deep learning. One factor behind the recent resurgence of the subject is a key algorithmic step called pre-training: first search for a good generative model for the input samples, and repeat the process one layer at a time. We show deeper implications of this simple principle, by establishing a connection with the interplay of orbits and stabilizers of group actions. Although the neural networks themselves may not form groups, we show the existence of {\em shadow} groups whose elements serve as close approximations. Over the shadow groups, the pre-training step, originally introduced as a mechanism to better initialize a network, becomes equivalent to a search for features with minimal orbits. Intuitively, these features are in a way the {\em simplest}, which explains why a deep learning network learns simple features first. Next, we show how the same principle, when repeated in the deeper layers, can capture higher order representations, and why representation complexity increases as the layers get deeper.
[ "['Arnab Paul' 'Suresh Venkatasubramanian']", "Arnab Paul, Suresh Venkatasubramanian" ]
cs.LG cs.CV stat.ML
null
1412.6622
null
null
http://arxiv.org/pdf/1412.6622v4
2018-12-04T15:35:35Z
2014-12-20T07:34:50Z
Deep metric learning using Triplet network
Deep learning has proven itself as a successful set of models for learning useful semantic representations of data. These, however, are mostly implicitly learned as part of a classification task. In this paper we propose the triplet network model, which aims to learn useful representations by distance comparisons. A similar model was defined by Wang et al. (2014), tailor-made for learning a ranking for image information retrieval. Here we demonstrate using various datasets that our model learns a better representation than that of its immediate competitor, the Siamese network. We also discuss future possible usage as a framework for unsupervised learning.
[ "Elad Hoffer, Nir Ailon", "['Elad Hoffer' 'Nir Ailon']" ]
cs.CL cs.LG
null
1412.6623
null
null
http://arxiv.org/pdf/1412.6623v4
2015-05-01T10:14:58Z
2014-12-20T07:42:40Z
Word Representations via Gaussian Embedding
Current work in lexical distributed representations maps each word to a point vector in low-dimensional space. Mapping instead to a density provides many interesting advantages, including better capturing uncertainty about a representation and its relationships, expressing asymmetries more naturally than dot product or cosine similarity, and enabling more expressive parameterization of decision boundaries. This paper advocates for density-based distributed embeddings and presents a method for learning representations in the space of Gaussian distributions. We compare performance on various word embedding benchmarks, investigate the ability of these embeddings to model entailment and other asymmetric relationships, and explore novel properties of the representation.
[ "Luke Vilnis, Andrew McCallum", "['Luke Vilnis' 'Andrew McCallum']" ]
cs.LG cs.NE stat.ML
null
1412.6630
null
null
http://arxiv.org/pdf/1412.6630v2
2015-01-05T13:28:46Z
2014-12-20T07:59:14Z
Neural Network Regularization via Robust Weight Factorization
Regularization is essential when training large neural networks. As deep neural networks can be mathematically interpreted as universal function approximators, they are effective at memorizing sampling noise in the training data. This results in poor generalization to unseen data. Therefore, it is no surprise that a new regularization technique, Dropout, was partially responsible for the now-ubiquitous winning entry to ImageNet 2012 by the University of Toronto. Currently, Dropout (and related methods such as DropConnect) are the most effective means of regularizing large neural networks. These amount to efficiently visiting a large number of related models at training time, while aggregating them to a single predictor at test time. The proposed FaMe model aims to apply a similar strategy, yet learns a factorization of each weight matrix such that the factors are robust to noise.
[ "Jan Rudy, Weiguang Ding, Daniel Jiwoong Im, Graham W. Taylor", "['Jan Rudy' 'Weiguang Ding' 'Daniel Jiwoong Im' 'Graham W. Taylor']" ]
cs.CV cs.CL cs.LG
null
1412.6632
null
null
http://arxiv.org/pdf/1412.6632v5
2015-06-11T15:26:58Z
2014-12-20T08:10:04Z
Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN)
In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieve significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu/~junhua.mao/m-RNN.html .
[ "['Junhua Mao' 'Wei Xu' 'Yi Yang' 'Jiang Wang' 'Zhiheng Huang'\n 'Alan Yuille']", "Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan Yuille" ]
cs.SD cs.CL cs.LG
null
1412.6645
null
null
http://arxiv.org/pdf/1412.6645v3
2015-04-20T12:35:32Z
2014-12-20T11:54:41Z
Weakly Supervised Multi-Embeddings Learning of Acoustic Models
We trained a Siamese network with multi-task same/different information on a speech dataset, and found that it was possible to share a network for both tasks without a loss in performance. The first task was to discriminate between two same or different words, and the second was to discriminate between two same or different talkers.
[ "['Gabriel Synnaeve' 'Emmanuel Dupoux']", "Gabriel Synnaeve, Emmanuel Dupoux" ]
cs.NE cs.CL cs.LG
null
1412.6650
null
null
http://arxiv.org/pdf/1412.6650v4
2015-07-07T14:54:51Z
2014-12-20T13:06:05Z
Incremental Adaptation Strategies for Neural Network Language Models
It is today acknowledged that neural network language models outperform backoff language models in applications like speech recognition or statistical machine translation. However, training these models on large amounts of data can take several days. We present efficient techniques to adapt a neural network language model to new data. Instead of training a completely new model or relying on mixture approaches, we propose two new methods: continued training on resampled data or insertion of adaptation layers. We present experimental results in a CAT environment where the post-edits of professional translators are used to improve an SMT system. Both methods are very fast and achieve significant improvements without overfitting the small adaptation data.
[ "Aram Ter-Sarkisov, Holger Schwenk, Loic Barrault and Fethi Bougares", "['Aram Ter-Sarkisov' 'Holger Schwenk' 'Loic Barrault' 'Fethi Bougares']" ]
cs.LG stat.ML
null
1412.6651
null
null
http://arxiv.org/pdf/1412.6651v8
2015-10-25T12:12:52Z
2014-12-20T13:22:23Z
Deep learning with Elastic Averaging SGD
We study the problem of stochastic optimization for deep learning in the parallel computing environment under communication constraints. A new algorithm is proposed in this setting where the communication and coordination of work among concurrent processes (local workers) is based on an elastic force which links the parameters they compute with a center variable stored by the parameter server (master). The algorithm enables the local workers to perform more exploration, i.e. the algorithm allows the local variables to fluctuate further from the center variable by reducing the amount of communication between local workers and the master. We empirically demonstrate that in the deep learning setting, due to the existence of many local optima, allowing more exploration can lead to improved performance. We propose synchronous and asynchronous variants of the new algorithm. We provide the stability analysis of the asynchronous variant in the round-robin scheme and compare it with the more common parallelized method ADMM. We show that the stability of EASGD is guaranteed when a simple stability condition is satisfied, which is not the case for ADMM. We additionally propose the momentum-based version of our algorithm that can be applied in both synchronous and asynchronous settings. The asynchronous variant of the algorithm is applied to train convolutional neural networks for image classification on the CIFAR and ImageNet datasets. Experiments demonstrate that the new algorithm accelerates the training of deep architectures compared to DOWNPOUR and other common baseline approaches and furthermore is very communication efficient.
[ "['Sixin Zhang' 'Anna Choromanska' 'Yann LeCun']", "Sixin Zhang, Anna Choromanska, Yann LeCun" ]
stat.ML cs.LG
null
1412.6734
null
null
http://arxiv.org/pdf/1412.6734v1
2014-12-21T06:53:15Z
2014-12-21T06:53:15Z
Implicit Temporal Differences
In reinforcement learning, the TD($\lambda$) algorithm is a fundamental policy evaluation method with an efficient online implementation that is suitable for large-scale problems. One practical drawback of TD($\lambda$) is its sensitivity to the choice of the step-size. It is an empirically well-known fact that a large step-size leads to fast convergence, at the cost of higher variance and risk of instability. In this work, we introduce the implicit TD($\lambda$) algorithm which has the same function and computational cost as TD($\lambda$), but is significantly more stable. We provide a theoretical explanation of this stability and an empirical evaluation of implicit TD($\lambda$) on typical benchmark tasks. Our results show that implicit TD($\lambda$) outperforms standard TD($\lambda$) and a state-of-the-art method that automatically tunes the step-size, and thus shows promise for wide applicability.
[ "['Aviv Tamar' 'Panos Toulis' 'Shie Mannor' 'Edoardo M. Airoldi']", "Aviv Tamar, Panos Toulis, Shie Mannor, Edoardo M. Airoldi" ]
stat.ML cs.LG
null
1412.6741
null
null
http://arxiv.org/pdf/1412.6741v1
2014-12-21T08:30:57Z
2014-12-21T08:30:57Z
Locally Weighted Learning for Naive Bayes Classifier
As a consequence of the strong and usually violated conditional independence assumption (CIA) of naive Bayes (NB) classifier, the performance of NB becomes less and less favorable compared to sophisticated classifiers when the sample size increases. We learn from this phenomenon that when the size of the training data is large, we should either relax the assumption or apply NB to a "reduced" data set, say for example use NB as a local model. The latter approach trades the ignored information for the robustness to the model assumption. In this paper, we consider using NB as a model for locally weighted data. A special weighting function is designed so that if CIA holds for the unweighted data, it also holds for the weighted data. The new method is intuitive and capable of handling class imbalance. It is theoretically more sound than the locally weighted learners of naive Bayes that base classification only on the $k$ nearest neighbors. Empirical study shows that the new method with appropriate choice of parameter outperforms seven existing classifiers of similar nature.
[ "['Kim-Hung Li' 'Cheuk Ting Li']", "Kim-Hung Li and Cheuk Ting Li" ]
cs.LG stat.ML
null
1412.6752
null
null
http://arxiv.org/pdf/1412.6752v1
2014-12-21T09:50:32Z
2014-12-21T09:50:32Z
Correlation of Data Reconstruction Error and Shrinkages in Pair-wise Distances under Principal Component Analysis (PCA)
In this on-going work, I explore certain theoretical and empirical implications of data transformations under the PCA. In particular, I state and prove three theorems about PCA, which I paraphrase as follows: 1) PCA without discarding eigenvector rows is injective, but loses this injectivity when eigenvector rows are discarded. 2) PCA without discarding eigenvector rows preserves pair-wise distances, but tends to cause pair-wise distances to shrink when eigenvector rows are discarded. 3) For any pair of points, the shrinkage in pair-wise distance is bounded above by an L1 norm reconstruction error associated with the points. Clearly, 3) suggests that there might exist some correlation between shrinkages in pair-wise distances and the mean square reconstruction error, which is defined as the sum of those eigenvalues associated with the discarded eigenvectors. I therefore decided to perform numerical experiments to obtain the correlation between the sum of those eigenvalues and shrinkages in pair-wise distances. In addition, I have also performed some experiments to check respectively the effect of the sum of those eigenvalues and the effect of the shrinkages on classification accuracies under the PCA map. So far, I have obtained the following results on some publicly available data from the UCI Machine Learning Repository: 1) There seems to be a strong correlation between the sum of those eigenvalues associated with discarded eigenvectors and shrinkages in pair-wise distances. 2) Neither the sum of those eigenvalues nor the pair-wise distances have any strong correlation with classification accuracies.
[ "Abdulrahman Oladipupo Ibraheem", "['Abdulrahman Oladipupo Ibraheem']" ]
stat.ML cs.LG
10.1007/978-3-319-18038-0_48
1412.6785
null
null
http://arxiv.org/abs/1412.6785v2
2015-03-11T16:18:47Z
2014-12-21T13:40:29Z
Principal Sensitivity Analysis
We present a novel algorithm (Principal Sensitivity Analysis; PSA) to analyze the knowledge of the classifier obtained from supervised machine learning techniques. In particular, we define principal sensitivity map (PSM) as the direction on the input space to which the trained classifier is most sensitive, and use analogously defined k-th PSM to define a basis for the input space. We train neural networks with artificial data and real data, and apply the algorithm to the obtained supervised classifiers. We then visualize the PSMs to demonstrate the PSA's ability to decompose the knowledge acquired by the trained classifiers.
[ "['Sotetsu Koyamada' 'Masanori Koyama' 'Ken Nakae' 'Shin Ishii']", "Sotetsu Koyamada and Masanori Koyama and Ken Nakae and Shin Ishii" ]
cs.LG cs.CV cs.NE
null
1412.6806
null
null
http://arxiv.org/pdf/1412.6806v3
2015-04-13T07:58:17Z
2014-12-21T16:16:37Z
Striving for Simplicity: The All Convolutional Net
Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). To analyze the network we introduce a new variant of the "deconvolution approach" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches.
[ "Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin\n Riedmiller", "['Jost Tobias Springenberg' 'Alexey Dosovitskiy' 'Thomas Brox'\n 'Martin Riedmiller']" ]
stat.ML cs.CV cs.LG
10.1109/TSP.2015.2469637
1412.6808
null
null
http://arxiv.org/abs/1412.6808v2
2015-08-10T02:12:06Z
2014-12-21T16:40:31Z
Learning the nonlinear geometry of high-dimensional data: Models and algorithms
Modern information processing relies on the axiom that high-dimensional data lie near low-dimensional geometric structures. This paper revisits the problem of data-driven learning of these geometric structures and puts forth two new nonlinear geometric models for data describing "related" objects/phenomena. The first one of these models straddles the two extremes of the subspace model and the union-of-subspaces model, and is termed the metric-constrained union-of-subspaces (MC-UoS) model. The second one of these models---suited for data drawn from a mixture of nonlinear manifolds---generalizes the kernel subspace model, and is termed the metric-constrained kernel union-of-subspaces (MC-KUoS) model. The main contributions of this paper in this regard include the following. First, it motivates and formalizes the problems of MC-UoS and MC-KUoS learning. Second, it presents algorithms that efficiently learn an MC-UoS or an MC-KUoS underlying data of interest. Third, it extends these algorithms to the case when parts of the data are missing. Last, but not least, it reports the outcomes of a series of numerical experiments involving both synthetic and real data that demonstrate the superiority of the proposed geometric models and learning algorithms over existing approaches in the literature. These experiments also help clarify the connections between this work and the literature on (subspace and kernel k-means) clustering.
[ "Tong Wu and Waheed U. Bajwa", "['Tong Wu' 'Waheed U. Bajwa']" ]
cs.CL cs.IR cs.LG
null
1412.6815
null
null
http://arxiv.org/pdf/1412.6815v2
2015-02-28T23:57:08Z
2014-12-21T17:38:19Z
Extraction of Salient Sentences from Labelled Documents
We present a hierarchical convolutional document model with an architecture designed to support introspection of the document structure. Using this model, we show how to use visualisation techniques from the computer vision literature to identify and extract topic-relevant sentences. We also introduce a new scalable evaluation technique for automatic sentence extraction systems that avoids the need for time-consuming human annotation of validation data.
[ "['Misha Denil' 'Alban Demiraj' 'Nando de Freitas']", "Misha Denil and Alban Demiraj and Nando de Freitas" ]
stat.ML cs.CV cs.LG math.AT
null
1412.6821
null
null
http://arxiv.org/pdf/1412.6821v1
2014-12-21T19:17:08Z
2014-12-21T19:17:08Z
A Stable Multi-Scale Kernel for Topological Machine Learning
Topological data analysis offers a rich source of valuable information to study vision problems. Yet, so far we lack a theoretically sound connection to popular kernel-based learning techniques, such as kernel SVMs or kernel PCA. In this work, we establish such a connection by designing a multi-scale kernel for persistence diagrams, a stable summary representation of topological features in data. We show that this kernel is positive definite and prove its stability with respect to the 1-Wasserstein distance. Experiments on two benchmark datasets for 3D shape classification/retrieval and texture recognition show considerable performance gains of the proposed method compared to an alternative approach that is based on the recently introduced persistence landscapes.
[ "['Jan Reininghaus' 'Stefan Huber' 'Ulrich Bauer' 'Roland Kwitt']", "Jan Reininghaus, Stefan Huber, Ulrich Bauer, Roland Kwitt" ]
cs.NE cs.CV cs.LG stat.ML
null
1412.6830
null
null
http://arxiv.org/pdf/1412.6830v3
2015-04-21T08:05:02Z
2014-12-21T20:20:21Z
Learning Activation Functions to Improve Deep Neural Networks
Artificial neural networks typically have a fixed, non-linear activation function at each neuron. We have designed a novel form of piecewise linear activation function that is learned independently for each neuron using gradient descent. With this adaptive activation function, we are able to improve upon deep neural network architectures composed of static rectified linear units, achieving state-of-the-art performance on CIFAR-10 (7.51%), CIFAR-100 (30.83%), and a benchmark from high-energy physics involving Higgs boson decay modes.
[ "['Forest Agostinelli' 'Matthew Hoffman' 'Peter Sadowski' 'Pierre Baldi']", "Forest Agostinelli, Matthew Hoffman, Peter Sadowski, Pierre Baldi" ]
cs.CV cs.LG cs.NE
null
1412.6857
null
null
http://arxiv.org/pdf/1412.6857v5
2015-05-12T08:42:42Z
2014-12-22T01:16:50Z
Contour Detection Using Cost-Sensitive Convolutional Neural Networks
We address the problem of contour detection via per-pixel classification of edge points. To facilitate the process, the proposed approach leverages DenseNet, an efficient implementation of multiscale convolutional neural networks (CNNs), to extract an informative feature vector for each pixel, and uses an SVM classifier to accomplish contour detection. The main challenge lies in adapting a pre-trained per-image CNN model for yielding per-pixel image features. We propose to build on the DenseNet architecture to achieve pixelwise fine-tuning and then consider a cost-sensitive strategy to further improve the learning with a small dataset of edge and non-edge image patches. In the experiments on contour detection, we look into the effectiveness of combining per-pixel features from different CNN layers and obtain performance comparable to the state-of-the-art on BSDS500.
[ "Jyh-Jing Hwang and Tyng-Luh Liu", "['Jyh-Jing Hwang' 'Tyng-Luh Liu']" ]
cs.LG cs.CL stat.ML
null
1412.6881
null
null
http://arxiv.org/pdf/1412.6881v3
2015-04-16T19:23:23Z
2014-12-22T06:12:06Z
On Learning Vector Representations in Hierarchical Label Spaces
An important problem in multi-label classification is to capture label patterns or underlying structures that have an impact on such patterns. This paper addresses one such problem, namely how to exploit hierarchical structures over labels. We present a novel method to learn vector representations of a label space given a hierarchy of labels and label co-occurrence patterns. Our experimental results demonstrate qualitatively that the proposed method is able to learn regularities among labels by exploiting a label hierarchy as well as label co-occurrences. It highlights the importance of the hierarchical information in order to obtain regularities which facilitate analogical reasoning over a label space. We also experimentally illustrate the dependency of the learned representations on the label hierarchy.
[ "['Jinseok Nam' 'Johannes Fürnkranz']", "Jinseok Nam and Johannes F\\\"urnkranz" ]
cs.CV cs.LG cs.NE
null
1412.6885
null
null
http://arxiv.org/pdf/1412.6885v1
2014-12-22T06:43:58Z
2014-12-22T06:43:58Z
Half-CNN: A General Framework for Whole-Image Regression
The Convolutional Neural Network (CNN) has achieved great success in image classification. The classification model can also be utilized at image or patch level for many other applications, such as object detection and segmentation. In this paper, we propose a whole-image CNN regression model, by removing the fully connected layer and training the network with continuous feature maps. This is a generic regression framework that fits many applications. We demonstrate this method through two tasks: simultaneous face detection & segmentation, and scene saliency prediction. The results are comparable with other models in the respective fields, using only a small-scale network. Since the regression model is trained on corresponding image / feature map pairs, there are no requirements on uniform input size as opposed to the classification model. Our framework avoids classifier design, a process that may introduce too much manual intervention in model development. Yet, it is highly correlated to the classification network and offers some in-depth review of CNN structures.
[ "Jun Yuan, Bingbing Ni, Ashraf A.Kassim", "['Jun Yuan' 'Bingbing Ni' 'Ashraf A. Kassim']" ]
cs.LG
null
1412.6980
null
null
http://arxiv.org/pdf/1412.6980v9
2017-01-30T01:27:54Z
2014-12-22T13:54:29Z
Adam: A Method for Stochastic Optimization
We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has low memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.
[ "Diederik P. Kingma and Jimmy Ba", "['Diederik P. Kingma' 'Jimmy Ba']" ]
cs.LG cs.NE stat.ML
null
1412.7003
null
null
http://arxiv.org/pdf/1412.7003v3
2014-12-30T06:32:47Z
2014-12-22T14:46:26Z
A Bayesian encourages dropout
Dropout is one of the key techniques for preventing learning from overfitting. It is commonly explained as a kind of modified L2 regularization. Here, we shed light on dropout from a Bayesian standpoint. The Bayesian interpretation enables us to optimize the dropout rate, which is beneficial for learning the weight parameters and for prediction after learning. The experimental results also support optimizing the dropout rate.
[ "Shin-ichi Maeda", "['Shin-ichi Maeda']" ]
cs.CL cs.LG
null
1412.7004
null
null
http://arxiv.org/pdf/1412.7004v2
2015-04-10T17:33:57Z
2014-12-22T14:49:19Z
Tailoring Word Embeddings for Bilexical Predictions: An Experimental Comparison
We investigate the problem of inducing word embeddings that are tailored for a particular bilexical relation. Our learning algorithm takes an existing lexical vector space and compresses it such that the resulting word embeddings are good predictors for a target bilexical relation. In experiments we show that task-specific embeddings can benefit both the quality and efficiency in lexical prediction tasks.
[ "['Pranava Swaroop Madhyastha' 'Xavier Carreras' 'Ariadna Quattoni']", "Pranava Swaroop Madhyastha, Xavier Carreras, Ariadna Quattoni" ]
cs.CV cs.LG cs.NE
10.1109/HPEC.2015.7322485
1412.7006
null
null
http://arxiv.org/abs/1412.7006v2
2015-07-08T01:14:14Z
2014-12-22T14:54:53Z
Multi-modal Sensor Registration for Vehicle Perception via Deep Neural Networks
The ability to simultaneously leverage multiple modes of sensor information is critical for perception of an automated vehicle's physical surroundings. Spatio-temporal alignment, or registration, of the incoming information is often a prerequisite to analyzing the fused data. The persistence and reliability of multi-modal registration is therefore the key to the stability of decision support systems ingesting the fused information. LiDAR-video systems, like those on many driverless cars, are a common example of where keeping the LiDAR and video channels registered to common physical features is important. We develop a deep learning method that takes multiple channels of heterogeneous data to detect the misalignment of the LiDAR-video inputs. A number of variations were tested on the Ford LiDAR-video driving test data set and will be discussed. To the best of our knowledge, the use of multi-modal deep convolutional neural networks for dynamic real-time LiDAR-video registration has not been presented.
[ "['Michael Giering' 'Vivek Venugopalan' 'Kishore Reddy']", "Michael Giering, Vivek Venugopalan, Kishore Reddy" ]
cs.CV cs.LG cs.NE
null
1412.7007
null
null
http://arxiv.org/pdf/1412.7007v3
2015-07-08T01:07:23Z
2014-12-22T14:55:17Z
Occlusion Edge Detection in RGB-D Frames using Deep Convolutional Networks
Occlusion edges in images, which correspond to range discontinuities in the scene from the point of view of the observer, are an important prerequisite for many vision and mobile robot tasks. Although they can be extracted from range data, extracting them from images and videos would be extremely beneficial. We trained a deep convolutional neural network (CNN) to identify occlusion edges in images and videos with both RGB-D and RGB inputs. The use of a CNN avoids hand-crafting of features for automatically isolating occlusion edges and distinguishing them from appearance edges. In addition to quantitative occlusion edge detection results, qualitative results are provided to demonstrate the trade-off between high resolution analysis and frame-level computation time, which is critical for real-time robotics applications.
[ "Soumik Sarkar, Vivek Venugopalan, Kishore Reddy, Michael Giering,\n Julian Ryde, Navdeep Jaitly", "['Soumik Sarkar' 'Vivek Venugopalan' 'Kishore Reddy' 'Michael Giering'\n 'Julian Ryde' 'Navdeep Jaitly']" ]
cs.NE cs.LG
null
1412.7009
null
null
http://arxiv.org/pdf/1412.7009v3
2015-04-09T01:54:33Z
2014-12-22T14:57:05Z
Generative Class-conditional Autoencoders
Recent work by Bengio et al. (2013) proposes a sampling procedure for denoising autoencoders which involves learning the transition operator of a Markov chain. The transition operator is typically unimodal, which limits its capacity to model complex data. In order to perform efficient sampling from conditional distributions, we extend this work, both theoretically and algorithmically, to gated autoencoders (Memisevic, 2013). The proposed model is able to generate convincing class-conditional samples when trained on both the MNIST and TFD datasets.
[ "['Jan Rudy' 'Graham Taylor']", "Jan Rudy, Graham Taylor" ]
cs.SD cs.LG
null
1412.7022
null
null
http://arxiv.org/pdf/1412.7022v3
2015-04-28T02:24:14Z
2014-12-22T15:15:44Z
Audio Source Separation with Discriminative Scattering Networks
In this report we describe an ongoing line of research for solving single-channel source separation problems. Many monaural signal decomposition techniques proposed in the literature operate on a feature space consisting of a time-frequency representation of the input data. A challenge faced by these approaches is to effectively exploit the temporal dependencies of the signals at scales larger than the duration of a time-frame. In this work we propose to tackle this problem by modeling the signals using a time-frequency representation with multiple temporal resolutions. The proposed representation consists of a pyramid of wavelet scattering operators, which generalizes Constant Q Transforms (CQT) with extra layers of convolution and complex modulus. We first show that learning standard models with this multi-resolution setting improves source separation results over fixed-resolution methods. As a case study, we use Non-Negative Matrix Factorization (NMF), which has been widely considered in many audio applications. Then, we investigate the inclusion of the proposed multi-resolution setting into a discriminative training regime. We discuss several alternatives using different deep neural network architectures.
[ "['Pablo Sprechmann' 'Joan Bruna' 'Yann LeCun']", "Pablo Sprechmann, Joan Bruna, Yann LeCun" ]
cs.LG cs.CV cs.NE
null
1412.7024
null
null
http://arxiv.org/pdf/1412.7024v5
2015-09-23T01:00:44Z
2014-12-22T15:22:45Z
Training deep neural networks with low precision multiplications
Multipliers are the most space and power-hungry arithmetic operators of the digital implementation of deep neural networks. We train a set of state-of-the-art neural networks (Maxout networks) on three benchmark datasets: MNIST, CIFAR-10 and SVHN. They are trained with three distinct formats: floating point, fixed point and dynamic fixed point. For each of those datasets and for each of those formats, we assess the impact of the precision of the multiplications on the final error after training. We find that very low precision is sufficient not just for running trained networks but also for training them. For example, it is possible to train Maxout networks with 10 bits multiplications.
[ "['Matthieu Courbariaux' 'Yoshua Bengio' 'Jean-Pierre David']", "Matthieu Courbariaux, Yoshua Bengio and Jean-Pierre David" ]
cs.CL cs.LG
null
1412.7026
null
null
http://arxiv.org/pdf/1412.7026v2
2015-02-27T08:02:49Z
2014-12-22T15:34:43Z
Language Recognition using Random Indexing
Random Indexing is a simple implementation of Random Projections with a wide range of applications. It can solve a variety of problems with good accuracy without introducing much complexity. Here we use it for identifying the language of text samples. We present a novel method of generating language representation vectors using letter blocks. Further, we show that the method is easily implemented and requires little computational power and space. Experiments on a number of model parameters illustrate certain properties of high dimensional sparse vector representations of data. The statistical relevance of the language vectors is demonstrated through the very high success rates on various language recognition tasks. On a difficult data set of 21,000 short sentences from 21 different languages, our model performs a language recognition task and achieves 97.8% accuracy, comparable to state-of-the-art methods.
[ "['Aditya Joshi' 'Johan Halseth' 'Pentti Kanerva']", "Aditya Joshi, Johan Halseth, Pentti Kanerva" ]
cs.LG cs.CL cs.NE
null
1412.7028
null
null
http://arxiv.org/pdf/1412.7028v4
2015-04-10T21:57:49Z
2014-12-22T15:40:31Z
Joint RNN-Based Greedy Parsing and Word Composition
This paper introduces a greedy parser based on neural networks, which leverages a new compositional sub-tree representation. The greedy parser and the composition procedure are jointly trained and tightly depend on each other. The composition procedure outputs a vector representation that summarizes sub-trees both syntactically (parsing tags) and semantically (words). Composition and tagging are performed over continuous (word or tag) representations using recurrent neural networks. We reach F1 performance on par with well-known existing parsers, while having the advantage of speed thanks to the greedy nature of the parser. We provide a fully functional implementation of the method described in this paper.
[ "Jo\\\"el Legrand and Ronan Collobert", "['Joël Legrand' 'Ronan Collobert']" ]
cs.CV cs.LG cs.NE
null
1412.7054
null
null
http://arxiv.org/pdf/1412.7054v3
2015-04-11T01:45:56Z
2014-12-22T17:06:07Z
Attention for Fine-Grained Categorization
This paper presents experiments extending the work of Ba et al. (2014) on recurrent neural models for attention to less constrained visual environments, specifically fine-grained categorization on the Stanford Dogs data set. In this work we use an RNN of the same structure but substitute a more powerful visual network and perform large-scale pre-training of the visual network outside of the attention RNN. Most work on attention models to date focuses on tasks in toy or more constrained visual environments, whereas we present fine-grained categorization results that surpass the state-of-the-art GoogLeNet classification model. We show that our model learns to direct high-resolution attention to the most discriminative regions without any spatial supervision such as bounding boxes, and that it is able to discriminate fine-grained dog breeds moderately well even when given only an initial low-resolution context image and narrow, inexpensive glimpses at faces and fur patterns. This and similar attention models have the major advantage of being trained end-to-end, as opposed to other current detection and recognition pipelines with hand-engineered components where information is lost. While our model is state-of-the-art, further work is needed to fully leverage the sequential input.
[ "['Pierre Sermanet' 'Andrea Frome' 'Esteban Real']", "Pierre Sermanet, Andrea Frome, Esteban Real" ]
cs.LG cs.CV cs.IT math.IT stat.ML
null
1412.7056
null
null
http://arxiv.org/pdf/1412.7056v2
2015-02-22T02:19:29Z
2014-12-22T17:06:44Z
Clustering multi-way data: a novel algebraic approach
In this paper, we develop a method for unsupervised clustering of two-way (matrix) data by combining two recent innovations from different fields: the Sparse Subspace Clustering (SSC) algorithm [10], which groups points coming from a union of subspaces into their respective subspaces, and the t-product [18], which was introduced to provide a matrix-like multiplication for third-order tensors. Our algorithm is analogous to SSC in that an "affinity" between different data points is built using a sparse self-representation of the data. Unlike SSC, we employ the t-product in the self-representation. This allows us more flexibility in modeling; in fact, SSC is a special case of our method. When using the t-product, three-way arrays are treated as matrices whose elements (scalars) are n-tuples or tubes. Convolutions take the place of scalar multiplication. This framework allows us to embed the 2-D data into a vector-space-like structure called a free module over a commutative ring. These free modules retain many properties of complex inner-product spaces, and we leverage this to provide theoretical guarantees for our algorithm. We show that, compared to its vector-space counterpart, our method (SSmC) achieves higher accuracy and is better able to cluster data with less preprocessing in some image clustering problems. In particular, we show the performance of the proposed method on the Weizmann face database, the Extended Yale B face database and the MNIST handwritten digits database.
[ "['Eric Kernfeld' 'Shuchin Aeron' 'Misha Kilmer']", "Eric Kernfeld and Shuchin Aeron and Misha Kilmer" ]
cs.CV cs.LG cs.NE
null
1412.7062
null
null
http://arxiv.org/pdf/1412.7062v4
2016-06-07T04:00:08Z
2014-12-22T17:18:33Z
Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs
Deep Convolutional Neural Networks (DCNNs) have recently shown state-of-the-art performance in high-level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models to address the task of pixel-level classification (also called "semantic image segmentation"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high-level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our "DeepLab" system is able to localize segment boundaries at a level of accuracy beyond previous methods. Quantitatively, our method sets a new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6% IOU accuracy on the test set. We show how these results can be obtained efficiently: careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.
[ "Liang-Chieh Chen and George Papandreou and Iasonas Kokkinos and Kevin\n Murphy and Alan L. Yuille", "['Liang-Chieh Chen' 'George Papandreou' 'Iasonas Kokkinos' 'Kevin Murphy'\n 'Alan L. Yuille']" ]
cs.CL cs.LG cs.NE
null
1412.7063
null
null
http://arxiv.org/pdf/1412.7063v5
2015-04-15T20:07:50Z
2014-12-22T17:19:56Z
Diverse Embedding Neural Network Language Models
We propose Diverse Embedding Neural Network (DENN), a novel architecture for language models (LMs). A DENNLM projects the input word history vector onto multiple diverse low-dimensional sub-spaces instead of a single higher-dimensional sub-space as in conventional feed-forward neural network LMs. We encourage these sub-spaces to be diverse during network training through an augmented loss function. Our language modeling experiments on the Penn Treebank data set show the performance benefit of using a DENNLM.
[ "Kartik Audhkhasi, Abhinav Sethy, Bhuvana Ramabhadran", "['Kartik Audhkhasi' 'Abhinav Sethy' 'Bhuvana Ramabhadran']" ]
cs.NE cs.CL cs.LG
null
1412.7091
null
null
http://arxiv.org/pdf/1412.7091v3
2015-07-14T01:27:13Z
2014-12-22T18:51:08Z
Efficient Exact Gradient Update for training Deep Networks with Very Large Sparse Targets
An important class of problems involves training deep neural networks with sparse prediction targets of very high dimension D. These occur naturally in e.g. neural language models or the learning of word-embeddings, often posed as predicting the probability of next words among a vocabulary of size D (e.g. 200 000). Computing the equally large, but typically non-sparse D-dimensional output vector from a last hidden layer of reasonable dimension d (e.g. 500) incurs a prohibitive O(Dd) computational cost for each example, as does updating the D x d output weight matrix and computing the gradient needed for backpropagation to previous layers. While efficient handling of large sparse network inputs is trivial, the case of large sparse targets is not, and has thus so far been sidestepped with approximate alternatives such as hierarchical softmax or sampling-based approximations during training. In this work we develop an original algorithmic approach which, for a family of loss functions that includes squared error and spherical softmax, can compute the exact loss, gradient update for the output weights, and gradient for backpropagation, all in O(d^2) per example instead of O(Dd), remarkably without ever computing the D-dimensional output. The proposed algorithm yields a speedup of D/4d, i.e. two orders of magnitude for typical sizes, for that critical part of the computations that often dominates the training time in this kind of network architecture.
[ "Pascal Vincent, Alexandre de Br\\'ebisson, Xavier Bouthillier", "['Pascal Vincent' 'Alexandre de Brébisson' 'Xavier Bouthillier']" ]
cs.LG cs.CL cs.NE
null
1412.7110
null
null
http://arxiv.org/pdf/1412.7110v6
2015-04-16T08:29:14Z
2014-12-22T19:46:01Z
Learning linearly separable features for speech recognition using convolutional neural networks
Automatic speech recognition systems usually rely on spectral-based features, such as MFCC or PLP. These features are extracted based on prior knowledge such as speech perception and/or speech production. Recently, convolutional neural networks have been shown to be able to estimate phoneme conditional probabilities in a completely data-driven manner, i.e. using the raw temporal speech signal directly as input. This system was shown to yield similar or better performance than HMM/ANN based systems on a phoneme recognition task and on a large-scale continuous speech recognition task, while using fewer parameters. Motivated by these studies, we investigate the use of a simple linear classifier in the CNN-based framework, so that the network learns linearly separable features from raw speech. We show that such a system yields similar or better performance than an MLP-based system using cepstral features as input.
[ "Dimitri Palaz, Mathew Magimai Doss and Ronan Collobert", "['Dimitri Palaz' 'Mathew Magimai Doss' 'Ronan Collobert']" ]