categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
stat.ML cs.LG cs.NE
null
1502.02072
null
null
http://arxiv.org/pdf/1502.02072v1
2015-02-06T23:04:01Z
2015-02-06T23:04:01Z
Massively Multitask Networks for Drug Discovery
Massively multitask neural architectures provide a learning framework for drug discovery that synthesizes information from many distinct biological sources. To train these architectures at scale, we gather large amounts of data from public sources to create a dataset of nearly 40 million measurements across more than 200 biological targets. We investigate several aspects of the multitask framework by performing a series of empirical studies and obtain some interesting results: (1) massively multitask networks obtain predictive accuracies significantly better than single-task methods, (2) the predictive power of multitask networks improves as additional tasks and data are added, (3) the total amount of data and the total number of tasks both contribute significantly to multitask improvement, and (4) multitask networks afford limited transferability to tasks not in the training set. Our results underscore the need for greater data sharing and further algorithmic innovation to accelerate the drug discovery process.
[ "['Bharath Ramsundar' 'Steven Kearnes' 'Patrick Riley' 'Dale Webster'\n 'David Konerding' 'Vijay Pande']", "Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David\n Konerding, Vijay Pande" ]
cs.LG cs.CV physics.chem-ph physics.comp-ph quant-ph
null
1502.02077
null
null
http://arxiv.org/pdf/1502.02077v3
2016-05-20T14:02:49Z
2015-02-06T23:55:13Z
Quantum Energy Regression using Scattering Transforms
We present a novel approach to the regression of quantum mechanical energies based on a scattering transform of an intermediate electron density representation. A scattering transform is a deep convolution network computed with a cascade of multiscale wavelet transforms. It possesses appropriate invariant and stability properties for quantum energy regression. This new framework removes fundamental limitations of Coulomb matrix based energy regressions, and numerical experiments give state-of-the-art accuracy over planar molecules.
[ "Matthew Hirn and Nicolas Poilvert and St\\'ephane Mallat", "['Matthew Hirn' 'Nicolas Poilvert' 'Stéphane Mallat']" ]
cs.MM cs.LG cs.MA
null
1502.02125
null
null
http://arxiv.org/pdf/1502.02125v2
2015-03-24T12:04:41Z
2015-02-07T11:14:10Z
Contextual Online Learning for Multimedia Content Aggregation
The last decade has witnessed a tremendous growth in the volume as well as the diversity of multimedia content generated by a multitude of sources (news agencies, social media, etc.). Faced with a variety of content choices, consumers are exhibiting diverse preferences for content; their preferences often depend on the context in which they consume content as well as various exogenous events. To satisfy the consumers' demand for such diverse content, multimedia content aggregators (CAs) have emerged which gather content from numerous multimedia sources. A key challenge for such systems is to accurately predict what type of content each of its consumers prefers in a certain context, and adapt these predictions to the evolving consumers' preferences, contexts and content characteristics. We propose a novel, distributed, online multimedia content aggregation framework, which gathers content generated by multiple heterogeneous producers to fulfill its consumers' demand for content. Since both the multimedia content characteristics and the consumers' preferences and contexts are unknown, the optimal content aggregation strategy is unknown a priori. Our proposed content aggregation algorithm is able to learn online what content to gather and how to match content and users by exploiting similarities between consumer types. We prove bounds for our proposed learning algorithms that guarantee both the accuracy of the predictions as well as the learning speed. Importantly, our algorithms operate efficiently even when feedback from consumers is missing or content and preferences evolve over time. Illustrative results highlight the merits of the proposed content aggregation system in a variety of settings.
[ "Cem Tekin and Mihaela van der Schaar", "['Cem Tekin' 'Mihaela van der Schaar']" ]
cs.LG stat.ML
null
1502.02127
null
null
http://arxiv.org/pdf/1502.02127v2
2015-04-06T15:44:52Z
2015-02-07T11:46:22Z
Hyperparameter Search in Machine Learning
We introduce the hyperparameter search problem in the field of machine learning and discuss its main challenges from an optimization perspective. Machine learning methods attempt to build models that capture some element of interest based on given data. Most common learning algorithms feature a set of hyperparameters that must be determined before training commences. The choice of hyperparameters can significantly affect the resulting model's performance, but determining good values can be complex; hence a disciplined, theoretically sound search strategy is essential.
[ "['Marc Claesen' 'Bart De Moor']", "Marc Claesen and Bart De Moor" ]
cs.LG
null
1502.02158
null
null
http://arxiv.org/pdf/1502.02158v1
2015-02-07T16:21:28Z
2015-02-07T16:21:28Z
Learning Parametric-Output HMMs with Two Aliased States
In various applications involving hidden Markov models (HMMs), some of the hidden states are aliased, having identical output distributions. The minimality, identifiability and learnability of such aliased HMMs have been long standing problems, with only partial solutions provided thus far. In this paper we focus on parametric-output HMMs, whose output distributions come from a parametric family, and that have exactly two aliased states. For this class, we present a complete characterization of their minimality and identifiability. Furthermore, for a large family of parametric output distributions, we derive computationally efficient and statistically consistent algorithms to detect the presence of aliasing and learn the aliased HMM transition and emission parameters. We illustrate our theoretical analysis by several simulations.
[ "Roi Weiss, Boaz Nadler", "['Roi Weiss' 'Boaz Nadler']" ]
cs.LG stat.ML
null
1502.02206
null
null
http://arxiv.org/pdf/1502.02206v2
2015-05-20T05:48:10Z
2015-02-08T03:18:50Z
Learning to Search Better Than Your Teacher
Methods for learning to search for structured prediction typically imitate a reference policy, with existing theoretical guarantees demonstrating low regret compared to that reference. This is unsatisfactory in many applications where the reference policy is suboptimal and the goal of learning is to improve upon it. Can learning to search work even when the reference is poor? We provide a new learning to search algorithm, LOLS, which does well relative to the reference policy, but additionally guarantees low regret compared to deviations from the learned policy: a local-optimality guarantee. Consequently, LOLS can improve upon the reference policy, unlike previous algorithms. This enables us to develop structured contextual bandits, a partial information structured prediction setting with many potential applications.
[ "['Kai-Wei Chang' 'Akshay Krishnamurthy' 'Alekh Agarwal' 'Hal Daumé III'\n 'John Langford']", "Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daum\\'e III,\n John Langford" ]
cs.LG cs.CY cs.SE
null
1502.02215
null
null
http://arxiv.org/pdf/1502.02215v1
2015-02-08T06:18:55Z
2015-02-08T06:18:55Z
Real World Applications of Machine Learning Techniques over Large Mobile Subscriber Datasets
Communication Service Providers (CSPs) are in a unique position to utilize their vast transactional data assets generated from interactions of subscribers with network elements as well as with other subscribers. CSPs could leverage these data assets for a gamut of applications such as service personalization, predictive offer management, loyalty management, revenue forecasting, network capacity planning, product bundle optimization and churn management to gain significant competitive advantage. However, due to the sheer data volume, variety, velocity and veracity of mobile subscriber datasets, sophisticated data analytics techniques and frameworks are necessary to derive actionable insights in a usable timeframe. In this paper, we describe our journey from a relational database management system (RDBMS) based campaign management solution, which allowed data scientists and marketers to use hand-written rules for service personalization and targeted promotions, to a distributed Big Data Analytics platform capable of performing large scale machine learning and data mining to deliver real time service personalization, predictive modelling and product optimization. Our work involves a careful blend of technology, processes and best practices, which facilitate man-machine collaboration and continuous experimentation to derive measurable economic value from data. Our platform has a reach of more than 500 million mobile subscribers worldwide, delivering over 1 billion personalized recommendations annually, processing a total data volume of 64 Petabytes, corresponding to 8.5 trillion events.
[ "Jobin Wilson, Chitharanj Kachappilly, Rakesh Mohan, Prateek Kapadia,\n Arun Soman, Santanu Chaudhury", "['Jobin Wilson' 'Chitharanj Kachappilly' 'Rakesh Mohan' 'Prateek Kapadia'\n 'Arun Soman' 'Santanu Chaudhury']" ]
stat.ML cs.LG cs.RO cs.SY
null
1502.02251
null
null
http://arxiv.org/pdf/1502.02251v3
2015-06-18T16:59:43Z
2015-02-08T13:57:59Z
From Pixels to Torques: Policy Learning with Deep Dynamical Models
Data-efficient learning in continuous state-action spaces using very high-dimensional observations remains a key challenge in developing fully autonomous systems. In this paper, we consider one instance of this challenge, the pixels to torques problem, where an agent must learn a closed-loop control policy from pixel information only. We introduce a data-efficient, model-based reinforcement learning algorithm that learns such a closed-loop policy directly from pixel information. The key ingredient is a deep dynamical model that uses deep auto-encoders to learn a low-dimensional embedding of images jointly with a predictive model in this low-dimensional feature space. Joint learning ensures that not only static but also dynamic properties of the data are accounted for. This is crucial for long-term predictions, which lie at the core of the adaptive model predictive control strategy that we use for closed-loop control. Compared to state-of-the-art reinforcement learning methods for continuous states and actions, our approach learns quickly, scales to high-dimensional state spaces and is an important step toward fully autonomous learning from pixels to torques.
[ "Niklas Wahlstr\\\"om and Thomas B. Sch\\\"on and Marc Peter Deisenroth", "['Niklas Wahlström' 'Thomas B. Schön' 'Marc Peter Deisenroth']" ]
stat.ML cs.LG
null
1502.02259
null
null
http://arxiv.org/pdf/1502.02259v1
2015-02-08T14:58:50Z
2015-02-08T14:58:50Z
Contextual Markov Decision Processes
We consider a planning problem where the dynamics and rewards of the environment depend on a hidden static parameter referred to as the context. The objective is to learn a strategy that maximizes the accumulated reward across all contexts. The new model, called Contextual Markov Decision Process (CMDP), can model a customer's behavior when interacting with a website (the learner). The customer's behavior depends on gender, age, location, device, etc. Based on that behavior, the website objective is to determine customer characteristics, and to optimize the interaction between them. Our work focuses on one basic scenario--finite horizon with a small known number of possible contexts. We suggest a family of algorithms with provable guarantees that learn the underlying models and the latent contexts, and optimize the CMDPs. Bounds are obtained for specific naive implementations, and extensions of the framework are discussed, laying the ground for future research.
[ "['Assaf Hallak' 'Dotan Di Castro' 'Shie Mannor']", "Assaf Hallak, Dotan Di Castro and Shie Mannor" ]
cs.LG
null
1502.02268
null
null
http://arxiv.org/pdf/1502.02268v1
2015-02-08T16:34:41Z
2015-02-08T16:34:41Z
SDNA: Stochastic Dual Newton Ascent for Empirical Risk Minimization
We propose a new algorithm for minimizing regularized empirical loss: Stochastic Dual Newton Ascent (SDNA). Our method is dual in nature: in each iteration we update a random subset of the dual variables. However, unlike existing methods such as stochastic dual coordinate ascent, SDNA is capable of utilizing all curvature information contained in the examples, which leads to striking improvements in both theory and practice - sometimes by orders of magnitude. In the special case when an L2-regularizer is used in the primal, the dual problem is a concave quadratic maximization problem plus a separable term. In this regime, SDNA in each step solves a proximal subproblem involving a random principal submatrix of the Hessian of the quadratic function; whence the name of the method. If, in addition, the loss functions are quadratic, our method can be interpreted as a novel variant of the recently introduced Iterative Hessian Sketch.
[ "['Zheng Qu' 'Peter Richtárik' 'Martin Takáč' 'Olivier Fercoq']", "Zheng Qu and Peter Richt\\'arik and Martin Tak\\'a\\v{c} and Olivier\n Fercoq" ]
cs.LG
null
1502.02322
null
null
http://arxiv.org/pdf/1502.02322v2
2015-04-02T03:55:51Z
2015-02-09T01:12:11Z
Rademacher Observations, Private Data, and Boosting
The minimization of the logistic loss is a popular approach to batch supervised learning. Our paper starts from the surprising observation that, when fitting linear (or kernelized) classifiers, the minimization of the logistic loss is \textit{equivalent} to the minimization of an exponential \textit{rado}-loss computed (i) over transformed data that we call Rademacher observations (rados), and (ii) over the \textit{same} classifier as the one of the logistic loss. Thus, a classifier learnt from rados can be \textit{directly} used to classify \textit{observations}. We provide a learning algorithm over rados with boosting-compliant convergence rates on the \textit{logistic loss} (computed over examples). Experiments on domains with up to millions of examples, backed up by theoretical arguments, display that learning over a small set of random rados can challenge the state of the art that learns over the \textit{complete} set of examples. We show that rados comply with various privacy requirements that make them good candidates for machine learning in a privacy framework. We give several algebraic, geometric and computational hardness results on reconstructing examples from rados. We also show how it is possible to craft, and efficiently learn from, rados in a differential privacy framework. Tests reveal that learning from differentially private rados can compete with learning from random rados, and hence with batch learning from examples, achieving non-trivial privacy vs accuracy tradeoffs.
[ "Richard Nock and Giorgio Patrini and Arik Friedman", "['Richard Nock' 'Giorgio Patrini' 'Arik Friedman']" ]
stat.ML cs.CV cs.LG
null
1502.02330
null
null
http://arxiv.org/pdf/1502.02330v1
2015-02-09T01:58:27Z
2015-02-09T01:58:27Z
Tensor Canonical Correlation Analysis for Multi-view Dimension Reduction
Canonical correlation analysis (CCA) has proven an effective tool for two-view dimension reduction due to its profound theoretical foundation and success in practical applications. With respect to multi-view learning, however, it is limited to handling data represented by two-view features, while in many real-world applications the number of views is often much larger. Although the ad hoc way of simultaneously exploring all possible pairs of features can numerically deal with multi-view data, it ignores the high order statistics (correlation information) which can only be discovered by simultaneously exploring all features. Therefore, in this work, we develop tensor CCA (TCCA), which straightforwardly yet naturally generalizes CCA to handle data with an arbitrary number of views by analyzing the covariance tensor of the different views. TCCA aims to directly maximize the canonical correlation of multiple (more than two) views. Crucially, we prove that the multi-view canonical correlation maximization problem is equivalent to finding the best rank-1 approximation of the data covariance tensor, which can be solved efficiently using the well-known alternating least squares (ALS) algorithm. As a consequence, the high order correlation information contained in the different views is explored and thus a more reliable common subspace shared by all features can be obtained. In addition, a non-linear extension of TCCA is presented. Experiments on various challenging tasks, including large scale biometric structure prediction, internet advertisement classification and web image annotation, demonstrate the effectiveness of the proposed method.
[ "['Yong Luo' 'Dacheng Tao' 'Yonggang Wen' 'Kotagiri Ramamohanarao'\n 'Chao Xu']", "Yong Luo, Dacheng Tao, Yonggang Wen, Kotagiri Ramamohanarao, Chao Xu" ]
cs.LG stat.ML
null
1502.02362
null
null
http://arxiv.org/pdf/1502.02362v2
2015-05-20T23:29:49Z
2015-02-09T05:09:25Z
Counterfactual Risk Minimization: Learning from Logged Bandit Feedback
We develop a learning principle and an efficient algorithm for batch learning from logged bandit feedback. This learning setting is ubiquitous in online systems (e.g., ad placement, web search, recommendation), where an algorithm makes a prediction (e.g., ad ranking) for a given input (e.g., query) and observes bandit feedback (e.g., user clicks on presented ads). We first address the counterfactual nature of the learning problem through propensity scoring. Next, we prove generalization error bounds that account for the variance of the propensity-weighted empirical risk estimator. These constructive bounds give rise to the Counterfactual Risk Minimization (CRM) principle. We show how CRM can be used to derive a new learning method -- called Policy Optimizer for Exponential Models (POEM) -- for learning stochastic linear rules for structured output prediction. We present a decomposition of the POEM objective that enables efficient stochastic gradient optimization. POEM is evaluated on several multi-label classification problems showing substantially improved robustness and generalization performance compared to the state-of-the-art.
[ "Adith Swaminathan and Thorsten Joachims", "['Adith Swaminathan' 'Thorsten Joachims']" ]
cs.NE cs.LG stat.ML
null
1502.02367
null
null
http://arxiv.org/pdf/1502.02367v4
2015-06-17T06:26:21Z
2015-02-09T05:25:54Z
Gated Feedback Recurrent Neural Networks
In this work, we propose a novel recurrent neural network (RNN) architecture. The proposed RNN, gated-feedback RNN (GF-RNN), extends the existing approach of stacking multiple recurrent layers by allowing and controlling signals flowing from upper recurrent layers to lower layers using a global gating unit for each pair of layers. The recurrent signals exchanged between layers are gated adaptively based on the previous hidden states and the current input. We evaluated the proposed GF-RNN with different types of recurrent units, such as tanh, long short-term memory and gated recurrent units, on the tasks of character-level language modeling and Python program evaluation. Our empirical evaluation of different RNN units revealed that in both tasks the GF-RNN outperforms the conventional approaches to building deep stacked RNNs. We suggest that the improvement arises because the GF-RNN can adaptively assign different layers to different timescales and layer-to-layer interactions (including the top-down ones which are not usually present in a stacked RNN) by learning to gate these interactions.
[ "['Junyoung Chung' 'Caglar Gulcehre' 'Kyunghyun Cho' 'Yoshua Bengio']", "Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho and Yoshua Bengio" ]
cs.LG stat.ML
10.1007/s00521-016-2269-9
1502.02377
null
null
http://arxiv.org/abs/1502.02377v2
2016-03-14T18:13:28Z
2015-02-09T06:42:02Z
Sparse Coding with Earth Mover's Distance for Multi-Instance Histogram Representation
Sparse coding (Sc) has been studied very well as a powerful data representation method. It attempts to represent the feature vector of a data sample by reconstructing it as the sparse linear combination of some basic elements, and an $L_2$ norm distance function is usually used as the loss function for the reconstruction error. In this paper, we investigate using Sc as the representation method within the multi-instance learning framework, where a sample is given as a bag of instances, and further represented as a histogram of the quantized instances. We argue that for histogram data, using the $L_2$ norm distance is not suitable, and propose to use the earth mover's distance (EMD) instead of the $L_2$ norm distance as a measure of the reconstruction error. By minimizing the EMD between the histogram of a sample and its reconstruction from some basic histograms, a novel sparse coding method is developed, which is referred to as SC-EMD. We evaluate its performance as a histogram representation method in two multi-instance learning problems --- abnormal image detection in wireless capsule endoscopy videos, and protein binding site retrieval. The encouraging results demonstrate the advantages of the new method over the traditional method using the $L_2$ norm distance.
[ "['Mohua Zhang' 'Jianhua Peng' 'Xuejie Liu' 'Jim Jing-Yan Wang']", "Mohua Zhang, Jianhua Peng, Xuejie Liu, Jim Jing-Yan Wang" ]
cs.CV cs.LG
10.1109/TIP.2016.2520368
1502.02410
null
null
http://arxiv.org/abs/1502.02410v1
2015-02-09T09:56:57Z
2015-02-09T09:56:57Z
Out-of-sample generalizations for supervised manifold learning for classification
Supervised manifold learning methods for data classification map data samples residing in a high-dimensional ambient space to a lower-dimensional domain in a structure-preserving way, while enhancing the separation between different classes in the learned embedding. Most nonlinear supervised manifold learning methods compute the embedding of the manifolds only at the initially available training points, while the generalization of the embedding to novel points, known as the out-of-sample extension problem in manifold learning, becomes especially important in classification applications. In this work, we propose a semi-supervised method for building an interpolation function that provides an out-of-sample extension for general supervised manifold learning algorithms studied in the context of classification. The proposed algorithm computes a radial basis function (RBF) interpolator that minimizes an objective function consisting of the total embedding error of unlabeled test samples, defined as their distance to the embeddings of the manifolds of their own class, as well as a regularization term that controls the smoothness of the interpolation function in a direction-dependent way. The class labels of test data and the interpolation function parameters are estimated jointly with a progressive procedure. Experimental results on face and object images demonstrate the potential of the proposed out-of-sample extension algorithm for the classification of manifold-modeled data sets.
[ "['Elif Vural' 'Christine Guillemot']", "Elif Vural and Christine Guillemot" ]
cs.CV cs.LG stat.AP stat.ML
null
1502.02445
null
null
http://arxiv.org/pdf/1502.02445v2
2015-06-25T16:19:44Z
2015-02-09T11:48:42Z
Deep Neural Networks for Anatomical Brain Segmentation
We present a novel approach to automatically segment magnetic resonance (MR) images of the human brain into anatomical regions. Our methodology is based on a deep artificial neural network that assigns each voxel in an MR image of the brain to its corresponding anatomical region. The inputs of the network capture information at different scales around the voxel of interest: 3D and orthogonal 2D intensity patches capture the local spatial context while large, compressed 2D orthogonal patches and distances to the regional centroids enforce global spatial consistency. Contrary to commonly used segmentation methods, our technique does not require any non-linear registration of the MR images. To benchmark our model, we used the dataset provided for the MICCAI 2012 challenge on multi-atlas labelling, which consists of 35 manually segmented MR images of the brain. We obtained competitive results (mean dice coefficient 0.725, error rate 0.163) showing the potential of our approach. To our knowledge, our technique is the first to tackle the anatomical segmentation of the whole brain using deep neural networks.
[ "['Alexandre de Brebisson' 'Giovanni Montana']", "Alexandre de Brebisson, Giovanni Montana" ]
cs.LG
null
1502.02476
null
null
http://arxiv.org/pdf/1502.02476v4
2016-03-18T14:14:04Z
2015-02-09T13:18:24Z
An Infinite Restricted Boltzmann Machine
We present a mathematical construction for the restricted Boltzmann machine (RBM) that doesn't require specifying the number of hidden units. In fact, the hidden layer size is adaptive and can grow during training. This is obtained by first extending the RBM to be sensitive to the ordering of its hidden units. Then, thanks to a carefully chosen definition of the energy function, we show that the limit of infinitely many hidden units is well defined. As with the RBM, approximate maximum likelihood training can be performed, resulting in an algorithm that naturally and adaptively adds trained hidden units during learning. We empirically study the behaviour of this infinite RBM, showing that its performance is competitive with that of the RBM, while not requiring the tuning of a hidden layer size.
[ "['Marc-Alexandre Côté' 'Hugo Larochelle']", "Marc-Alexandre C\\^ot\\'e, Hugo Larochelle" ]
cs.CV cs.LG stat.AP stat.ML
null
1502.02506
null
null
http://arxiv.org/pdf/1502.02506v1
2015-02-09T14:46:40Z
2015-02-09T14:46:40Z
Predicting Alzheimer's disease: a neuroimaging study with 3D convolutional neural networks
Pattern recognition methods using neuroimaging data for the diagnosis of Alzheimer's disease have been the subject of extensive research in recent years. In this paper, we use deep learning methods, and in particular sparse autoencoders and 3D convolutional neural networks, to build an algorithm that can predict the disease status of a patient, based on an MRI scan of the brain. We report on experiments using the ADNI data set involving 2,265 historical scans. We demonstrate that 3D convolutional neural networks outperform several other classifiers reported in the literature and produce state-of-the-art results.
[ "['Adrien Payan' 'Giovanni Montana']", "Adrien Payan, Giovanni Montana" ]
stat.ME cs.LG stat.AP
null
1502.02512
null
null
http://arxiv.org/pdf/1502.02512v1
2015-02-09T14:57:58Z
2015-02-09T14:57:58Z
The Adaptive Mean-Linkage Algorithm: A Bottom-Up Hierarchical Cluster Technique
In this paper a variant of the classical hierarchical cluster analysis is reported. This agglomerative (bottom-up) cluster technique is referred to as the Adaptive Mean-Linkage Algorithm. It can be interpreted as a linkage algorithm where the value of the threshold is conveniently updated at each iteration. The superiority of the adaptive clustering with respect to the average-linkage algorithm follows because it achieves a good compromise on threshold values: Thresholds based on the cut-off distance are sufficiently small to ensure homogeneity and also large enough to guarantee at least a pair of merging sets. This approach is applied to a set of possible substituents in a chemical series.
[ "['H. M. de Oliveira']", "H.M. de Oliveira" ]
cs.LG cs.NE stat.ML
null
1502.02551
null
null
http://arxiv.org/pdf/1502.02551v1
2015-02-09T16:37:29Z
2015-02-09T16:37:29Z
Deep Learning with Limited Numerical Precision
Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited precision data representation and computation on neural network training. Within the context of low-precision fixed-point computations, we observe the rounding scheme to play a crucial role in determining the network's behavior during training. Our results show that deep networks can be trained using only 16-bit wide fixed-point number representation when using stochastic rounding, and incur little to no degradation in the classification accuracy. We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding.
[ "['Suyog Gupta' 'Ankur Agrawal' 'Kailash Gopalakrishnan'\n 'Pritish Narayanan']", "Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, Pritish Narayanan" ]
stat.ML cs.LG
null
1502.02558
null
null
http://arxiv.org/pdf/1502.02558v4
2015-12-26T16:57:55Z
2015-02-09T16:49:31Z
K2-ABC: Approximate Bayesian Computation with Kernel Embeddings
Complicated generative models often result in a situation where computing the likelihood of observed data is intractable, while simulating from the conditional density given a parameter value is relatively easy. Approximate Bayesian Computation (ABC) is a paradigm that enables simulation-based posterior inference in such cases by measuring the similarity between simulated and observed data in terms of a chosen set of summary statistics. However, there is no general rule to construct sufficient summary statistics for complex models. Insufficient summary statistics will "leak" information, which leads to ABC algorithms yielding samples from an incorrect (partial) posterior. In this paper, we propose a fully nonparametric ABC paradigm which circumvents the need for manually selecting summary statistics. Our approach, K2-ABC, uses maximum mean discrepancy (MMD) as a dissimilarity measure between the distributions over observed and simulated data. MMD is easily estimated as the squared difference between their empirical kernel embeddings. Experiments on a simulated scenario and a real-world biological problem illustrate the effectiveness of the proposed algorithm.
[ "Mijung Park and Wittawat Jitkrittum and Dino Sejdinovic", "['Mijung Park' 'Wittawat Jitkrittum' 'Dino Sejdinovic']" ]
cs.LG cs.CV stat.ML
null
1502.02590
null
null
http://arxiv.org/pdf/1502.02590v4
2016-03-28T22:50:52Z
2015-02-09T18:20:00Z
Analysis of classifiers' robustness to adversarial perturbations
The goal of this paper is to analyze an intriguing phenomenon recently discovered in deep networks, namely their instability to adversarial perturbations (Szegedy et al., 2014). We provide a theoretical framework for analyzing the robustness of classifiers to adversarial perturbations, and show fundamental upper bounds on the robustness of classifiers. Specifically, we establish a general upper bound on the robustness of classifiers to adversarial perturbations, and then illustrate the obtained upper bound on the families of linear and quadratic classifiers. In both cases, our upper bound depends on a distinguishability measure that captures the notion of difficulty of the classification task. Our results for both classes imply that in tasks involving small distinguishability, no classifier in the considered set will be robust to adversarial perturbations, even if a good accuracy is achieved. Our theoretical framework moreover suggests that the phenomenon of adversarial instability is due to the low flexibility of classifiers, compared to the difficulty of the classification task (captured by the distinguishability). Moreover, we show the existence of a clear distinction between the robustness of a classifier to random noise and its robustness to adversarial perturbations. Specifically, the former is shown to be larger than the latter by a factor that is proportional to $\sqrt{d}$ (with $d$ being the signal dimension) for linear classifiers. This result gives a theoretical explanation for the discrepancy between the two robustness properties in high dimensional problems, which was empirically observed in the context of neural networks. To the best of our knowledge, our results provide the first theoretical work that addresses the phenomenon of adversarial instability recently observed for deep networks. Our analysis is complemented by experimental results on controlled and real-world data.
[ "['Alhussein Fawzi' 'Omar Fawzi' 'Pascal Frossard']", "Alhussein Fawzi, Omar Fawzi, Pascal Frossard" ]
cs.LG
null
1502.02599
null
null
http://arxiv.org/pdf/1502.02599v1
2015-02-09T18:49:29Z
2015-02-09T18:49:29Z
Adaptive Random SubSpace Learning (RSSL) Algorithm for Prediction
We present a novel adaptive random subspace learning algorithm (RSSL) for prediction purposes. This new framework is flexible in that it can be combined with any learning technique. In this paper, we tested the algorithm for regression and classification problems. In addition, we provide a variety of weighting schemes to increase the robustness of the developed algorithm. These different weighting flavors were evaluated on simulated as well as on real-world data sets considering the cases where the ratio between features (attributes) and instances (samples) is large and vice versa. The framework of the new algorithm consists of many stages: first, calculate the weights of all features on the data set using the correlation coefficient and F-statistic statistical measurements. Second, randomly draw n samples with replacement from the data set. Third, perform regular bootstrap sampling (bagging). Fourth, draw without replacement the indices of the chosen variables. The decision was taken based on the heuristic subspacing scheme. Fifth, call base learners and build the model. Sixth, use the model for prediction on the test set of the data. The results show the advancement of the adaptive RSSL algorithm in most of the cases compared with conventional machine learning algorithms.
[ "['Mohamed Elshrif' 'Ernest Fokoue']", "Mohamed Elshrif, Ernest Fokoue" ]
cs.LG cs.AI cs.DC
null
1502.02606
null
null
http://arxiv.org/pdf/1502.02606v2
2015-04-22T17:49:22Z
2015-02-09T19:04:43Z
The Power of Randomization: Distributed Submodular Maximization on Massive Datasets
A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as constrained submodular maximization problems. Unfortunately, the resulting submodular optimization problems are often too large to be solved on a single machine. We develop a simple distributed algorithm that is embarrassingly parallel and achieves provable, constant factor, worst-case approximation guarantees. In our experiments, we demonstrate its efficiency in large problems with different kinds of constraints with objective values always close to what is achievable in the centralized setting.
[ "Rafael da Ponte Barbosa and Alina Ene and Huy L. Nguyen and Justin\n Ward", "['Rafael da Ponte Barbosa' 'Alina Ene' 'Huy L. Nguyen' 'Justin Ward']" ]
cs.SY cs.LG math.OC
10.1016/j.automatica.2016.08.004
1502.02609
null
null
http://arxiv.org/abs/1502.02609v1
2015-02-09T19:13:17Z
2015-02-09T19:13:17Z
Efficient model-based reinforcement learning for approximate online optimal control
In this paper the infinite horizon optimal regulation problem is solved online for a deterministic control-affine nonlinear dynamical system using the state following (StaF) kernel method to approximate the value function. Unlike traditional methods that aim to approximate a function over a large compact set, the StaF kernel method aims to approximate a function in a small neighborhood of a state that travels within a compact set. Simulation results demonstrate that stability and approximate optimality of the control system can be achieved with significantly fewer basis functions than may be required for global approximation methods.
[ "['Rushikesh Kamalapurkar' 'Joel A. Rosenfeld' 'Warren E. Dixon']", "Rushikesh Kamalapurkar, Joel A. Rosenfeld, Warren E. Dixon" ]
cs.LG cs.AI
null
1502.02643
null
null
http://arxiv.org/pdf/1502.02643v1
2015-02-09T20:31:18Z
2015-02-09T20:31:18Z
Random Coordinate Descent Methods for Minimizing Decomposable Submodular Functions
Submodular function minimization is a fundamental optimization problem that arises in several applications in machine learning and computer vision. The problem is known to be solvable in polynomial time, but general purpose algorithms have high running times and are unsuitable for large-scale problems. Recent work has used convex optimization techniques to obtain very practical algorithms for minimizing functions that are sums of "simple" functions. In this paper, we use random coordinate descent methods to obtain algorithms with faster linear convergence rates and cheaper iteration costs. Compared to alternating projection methods, our algorithms do not rely on full-dimensional vector operations and they converge in significantly fewer iterations.
[ "Alina Ene and Huy L. Nguyen", "['Alina Ene' 'Huy L. Nguyen']" ]
cs.LG
null
1502.02651
null
null
http://arxiv.org/pdf/1502.02651v1
2015-02-09T20:58:38Z
2015-02-09T20:58:38Z
Optimal and Adaptive Algorithms for Online Boosting
We study online boosting, the task of converting any weak online learner into a strong online learner. Based on a novel and natural definition of weak online learnability, we develop two online boosting algorithms. The first algorithm is an online version of boost-by-majority. By proving a matching lower bound, we show that this algorithm is essentially optimal in terms of the number of weak learners and the sample complexity needed to achieve a specified accuracy. This optimal algorithm is not adaptive however. Using tools from online loss minimization, we derive an adaptive online boosting algorithm that is also parameter-free, but not optimal. Both algorithms work with base learners that can handle example importance weights directly, as well as by rejection sampling examples with probability defined by the booster. Results are complemented with an extensive experimental study.
[ "['Alina Beygelzimer' 'Satyen Kale' 'Haipeng Luo']", "Alina Beygelzimer, Satyen Kale, and Haipeng Luo" ]
cs.LG
null
1502.02704
null
null
http://arxiv.org/pdf/1502.02704v1
2015-02-09T22:05:25Z
2015-02-09T22:05:25Z
Learning Reductions that Really Work
We provide a summary of the mathematical and computational techniques that have enabled learning reductions to effectively address a wide class of problems, and show that this approach to solving machine learning problems can be broadly useful.
[ "Alina Beygelzimer, Hal Daum\\'e III, John Langford, Paul Mineiro", "['Alina Beygelzimer' 'Hal Daumé III' 'John Langford' 'Paul Mineiro']" ]
cs.LG
null
1502.02710
null
null
http://arxiv.org/pdf/1502.02710v2
2015-04-20T21:08:19Z
2015-02-09T22:18:26Z
Scalable Multilabel Prediction via Randomized Methods
Modeling the dependence between outputs is a fundamental challenge in multilabel classification. In this work we show that a generic regularized nonlinearity mapping independent predictions to joint predictions is sufficient to achieve state-of-the-art performance on a variety of benchmark problems. Crucially, we compute the joint predictions without ever obtaining any independent predictions, while incorporating low-rank and smoothness regularization. We achieve this by leveraging randomized algorithms for matrix decomposition and kernel approximation. Furthermore, our techniques are applicable to the multiclass setting. We apply our method to a variety of multiclass and multilabel data sets, obtaining state-of-the-art results.
[ "['Nikos Karampatziakis' 'Paul Mineiro']", "Nikos Karampatziakis, Paul Mineiro" ]
cs.LG cs.AI stat.ML
null
1502.02761
null
null
http://arxiv.org/pdf/1502.02761v1
2015-02-10T02:54:58Z
2015-02-10T02:54:58Z
Generative Moment Matching Networks
We consider the problem of learning deep generative models from data. We formulate a method that generates an independent sample via a single feedforward pass through a multilayer perceptron, as in the recently proposed generative adversarial networks (Goodfellow et al., 2014). Training a generative adversarial network, however, requires careful optimization of a difficult minimax program. Instead, we utilize a technique from statistical hypothesis testing known as maximum mean discrepancy (MMD), which leads to a simple objective that can be interpreted as matching all orders of statistics between a dataset and samples from the model, and can be trained by backpropagation. We further boost the performance of this approach by combining our generative network with an auto-encoder network, using MMD to learn to generate codes that can then be decoded to produce samples. We show that the combination of these techniques yields excellent generative models compared to baseline approaches as measured on MNIST and the Toronto Face Database.
[ "Yujia Li, Kevin Swersky and Richard Zemel", "['Yujia Li' 'Kevin Swersky' 'Richard Zemel']" ]
cs.LG stat.ML
null
1502.02763
null
null
http://arxiv.org/pdf/1502.02763v2
2015-05-18T19:03:38Z
2015-02-10T02:56:04Z
Cascading Bandits: Learning to Rank in the Cascade Model
A search engine usually outputs a list of $K$ web pages. The user examines this list, from the first web page to the last, and chooses the first attractive page. This model of user behavior is known as the cascade model. In this paper, we propose cascading bandits, a learning variant of the cascade model where the objective is to identify $K$ most attractive items. We formulate our problem as a stochastic combinatorial partial monitoring problem. We propose two algorithms for solving it, CascadeUCB1 and CascadeKL-UCB. We also prove gap-dependent upper bounds on the regret of these algorithms and derive a lower bound on the regret in cascading bandits. The lower bound matches the upper bound of CascadeKL-UCB up to a logarithmic factor. We experiment with our algorithms on several problems. The algorithms perform surprisingly well even when our modeling assumptions are violated.
[ "Branislav Kveton, Csaba Szepesvari, Zheng Wen, and Azin Ashkan", "['Branislav Kveton' 'Csaba Szepesvari' 'Zheng Wen' 'Azin Ashkan']" ]
cs.LG
null
1502.02791
null
null
http://arxiv.org/pdf/1502.02791v2
2015-05-27T05:28:35Z
2015-02-10T06:01:30Z
Learning Transferable Features with Deep Adaptation Networks
Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.
[ "Mingsheng Long, Yue Cao, Jianmin Wang, Michael I. Jordan", "['Mingsheng Long' 'Yue Cao' 'Jianmin Wang' 'Michael I. Jordan']" ]
cs.LG math.OC stat.ML
null
1502.02846
null
null
http://arxiv.org/pdf/1502.02846v4
2016-01-18T13:17:08Z
2015-02-10T10:36:25Z
Probabilistic Line Searches for Stochastic Optimization
In deterministic optimization, line searches are a standard tool ensuring stability and efficiency. Where only stochastic gradients are available, no direct equivalent has so far been formulated, because uncertain gradients do not allow for a strict sequence of decisions collapsing the search space. We construct a probabilistic line search by combining the structure of existing deterministic methods with notions from Bayesian optimization. Our method retains a Gaussian process surrogate of the univariate optimization objective, and uses a probabilistic belief over the Wolfe conditions to monitor the descent. The algorithm has very low computational cost, and no user-controlled parameters. Experiments show that it effectively removes the need to define a learning rate for stochastic gradient descent.
[ "['Maren Mahsereci' 'Philipp Hennig']", "Maren Mahsereci and Philipp Hennig" ]
stat.ML cs.LG cs.RO cs.SY
10.1109/TPAMI.2013.218
1502.02860
null
null
http://arxiv.org/abs/1502.02860v2
2017-10-10T18:25:45Z
2015-02-10T11:09:38Z
Gaussian Processes for Data-Efficient Learning in Robotics and Control
Autonomous learning has been a promising direction in control and robotics for more than a decade since data-driven learning makes it possible to reduce the amount of engineering knowledge that is otherwise required. However, autonomous reinforcement learning (RL) approaches typically require many interactions with the system to learn controllers, which is a practical limitation in real systems, such as robots, where many interactions can be impractical and time consuming. To address this problem, current learning approaches typically require task-specific knowledge in the form of expert demonstrations, realistic simulators, pre-shaped policies, or specific knowledge about the underlying dynamics. In this article, we follow a different approach and speed up learning by extracting more information from data. In particular, we learn a probabilistic, non-parametric Gaussian process transition model of the system. By explicitly incorporating model uncertainty into long-term planning and controller learning our approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the-art RL our model-based policy search method achieves an unprecedented speed of learning. We demonstrate its applicability to autonomous learning in real robot and control tasks.
[ "['Marc Peter Deisenroth' 'Dieter Fox' 'Carl Edward Rasmussen']", "Marc Peter Deisenroth, Dieter Fox and Carl Edward Rasmussen" ]
cs.LG cs.CV
null
1502.03044
null
null
http://arxiv.org/pdf/1502.03044v3
2016-04-19T16:43:09Z
2015-02-10T19:18:29Z
Show, Attend and Tell: Neural Image Caption Generation with Visual Attention
Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.
[ "['Kelvin Xu' 'Jimmy Ba' 'Ryan Kiros' 'Kyunghyun Cho' 'Aaron Courville'\n 'Ruslan Salakhutdinov' 'Richard Zemel' 'Yoshua Bengio']", "Kelvin Xu and Jimmy Ba and Ryan Kiros and Kyunghyun Cho and Aaron\n Courville and Ruslan Salakhutdinov and Richard Zemel and Yoshua Bengio" ]
stat.ML cs.CV cs.LG
null
1502.03126
null
null
http://arxiv.org/pdf/1502.03126v1
2015-02-10T21:27:27Z
2015-02-10T21:27:27Z
Kernel Task-Driven Dictionary Learning for Hyperspectral Image Classification
Dictionary learning algorithms have been successfully used in both reconstructive and discriminative tasks, where the input signal is represented by a linear combination of a few dictionary atoms. While these methods are usually developed under an $\ell_1$ sparsity constraint (prior) in the input domain, recent studies have demonstrated the advantages of sparse representation using structured sparsity priors in the kernel domain. In this paper, we propose a supervised dictionary learning algorithm in the kernel domain for hyperspectral image classification. In the proposed formulation, the dictionary and classifier are obtained jointly for optimal classification performance. The supervised formulation is task-driven and provides learned features from the hyperspectral data that are well suited for the classification task. Moreover, the proposed algorithm uses a joint ($\ell_{12}$) sparsity prior to enforce collaboration among the neighboring pixels. The simulation results illustrate the efficiency of the proposed dictionary learning algorithm.
[ "Soheil Bahrampour and Nasser M. Nasrabadi and Asok Ray and Kenneth W.\n Jenkins", "['Soheil Bahrampour' 'Nasser M. Nasrabadi' 'Asok Ray' 'Kenneth W. Jenkins']" ]
cs.SD cs.LG stat.ML
null
1502.03163
null
null
http://arxiv.org/pdf/1502.03163v1
2015-02-11T00:55:14Z
2015-02-11T00:55:14Z
Gaussian Process Models for HRTF based Sound-Source Localization and Active-Learning
From a machine learning perspective, the human ability to localize sounds can be modeled as a non-parametric and non-linear regression problem between binaural spectral features of sound received at the ears (input) and their sound-source directions (output). The input features can be summarized in terms of the individual's head-related transfer functions (HRTFs) which measure the spectral response between the listener's eardrum and an external point in $3$D. Based on these viewpoints, two related problems are considered: how can one achieve an optimal sampling of measurements for training sound-source localization (SSL) models, and how can SSL models be used to infer the subject's HRTFs in listening tests. First, we develop a class of binaural SSL models based on Gaussian process regression and solve a \emph{forward selection} problem that finds a subset of input-output samples that best generalize to all SSL directions. Second, we use an \emph{active-learning} approach that updates an online SSL model for inferring the subject's SSL errors via headphones and a graphical user interface. Experiments show that only a small fraction of HRTFs are required for $5^{\circ}$ localization accuracy and that the learned HRTFs are localized closer to their intended directions than non-individualized HRTFs.
[ "Yuancheng Luo, Dmitry N. Zotkin, Ramani Duraiswami", "['Yuancheng Luo' 'Dmitry N. Zotkin' 'Ramani Duraiswami']" ]
cs.LG
null
1502.03167
null
null
http://arxiv.org/pdf/1502.03167v3
2015-03-02T20:44:12Z
2015-02-11T01:44:18Z
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.
[ "Sergey Ioffe, Christian Szegedy", "['Sergey Ioffe' 'Christian Szegedy']" ]
stat.ML cs.LG stat.ME
null
1502.03175
null
null
http://arxiv.org/pdf/1502.03175v3
2015-05-30T22:01:39Z
2015-02-11T02:21:49Z
Proximal Algorithms in Statistics and Machine Learning
In this paper we develop proximal methods for statistical learning. Proximal point algorithms are useful in statistics and machine learning for obtaining optimization solutions for composite functions. Our approach exploits closed-form solutions of proximal operators and envelope representations based on the Moreau, Forward-Backward, Douglas-Rachford and Half-Quadratic envelopes. Envelope representations lead to novel proximal algorithms for statistical optimisation of composite objective functions which include both non-smooth and non-convex objectives. We illustrate our methodology with regularized Logistic and Poisson regression and non-convex bridge penalties with a fused lasso norm. We provide a discussion of convergence of non-descent algorithms with acceleration and for non-convex functions. Finally, we provide directions for future research.
[ "Nicholas G. Polson, James G. Scott and Brandon T. Willard", "['Nicholas G. Polson' 'James G. Scott' 'Brandon T. Willard']" ]
stat.ML cs.LG
null
1502.03255
null
null
http://arxiv.org/pdf/1502.03255v1
2015-02-11T10:42:40Z
2015-02-11T10:42:40Z
Off-policy evaluation for MDPs with unknown structure
Off-policy learning in dynamic decision problems is essential for providing strong evidence that a new policy is better than the one in use. But how can we prove superiority without testing the new policy? To answer this question, we introduce the G-SCOPE algorithm that evaluates a new policy based on data generated by the existing policy. Our algorithm is both computationally and sample efficient because it greedily learns to exploit factored structure in the dynamics of the environment. We present a finite sample analysis of our approach and show through experiments that the algorithm scales well on high-dimensional problems with few samples.
[ "Assaf Hallak and Fran\\c{c}ois Schnitzler and Timothy Mann and Shie\n Mannor", "['Assaf Hallak' 'François Schnitzler' 'Timothy Mann' 'Shie Mannor']" ]
physics.soc-ph cs.LG physics.data-an
10.1007/978-3-319-24403-7_2
1502.03296
null
null
http://arxiv.org/abs/1502.03296v1
2015-02-11T13:10:58Z
2015-02-11T13:10:58Z
Statistical laws in linguistics
Zipf's law is just one out of many universal laws proposed to describe statistical regularities in language. Here we review and critically discuss how these laws can be statistically interpreted, fitted, and tested (falsified). The modern availability of large databases of written text allows for tests with an unprecedented statistical accuracy and also a characterization of the fluctuations around the typical behavior. We find that fluctuations are usually much larger than expected based on simplifying statistical assumptions (e.g., independence and lack of correlations between observations). These simplifications appear also in usual statistical tests so that the large fluctuations can be erroneously interpreted as a falsification of the law. Instead, here we argue that linguistic laws are only meaningful (falsifiable) if accompanied by a model for which the fluctuations can be computed (e.g., a generative model of the text). The large fluctuations we report show that the constraints imposed by linguistic laws on the creativity process of text generation are not as tight as one could expect.
[ "['Eduardo G. Altmann' 'Martin Gerlach']", "Eduardo G. Altmann and Martin Gerlach" ]
cs.CY cs.HC cs.LG
null
1502.03302
null
null
http://arxiv.org/pdf/1502.03302v2
2015-03-23T11:45:11Z
2015-02-11T13:27:01Z
Using Distance Estimation and Deep Learning to Simplify Calibration in Food Calorie Measurement
On the one hand, high calorie intake has proved harmful on numerous occasions, leading to several diseases; on the other hand, a standard amount of calorie intake has been deemed essential by dieticians to maintain the right balance of calorie content in the human body. As such, researchers have proposed a variety of automatic tools and systems to assist users in measuring their calorie intake. In this paper, we consider the category of those tools that use image processing to recognize the food, and we propose a method for fully automatic and user-friendly calibration of the dimensions of food portion sizes, which is needed in order to measure food portion weight and its ensuing amount of calories. Experimental results show that our method, which uses deep learning, mobile cloud computing, distance estimation and size calibration inside a mobile device, leads to an accuracy improvement to 95% on average compared to previous work.
[ "Pallavi Kuhad, Abdulsalam Yassine, Shervin Shirmohammadi", "['Pallavi Kuhad' 'Abdulsalam Yassine' 'Shervin Shirmohammadi']" ]
cs.LG cs.CV
null
1502.03409
null
null
http://arxiv.org/pdf/1502.03409v1
2015-02-11T19:24:36Z
2015-02-11T19:24:36Z
Large-Scale Deep Learning on the YFCC100M Dataset
We present a work-in-progress snapshot of learning with a 15 billion parameter deep learning network on HPC architectures applied to the largest publicly available natural image and video dataset released to date. Recent advancements in unsupervised deep neural networks suggest that scaling up such networks in both model and training dataset size can yield significant improvements in the learning of concepts at the highest layers. We train our three-layer deep neural network on the Yahoo! Flickr Creative Commons 100M dataset. The dataset comprises approximately 99.2 million images and 800,000 user-created videos from Yahoo's Flickr image and video sharing platform. Training of our network takes eight days on 98 GPU nodes at the High Performance Computing Center at Lawrence Livermore National Laboratory. Encouraging preliminary results and future research directions are presented and discussed.
[ "['Karl Ni' 'Roger Pearce' 'Kofi Boakye' 'Brian Van Essen' 'Damian Borth'\n 'Barry Chen' 'Eric Wang']", "Karl Ni, Roger Pearce, Kofi Boakye, Brian Van Essen, Damian Borth,\n Barry Chen, Eric Wang" ]
cs.LG cs.AI stat.ML
null
1502.03473
null
null
http://arxiv.org/pdf/1502.03473v7
2016-05-31T18:47:03Z
2015-02-11T22:28:14Z
Collaborative Filtering Bandits
Classical collaborative filtering and content-based filtering methods try to learn a static recommendation model given training data. These approaches are far from ideal in highly dynamic recommendation domains such as news recommendation and computational advertisement, where the set of items and users is very fluid. In this work, we investigate an adaptive clustering technique for content recommendation based on exploration-exploitation strategies in contextual multi-armed bandit settings. Our algorithm takes into account the collaborative effects that arise due to the interaction of the users with the items, by dynamically grouping users based on the items under consideration and, at the same time, grouping items based on the similarity of the clusterings induced over the users. The resulting algorithm thus takes advantage of preference patterns in the data in a way akin to collaborative filtering methods. We provide an empirical analysis on medium-size real-world datasets, showing scalability and increased prediction performance (as measured by click-through rate) over state-of-the-art methods for clustering bandits. We also provide a regret analysis within a standard linear stochastic noise setting.
[ "Shuai Li and Alexandros Karatzoglou and Claudio Gentile", "['Shuai Li' 'Alexandros Karatzoglou' 'Claudio Gentile']" ]
cs.LG math.OC stat.ML
null
1502.03475
null
null
http://arxiv.org/pdf/1502.03475v3
2015-11-06T00:53:37Z
2015-02-11T22:35:50Z
Combinatorial Bandits Revisited
This paper investigates stochastic and adversarial combinatorial multi-armed bandit problems. In the stochastic setting under semi-bandit feedback, we derive a problem-specific regret lower bound, and discuss its scaling with the dimension of the decision space. We propose ESCB, an algorithm that efficiently exploits the structure of the problem and provide a finite-time analysis of its regret. ESCB has better performance guarantees than existing algorithms, and significantly outperforms these algorithms in practice. In the adversarial setting under bandit feedback, we propose \textsc{CombEXP}, an algorithm with the same regret scaling as state-of-the-art algorithms, but with lower computational complexity for some combinatorial problems.
[ "Richard Combes and M. Sadegh Talebi and Alexandre Proutiere and Marc\n Lelarge", "['Richard Combes' 'M. Sadegh Talebi' 'Alexandre Proutiere' 'Marc Lelarge']" ]
stat.ML cs.LG
null
1502.03491
null
null
http://arxiv.org/pdf/1502.03491v1
2015-02-11T23:44:02Z
2015-02-11T23:44:02Z
How to show a probabilistic model is better
We present a simple theoretical framework, and corresponding practical procedures, for comparing probabilistic models on real data in a traditional machine learning setting. This framework is based on the theory of proper scoring rules, but requires only basic algebra and probability theory to understand and verify. The theoretical concepts presented are well-studied, primarily in the statistics literature. The goal of this paper is to advocate their wider adoption for performance evaluation in empirical machine learning.
[ "Mithun Chakraborty, Sanmay Das, Allen Lavoie", "['Mithun Chakraborty' 'Sanmay Das' 'Allen Lavoie']" ]
stat.ML cs.LG
null
1502.03492
null
null
http://arxiv.org/pdf/1502.03492v3
2015-04-02T17:40:44Z
2015-02-11T23:52:36Z
Gradient-based Hyperparameter Optimization through Reversible Learning
Tuning hyperparameters of learning algorithms is hard because gradients are usually unavailable. We compute exact gradients of cross-validation performance with respect to all hyperparameters by chaining derivatives backwards through the entire training procedure. These gradients allow us to optimize thousands of hyperparameters, including step-size and momentum schedules, weight initialization distributions, richly parameterized regularization schemes, and neural network architectures. We compute hyperparameter gradients by exactly reversing the dynamics of stochastic gradient descent with momentum.
[ "['Dougal Maclaurin' 'David Duvenaud' 'Ryan P. Adams']", "Dougal Maclaurin, David Duvenaud, Ryan P. Adams" ]
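The record above describes differentiating validation performance through an entire training run. As a hedged illustration only, the sketch below uses forward-mode sensitivity propagation on a toy quadratic rather than the paper's memory-efficient reversal of momentum SGD, but it shows what an exact hypergradient with respect to the learning rate looks like when training is unrolled:

```python
# A hedged illustration (not the paper's reversible reverse-mode method):
# forward-mode propagation of d(weight)/d(learning rate) through a short
# unrolled SGD run on a toy quadratic, giving the exact hypergradient of the
# final loss with respect to the learning rate.
a = 3.0                                  # training loss: 0.5 * a * w^2
w, dw_dlr = 5.0, 0.0                     # weight and its sensitivity to lr
lr = 0.1

for _ in range(20):                      # unrolled training
    g = a * w                            # d(loss)/dw
    dg_dlr = a * dw_dlr                  # chain rule through the gradient
    w, dw_dlr = w - lr * g, dw_dlr - g - lr * dg_dlr

val_loss = 0.5 * a * w ** 2              # here "validation" reuses the training objective
hypergrad = a * w * dw_dlr               # d(val_loss)/d(lr)
print(w, hypergrad)
```

The paper instead runs this chain rule backwards by exactly reversing the momentum-SGD dynamics, which is what makes thousands of hyperparameters tractable.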
cs.DS cs.DM cs.LG cs.SI stat.ML
null
1502.03496
null
null
http://arxiv.org/pdf/1502.03496v1
2015-02-12T00:25:32Z
2015-02-12T00:25:32Z
Spectral Sparsification of Random-Walk Matrix Polynomials
We consider a fundamental algorithmic question in spectral graph theory: Compute a spectral sparsifier of random-walk matrix-polynomial $$L_\alpha(G)=D-\sum_{r=1}^d\alpha_rD(D^{-1}A)^r$$ where $A$ is the adjacency matrix of a weighted, undirected graph, $D$ is the diagonal matrix of weighted degrees, and $\alpha=(\alpha_1...\alpha_d)$ are nonnegative coefficients with $\sum_{r=1}^d\alpha_r=1$. Recall that $D^{-1}A$ is the transition matrix of random walks on the graph. The sparsification of $L_\alpha(G)$ appears to be algorithmically challenging as the matrix power $(D^{-1}A)^r$ is defined by all paths of length $r$, whose precise calculation would be prohibitively expensive. In this paper, we develop the first nearly linear time algorithm for this sparsification problem: For any $G$ with $n$ vertices and $m$ edges, $d$ coefficients $\alpha$, and $\epsilon > 0$, our algorithm runs in time $O(d^2m\log^2n/\epsilon^{2})$ to construct a Laplacian matrix $\tilde{L}=D-\tilde{A}$ with $O(n\log n/\epsilon^{2})$ non-zeros such that $\tilde{L}\approx_{\epsilon}L_\alpha(G)$. Matrix polynomials arise in mathematical analysis of matrix functions as well as numerical solutions of matrix equations. Our work is particularly motivated by the algorithmic problems for speeding up the classic Newton's method in applications such as computing the inverse square-root of the precision matrix of a Gaussian random field, as well as computing the $q$th-root transition (for $q\geq1$) in a time-reversible Markov model. The key algorithmic step for both applications is the construction of a spectral sparsifier of a constant degree random-walk matrix-polynomials introduced by Newton's method. Our algorithm can also be used to build efficient data structures for effective resistances for multi-step time-reversible Markov models, and we anticipate that it could be useful for other tasks in network analysis.
[ "['Dehua Cheng' 'Yu Cheng' 'Yan Liu' 'Richard Peng' 'Shang-Hua Teng']", "Dehua Cheng, Yu Cheng, Yan Liu, Richard Peng, Shang-Hua Teng" ]
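For readers who want to see the object being sparsified in the abstract above, the following small dense construction of $L_\alpha(G)$ for a toy weighted graph (numpy only) illustrates the definition; it is not the paper's nearly linear time sparsification algorithm:

```python
import numpy as np

# Dense construction of L_alpha(G) = D - sum_r alpha_r * D * (D^{-1} A)^r
# for a tiny weighted graph; the sparsifier approximates this Laplacian
# spectrally with only O(n log n / eps^2) nonzeros.
A = np.array([[0., 2., 1.],
              [2., 0., 3.],
              [1., 3., 0.]])
D = np.diag(A.sum(1))
P = np.linalg.inv(D) @ A                     # random-walk transition matrix
alpha = [0.5, 0.3, 0.2]                      # nonnegative, sums to one

L_alpha = D - sum(a * D @ np.linalg.matrix_power(P, r + 1)
                  for r, a in enumerate(alpha))
print(L_alpha)
print(np.allclose(L_alpha.sum(1), 0))        # rows sum to zero, as for a Laplacian
```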
cs.LG
null
1502.03505
null
null
http://arxiv.org/pdf/1502.03505v1
2015-02-12T01:38:36Z
2015-02-12T01:38:36Z
Supervised LogEuclidean Metric Learning for Symmetric Positive Definite Matrices
Metric learning has been shown to be highly effective to improve the performance of nearest neighbor classification. In this paper, we address the problem of metric learning for Symmetric Positive Definite (SPD) matrices such as covariance matrices, which arise in many real-world applications. Naively using standard Mahalanobis metric learning methods under the Euclidean geometry for SPD matrices is not appropriate, because the difference of SPD matrices can be a non-SPD matrix and thus the obtained solution can be uninterpretable. To cope with this problem, we propose to use a properly parameterized LogEuclidean distance and optimize the metric with respect to kernel-target alignment, which is a supervised criterion for kernel learning. Then the resulting non-trivial optimization problem is solved by utilizing the Riemannian geometry. Finally, we experimentally demonstrate the usefulness of our LogEuclidean metric learning algorithm on real-world classification tasks for EEG signals and texture patches.
[ "['Florian Yger' 'Masashi Sugiyama']", "Florian Yger and Masashi Sugiyama" ]
cs.LG
null
1502.03508
null
null
http://arxiv.org/pdf/1502.03508v2
2015-07-03T19:35:13Z
2015-02-12T01:51:08Z
Adding vs. Averaging in Distributed Primal-Dual Optimization
Distributed optimization methods for large-scale machine learning suffer from a communication bottleneck. It is difficult to reduce this bottleneck while still efficiently and accurately aggregating partial work from different machines. In this paper, we present a novel generalization of the recent communication-efficient primal-dual framework (CoCoA) for distributed optimization. Our framework, CoCoA+, allows for additive combination of local updates to the global parameters at each iteration, whereas previous schemes with convergence guarantees only allow conservative averaging. We give stronger (primal-dual) convergence rate guarantees for both CoCoA as well as our new variants, and generalize the theory for both methods to cover non-smooth convex loss functions. We provide an extensive experimental comparison that shows the markedly improved performance of CoCoA+ on several real-world distributed datasets, especially when scaling up the number of machines.
[ "Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter\n Richt\\'arik and Martin Tak\\'a\\v{c}", "['Chenxin Ma' 'Virginia Smith' 'Martin Jaggi' 'Michael I. Jordan'\n 'Peter Richtárik' 'Martin Takáč']" ]
cs.LG cs.NE stat.ML
null
1502.03509
null
null
http://arxiv.org/pdf/1502.03509v2
2015-06-05T14:37:32Z
2015-02-12T02:06:07Z
MADE: Masked Autoencoder for Distribution Estimation
There has been a lot of recent interest in designing neural network models to estimate a distribution from a set of examples. We introduce a simple modification for autoencoder neural networks that yields powerful generative models. Our method masks the autoencoder's parameters to respect autoregressive constraints: each input is reconstructed only from previous inputs in a given ordering. Constrained this way, the autoencoder outputs can be interpreted as a set of conditional probabilities, and their product, the full joint probability. We can also train a single network that can decompose the joint probability in multiple different orderings. Our simple framework can be applied to multiple architectures, including deep ones. Vectorized implementations, such as on GPUs, are simple and fast. Experiments demonstrate that this approach is competitive with state-of-the-art tractable distribution estimators. At test time, the method is significantly faster and scales better than other autoregressive estimators.
[ "['Mathieu Germain' 'Karol Gregor' 'Iain Murray' 'Hugo Larochelle']", "Mathieu Germain, Karol Gregor, Iain Murray, Hugo Larochelle" ]
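A minimal sketch of the masking idea in the MADE abstract above, assuming a single hidden layer and numpy only; the layer sizes, the sampled ordering, and the degree assignments are illustrative, and biases and training are omitted:

```python
import numpy as np

# MADE-style mask construction: each unit gets a connectivity degree, and a
# weight is kept only where it respects the autoregressive ordering.
rng = np.random.default_rng(0)
D, H = 5, 8                              # input dimension, hidden units

order = rng.permutation(D) + 1           # input/output degrees: a permutation of 1..D
m_hidden = rng.integers(1, D, size=H)    # hidden degrees in {1, ..., D-1}

# Hidden unit j may see input i only if degree(j) >= degree(i).
mask_in = (m_hidden[:, None] >= order[None, :]).astype(float)   # H x D
# Output i may see hidden unit j only if degree(i) > degree(j).
mask_out = (order[:, None] > m_hidden[None, :]).astype(float)   # D x H

W1 = rng.normal(scale=0.1, size=(H, D)) * mask_in
W2 = rng.normal(scale=0.1, size=(D, H)) * mask_out

def forward(x):
    h = np.tanh(W1 @ x)
    # Outputs: per-dimension conditionals under the chosen ordering (untrained here).
    return 1.0 / (1.0 + np.exp(-(W2 @ h)))

print(forward(rng.integers(0, 2, size=D).astype(float)))
```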
cs.LG cs.CL stat.ML
null
1502.03520
null
null
http://arxiv.org/pdf/1502.03520v8
2019-06-19T21:54:20Z
2015-02-12T02:50:08Z
A Latent Variable Model Approach to PMI-based Word Embeddings
Semantic word embeddings represent the meaning of a word via a vector, and are created by diverse methods. Many use nonlinear operations on co-occurrence statistics, and have hand-tuned hyperparameters and reweighting methods. This paper proposes a new generative model, a dynamic version of the log-linear topic model of~\citet{mnih2007three}. The methodological novelty is to use the prior to compute closed form expressions for word statistics. This provides a theoretical justification for nonlinear models like PMI, word2vec, and GloVe, as well as some hyperparameter choices. It also helps explain why low-dimensional semantic embeddings contain linear algebraic structure that allows solution of word analogies, as shown by~\citet{mikolov2013efficient} and many subsequent papers. Experimental support is provided for the generative model assumptions, the most important of which is that latent word vectors are fairly uniformly dispersed in space.
[ "['Sanjeev Arora' 'Yuanzhi Li' 'Yingyu Liang' 'Tengyu Ma' 'Andrej Risteski']", "Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, Andrej Risteski" ]
cs.LG
null
1502.03529
null
null
http://arxiv.org/pdf/1502.03529v3
2015-07-20T10:01:27Z
2015-02-12T04:01:46Z
Scalable Stochastic Alternating Direction Method of Multipliers
Stochastic alternating direction method of multipliers (ADMM), which visits only one sample or a mini-batch of samples each time, has recently been proved to achieve better performance than batch ADMM. However, most stochastic methods can only achieve a convergence rate $O(1/\sqrt{T})$ on general convex problems, where $T$ is the number of iterations. Hence, these methods are not scalable with respect to convergence rate (computation cost). There exists only one stochastic method, called SA-ADMM, which can achieve a convergence rate of $O(1/T)$ on general convex problems. However, extra memory is needed for SA-ADMM to store the historic gradients on all samples, and thus it is not scalable with respect to storage cost. In this paper, we propose a novel method, called scalable stochastic ADMM (SCAS-ADMM), for large-scale optimization and learning problems. Without the need to store the historic gradients, SCAS-ADMM can achieve the same convergence rate $O(1/T)$ as the best stochastic method SA-ADMM and batch ADMM on general convex problems. Experiments on graph-guided fused lasso show that SCAS-ADMM can achieve state-of-the-art performance in real applications.
[ "Shen-Yi Zhao, Wu-Jun Li, Zhi-Hua Zhou", "['Shen-Yi Zhao' 'Wu-Jun Li' 'Zhi-Hua Zhou']" ]
cs.LG cs.CV math.OC
null
1502.03537
null
null
http://arxiv.org/pdf/1502.03537v1
2015-02-12T04:31:36Z
2015-02-12T04:31:36Z
Convergence of gradient based pre-training in Denoising autoencoders
The success of deep architectures is at least in part attributed to the layer-by-layer unsupervised pre-training that initializes the network. Various papers have reported extensive empirical analysis focusing on the design and implementation of good pre-training procedures. However, an understanding pertaining to the consistency of parameter estimates, the convergence of learning procedures and the sample size estimates is still unavailable in the literature. In this work, we study pre-training in classical and distributed denoising autoencoders with these goals in mind. We show that the gradient converges at the rate of $\frac{1}{\sqrt{N}}$ and has a sub-linear dependence on the size of the autoencoder network. In a distributed setting where disjoint sections of the whole network are pre-trained synchronously, we show that the convergence improves by at least $\tau^{3/4}$, where $\tau$ corresponds to the size of the sections. We provide a broad set of experiments to empirically evaluate the suggested behavior.
[ "Vamsi K Ithapu, Sathya Ravi, Vikas Singh", "['Vamsi K Ithapu' 'Sathya Ravi' 'Vikas Singh']" ]
cs.NE cs.LG
null
1502.03581
null
null
http://arxiv.org/pdf/1502.03581v1
2015-02-12T09:58:23Z
2015-02-12T09:58:23Z
Web spam classification using supervised artificial neural network algorithms
Due to the rapid growth in technology employed by spammers, there is a need for classifiers that are more efficient, generic and highly adaptive. Neural-network-based technologies have a high capacity for adaptation as well as generalization. To the best of our knowledge, very little work has been done in this field using neural networks. We present this paper to fill this gap. This paper evaluates the performance of three supervised learning algorithms for artificial neural networks by creating classifiers for the complex problem of latest web spam pattern classification. These algorithms are the Conjugate Gradient algorithm, Resilient Backpropagation learning, and the Levenberg-Marquardt algorithm.
[ "['Ashish Chandra' 'Mohammad Suaib' 'Dr. Rizwan Beg']", "Ashish Chandra, Mohammad Suaib, and Dr. Rizwan Beg" ]
cs.LG
10.5121/ijdkp.2015.5103
1502.03601
null
null
http://arxiv.org/abs/1502.03601v1
2015-02-12T11:07:51Z
2015-02-12T11:07:51Z
A Predictive System for detection of Bankruptcy using Machine Learning techniques
Bankruptcy is a legal procedure that declares a person or organization a debtor. It is essential to ascertain the risk of bankruptcy at an early stage to prevent financial losses. From this perspective, different soft computing techniques can be employed to ascertain bankruptcy. This study proposes a bankruptcy prediction system to categorize companies based on the extent of risk. The prediction system acts as a decision support tool for the detection of bankruptcy. Keywords: bankruptcy, soft computing, decision support tool.
[ "Kalyan Nagaraj, Amulyashree Sridhar", "['Kalyan Nagaraj' 'Amulyashree Sridhar']" ]
cs.LG cs.CL cs.IR
null
1502.03630
null
null
http://arxiv.org/pdf/1502.03630v1
2015-02-12T12:32:39Z
2015-02-12T12:32:39Z
Ordering-sensitive and Semantic-aware Topic Modeling
Topic modeling of textual corpora is an important and challenging problem. In most previous work, the "bag-of-words" assumption is usually made, which ignores the ordering of words. This assumption simplifies the computation, but it unrealistically loses the ordering information and the semantics of words in context. In this paper, we present a Gaussian Mixture Neural Topic Model (GMNTM) which incorporates both the ordering of words and the semantic meaning of sentences into topic modeling. Specifically, we represent each topic as a cluster of multi-dimensional vectors and embed the corpus into a collection of vectors generated by the Gaussian mixture model. Each word is affected not only by its topic, but also by the embedding vectors of its surrounding words and the context. The Gaussian mixture components and the topics of documents, sentences and words can be learnt jointly. Extensive experiments show that our model can learn better topics and more accurate word distributions for each topic. Quantitatively, compared to state-of-the-art topic modeling approaches, GMNTM obtains significantly better performance in terms of perplexity, retrieval accuracy and classification accuracy.
[ "Min Yang, Tianyi Cui, Wenting Tu", "['Min Yang' 'Tianyi Cui' 'Wenting Tu']" ]
cs.LG cs.NE
null
1502.03648
null
null
http://arxiv.org/pdf/1502.03648v1
2015-02-12T13:29:03Z
2015-02-12T13:29:03Z
Over-Sampling in a Deep Neural Network
Deep neural networks (DNN) are the state of the art on many engineering problems such as computer vision and audition. A key factor in the success of the DNN is scalability - bigger networks work better. However, the reason for this scalability is not yet well understood. Here, we interpret the DNN as a discrete system, of linear filters followed by nonlinear activations, that is subject to the laws of sampling theory. In this context, we demonstrate that over-sampled networks are more selective, learn faster and learn more robustly. Our findings may ultimately generalize to the human brain.
[ "Andrew J.R. Simpson", "['Andrew J. R. Simpson']" ]
cs.CL cs.IR cs.LG cs.NE
null
1502.03682
null
null
http://arxiv.org/pdf/1502.03682v1
2015-02-12T14:44:15Z
2015-02-12T14:44:15Z
Applying deep learning techniques on medical corpora from the World Wide Web: a prototypical system and evaluation
BACKGROUND: The amount of biomedical literature is rapidly growing and it is becoming increasingly difficult to keep manually curated knowledge bases and ontologies up-to-date. In this study we applied the word2vec deep learning toolkit to medical corpora to test its potential for identifying relationships from unstructured text. We evaluated the efficiency of word2vec in identifying properties of pharmaceuticals based on mid-sized, unstructured medical text corpora available on the web. Properties included relationships to diseases ('may treat') or physiological processes ('has physiological effect'). We compared the relationships identified by word2vec with manually curated information from the National Drug File - Reference Terminology (NDF-RT) ontology as a gold standard. RESULTS: Our results revealed a maximum accuracy of 49.28%, which suggests a limited ability of word2vec to capture linguistic regularities on the collected medical corpora compared with other published results. We were able to document the influence of different parameter settings on result accuracy and found an unexpected trade-off between ranking quality and accuracy. Pre-processing corpora to reduce syntactic variability proved to be a good strategy for increasing the utility of the trained vector models. CONCLUSIONS: Word2vec is a very efficient implementation for computing vector representations and is able to identify relationships in textual data without any prior domain knowledge. We found that the ranking and retrieved results generated by word2vec were not of sufficient quality for automatic population of knowledge bases and ontologies, but could serve as a starting point for further manual curation.
[ "Jose Antonio Mi\\~narro-Gim\\'enez, Oscar Mar\\'in-Alonso, Matthias\n Samwald", "['Jose Antonio Miñarro-Giménez' 'Oscar Marín-Alonso' 'Matthias Samwald']" ]
cs.LG cs.CV
null
1502.03879
null
null
http://arxiv.org/pdf/1502.03879v1
2015-02-13T03:35:15Z
2015-02-13T03:35:15Z
Semi-supervised Data Representation via Affinity Graph Learning
We consider the general problem of utilizing both labeled and unlabeled data to improve data representation performance. A new semi-supervised learning framework is proposed by combining manifold regularization and data representation methods such as non-negative matrix factorization and sparse coding. We adopt unsupervised data representation methods as the learning machines because they do not depend on the labeled data, which can improve the machine's generalization ability as much as possible. The proposed framework forms the Laplacian regularizer through learning the affinity graph. We incorporate the new Laplacian regularizer into the unsupervised data representation to smooth the low-dimensional representation of the data and make use of the label information. Experimental results on several real benchmark datasets indicate that our semi-supervised learning framework achieves encouraging results compared with state-of-the-art methods.
[ "Weiya Ren", "['Weiya Ren']" ]
cs.AI cs.LG stat.ML
null
1502.03919
null
null
http://arxiv.org/pdf/1502.03919v2
2015-06-08T06:31:42Z
2015-02-13T09:16:24Z
Policy Gradient for Coherent Risk Measures
Several authors have recently developed risk-sensitive policy gradient methods that augment the standard expected cost minimization problem with a measure of variability in cost. These studies have focused on specific risk-measures, such as the variance or conditional value at risk (CVaR). In this work, we extend the policy gradient method to the whole class of coherent risk measures, which is widely accepted in finance and operations research, among other fields. We consider both static and time-consistent dynamic risk measures. For static risk measures, our approach is in the spirit of policy gradient algorithms and combines a standard sampling approach with convex programming. For dynamic risk measures, our approach is actor-critic style and involves explicit approximation of value function. Most importantly, our contribution presents a unified approach to risk-sensitive reinforcement learning that generalizes and extends previous results.
[ "Aviv Tamar, Yinlam Chow, Mohammad Ghavamzadeh, Shie Mannor", "['Aviv Tamar' 'Yinlam Chow' 'Mohammad Ghavamzadeh' 'Shie Mannor']" ]
cs.LG stat.ML
10.1016/j.ins.2015.06.027
1502.04033
null
null
http://arxiv.org/abs/1502.04033v2
2015-02-16T13:02:05Z
2015-02-13T15:48:00Z
The Responsibility Weighted Mahalanobis Kernel for Semi-Supervised Training of Support Vector Machines for Classification
Kernel functions in support vector machines (SVM) are needed to assess the similarity of input samples in order to classify these samples, for instance. Besides standard kernels such as Gaussian (i.e., radial basis function, RBF) or polynomial kernels, there are also specific kernels tailored to consider structure in the data for similarity assessment. In this article, we will capture structure in data by means of probabilistic mixture density models, for example Gaussian mixtures in the case of real-valued input spaces. From the distance measures that are inherently contained in these models, e.g., Mahalanobis distances in the case of Gaussian mixtures, we derive a new kernel, the responsibility weighted Mahalanobis (RWM) kernel. Basically, this kernel emphasizes the influence of model components from which any two samples that are compared are assumed to originate (that is, the "responsible" model components). We will see that this kernel outperforms the RBF kernel and other kernels capturing structure in data (such as the LAP kernel in Laplacian SVM) in many applications where partially labeled data are available, i.e., for semi-supervised training of SVM. Other key advantages are that the RWM kernel can easily be used with standard SVM implementations and training algorithms such as sequential minimal optimization, and heuristics known for the parametrization of RBF kernels in a C-SVM can easily be transferred to this new kernel. Properties of the RWM kernel are demonstrated with 20 benchmark data sets and an increasing percentage of labeled samples in the training data.
[ "Tobias Reitmaier and Bernhard Sick", "['Tobias Reitmaier' 'Bernhard Sick']" ]
cs.LG cs.NE
null
1502.04042
null
null
http://arxiv.org/pdf/1502.04042v1
2015-02-13T16:09:41Z
2015-02-13T16:09:41Z
Abstract Learning via Demodulation in a Deep Neural Network
Inspired by the brain, deep neural networks (DNN) are thought to learn abstract representations through their hierarchical architecture. However, at present, how this happens is not well understood. Here, we demonstrate that DNN learn abstract representations by a process of demodulation. We introduce a biased sigmoid activation function and use it to show that DNN learn and perform better when optimized for demodulation. Our findings constitute the first unambiguous evidence that DNN perform abstract learning in practical use. Our findings may also explain abstract learning in the human brain.
[ "Andrew J.R. Simpson", "['Andrew J. R. Simpson']" ]
stat.ML cs.CL cs.LG
null
1502.04081
null
null
http://arxiv.org/pdf/1502.04081v2
2015-05-31T20:04:53Z
2015-02-13T18:39:29Z
A Linear Dynamical System Model for Text
Low dimensional representations of words allow accurate NLP models to be trained on limited annotated data. While most representations ignore words' local context, a natural way to induce context-dependent representations is to perform inference in a probabilistic latent-variable sequence model. Given the recent success of continuous vector space word representations, we provide such an inference procedure for continuous states, where words' representations are given by the posterior mean of a linear dynamical system. Here, efficient inference can be performed using Kalman filtering. Our learning algorithm is extremely scalable, operating on simple cooccurrence counts for both parameter initialization using the method of moments and subsequent iterations of EM. In our experiments, we employ our inferred word embeddings as features in standard tagging tasks, obtaining significant accuracy improvements. Finally, the Kalman filter updates can be seen as a linear recurrent neural network. We demonstrate that using the parameters of our model to initialize a non-linear recurrent neural network language model reduces its training time by a day and yields lower perplexity.
[ "David Belanger and Sham Kakade", "['David Belanger' 'Sham Kakade']" ]
cs.LG
null
1502.04137
null
null
http://arxiv.org/pdf/1502.04137v1
2015-02-13T21:32:12Z
2015-02-13T21:32:12Z
Non-Adaptive Learning a Hidden Hypergraph
We give a new deterministic algorithm that non-adaptively learns a hidden hypergraph from edge-detecting queries. All previous non-adaptive algorithms either run in exponential time or have non-optimal query complexity. We give the first polynomial-time non-adaptive learning algorithm for learning a hypergraph that asks an almost optimal number of queries.
[ "['Hasan Abasi' 'Nader H. Bshouty' 'Hanna Mazzawi']", "Hasan Abasi and Nader H. Bshouty and Hanna Mazzawi" ]
cs.LG stat.ML
null
1502.04148
null
null
http://arxiv.org/pdf/1502.04148v2
2015-10-01T16:05:56Z
2015-02-13T23:18:35Z
A Pseudo-Euclidean Iteration for Optimal Recovery in Noisy ICA
Independent Component Analysis (ICA) is a popular model for blind signal separation. The ICA model assumes that a number of independent source signals are linearly mixed to form the observed signals. We propose a new algorithm, PEGI (for pseudo-Euclidean Gradient Iteration), for provable model recovery for ICA with Gaussian noise. The main technical innovation of the algorithm is to use a fixed point iteration in a pseudo-Euclidean (indefinite "inner product") space. The use of this indefinite "inner product" resolves technical issues common to several existing algorithms for noisy ICA. This leads to an algorithm which is conceptually simple, efficient and accurate in testing. Our second contribution is combining PEGI with the analysis of objectives for optimal recovery in the noisy ICA model. It has been observed that the direct approach of demixing with the inverse of the mixing matrix is suboptimal for signal recovery in terms of the natural Signal to Interference plus Noise Ratio (SINR) criterion. There have been several partial solutions proposed in the ICA literature. It turns out that any solution to the mixing matrix reconstruction problem can be used to construct an SINR-optimal ICA demixing, despite the fact that SINR itself cannot be computed from data. That allows us to obtain a practical and provably SINR-optimal recovery method for ICA with arbitrary Gaussian noise.
[ "James Voss, Mikhail Belkin, and Luis Rademacher", "['James Voss' 'Mikhail Belkin' 'Luis Rademacher']" ]
cs.SD cs.AI cs.LG cs.MM
10.1109/TASLP.2015.2468583
1502.04149
null
null
http://arxiv.org/abs/1502.04149v4
2015-10-01T02:58:01Z
2015-02-13T23:22:16Z
Joint Optimization of Masks and Deep Recurrent Neural Networks for Monaural Source Separation
Monaural source separation is important for many real world applications. It is challenging because, with only a single channel of information available, without any constraints, an infinite number of solutions are possible. In this paper, we explore joint optimization of masking functions and deep recurrent neural networks for monaural source separation tasks, including monaural speech separation, monaural singing voice separation, and speech denoising. The joint optimization of the deep recurrent neural networks with an extra masking layer enforces a reconstruction constraint. Moreover, we explore a discriminative criterion for training neural networks to further enhance the separation performance. We evaluate the proposed system on the TSP, MIR-1K, and TIMIT datasets for speech separation, singing voice separation, and speech denoising tasks, respectively. Our approaches achieve 2.30--4.98 dB SDR gain compared to NMF models in the speech separation task, 2.30--2.48 dB GNSDR gain and 4.32--5.42 dB GSIR gain compared to existing models in the singing voice separation task, and outperform NMF and DNN baselines in the speech denoising task.
[ "Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, Paris Smaragdis", "['Po-Sen Huang' 'Minje Kim' 'Mark Hasegawa-Johnson' 'Paris Smaragdis']" ]
cs.LG
null
1502.04156
null
null
http://arxiv.org/pdf/1502.04156v3
2016-08-09T01:57:09Z
2015-02-14T01:11:25Z
Towards Biologically Plausible Deep Learning
Neuroscientists have long criticised deep learning algorithms as incompatible with current knowledge of neurobiology. We explore more biologically plausible versions of deep representation learning, focusing here mostly on unsupervised learning but developing a learning mechanism that could account for supervised, unsupervised and reinforcement learning. The starting point is that the basic learning rule believed to govern synaptic weight updates (Spike-Timing-Dependent Plasticity) arises out of a simple update rule that makes a lot of sense from a machine learning point of view and can be interpreted as gradient descent on some objective function so long as the neuronal dynamics push firing rates towards better values of the objective function (be it supervised, unsupervised, or reward-driven). The second main idea is that this corresponds to a form of the variational EM algorithm, i.e., with approximate rather than exact posteriors, implemented by neural dynamics. Another contribution of this paper is that the gradients required for updating the hidden states in the above variational interpretation can be estimated using an approximation that only requires propagating activations forward and backward, with pairs of layers learning to form a denoising auto-encoder. Finally, we extend the theory about the probabilistic interpretation of auto-encoders to justify improved sampling schemes based on the generative interpretation of denoising auto-encoders, and we validate all these ideas on generative learning tasks.
[ "['Yoshua Bengio' 'Dong-Hyun Lee' 'Jorg Bornschein' 'Thomas Mesnard'\n 'Zhouhan Lin']", "Yoshua Bengio, Dong-Hyun Lee, Jorg Bornschein, Thomas Mesnard and\n Zhouhan Lin" ]
cs.LG stat.ML
null
1502.04168
null
null
http://arxiv.org/pdf/1502.04168v2
2015-09-10T12:34:58Z
2015-02-14T05:37:32Z
Nonparametric regression using needlet kernels for spherical data
Needlets have been recognized as state-of-the-art tools to tackle spherical data, due to their excellent localization properties in both the spatial and frequency domains. This paper considers developing kernel methods associated with the needlet kernel for nonparametric regression problems whose predictor variables are defined on a sphere. Due to the localization property in the frequency domain, we prove that the regularization parameter of the kernel ridge regression associated with the needlet kernel can decrease arbitrarily fast. A natural consequence is that the regularization term for the kernel ridge regression is not necessary in the sense of rate optimality. Based further on the excellent localization property in the spatial domain, we also prove that all the $l^{q}$ $(0 < q < \infty)$ kernel regularization estimates associated with the needlet kernel, including the kernel lasso estimate and the kernel bridge estimate, possess almost the same generalization capability for a large range of regularization parameters in the sense of rate optimality. This finding tentatively reveals that, if the needlet kernel is utilized, then the choice of $q$ might not have a strong impact on the generalization capability in some modeling contexts. From this perspective, $q$ can be arbitrarily specified, or specified merely by criteria other than generalization, such as smoothness, computational complexity, sparsity, etc.
[ "['Shaobo Lin']", "Shaobo Lin" ]
cs.LG
null
1502.04187
null
null
http://arxiv.org/pdf/1502.04187v2
2015-07-08T11:05:10Z
2015-02-14T10:58:53Z
Application of Deep Neural Network in Estimation of the Weld Bead Parameters
We present a deep learning approach to estimation of the bead parameters in welding tasks. Our model is based on a four-hidden-layer neural network architecture. More specifically, the first three hidden layers of this architecture utilize Sigmoid function to produce their respective intermediate outputs. On the other hand, the last hidden layer uses a linear transformation to generate the final output of this architecture. This transforms our deep network architecture from a classifier to a non-linear regression model. We compare the performance of our deep network with a selected number of results in the literature to show a considerable improvement in reducing the errors in estimation of these values. Furthermore, we show its scalability on estimating the weld bead parameters with same level of accuracy on combination of datasets that pertain to different welding techniques. This is a nontrivial result that is counter-intuitive to the general belief in this field of research.
[ "['Soheil Keshmiri' 'Xin Zheng' 'Chee Meng Chew' 'Chee Khiang Pang']", "Soheil Keshmiri, Xin Zheng, Chee Meng Chew, Chee Khiang Pang" ]
cs.LG cs.IT math.IT
null
1502.04248
null
null
http://arxiv.org/pdf/1502.04248v1
2015-02-14T21:23:14Z
2015-02-14T21:23:14Z
Asymptotic Justification of Bandlimited Interpolation of Graph signals for Semi-Supervised Learning
Graph-based methods play an important role in unsupervised and semi-supervised learning tasks by taking into account the underlying geometry of the data set. In this paper, we consider a statistical setting for semi-supervised learning and provide a formal justification of the recently introduced framework of bandlimited interpolation of graph signals. Our analysis leads to the interpretation that, given enough labeled data, this method is very closely related to a constrained low density separation problem as the number of data points tends to infinity. We demonstrate the practical utility of our results through simple experiments.
[ "['Aamir Anis' 'Aly El Gamal' 'A. Salman Avestimehr' 'Antonio Ortega']", "Aamir Anis, Aly El Gamal, A. Salman Avestimehr, Antonio Ortega" ]
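As a small hedged illustration of the framework analyzed above: a signal that is bandlimited on a graph (spanned by the first few Laplacian eigenvectors) can be interpolated exactly from its values on a subset of vertices. The sketch below uses a random graph and numpy only; for semi-supervised classification the recovered signal would be thresholded:

```python
import numpy as np

# Bandlimited graph-signal interpolation on a toy random graph: fit the signal
# in the span of the first K Laplacian eigenvectors to the labeled vertices.
rng = np.random.default_rng(0)
n, K = 30, 4
A = (rng.random((n, n)) < 0.15).astype(float)
A = np.triu(A, 1); A = A + A.T                     # undirected adjacency
L = np.diag(A.sum(1)) - A                          # combinatorial graph Laplacian
_, U = np.linalg.eigh(L)
Uk = U[:, :K]                                      # low-frequency (bandlimited) basis

signal = Uk @ rng.normal(size=K)                   # ground-truth bandlimited signal
known = rng.choice(n, size=12, replace=False)      # "labeled" vertices

coef, *_ = np.linalg.lstsq(Uk[known], signal[known], rcond=None)
print(np.max(np.abs(Uk @ coef - signal)))          # ~ 0: exact recovery
```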
stat.ML cs.DM cs.LG stat.AP stat.ME
10.1007/s10994-015-5528-6
1502.04269
null
null
http://arxiv.org/abs/1502.04269v3
2016-01-26T17:34:21Z
2015-02-15T01:26:41Z
Supersparse Linear Integer Models for Optimized Medical Scoring Systems
Scoring systems are linear classification models that only require users to add, subtract and multiply a few small numbers in order to make a prediction. These models are in widespread use by the medical community, but are difficult to learn from data because they need to be accurate and sparse, have coprime integer coefficients, and satisfy multiple operational constraints. We present a new method for creating data-driven scoring systems called a Supersparse Linear Integer Model (SLIM). SLIM scoring systems are built by solving an integer program that directly encodes measures of accuracy (the 0-1 loss) and sparsity (the $\ell_0$-seminorm) while restricting coefficients to coprime integers. SLIM can seamlessly incorporate a wide range of operational constraints related to accuracy and sparsity, and can produce highly tailored models without parameter tuning. We provide bounds on the testing and training accuracy of SLIM scoring systems, and present a new data reduction technique that can improve scalability by eliminating a portion of the training data beforehand. Our paper includes results from a collaboration with the Massachusetts General Hospital Sleep Laboratory, where SLIM was used to create a highly tailored scoring system for sleep apnea screening.
[ "Berk Ustun and Cynthia Rudin", "['Berk Ustun' 'Cynthia Rudin']" ]
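As a hedged toy illustration of the objective described above, the following brute-force enumeration over tiny integer coefficient ranges minimizes 0-1 loss plus an l0 penalty on synthetic data; it is not the paper's integer programming formulation and ignores the coprimality and operational constraints:

```python
import itertools
import numpy as np

# Brute-force search over small integer scoring systems: minimize
# 0-1 loss + lambda * (number of nonzero coefficients).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
y = np.sign(X @ np.array([2.0, -1.0, 0.0]) + 0.3 * rng.normal(size=60))

best, best_obj = None, np.inf
for coeffs in itertools.product(range(-3, 4), repeat=3):   # integer weights in [-3, 3]
    for b in range(-3, 4):                                  # integer intercept
        pred = np.sign(X @ np.array(coeffs, float) + b)
        obj = np.mean(pred != y) + 0.05 * np.count_nonzero(coeffs)
        if obj < best_obj:
            best, best_obj = (coeffs, b), obj
print(best, best_obj)
```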
cs.LG cs.NA
null
1502.04390
null
null
http://arxiv.org/pdf/1502.04390v2
2015-08-29T23:04:39Z
2015-02-15T23:41:33Z
Equilibrated adaptive learning rates for non-convex optimization
Parameter-specific adaptive learning rate methods are computationally efficient ways to reduce the ill-conditioning problems encountered when training large deep networks. Following recent work that strongly suggests that most of the critical points encountered when training such networks are saddle points, we examine how accounting for the presence of negative eigenvalues of the Hessian can help us design better-suited adaptive learning rate schemes. We show that the popular Jacobi preconditioner has undesirable behavior in the presence of both positive and negative curvature, and present theoretical and empirical evidence that the so-called equilibration preconditioner is comparatively better suited to non-convex problems. We introduce a novel adaptive learning rate scheme, called ESGD, based on the equilibration preconditioner. Our experiments show that ESGD performs as well as or better than RMSProp in terms of convergence speed, always clearly improving over plain stochastic gradient descent.
[ "['Yann N. Dauphin' 'Harm de Vries' 'Yoshua Bengio']", "Yann N. Dauphin, Harm de Vries, Yoshua Bengio" ]
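A minimal sketch of the equilibration preconditioner discussed above, on a toy quadratic with a known diagonal Hessian; in practice the Hessian-vector products come from automatic differentiation, and the exact ESGD update in the paper may differ in detail:

```python
import numpy as np

# Equilibration preconditioner D_i = ||H_{i,:}||_2, estimated as
# sqrt(E[(Hv)_i^2]) from Hessian-vector products with Gaussian probes v,
# then used to rescale gradient steps on an ill-conditioned quadratic.
rng = np.random.default_rng(0)
Hmat = np.diag([100.0, 1.0, 0.01])           # ill-conditioned toy Hessian
grad = lambda x: Hmat @ x                    # gradient of 0.5 * x^T H x

probes = rng.standard_normal((200, 3))
D = np.sqrt(np.mean((probes @ Hmat.T) ** 2, axis=0))   # ~ [100, 1, 0.01]

x = np.ones(3)
for _ in range(50):
    x -= 0.5 * grad(x) / (D + 1e-8)          # equilibrated step

print(D)        # close to the absolute diagonal curvatures
print(x)        # all coordinates shrink at a similar rate
```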
stat.ML cs.LG cs.NE
null
1502.04434
null
null
http://arxiv.org/pdf/1502.04434v3
2016-01-15T04:49:00Z
2015-02-16T06:28:35Z
Invariant backpropagation: how to train a transformation-invariant neural network
In many classification problems a classifier should be robust to small variations in the input vector. This is a desired property not only for particular transformations, such as translation and rotation in image classification problems, but also for all others for which the change is small enough to keep the object perceptually indistinguishable. We propose two extensions of the backpropagation algorithm that train a neural network to be robust to variations in the feature vector. While the first of them enforces robustness of the loss function to all variations, the second method trains the predictions to be robust to a particular variation which changes the loss function the most. The second method demonstrates better results, but is slightly slower. We analytically compare the proposed algorithm with the two most similar approaches (Tangent BP and Adversarial Training), and propose fast versions of them. In the experimental part we compare all algorithms in terms of classification accuracy and robustness to noise on the MNIST and CIFAR-10 datasets. Additionally we analyze how the performance of the proposed algorithm depends on the dataset size and data augmentation.
[ "Sergey Demyanov, James Bailey, Ramamohanarao Kotagiri, Christopher\n Leckie", "['Sergey Demyanov' 'James Bailey' 'Ramamohanarao Kotagiri'\n 'Christopher Leckie']" ]
cs.LG q-bio.MN q-bio.QM
null
1502.04469
null
null
http://arxiv.org/pdf/1502.04469v4
2015-03-12T02:05:46Z
2015-02-16T09:17:40Z
Classification and its applications for drug-target interaction identification
Classification is one of the most popular and widely used supervised learning tasks, which categorizes objects into predefined classes based on prior knowledge. Classification has been an important research topic in machine learning and data mining. Different classification methods have been proposed and applied to deal with various real-world problems. Unlike unsupervised learning such as clustering, a classifier is typically trained with labeled data before being used to make predictions, and usually achieves higher accuracy than an unsupervised one. In this paper, we first define classification and then review several representative methods. After that, we study in detail the application of classification to a critical problem in drug discovery, i.e., drug-target prediction, due to the challenges in predicting possible interactions between drugs and targets.
[ "['Jian-Ping Mei' 'Chee-Keong Kwoh' 'Peng Yang' 'Xiao-Li Li']", "Jian-Ping Mei, Chee-Keong Kwoh, Peng Yang and Xiao-Li Li" ]
cs.CV cs.LG
null
1502.04492
null
null
http://arxiv.org/pdf/1502.04492v1
2015-02-16T11:01:25Z
2015-02-16T11:01:25Z
Towards Building Deep Networks with Bayesian Factor Graphs
We propose a Multi-Layer Network based on the Bayesian framework of the Factor Graphs in Reduced Normal Form (FGrn) applied to a two-dimensional lattice. The Latent Variable Model (LVM) is the basic building block of a quadtree hierarchy built on top of a bottom layer of random variables that represent pixels of an image, a feature map, or more generally a collection of spatially distributed discrete variables. The multi-layer architecture implements a hierarchical data representation that, via belief propagation, can be used for learning and inference. Typical uses are pattern completion, correction and classification. The FGrn paradigm provides great flexibility and modularity and appears as a promising candidate for building deep networks: the system can be easily extended by introducing new and different (in cardinality and in type) variables. Prior knowledge, or supervised information, can be introduced at different scales. The FGrn paradigm provides a handy way for building all kinds of architectures by interconnecting only three types of units: Single Input Single Output (SISO) blocks, Sources and Replicators. The network is designed like a circuit diagram and the belief messages flow bidirectionally in the whole system. The learning algorithms operate only locally within each block. The framework is demonstrated in this paper in a three-layer structure applied to images extracted from a standard data set.
[ "['Amedeo Buonanno' 'Francesco A. N. Palmieri']", "Amedeo Buonanno and Francesco A.N. Palmieri" ]
stat.ML cs.CV cs.LG
null
1502.04502
null
null
http://arxiv.org/pdf/1502.04502v1
2015-02-16T11:50:42Z
2015-02-16T11:50:42Z
Clustering by Descending to the Nearest Neighbor in the Delaunay Graph Space
In our previous works, we proposed a physically-inspired rule to organize the data points into an in-tree (IT) structure, in which some undesired edges are allowed to occur. By removing those undesired or redundant edges, this IT structure is divided into several separate parts, each representing one cluster. In this work, we seek to prevent the undesired edges from arising at the source. Before using the physically-inspired rule, data points are at first organized into a proximity graph which restricts each point to select the optimal directed neighbor just among its neighbors. Consequently, separated in-trees or clusters automatically arise, without redundant edges requiring to be removed.
[ "Teng Qiu, Yongjie Li", "['Teng Qiu' 'Yongjie Li']" ]
cs.LG
null
1502.04585
null
null
http://arxiv.org/pdf/1502.04585v1
2015-02-16T15:53:03Z
2015-02-16T15:53:03Z
The Ladder: A Reliable Leaderboard for Machine Learning Competitions
The organizer of a machine learning competition faces the problem of maintaining an accurate leaderboard that faithfully represents the quality of the best submission of each competing team. What makes this estimation problem particularly challenging is its sequential and adaptive nature. As participants are allowed to repeatedly evaluate their submissions on the leaderboard, they may begin to overfit to the holdout data that supports the leaderboard. Few theoretical results give actionable advice on how to design a reliable leaderboard. Existing approaches therefore often resort to poorly understood heuristics such as limiting the bit precision of answers and the rate of re-submission. In this work, we introduce a notion of "leaderboard accuracy" tailored to the format of a competition. We introduce a natural algorithm called "the Ladder" and demonstrate that it simultaneously supports strong theoretical guarantees in a fully adaptive model of estimation, withstands practical adversarial attacks, and achieves high utility on real submission files from an actual competition hosted by Kaggle. Notably, we are able to sidestep a powerful recent hardness result for adaptive risk estimation that rules out algorithms such as ours under a seemingly very similar notion of accuracy. On a practical note, we provide a completely parameter-free variant of our algorithm that can be deployed in a real competition with no tuning required whatsoever.
[ "['Avrim Blum' 'Moritz Hardt']", "Avrim Blum and Moritz Hardt" ]
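A hedged sketch of one plausible reading of the Ladder mechanism above; the abstract does not spell out the rule, so the step-size threshold and rounding below are assumptions, and the paper also provides a parameter-free variant that removes the eta parameter:

```python
# Assumed Ladder-style rule: a submission's leaderboard score changes only
# when the holdout loss improves on the best released score by more than a
# step size eta, and the released value is rounded to that precision, which
# limits how much holdout information each query leaks.
class Ladder:
    def __init__(self, eta=0.01):
        self.eta = eta
        self.best = float("inf")         # best (lowest) loss released so far

    def submit(self, holdout_loss):
        if holdout_loss < self.best - self.eta:
            # round the newly released score to the step size
            self.best = round(holdout_loss / self.eta) * self.eta
        return self.best

board = Ladder(eta=0.01)
for loss in [0.40, 0.395, 0.37, 0.372, 0.31]:
    print(board.submit(loss))            # only genuine improvements change the score
```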
cs.LG
null
1502.04617
null
null
http://arxiv.org/pdf/1502.04617v1
2015-02-16T16:41:26Z
2015-02-16T16:41:26Z
Deep Transform: Error Correction via Probabilistic Re-Synthesis
Errors in data are usually unwelcome and so some means to correct them is useful. However, it is difficult to define, detect or correct errors in an unsupervised way. Here, we train a deep neural network to re-synthesize its inputs at its output layer for a given class of data. We then exploit the fact that this abstract transformation, which we call a deep transform (DT), inherently rejects information (errors) existing outside of the abstract feature space. Using the DT to perform probabilistic re-synthesis, we demonstrate the recovery of data that has been subject to extreme degradation.
[ "Andrew J.R. Simpson", "['Andrew J. R. Simpson']" ]
stat.ML cs.LG stat.CO
null
1502.04622
null
null
http://arxiv.org/pdf/1502.04622v1
2015-02-16T16:48:30Z
2015-02-16T16:48:30Z
Particle Gibbs for Bayesian Additive Regression Trees
Additive regression trees are flexible non-parametric models and popular off-the-shelf tools for real-world non-linear regression. In application domains, such as bioinformatics, where there is also demand for probabilistic predictions with measures of uncertainty, the Bayesian additive regression trees (BART) model, introduced by Chipman et al. (2010), is increasingly popular. As data sets have grown in size, however, the standard Metropolis-Hastings algorithms used to perform inference in BART are proving inadequate. In particular, these Markov chains make local changes to the trees and suffer from slow mixing when the data are high-dimensional or the best fitting trees are more than a few layers deep. We present a novel sampler for BART based on the Particle Gibbs (PG) algorithm (Andrieu et al., 2010) and a top-down particle filtering algorithm for Bayesian decision trees (Lakshminarayanan et al., 2013). Rather than making local changes to individual trees, the PG sampler proposes a complete tree to fit the residual. Experiments show that the PG sampler outperforms existing samplers in many settings.
[ "Balaji Lakshminarayanan, Daniel M. Roy and Yee Whye Teh", "['Balaji Lakshminarayanan' 'Daniel M. Roy' 'Yee Whye Teh']" ]
cs.CV cs.LG cs.NE
null
1502.04623
null
null
http://arxiv.org/pdf/1502.04623v2
2015-05-20T15:29:42Z
2015-02-16T16:48:56Z
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
[ "Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan\n Wierstra", "['Karol Gregor' 'Ivo Danihelka' 'Alex Graves' 'Danilo Jimenez Rezende'\n 'Daan Wierstra']" ]
math.OC cs.LG stat.ML
null
1502.04635
null
null
http://arxiv.org/pdf/1502.04635v2
2015-08-29T19:50:20Z
2015-02-16T17:17:24Z
Parameter estimation in softmax decision-making models with linear objective functions
With an eye towards human-centered automation, we contribute to the development of a systematic means to infer features of human decision-making from behavioral data. Motivated by the common use of softmax selection in models of human decision-making, we study the maximum likelihood parameter estimation problem for softmax decision-making models with linear objective functions. We present conditions under which the likelihood function is convex. These allow us to provide sufficient conditions for convergence of the resulting maximum likelihood estimator and to construct its asymptotic distribution. In the case of models with nonlinear objective functions, we show how the estimator can be applied by linearizing about a nominal parameter value. We apply the estimator to fit the stochastic UCL (Upper Credible Limit) model of human decision-making to human subject data. We show statistically significant differences in behavior across related, but distinct, tasks.
[ "['Paul Reverdy' 'Naomi E. Leonard']", "Paul Reverdy and Naomi E. Leonard" ]
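A minimal sketch of the estimation problem studied above: maximum-likelihood fitting of a softmax choice model with linear objective values on synthetic data (numpy only; the paper's contribution is the convexity and consistency analysis, not this particular fitting loop):

```python
import numpy as np

# Softmax (multinomial logit) choice model with option values u_k = x_k^T theta;
# the log-likelihood is concave in theta, so plain gradient ascent suffices here.
rng = np.random.default_rng(0)
theta_true = np.array([1.5, -0.7])
X = rng.normal(size=(500, 3, 2))              # 500 trials, 3 options, 2 features

def probs(theta):
    u = X @ theta                             # option values, shape (500, 3)
    u -= u.max(axis=1, keepdims=True)         # numerical stability
    e = np.exp(u)
    return e / e.sum(axis=1, keepdims=True)

choices = np.array([rng.choice(3, p=p) for p in probs(theta_true)])

theta = np.zeros(2)
for _ in range(300):                          # gradient ascent on the log-likelihood
    p = probs(theta)
    resid = -p
    resid[np.arange(500), choices] += 1.0     # one-hot(choice) - p
    theta += 0.5 * np.einsum('nk,nkd->d', resid, X) / 500
print(theta)                                  # approaches theta_true up to sampling noise
```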
cs.LG cs.CV cs.NE
null
1502.04681
null
null
http://arxiv.org/pdf/1502.04681v3
2016-01-04T00:42:07Z
2015-02-16T20:00:07Z
Unsupervised Learning of Video Representations using LSTMs
We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations ("percepts") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.
[ "['Nitish Srivastava' 'Elman Mansimov' 'Ruslan Salakhutdinov']", "Nitish Srivastava, Elman Mansimov and Ruslan Salakhutdinov" ]
cs.LG cs.NA stat.ML
null
1502.04689
null
null
http://arxiv.org/pdf/1502.04689v2
2015-02-27T19:31:25Z
2015-02-16T20:37:35Z
Exact tensor completion using t-SVD
In this paper we focus on the problem of completion of multidimensional arrays (also referred to as tensors) from limited sampling. Our approach is based on a recently proposed tensor-Singular Value Decomposition (t-SVD) [1]. Using this factorization one can derive notion of tensor rank, referred to as the tensor tubal rank, which has optimality properties similar to that of matrix rank derived from SVD. As shown in [2] some multidimensional data, such as panning video sequences exhibit low tensor tubal rank and we look at the problem of completing such data under random sampling of the data cube. We show that by solving a convex optimization problem, which minimizes the tensor nuclear norm obtained as the convex relaxation of tensor tubal rank, one can guarantee recovery with overwhelming probability as long as samples in proportion to the degrees of freedom in t-SVD are observed. In this sense our results are order-wise optimal. The conditions under which this result holds are very similar to the incoherency conditions for the matrix completion, albeit we define incoherency under the algebraic set-up of t-SVD. We show the performance of the algorithm on some real data sets and compare it with other existing approaches based on tensor flattening and Tucker decomposition.
[ "['Zemin Zhang' 'Shuchin Aeron']", "Zemin Zhang, Shuchin Aeron" ]
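For reference, the t-product underlying the t-SVD used above can be computed with an FFT along the third mode; the sketch below is a standard construction with illustrative shapes, not the paper's completion algorithm:

```python
import numpy as np

# t-product of two 3-way tensors: FFT along the third mode, slice-wise matrix
# products in the Fourier domain, then inverse FFT.
def t_product(A, B):
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)   # frontal-slice-wise products
    return np.real(np.fft.ifft(Cf, axis=2))

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3, 5))
B = rng.normal(size=(3, 2, 5))
print(t_product(A, B).shape)                 # (4, 2, 5)
```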
stat.ML cs.CV cs.LG
null
1502.04837
null
null
http://arxiv.org/pdf/1502.04837v2
2015-03-18T12:17:46Z
2015-02-17T09:27:03Z
Nonparametric Nearest Neighbor Descent Clustering based on Delaunay Triangulation
In our physically inspired in-tree (IT) based clustering algorithm and the series that followed it, there is only one free parameter involved in computing the potential value of each point. In this work, based on the Delaunay Triangulation or its dual Voronoi tessellation, we propose a nonparametric process to compute potential values from local information. This computation, though nonparametric, is relatively rough, and consequently, many local extreme points are generated. However, unlike gradient-based methods, our IT-based methods are generally insensitive to such local extremes. This further demonstrates the superiority of both the parametric (previous) and nonparametric (this work) IT-based methods.
[ "Teng Qiu, Yongjie Li", "['Teng Qiu' 'Yongjie Li']" ]
cs.LG
null
1502.04843
null
null
http://arxiv.org/pdf/1502.04843v2
2015-06-09T10:50:41Z
2015-02-17T10:08:48Z
Generalized Gradient Learning on Time Series under Elastic Transformations
The majority of machine learning algorithms assume that objects are represented as vectors. But often the objects we want to learn from are more naturally represented by other data structures, such as sequences and time series. For these representations, many standard learning algorithms are unavailable. We generalize gradient-based learning algorithms to time series under dynamic time warping. To this end, we introduce elastic functions, which extend functions on time series to matrix spaces. Necessary conditions are presented under which generalized gradient learning on time series is consistent. We indicate how the results carry over to arbitrary elastic distance functions and to sequences consisting of symbolic elements. Specifically, four linear classifiers are extended to time series under dynamic time warping and applied to benchmark datasets. Results indicate that generalized gradient learning via elastic functions has the potential to complement the state of the art in statistical pattern recognition on time series.
[ "['Brijnesh Jain']", "Brijnesh Jain" ]
cs.LG stat.ML
null
1502.04868
null
null
http://arxiv.org/pdf/1502.04868v2
2015-02-18T09:33:34Z
2015-02-17T11:59:44Z
Proper Complex Gaussian Processes for Regression
Complex-valued signals are used in the modeling of many systems in engineering and science, and are hence of fundamental interest. Often, random complex-valued signals are assumed to be proper; a proper complex random variable or process is uncorrelated with its complex conjugate. This assumption is a good model of the underlying physics in many problems, and it simplifies the computations. While linear processing and neural networks have been widely studied for these signals, the development of complex-valued nonlinear kernel approaches remains an open problem. In this paper we propose Gaussian processes for regression as a framework to develop 1) a solution for proper complex-valued kernel regression and 2) the design of the reproducing kernel for complex-valued inputs, using the convolutional approach for cross-covariances. In this design we take care to preserve, in the complex domain, the measure of similarity between nearby inputs. The hyperparameters of the kernel are learned by maximizing the marginal likelihood using Wirtinger derivatives. In addition, the approach is connected to the multiple-output learning scenario. In the experiments, we first solve a proper complex Gaussian process regression problem in which the cross-covariance does not vanish, a challenging scenario when dealing with proper complex signals. We then successfully use these novel results to solve problems previously proposed in the literature as benchmarks, reporting a remarkable improvement in the estimation error.
[ "['Rafael Boloix-Tortosa' 'F. Javier Payán-Somet' 'Eva Arias-de-Reyna'\n 'Juan José Murillo-Fuentes']", "Rafael Boloix-Tortosa, F. Javier Pay\\'an-Somet, Eva Arias-de-Reyna and\n Juan Jos\\'e Murillo-Fuentes" ]
cs.AI cs.LG cs.SI
null
1502.04956
null
null
http://arxiv.org/pdf/1502.04956v2
2016-12-27T15:01:40Z
2015-02-17T16:49:23Z
The Linearization of Belief Propagation on Pairwise Markov Networks
Belief Propagation (BP) is a widely used approximation for exact probabilistic inference in graphical models, such as Markov Random Fields (MRFs). In graphs with cycles, however, no exact convergence guarantees for BP are known in general. For the case when all edges in the MRF carry the same symmetric, doubly stochastic potential, recent works have proposed to approximate BP by linearizing the update equations around default values, which was shown to work well for the problem of node classification. The present paper generalizes all prior work and derives an approach that approximates loopy BP on any pairwise MRF by reducing it to the problem of solving a linear equation system. This approach combines exact convergence guarantees and a fast matrix implementation with the ability to model heterogeneous networks. Experiments on synthetic graphs with planted edge potentials show that the linearization has labeling accuracy comparable to BP for graphs with weak potentials, while speeding up inference by orders of magnitude.
[ "['Wolfgang Gatterbauer']", "Wolfgang Gatterbauer" ]
stat.ML cs.DS cs.IT cs.LG math.IT
null
1502.05023
null
null
http://arxiv.org/pdf/1502.05023v2
2015-02-19T21:05:53Z
2015-02-17T20:23:13Z
A New Sampling Technique for Tensors
In this paper we propose new techniques to sample arbitrary third-order tensors, with an objective of speeding up tensor algorithms that have recently gained popularity in machine learning. Our main contribution is a new way to select, in a biased random way, only $O(n^{1.5}/\epsilon^2)$ of the possible $n^3$ elements while still achieving each of the three goals: \\ {\em (a) tensor sparsification}: for a tensor that has to be formed from arbitrary samples, compute very few elements to get a good spectral approximation, and for arbitrary orthogonal tensors {\em (b) tensor completion:} recover an exactly low-rank tensor from a small number of samples via alternating least squares, or {\em (c) tensor factorization:} approximating factors of a low-rank tensor corrupted by noise. \\ Our sampling can be used along with existing tensor-based algorithms to speed them up, removing the computational bottleneck in these methods.
[ "['Srinadh Bhojanapalli' 'Sujay Sanghavi']", "Srinadh Bhojanapalli, Sujay Sanghavi" ]
cs.LG cs.GT
null
1502.05056
null
null
http://arxiv.org/pdf/1502.05056v1
2015-02-17T21:02:37Z
2015-02-17T21:02:37Z
On Sex, Evolution, and the Multiplicative Weights Update Algorithm
We consider a recent innovative theory by Chastain et al. on the role of sex in evolution [PNAS'14]. In short, the theory suggests that the evolutionary process of gene recombination implements the celebrated multiplicative weights updates algorithm (MWUA). They prove that the population dynamics induced by sexual reproduction can be precisely modeled by genes that use MWUA as their learning strategy in a particular coordination game. The result holds in the environments of \emph{weak selection}, under the assumption that the population frequencies remain a product distribution. We revisit the theory, eliminating both the requirement of weak selection and any assumption on the distribution of the population. Removing the assumption of product distributions is crucial, since as we show, this assumption is inconsistent with the population dynamics. We show that the marginal allele distributions induced by the population dynamics precisely match the marginals induced by a multiplicative weights update algorithm in this general setting, thereby affirming and substantially generalizing these earlier results. We further revise the implications for convergence and utility or fitness guarantees in coordination games. In contrast to the claim of Chastain et al.[PNAS'14], we conclude that the sexual evolutionary dynamics does not entail any property of the population distribution, beyond those already implied by convergence.
[ "Reshef Meir and David Parkes", "['Reshef Meir' 'David Parkes']" ]
cs.LG
null
1502.05090
null
null
http://arxiv.org/pdf/1502.05090v1
2015-02-18T00:27:39Z
2015-02-18T00:27:39Z
Real time clustering of time series using triangular potentials
Motivated by the problem of computing investment portfolio weightings we investigate various methods of clustering as alternatives to traditional mean-variance approaches. Such methods can have significant benefits from a practical point of view since they remove the need to invert a sample covariance matrix, which can suffer from estimation error and will almost certainly be non-stationary. The general idea is to find groups of assets which share similar return characteristics over time and treat each group as a single composite asset. We then apply inverse volatility weightings to these new composite assets. In the course of our investigation we devise a method of clustering based on triangular potentials and we present associated theoretical results as well as various examples based on synthetic data.
[ "['Aldo Pacchiano' 'Oliver Williams']", "Aldo Pacchiano, Oliver Williams" ]
cs.LG
null
1502.05111
null
null
http://arxiv.org/pdf/1502.05111v1
2015-02-18T04:04:45Z
2015-02-18T04:04:45Z
CSAL: Self-adaptive Labeling based Clustering Integrating Supervised Learning on Unlabeled Data
Supervised classification approaches can predict labels for unknown data thanks to the supervised training process, and the success of classification depends heavily on the labeled training data. In contrast, clustering is effective in revealing the aggregation properties of unlabeled data, but the performance of most clustering methods is limited by the absence of labeled data. In real applications, however, it is time-consuming and sometimes impossible to obtain labeled data. The combination of clustering and classification is therefore a promising and active approach that can largely improve performance. In this paper, we propose an innovative and effective clustering framework based on self-adaptive labeling (CSAL) which integrates clustering and classification on unlabeled data. Clustering is first employed to partition the data, and a certain proportion of the clustered data is selected by our proposed labeling approach for training classifiers. In order to refine the trained classifiers, an iterative Expectation-Maximization process is built into the proposed clustering framework CSAL. Experiments are conducted on publicly available data sets to test different combinations of clustering algorithms and classification models, as well as various training data labeling methods. The experimental results show that our approach, together with the self-adaptive labeling method, outperforms other methods.
[ "Fangfang Li, Guandong Xu, Longbing Cao", "['Fangfang Li' 'Guandong Xu' 'Longbing Cao']" ]
cs.LG cs.NE
10.1109/TKDE.2016.2598171
1502.05113
null
null
http://arxiv.org/abs/1502.05113v1
2015-02-18T04:25:23Z
2015-02-18T04:25:23Z
Temporal Embedding in Convolutional Neural Networks for Robust Learning of Abstract Snippets
The prediction of periodic time series remains challenging due to various types of data distortions and misalignments. Here, we propose a novel model called the Temporal embedding-enhanced convolutional neural Network (TeNet) to learn repeatedly-occurring-yet-hidden structural elements in periodic time series, called abstract snippets, for predicting future changes. Our model uses convolutional neural networks and embeds a time series with its potential neighbors in the temporal domain to align it to the dominant patterns in the dataset. The model is robust to distortions and misalignments in the temporal domain and demonstrates strong prediction power for periodic time series. We conduct extensive experiments and find that the proposed model shows significant and consistent advantages over existing methods on a variety of data modalities, ranging from human mobility to household power consumption records. Empirical results indicate that the model is robust to various factors such as the number of samples, the variance of the data, and its numerical range. The experiments also verify that the intuition behind the model generalizes to multiple data types and applications and promises significant improvements in prediction performance across the datasets studied.
[ "['Jiajun Liu' 'Kun Zhao' 'Brano Kusy' 'Ji-rong Wen' 'Raja Jurdak']", "Jiajun Liu, Kun Zhao, Brano Kusy, Ji-rong Wen, Raja Jurdak" ]
cs.LG
null
1502.05134
null
null
http://arxiv.org/pdf/1502.05134v2
2015-08-18T05:20:59Z
2015-02-18T06:55:07Z
Supervised cross-modal factor analysis for multiple modal data classification
In this paper we study the problem of learning from multimodal data for the purpose of document classification. In this problem, each document is composed of two different modalities of data, i.e., an image and a text. Cross-modal factor analysis (CFA) has been proposed to project the two modalities of data to a shared data space, so that the classification of an image or a text can be performed directly in this space. A disadvantage of CFA is that it ignores the supervision information. In this paper, we improve CFA by incorporating the supervision information to represent and classify both the image and text modalities of documents. We project both image and text data to a shared data space by factor analysis, and then train a class label predictor in the shared space to make use of the class label information. The factor analysis parameters and the predictor parameters are learned jointly by solving one single objective function. With this objective function, we minimize the distance between the projections of the image and text of the same document, and the classification error of the projection measured by a hinge loss function. The objective function is optimized by an alternating optimization strategy in an iterative algorithm. Experiments on two different multimodal document data sets show the advantage of the proposed algorithm over other CFA methods.
[ "Jingbin Wang, Yihua Zhou, Kanghong Duan, Jim Jing-Yan Wang, Halima\n Bensmail", "['Jingbin Wang' 'Yihua Zhou' 'Kanghong Duan' 'Jim Jing-Yan Wang'\n 'Halima Bensmail']" ]
cs.CY cs.LG
null
1502.05167
null
null
http://arxiv.org/pdf/1502.05167v1
2015-02-18T09:52:55Z
2015-02-18T09:52:55Z
Dengue disease prediction using weka data mining tool
Dengue is a life-threatening disease prevalent in several developed as well as developing countries, such as India. In this paper we discuss various data mining algorithms that have been utilized for dengue disease prediction. Data mining is a well-known technique used by health organizations for the classification of diseases such as dengue, diabetes and cancer in bioinformatics research. In the proposed approach we use WEKA with 10-fold cross validation to evaluate the data and compare results. WEKA has an extensive collection of machine learning and data mining algorithms. We first classify the dengue data set and then compare different data mining techniques in WEKA through the Explorer, Knowledge Flow and Experimenter interfaces. To validate our approach we use a dengue dataset with 108 instances, of which WEKA used 99 rows and 18 attributes, to determine the disease prediction and the accuracy of different classification algorithms and to identify the best performer. The main objective of this paper is to classify the data, assist users in extracting useful information from it, and easily identify a suitable algorithm for building an accurate predictive model. From the findings of this paper we conclude that Na\"ive Bayes and J48 give the best classification accuracy: they achieved a maximum accuracy of 100% with 99 correctly classified instances, a maximum ROC of 1, and the lowest mean absolute error, and they required the least time to build the model in the Explorer and Knowledge Flow experiments.
[ "Kashish Ara Shakil, Shadma Anis and Mansaf Alam", "['Kashish Ara Shakil' 'Shadma Anis' 'Mansaf Alam']" ]
cs.LG cs.NE
null
1502.05213
null
null
http://arxiv.org/pdf/1502.05213v1
2015-02-18T13:15:13Z
2015-02-18T13:15:13Z
F0 Modeling In Hmm-Based Speech Synthesis System Using Deep Belief Network
In recent years, multilayer perceptrons (MLPs) with many hidden layers, i.e. Deep Neural Networks (DNNs), have performed surprisingly well in many speech tasks such as speech recognition, speaker verification and speech synthesis. In the context of F0 modeling, however, these techniques have not been exploited properly. In this paper, a Deep Belief Network (DBN), a member of the DNN family, is employed to model the F0 contour of speech synthesized by an HMM-based speech synthesis system. The experiments were carried out on the Bengali language. Several DBN-DNN architectures ranging from four to seven hidden layers and up to 200 hidden units per hidden layer are presented and evaluated. The results are compared against the clustering tree techniques popularly used in statistical parametric speech synthesis. We show that from textual inputs the DBN-DNN learns a high-level structure which in turn improves the F0 contour in terms of objective and subjective tests.
[ "Sankar Mukherjee, Shyamal Kumar Das Mandal", "['Sankar Mukherjee' 'Shyamal Kumar Das Mandal']" ]
cs.DS cs.DM cs.LG
null
1502.05375
null
null
http://arxiv.org/pdf/1502.05375v1
2015-02-18T20:36:19Z
2015-02-18T20:36:19Z
On learning k-parities with and without noise
We first consider the problem of learning $k$-parities in the on-line mistake-bound model: given a hidden vector $x \in \{0,1\}^n$ with $|x|=k$ and a sequence of "questions" $a_1, a_2, \ldots \in \{0,1\}^n$, where the algorithm must reply to each question with $\langle a_i, x \rangle \pmod 2$, what is the best tradeoff between the number of mistakes made by the algorithm and its time complexity? We improve the previous best result of Buhrman et al. by an $\exp(k)$ factor in the time complexity. Second, we consider the problem of learning $k$-parities in the presence of classification noise of rate $\eta \in (0,1/2)$. A polynomial time algorithm for this problem (when $\eta > 0$ and $k = \omega(1)$) is a longstanding challenge in learning theory. Grigorescu et al. showed an algorithm running in time ${n \choose k/2}^{1 + 4\eta^2 +o(1)}$. Note that this algorithm inherently requires time ${n \choose k/2}$ even when the noise rate $\eta$ is polynomially small. We observe that for sufficiently small noise rate, it is possible to break the ${n \choose k/2}$ barrier. In particular, if for some function $f(n) = \omega(1)$ and $\alpha \in [1/2, 1)$, $k = n/f(n)$ and $\eta = o(f(n)^{- \alpha}/\log n)$, then there is an algorithm for the problem with running time $\mathrm{poly}(n)\cdot {n \choose k}^{1-\alpha} \cdot e^{-k/4.01}$.
[ "Arnab Bhattacharyya, Ameet Gadekar, Ninad Rajgopal", "['Arnab Bhattacharyya' 'Ameet Gadekar' 'Ninad Rajgopal']" ]
cs.LG cs.CL cs.IR
10.1145/3106235
1502.05472
null
null
http://arxiv.org/abs/1502.05472v2
2015-03-04T08:08:49Z
2015-02-19T06:04:40Z
On the Effects of Low-Quality Training Data on Information Extraction from Clinical Reports
In the last five years there has been a flurry of work on information extraction from clinical documents, i.e., on algorithms capable of extracting, from the informal and unstructured texts that are generated during everyday clinical practice, mentions of concepts relevant to such practice. Most of this literature is about methods based on supervised learning, i.e., methods for training an information extraction system from manually annotated examples. While a lot of work has been devoted to devising learning methods that generate more and more accurate information extractors, no work has been devoted to investigating the effect of the quality of training data on the learning process. Low quality in training data often derives from the fact that the person who has annotated the data is different from the one against whose judgment the automatically annotated data must be evaluated. In this paper we test the impact of such data quality issues on the accuracy of information extraction systems as applied to the clinical domain. We do this by comparing the accuracy deriving from training data annotated by the authoritative coder (i.e., the one who has also annotated the test data, and by whose judgment we must abide), with the accuracy deriving from training data annotated by a different coder. The results indicate that, although the disagreement between the two coders (as measured on the training set) is substantial, the difference is (surprisingly enough) not always statistically significant.
[ "['Diego Marcheggiani' 'Fabrizio Sebastiani']", "Diego Marcheggiani and Fabrizio Sebastiani" ]
cs.LG
null
1502.05477
null
null
http://arxiv.org/pdf/1502.05477v5
2017-04-20T18:04:12Z
2015-02-19T06:44:25Z
Trust Region Policy Optimization
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
[ "['John Schulman' 'Sergey Levine' 'Philipp Moritz' 'Michael I. Jordan'\n 'Pieter Abbeel']", "John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan,\n Pieter Abbeel" ]