categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
cs.DS cs.LG cs.NA
null
1503.06394
null
null
http://arxiv.org/pdf/1503.06394v1
2015-03-22T06:55:12Z
2015-03-22T06:55:12Z
Large-scale Log-determinant Computation through Stochastic Chebyshev Expansions
Logarithms of determinants of large positive definite matrices appear ubiquitously in machine learning applications including Gaussian graphical and Gaussian process models, partition functions of discrete graphical models, minimum-volume ellipsoids, metric learning and kernel learning. Log-determinant computation involves the Cholesky decomposition, whose cost is cubic in the number of variables, i.e., the matrix dimension, which makes it prohibitive for large-scale applications. We propose a linear-time randomized algorithm to approximate log-determinants for very large-scale positive definite and general non-singular matrices using a stochastic trace approximation, called the Hutchinson method, coupled with Chebyshev polynomial expansions that both rely on efficient matrix-vector multiplications. We establish rigorous additive and multiplicative approximation error bounds depending on the condition number of the input matrix. In our experiments, the proposed algorithm provides very high accuracy solutions orders of magnitude faster than the Cholesky decomposition and Schur completion, and enables us to compute log-determinants of matrices involving tens of millions of variables.
[ "Insu Han, Dmitry Malioutov, Jinwoo Shin", "['Insu Han' 'Dmitry Malioutov' 'Jinwoo Shin']" ]
cs.IR cs.CL cs.LG cs.NE stat.CO stat.ML
null
1503.06410
null
null
http://arxiv.org/pdf/1503.06410v2
2019-09-12T05:42:44Z
2015-03-22T11:32:34Z
What the F-measure doesn't measure: Features, Flaws, Fallacies and Fixes
The F-measure or F-score is one of the most commonly used single number measures in Information Retrieval, Natural Language Processing and Machine Learning, but it is based on a mistake, and the flawed assumptions render it unsuitable for use in most contexts! Fortunately, there are better alternatives.
[ "['David M. W. Powers']", "David M. W. Powers" ]
stat.ML cs.LG
null
1503.06429
null
null
http://arxiv.org/pdf/1503.06429v1
2015-03-22T13:55:10Z
2015-03-22T13:55:10Z
Asymmetric Distributions from Constrained Mixtures
This paper introduces constrained mixtures for continuous distributions, characterized by a mixture of distributions where each distribution has a shape similar to the base distribution and disjoint domains. This new concept is used to create generalized asymmetric versions of the Laplace and normal distributions, which are shown to define exponential families, with known conjugate priors, and to have maximum likelihood estimates for the original parameters, with known closed-form expressions. The asymmetric and symmetric normal distributions are compared in a linear regression example, showing that the asymmetric version performs at least as well as the symmetric one, and in a real world time-series problem, where a hidden Markov model is used to fit a stock index, indicating that the asymmetric version provides higher likelihood and may learn distribution models over states and transition distributions with considerably less entropy.
[ "['Conrado S. Miranda' 'Fernando J. Von Zuben']", "Conrado S. Miranda and Fernando J. Von Zuben" ]
cs.LG cs.NE stat.ML
null
1503.06452
null
null
http://arxiv.org/pdf/1503.06452v1
2015-03-22T18:22:28Z
2015-03-22T18:22:28Z
Unsupervised model compression for multilayer bootstrap networks
Recently, the multilayer bootstrap network (MBN) has demonstrated promising performance in unsupervised dimensionality reduction. It can learn compact representations on standard data sets such as MNIST and RCV1. However, as a bootstrap method, the prediction complexity of MBN is high. In this paper, we propose an unsupervised model compression framework for this general problem of unsupervised bootstrap methods. The framework compresses a large unsupervised bootstrap model into a small model by taking the bootstrap model and its application together as a black box and learning a mapping function from the input of the bootstrap model to the output of the application by a supervised learner. To specialize the framework, we propose a new technique, named compressive MBN. It takes MBN as the unsupervised bootstrap model and a deep neural network (DNN) as the supervised learner. Our initial result on MNIST shows that compressive MBN not only maintains the high prediction accuracy of MBN but is also thousands of times faster than MBN at the prediction stage. Our result suggests that the new technique combines the effectiveness of MBN at unsupervised learning with the effectiveness and efficiency of DNNs at supervised learning, yielding an effective and efficient unsupervised learner.
[ "Xiao-Lei Zhang", "['Xiao-Lei Zhang']" ]
cs.LG cs.CR cs.SY
10.1109/TNNLS.2015.2404803
1503.06468
null
null
http://arxiv.org/abs/1503.06468v1
2015-03-22T19:38:45Z
2015-03-22T19:38:45Z
Machine Learning Methods for Attack Detection in the Smart Grid
Attack detection problems in the smart grid are posed as statistical learning problems for different attack scenarios in which the measurements are observed in batch or online settings. In this approach, machine learning algorithms are used to classify measurements as being either secure or attacked. An attack detection framework is provided to exploit any available prior knowledge about the system and surmount constraints arising from the sparse structure of the problem in the proposed approach. Well-known batch and online learning algorithms (supervised and semi-supervised) are employed with decision- and feature-level fusion to model the attack detection problem. The relationships between the statistical and geometric properties of attack vectors employed in the attack scenarios and the learning algorithms are analyzed to detect unobservable attacks using statistical learning methods. The proposed algorithms are examined on various IEEE test systems. Experimental analyses show that machine learning algorithms can detect attacks with higher performance than attack detection algorithms that employ state vector estimation methods within the proposed framework.
[ "['Mete Ozay' 'Inaki Esnaola' 'Fatos T. Yarman Vural' 'Sanjeev R. Kulkarni'\n 'H. Vincent Poor']", "Mete Ozay, Inaki Esnaola, Fatos T. Yarman Vural, Sanjeev R. Kulkarni,\n H. Vincent Poor" ]
cs.DB cs.AI cs.DS cs.IR cs.LG
10.14569/IJACSA.2015.060313
1503.06483
null
null
http://arxiv.org/abs/1503.06483v1
2015-03-22T21:46:12Z
2015-03-22T21:46:12Z
Construction of FuzzyFind Dictionary using Golay Coding Transformation for Searching Applications
Searching through a large volume of data is critical for companies, scientists, and search-engine applications because of its time and memory complexity. In this paper, a new technique for generating a FuzzyFind Dictionary for text mining is introduced. Words over the English alphabet are mapped into 23-bit vectors (or longer vectors, using multiple FuzzyFind Dictionaries) that reflect the presence or absence of particular letters. This representation preserves the closeness of word distortions, in the sense that distorted words map to binary vectors within a Hamming distance of 2 of each other. The paper describes the Golay Coding Transformation Hash Table and how it can be applied to a FuzzyFind Dictionary as a new technology for searching through big data. The method generates the dictionary in linear time, accesses entries in constant time, and updates the dictionary with new data sets in time linear in the number of new data points. The technique operates on 23-bit segments encoding English letters and can also work with more segments used as a reference table.
[ "Kamran Kowsari, Maryam Yammahi, Nima Bari, Roman Vichr, Faisal Alsaby,\n Simon Y. Berkovich", "['Kamran Kowsari' 'Maryam Yammahi' 'Nima Bari' 'Roman Vichr'\n 'Faisal Alsaby' 'Simon Y. Berkovich']" ]
cs.LG
null
1503.06549
null
null
http://arxiv.org/pdf/1503.06549v1
2015-03-23T08:19:17Z
2015-03-23T08:19:17Z
Optimum Reject Options for Prototype-based Classification
We analyse optimum reject strategies for prototype-based classifiers and real-valued rejection measures, using the distance of a data point to the closest prototype or probabilistic counterparts. We compare reject schemes with global thresholds, and local thresholds for the Voronoi cells of the classifier. For the latter, we develop a polynomial-time algorithm to compute optimum thresholds based on a dynamic programming scheme, and we propose an intuitive linear time, memory efficient approximation thereof with competitive accuracy. Evaluating the performance in various benchmarks, we conclude that local reject options are beneficial in particular for simple prototype-based classifiers, while the improvement is less pronounced for advanced models. For the latter, an accuracy-reject curve which is comparable to support vector machine classifiers with state of the art reject options can be reached.
[ "['Lydia Fischer' 'Barbara Hammer' 'Heiko Wersing']", "Lydia Fischer, Barbara Hammer and Heiko Wersing" ]
cs.LG cs.DS stat.ML
null
1503.06567
null
null
http://arxiv.org/pdf/1503.06567v2
2015-08-22T11:24:43Z
2015-03-23T09:20:39Z
On some provably correct cases of variational inference for topic models
Variational inference is a very efficient and popular heuristic used in various forms in the context of latent variable models. It is closely related to Expectation Maximization (EM), and is applied when exact EM is computationally infeasible. Despite being immensely popular, current theoretical understanding of the effectiveness of variational inference based algorithms is very limited. In this work we provide the first analysis of instances where variational inference algorithms converge to the global optimum, in the setting of topic models. More specifically, we show that variational inference provably learns the optimal parameters of a topic model under natural assumptions on the topic-word matrix and the topic priors. The properties that the topic word matrix must satisfy in our setting are related to the topic expansion assumption introduced in (Anandkumar et al., 2013), as well as the anchor words assumption in (Arora et al., 2012c). The assumptions on the topic priors are related to the well known Dirichlet prior, introduced to the area of topic modeling by (Blei et al., 2003). It is well known that initialization plays a crucial role in how well variational based algorithms perform in practice. The initializations that we use are fairly natural. One of them is similar to what is currently used in LDA-c, the most popular implementation of variational inference for topic models. The other one is an overlapping clustering algorithm, inspired by a work by (Arora et al., 2014) on dictionary learning, which is very simple and efficient. While our primary goal is to provide insights into when variational inference might work in practice, the multiplicative, rather than the additive nature of the variational inference updates forces us to use fairly non-standard proof arguments, which we believe will be of general interest.
[ "['Pranjal Awasthi' 'Andrej Risteski']", "Pranjal Awasthi and Andrej Risteski" ]
cs.LG cs.AI cs.CC
null
1503.06572
null
null
http://arxiv.org/pdf/1503.06572v1
2015-03-23T09:37:33Z
2015-03-23T09:37:33Z
A Machine Learning Approach to Predicting the Smoothed Complexity of Sorting Algorithms
Smoothed analysis is a framework for analyzing the complexity of an algorithm, acting as a bridge between average and worst-case behaviour. For example, Quicksort and the Simplex algorithm are widely used in practical applications, despite their heavy worst-case complexity. Smoothed complexity aims to better characterize such algorithms. Existing theoretical bounds for the smoothed complexity of sorting algorithms are still quite weak. Furthermore, empirically computing the smoothed complexity via its original definition is computationally infeasible, even for modest input sizes. In this paper, we focus on accurately predicting the smoothed complexity of sorting algorithms, using machine learning techniques. We propose two regression models that take into account various properties of sorting algorithms and some of the known theoretical results in smoothed analysis to improve prediction quality. We show experimental results for predicting the smoothed complexity of Quicksort, Mergesort, and optimized Bubblesort for large input sizes, therefore filling the gap between known theoretical and empirical results.
[ "Bichen Shi, Michel Schellekens, Georgiana Ifrim", "['Bichen Shi' 'Michel Schellekens' 'Georgiana Ifrim']" ]
cs.LG
null
1503.06608
null
null
http://arxiv.org/pdf/1503.06608v1
2015-03-23T11:47:05Z
2015-03-23T11:47:05Z
Proficiency Comparison of LADTree and REPTree Classifiers for Credit Risk Forecast
Predicting credit defaulters is a perilous task for financial industries such as banks. Identifying likely non-payers before granting a loan is a significant and conflict-ridden task for the banker. Classification techniques are a good choice for such predictive analysis, e.g. determining whether a claimant is an honest customer or a cheat. Identifying the outstanding classifier is a risky assignment for any practitioner, such as a banker. This leads computer science researchers to carry out research that evaluates different classifiers and identifies the best one for such predictive problems. This work investigates the effectiveness of the LADTree and REPTree classifiers for credit risk prediction and compares their fitness through various measures. The German credit dataset has been used to predict credit risk with the help of an open-source machine learning tool.
[ "Lakshmi Devasena C", "['Lakshmi Devasena C']" ]
cs.LG
10.1007/s10439-015-1344-1
1503.06619
null
null
http://arxiv.org/abs/1503.06619v2
2015-06-13T13:06:29Z
2015-03-23T12:31:18Z
Fusing Continuous-valued Medical Labels using a Bayesian Model
With the rapid increase in volume of time series medical data available through wearable devices, there is a need to employ automated algorithms to label data. Examples of labels include interventions, changes in activity (e.g. sleep) and changes in physiology (e.g. arrhythmias). However, automated algorithms tend to be unreliable, resulting in lower quality care. Expert annotations are scarce, expensive, and prone to significant inter- and intra-observer variance. To address these problems, a Bayesian Continuous-valued Label Aggregator (BCLA) is proposed to provide a reliable estimation of label aggregation while accurately inferring the precision and bias of each algorithm. The BCLA was applied to QT interval (pro-arrhythmic indicator) estimation from the electrocardiogram using labels from the 2006 PhysioNet/Computing in Cardiology Challenge database. It was compared to the mean, median, and a previously proposed Expectation Maximization (EM) label aggregation approach. While accurately predicting each labelling algorithm's bias and precision, the root-mean-square error of the BCLA was 11.78$\pm$0.63ms, significantly outperforming the best Challenge entry (15.37$\pm$2.13ms) as well as the EM, mean, and median voting strategies (14.76$\pm$0.52ms, 17.61$\pm$0.55ms, and 14.43$\pm$0.57ms respectively with $p<0.0001$).
[ "['Tingting Zhu' 'Nic Dunkley' 'Joachim Behar' 'David A. Clifton'\n 'Gari D. Clifford']", "Tingting Zhu, Nic Dunkley, Joachim Behar, David A. Clifton, Gari D.\n Clifford" ]
cs.LG
null
1503.06629
null
null
http://arxiv.org/pdf/1503.06629v1
2015-03-23T13:20:22Z
2015-03-23T13:20:22Z
A Probabilistic Interpretation of Sampling Theory of Graph Signals
We give a probabilistic interpretation of sampling theory of graph signals. To do this, we first define a generative model for the data using a pairwise Gaussian random field (GRF) which depends on the graph. We show that, under certain conditions, reconstructing a graph signal from a subset of its samples by least squares is equivalent to performing MAP inference on an approximation of this GRF which has a low rank covariance matrix. We then show that a sampling set of given size with the largest associated cut-off frequency, which is optimal from a sampling theoretic point of view, minimizes the worst case predictive covariance of the MAP estimate on the GRF. This interpretation also gives an intuitive explanation for the superior performance of the sampling theoretic approach to active semi-supervised classification.
[ "['Akshay Gadde' 'Antonio Ortega']", "Akshay Gadde and Antonio Ortega" ]
cs.IR cs.LG cs.SD
10.1109/TASLP.2016.2541299
1503.06666
null
null
http://arxiv.org/abs/1503.06666v3
2016-03-09T16:24:42Z
2015-03-23T14:48:24Z
Using Generic Summarization to Improve Music Information Retrieval Tasks
In order to satisfy processing time constraints, many MIR tasks process only a segment of the whole music signal. This practice may lead to decreased performance, since the most important information for the task may not lie in the processed segments. In this paper, we leverage generic summarization algorithms, previously applied to text and speech summarization, to summarize items in music datasets. These algorithms build summaries that are both concise and diverse by selecting appropriate segments from the input signal, which makes them good candidates for summarizing music as well. We evaluate the summarization process on binary and multiclass music genre classification tasks, by comparing the performance obtained using summarized datasets against the performance obtained using continuous segments (the traditional method used for addressing the aforementioned time constraints) and full songs of the same original dataset. We show that GRASSHOPPER, LexRank, LSA, MMR, and a Support Sets-based Centrality model improve classification performance when compared to selected 30-second baselines. We also show that summarized datasets lead to a classification performance whose difference from using full songs is not statistically significant. Furthermore, we make an argument for the advantages of sharing summarized datasets for future MIR research.
[ "Francisco Raposo, Ricardo Ribeiro, David Martins de Matos", "['Francisco Raposo' 'Ricardo Ribeiro' 'David Martins de Matos']" ]
cs.LG
null
1503.06745
null
null
http://arxiv.org/pdf/1503.06745v1
2015-03-23T17:47:00Z
2015-03-23T17:47:00Z
Online classifier adaptation for cost-sensitive learning
In this paper, we propose the problem of online cost-sensitive classifier adaptation and the first algorithm to solve it. We assume we have a base classifier for a cost-sensitive classification problem, but it is trained with respect to a cost setting different from the desired one. Moreover, training data samples stream to the algorithm one by one. The problem is to adapt the given base classifier to the desired cost setting using the streaming training samples online. To solve this problem, we propose to learn a new classifier by adding an adaptation function to the base classifier, and to update the adaptation function parameter according to the streaming data samples. Given an input data sample and the cost of misclassifying it, we update the adaptation function parameter by minimizing the cost-weighted hinge loss while simultaneously respecting the previously learned parameter. The proposed algorithm is compared to both online and offline cost-sensitive algorithms on two cost-sensitive classification problems, and the experiments show that it not only outperforms them in classification performance, but also requires significantly less running time.
[ "Junlin Zhang, Jose Garcia", "['Junlin Zhang' 'Jose Garcia']" ]
math.OC cs.LG
null
1503.06833
null
null
http://arxiv.org/pdf/1503.06833v1
2015-03-23T21:00:18Z
2015-03-23T21:00:18Z
On Lower and Upper Bounds for Smooth and Strongly Convex Optimization Problems
We develop a novel framework to study smooth and strongly convex optimization algorithms, both deterministic and stochastic. Focusing on quadratic functions we are able to examine optimization algorithms as a recursive application of linear operators. This, in turn, reveals a powerful connection between a class of optimization algorithms and the analytic theory of polynomials whereby new lower and upper bounds are derived. Whereas existing lower bounds for this setting are only valid when the dimensionality scales with the number of iterations, our lower bound holds in the natural regime where the dimensionality is fixed. Lastly, expressing it as an optimal solution for the corresponding optimization problem over polynomials, as formulated by our framework, we present a novel systematic derivation of Nesterov's well-known Accelerated Gradient Descent method. This rather natural interpretation of AGD contrasts with earlier ones which lacked a simple, yet solid, motivation.
[ "['Yossi Arjevani' 'Shai Shalev-Shwartz' 'Ohad Shamir']", "Yossi Arjevani, Shai Shalev-Shwartz, Ohad Shamir" ]
cs.LG
null
1503.06858
null
null
http://arxiv.org/pdf/1503.06858v4
2016-02-13T23:40:11Z
2015-03-23T22:00:51Z
Communication Efficient Distributed Kernel Principal Component Analysis
Kernel Principal Component Analysis (KPCA) is a key machine learning algorithm for extracting nonlinear features from data. In the presence of a large volume of high dimensional data collected in a distributed fashion, it becomes very costly to communicate all of this data to a single data center and then perform kernel PCA. Can we perform kernel PCA on the entire dataset in a distributed and communication efficient fashion while maintaining provable and strong guarantees in solution quality? In this paper, we give an affirmative answer to the question by developing a communication efficient algorithm to perform kernel PCA in the distributed setting. The algorithm is a clever combination of subspace embedding and adaptive sampling techniques, and we show that the algorithm can take as input an arbitrary configuration of distributed datasets, and compute a set of global kernel principal components with relative error guarantees independent of the dimension of the feature space or the total number of data points. In particular, computing $k$ principal components with relative error $\epsilon$ over $s$ workers has communication cost $\tilde{O}(s \rho k/\epsilon+s k^2/\epsilon^3)$ words, where $\rho$ is the average number of nonzero entries in each data point. Furthermore, we evaluated the algorithm on large-scale real-world datasets and showed that the algorithm produces a high quality kernel PCA solution while using significantly less communication than alternative approaches.
[ "['Maria-Florina Balcan' 'Yingyu Liang' 'Le Song' 'David Woodruff' 'Bo Xie']", "Maria-Florina Balcan, Yingyu Liang, Le Song, David Woodruff, Bo Xie" ]
cs.LG cs.AI
null
1503.06902
null
null
http://arxiv.org/pdf/1503.06902v1
2015-03-24T03:26:28Z
2015-03-24T03:26:28Z
A Note on Information-Directed Sampling and Thompson Sampling
This note introduces three Bayesian-style multi-armed bandit algorithms: information-directed sampling, Thompson sampling, and generalized Thompson sampling. The goal is to give an intuitive explanation of these three algorithms and their regret bounds, and to provide some derivations that are omitted in the original papers.
[ "['Li Zhou']", "Li Zhou" ]
stat.ML cs.LG
null
1503.06944
null
null
http://arxiv.org/pdf/1503.06944v3
2016-08-09T08:34:04Z
2015-03-24T08:17:44Z
PAC-Bayesian Theorems for Domain Adaptation with Specialization to Linear Classifiers
In this paper, we provide two main contributions in PAC-Bayesian theory for domain adaptation where the objective is to learn, from a source distribution, a well-performing majority vote on a different target distribution. On the one hand, we propose an improvement of the previous approach proposed by Germain et al. (2013), that relies on a novel distribution pseudodistance based on a disagreement averaging, allowing us to derive a new tighter PAC-Bayesian domain adaptation bound for the stochastic Gibbs classifier. We specialize it to linear classifiers, and design a learning algorithm which shows interesting results on a synthetic problem and on a popular sentiment annotation task. On the other hand, we generalize these results to multisource domain adaptation allowing us to take into account different source domains. This study opens the door to tackle domain adaptation tasks by making use of all the PAC-Bayesian tools.
[ "Pascal Germain (SIERRA), Amaury Habrard (LHC), Fran\\c{c}ois\n Laviolette, Emilie Morvant (LHC)", "['Pascal Germain' 'Amaury Habrard' 'François Laviolette' 'Emilie Morvant']" ]
cs.LG
null
1503.06952
null
null
http://arxiv.org/pdf/1503.06952v1
2015-03-24T08:57:25Z
2015-03-24T08:57:25Z
Comparing published multi-label classifier performance measures to the ones obtained by a simple multi-label baseline classifier
In supervised learning, simple baseline classifiers can be constructed by looking only at the class, i.e., ignoring any other information in the dataset. The single-label learning community frequently uses as a reference the classifier which always predicts the majority class. If a classifier performs worse than this simple baseline, that behaviour requires a special explanation. To motivate the community to compare experimental results with the ones provided by a multi-label baseline classifier, and to call attention to the need for special explanations for classifiers which perform worse than the baseline, in this work we propose the use of General_B, a multi-label baseline classifier. General_B was evaluated against results published in the literature, which were carefully selected using a systematic review process. We found that a considerable number of published results on 10 frequently used datasets are worse than or equal to the ones obtained by General_B; for one dataset this holds for up to 43% of the published results. Moreover, although a simple baseline classifier was not considered in these publications, even for very poor results no special explanations were provided in most of them. We hope that these findings encourage the multi-label community to use a simple baseline classifier and to provide further explanations whenever a classifier performs worse than the baseline.
[ "Jean Metz and Newton Spola\\^or and Everton A. Cherman and Maria C.\n Monard", "['Jean Metz' 'Newton Spolaôr' 'Everton A. Cherman' 'Maria C. Monard']" ]
cs.LG
null
1503.06960
null
null
http://arxiv.org/pdf/1503.06960v2
2015-04-14T12:18:14Z
2015-03-24T09:30:33Z
Sample compression schemes for VC classes
Sample compression schemes were defined by Littlestone and Warmuth (1986) as an abstraction of the structure underlying many learning algorithms. Roughly speaking, a sample compression scheme of size $k$ means that given an arbitrary list of labeled examples, one can retain only $k$ of them in a way that allows one to recover the labels of all other examples in the list. They showed that compression implies PAC learnability for binary-labeled classes, and asked whether the other direction holds. We answer their question and show that every concept class $C$ with VC dimension $d$ has a sample compression scheme of size exponential in $d$. The proof uses an approximate minimax phenomenon for binary matrices of low VC dimension, which may be of interest in the context of game theory.
[ "['Shay Moran' 'Amir Yehudayoff']", "Shay Moran, Amir Yehudayoff" ]
cs.SD cs.LG cs.NE
null
1503.06962
null
null
http://arxiv.org/pdf/1503.06962v1
2015-03-24T09:34:51Z
2015-03-24T09:34:51Z
Probabilistic Binary-Mask Cocktail-Party Source Separation in a Convolutional Deep Neural Network
Separation of competing speech is a key challenge in signal processing and a feat routinely performed by the human auditory brain. A long standing benchmark of the spectrogram approach to source separation is known as the ideal binary mask. Here, we train a convolutional deep neural network, on a two-speaker cocktail party problem, to make probabilistic predictions about binary masks. Our results approach ideal binary mask performance, illustrating that relatively simple deep neural networks are capable of robust binary mask prediction. We also illustrate the trade-off between prediction statistics and separation quality.
[ "Andrew J.R. Simpson", "['Andrew J. R. Simpson']" ]
cs.LG cs.IT math.IT
null
1503.07027
null
null
http://arxiv.org/pdf/1503.07027v4
2016-08-08T08:07:39Z
2015-03-24T13:29:12Z
Convergence radius and sample complexity of ITKM algorithms for dictionary learning
In this work we show that iterative thresholding and K-means (ITKM) algorithms can recover a generating dictionary with K atoms from noisy $S$ sparse signals up to an error $\tilde \varepsilon$ as long as the initialisation is within a convergence radius, that is up to a $\log K$ factor inversely proportional to the dynamic range of the signals, and the sample size is proportional to $K \log K \tilde \varepsilon^{-2}$. The results are valid for arbitrary target errors if the sparsity level is of the order of the square root of the signal dimension $d$ and for target errors down to $K^{-\ell}$ if $S$ scales as $S \leq d/(\ell \log K)$.
[ "Karin Schnass", "['Karin Schnass']" ]
astro-ph.IM astro-ph.GA cs.CV cs.LG cs.NE stat.ML
10.1093/mnras/stv632
1503.07077
null
null
http://arxiv.org/abs/1503.07077v1
2015-03-24T15:34:06Z
2015-03-24T15:34:06Z
Rotation-invariant convolutional neural networks for galaxy morphology prediction
Measuring the morphological parameters of galaxies is a key requirement for studying their formation and evolution. Surveys such as the Sloan Digital Sky Survey (SDSS) have resulted in the availability of very large collections of images, which have permitted population-wide analyses of galaxy morphology. Morphological analysis has traditionally been carried out mostly via visual inspection by trained experts, which is time-consuming and does not scale to large ($\gtrsim10^4$) numbers of images. Although attempts have been made to build automated classification systems, these have not been able to achieve the desired level of accuracy. The Galaxy Zoo project successfully applied a crowdsourcing strategy, inviting online users to classify images by answering a series of questions. Unfortunately, even this approach does not scale well enough to keep up with the increasing availability of galaxy images. We present a deep neural network model for galaxy morphology classification which exploits translational and rotational symmetry. It was developed in the context of the Galaxy Challenge, an international competition to build the best model for morphology classification based on annotated images from the Galaxy Zoo project. For images with high agreement among the Galaxy Zoo participants, our model is able to reproduce their consensus with near-perfect accuracy ($> 99\%$) for most questions. Confident model predictions are highly accurate, which makes the model suitable for filtering large collections of images and forwarding challenging images to experts for manual annotation. This approach greatly reduces the experts' workload without affecting accuracy. The application of these algorithms to larger sets of training data will be critical for analysing results from future surveys such as the LSST.
[ "['Sander Dieleman' 'Kyle W. Willett' 'Joni Dambre']", "Sander Dieleman, Kyle W. Willett, Joni Dambre" ]
cs.NI cs.LG
null
1503.07104
null
null
http://arxiv.org/pdf/1503.07104v1
2015-03-24T16:38:32Z
2015-03-24T16:38:32Z
Analysis of Spectrum Occupancy Using Machine Learning Algorithms
In this paper, we analyze spectrum occupancy using different machine learning techniques. Both supervised techniques (naive Bayesian classifier (NBC), decision trees (DT), support vector machine (SVM), linear regression (LR)) and an unsupervised algorithm (hidden Markov model (HMM)) are studied to find the technique with the highest classification accuracy (CA). A detailed comparison of the supervised and unsupervised algorithms in terms of computational time and classification accuracy is performed. The classified occupancy status is further utilized to evaluate the probability of secondary user outage for future time slots, which can be used by system designers to define spectrum allocation and spectrum sharing policies. Numerical results show that SVM is the best algorithm among all the supervised and unsupervised classifiers. Based on this, we propose a new SVM algorithm combined with the firefly algorithm (FFA), which is shown to outperform all other algorithms.
[ "['Freeha Azmat' 'Yunfei Chen' 'Nigel Stocks']", "Freeha Azmat, Yunfei Chen (Senior Member, IEEE) and Nigel Stocks" ]
cs.LG stat.ML
null
1503.07211
null
null
http://arxiv.org/pdf/1503.07211v1
2015-03-24T21:38:59Z
2015-03-24T21:38:59Z
Universal Approximation of Markov Kernels by Shallow Stochastic Feedforward Networks
We establish upper bounds for the minimal number of hidden units for which a binary stochastic feedforward network with sigmoid activation probabilities and a single hidden layer is a universal approximator of Markov kernels. We show that each possible probabilistic assignment of the states of $n$ output units, given the states of $k\geq1$ input units, can be approximated arbitrarily well by a network with $2^{k-1}(2^{n-1}-1)$ hidden units.
[ "Guido Montufar", "['Guido Montufar']" ]
cs.LG stat.ML
null
1503.07240
null
null
http://arxiv.org/pdf/1503.07240v1
2015-03-25T00:10:11Z
2015-03-25T00:10:11Z
Regularized Minimax Conditional Entropy for Crowdsourcing
There is a rapidly increasing interest in crowdsourcing for data labeling. By crowdsourcing, a large number of labels can often be gathered quickly at low cost. However, the labels provided by the crowdsourcing workers are usually not of high quality. In this paper, we propose a minimax conditional entropy principle to infer ground truth from noisy crowdsourced labels. Under this principle, we derive a unique probabilistic labeling model jointly parameterized by worker ability and item difficulty. We also propose an objective measurement principle, and show that our method is the only method which satisfies this objective measurement principle. We validate our method through a variety of real crowdsourcing datasets with binary, multiclass or ordinal labels.
[ "['Dengyong Zhou' 'Qiang Liu' 'John C. Platt' 'Christopher Meek'\n 'Nihar B. Shah']", "Dengyong Zhou, Qiang Liu, John C. Platt, Christopher Meek, Nihar B.\n Shah" ]
cs.CV cs.LG
null
1503.07274
null
null
http://arxiv.org/pdf/1503.07274v1
2015-03-25T03:41:47Z
2015-03-25T03:41:47Z
Initialization Strategies of Spatio-Temporal Convolutional Neural Networks
We propose a new way of incorporating temporal information present in videos into Spatial Convolutional Neural Networks (ConvNets) trained on images, that avoids training Spatio-Temporal ConvNets from scratch. We describe several initializations of weights in 3D Convolutional Layers of Spatio-Temporal ConvNet using 2D Convolutional Weights learned from ImageNet. We show that it is important to initialize 3D Convolutional Weights judiciously in order to learn temporal representations of videos. We evaluate our methods on the UCF-101 dataset and demonstrate improvement over Spatial ConvNets.
[ "['Elman Mansimov' 'Nitish Srivastava' 'Ruslan Salakhutdinov']", "Elman Mansimov, Nitish Srivastava, Ruslan Salakhutdinov" ]
cs.LG
null
1503.07477
null
null
http://arxiv.org/pdf/1503.07477v1
2015-03-25T17:56:19Z
2015-03-25T17:56:19Z
A Survey of Classification Techniques in the Area of Big Data
Big Data concerns large-volume, growing data sets that are complex and have multiple autonomous sources. Earlier technologies could not handle the storage and processing of such huge data, which is why the Big Data concept came into existence. Unstructured data is tedious for users to work with, so there should be a mechanism that classifies unstructured data into an organized form, helping users easily access the required data. Classification techniques over big transactional databases provide users with the required data from large datasets in a simpler way. There are two main classification approaches, supervised and unsupervised. In this paper we focus on the study of different supervised classification techniques, and we also present their advantages and limitations.
[ "Praful Koturwar, Sheetal Girase, Debajyoti Mukhopadhyay", "['Praful Koturwar' 'Sheetal Girase' 'Debajyoti Mukhopadhyay']" ]
cs.LG stat.ML
null
1503.07508
null
null
http://arxiv.org/pdf/1503.07508v1
2015-03-25T19:30:14Z
2015-03-25T19:30:14Z
Stable Feature Selection from Brain sMRI
Neuroimage analysis usually involves learning thousands or even millions of variables using only a limited number of samples. In this regard, sparse models, e.g. the lasso, are applied to select the optimal features and achieve high diagnosis accuracy. The lasso, however, usually results in independent unstable features. Stability, a manifest of reproducibility of statistical results subject to reasonable perturbations to data and the model, is an important focus in statistics, especially in the analysis of high dimensional data. In this paper, we explore a nonnegative generalized fused lasso model for stable feature selection in the diagnosis of Alzheimer's disease. In addition to sparsity, our model incorporates two important pathological priors: the spatial cohesion of lesion voxels and the positive correlation between the features and the disease labels. To optimize the model, we propose an efficient algorithm by proving a novel link between total variation and fast network flow algorithms via conic duality. Experiments show that the proposed nonnegative model performs much better in exploring the intrinsic structure of data via selecting stable features compared with other state-of-the-art methods.
[ "Bo Xin, Lingjing Hu, Yizhou Wang and Wen Gao", "['Bo Xin' 'Lingjing Hu' 'Yizhou Wang' 'Wen Gao']" ]
cs.LG cs.CV
null
1503.07790
null
null
http://arxiv.org/pdf/1503.07790v1
2015-03-26T17:12:34Z
2015-03-26T17:12:34Z
Transductive Multi-label Zero-shot Learning
Zero-shot learning has received increasing interest as a means to alleviate the often prohibitive expense of annotating training data for large scale recognition problems. These methods have achieved great success via learning intermediate semantic representations in the form of attributes and more recently, semantic word vectors. However, they have thus far been constrained to the single-label case, in contrast to the growing popularity and importance of more realistic multi-label data. In this paper, for the first time, we investigate and formalise a general framework for multi-label zero-shot learning, addressing the unique challenge therein: how to exploit multi-label correlation at test time with no training data for those classes? In particular, we propose (1) a multi-output deep regression model to project an image into a semantic word space, which explicitly exploits the correlations in the intermediate semantic layer of word vectors; (2) a novel zero-shot learning algorithm for multi-label data that exploits the unique compositionality property of semantic word vector representations; and (3) a transductive learning strategy to enable the regression model learned from seen classes to generalise well to unseen classes. Our zero-shot learning experiments on a number of standard multi-label datasets demonstrate that our method outperforms a variety of baselines.
[ "Yanwei Fu, Yongxin Yang, Tim Hospedales, Tao Xiang and Shaogang Gong", "['Yanwei Fu' 'Yongxin Yang' 'Tim Hospedales' 'Tao Xiang' 'Shaogang Gong']" ]
cs.LG
null
1503.07795
null
null
http://arxiv.org/pdf/1503.07795v1
2015-03-26T17:22:26Z
2015-03-26T17:22:26Z
Multi-Labeled Classification of Demographic Attributes of Patients: a case study of diabetic patients
Automated learning of patient demographics can be seen as a multi-label problem where a patient model is based on different race and gender groups. The resulting model can be further integrated into Privacy-Preserving Data Mining, where it can be used to assess the risk of identification of different patient groups. Our project considers relations between diabetes and patient demographics as a multi-label problem. Most research in this area has been done as binary classification, where the target class is whether a person has diabetes or not, but little if any work has been done on multi-label analysis of the demographics of patients who are likely to be diagnosed with diabetes. To identify such groups, we applied ensembles of several multi-label learning algorithms.
[ "Naveen Kumar Parachur Cotha and Marina Sokolova", "['Naveen Kumar Parachur Cotha' 'Marina Sokolova']" ]
cs.LG cs.CV
null
1503.07884
null
null
http://arxiv.org/pdf/1503.07884v1
2015-03-26T20:07:37Z
2015-03-26T20:07:37Z
Transductive Multi-class and Multi-label Zero-shot Learning
Recently, zero-shot learning (ZSL) has received increasing interest. The key idea underpinning existing ZSL approaches is to exploit knowledge transfer via an intermediate-level semantic representation which is assumed to be shared between the auxiliary and target datasets, and is used to bridge between these domains for knowledge transfer. The semantic representation used in existing approaches varies from visual attributes to semantic word vectors and semantic relatedness. However, the overall pipeline is similar: a projection mapping low-level features to the semantic representation is learned from the auxiliary dataset by either classification or regression models and applied directly to map each instance into the same semantic representation space where a zero-shot classifier is used to recognise the unseen target class instances with a single known 'prototype' of each target class. In this paper we discuss two related lines of work improving the conventional approach: exploiting transductive learning ZSL, and generalising ZSL to the multi-label case.
[ "Yanwei Fu, Yongxin Yang, Timothy M. Hospedales, Tao Xiang and Shaogang\n Gong", "['Yanwei Fu' 'Yongxin Yang' 'Timothy M. Hospedales' 'Tao Xiang'\n 'Shaogang Gong']" ]
cs.LG stat.ML
null
1503.07906
null
null
http://arxiv.org/pdf/1503.07906v1
2015-03-26T21:17:46Z
2015-03-26T21:17:46Z
Generalized K-fan Multimodal Deep Model with Shared Representations
Multimodal learning with deep Boltzmann machines (DBMs) is a generative approach to fusing multimodal inputs, and can learn the shared representation via Contrastive Divergence (CD) for classification and information retrieval tasks. However, it is a 2-fan DBM model, and cannot effectively handle multiple prediction tasks. Moreover, this model cannot recover the hidden representations well by sampling from the conditional distribution when more than one modality is missing. In this paper, we propose a K-fan deep structure model, which can handle multi-input and multi-output learning problems effectively. In particular, the deep structure has K branches for different inputs, where each branch can be composed of a multi-layer deep model, and a shared representation is learned in a discriminative manner to tackle multimodal tasks. Given the deep structure, we propose two objective functions to handle two multi-input and multi-output tasks: joint visual restoration and labeling, and multi-view multi-class object recognition. To estimate the model parameters, we initialize the deep model parameters with CD to maximize the joint distribution, and then use backpropagation to update the model according to the specific objective function. The experimental results demonstrate that the model effectively leverages multi-source information and predicts well on multiple tasks compared with competitive baselines.
[ "Gang Chen and Sargur N. Srihari", "['Gang Chen' 'Sargur N. Srihari']" ]
cs.IT cs.DS cs.LG math.IT math.ST stat.TH
null
1503.07940
null
null
http://arxiv.org/pdf/1503.07940v1
2015-03-27T01:41:48Z
2015-03-27T01:41:48Z
Competitive Distribution Estimation
Estimating an unknown distribution from its samples is a fundamental problem in statistics. The common, min-max, formulation of this goal considers the performance of the best estimator over all distributions in a class. It shows that with $n$ samples, distributions over $k$ symbols can be learned to a KL divergence that decreases to zero with the sample size $n$, but grows unboundedly with the alphabet size $k$. Min-max performance can be viewed as regret relative to an oracle that knows the underlying distribution. We consider two natural and modest limits on the oracle's power. One where it knows the underlying distribution only up to symbol permutations, and the other where it knows the exact distribution but is restricted to use natural estimators that assign the same probability to symbols that appeared equally many times in the sample. We show that in both cases the competitive regret reduces to $\min(k/n,\tilde{\mathcal{O}}(1/\sqrt n))$, a quantity upper bounded uniformly for every alphabet size. This shows that distributions can be estimated nearly as well as when they are essentially known in advance, and nearly as well as when they are completely known in advance but need to be estimated via a natural estimator. We also provide an estimator that runs in linear time and incurs competitive regret of $\tilde{\mathcal{O}}(\min(k/n,1/\sqrt n))$, and show that for natural estimators this competitive regret is inevitable. We also demonstrate the effectiveness of competitive estimators using simulations.
[ "['Alon Orlitsky' 'Ananda Theertha Suresh']", "Alon Orlitsky and Ananda Theertha Suresh" ]
cs.LG stat.ML
null
1503.07970
null
null
http://arxiv.org/pdf/1503.07970v1
2015-03-27T06:21:06Z
2015-03-27T06:21:06Z
Bayesian Cross Validation and WAIC for Predictive Prior Design in Regular Asymptotic Theory
Prior design is one of the most important problems in both statistics and machine learning. The cross validation (CV) and the widely applicable information criterion (WAIC) are predictive measures of Bayesian estimation; however, it has been difficult to apply them to find the optimal prior because their mathematical properties in prior evaluation have been unknown and the region of the hyperparameters is too wide to be examined. In this paper, we derive a new formula by which the theoretical relation among CV, WAIC, and the generalization loss is clarified and the optimal hyperparameter can be directly found. By the formula, three facts are clarified about predictive prior design. Firstly, CV and WAIC have the same second order asymptotic expansion, hence they are asymptotically equivalent to each other as optimizers of the hyperparameter. Secondly, the hyperparameter which minimizes CV or WAIC asymptotically minimizes the average generalization loss, but not the random generalization loss. Lastly, by using the mathematical relation between priors, the variances of the hyperparameters optimized by CV and WAIC are made smaller with small computational costs. We also show that the hyperparameter optimized by DIC or the marginal likelihood does not minimize the average or random generalization loss in general.
[ "['Sumio Watanabe']", "Sumio Watanabe" ]
cs.CV cs.LG
null
1503.07989
null
null
http://arxiv.org/pdf/1503.07989v1
2015-03-27T08:36:15Z
2015-03-27T08:36:15Z
Discriminative Bayesian Dictionary Learning for Classification
We propose a Bayesian approach to learn discriminative dictionaries for sparse representation of data. The proposed approach infers probability distributions over the atoms of a discriminative dictionary using a Beta Process. It also computes sets of Bernoulli distributions that associate class labels to the learned dictionary atoms. This association signifies the selection probabilities of the dictionary atoms in the expansion of class-specific data. Furthermore, the non-parametric character of the proposed approach allows it to infer the correct size of the dictionary. We exploit the aforementioned Bernoulli distributions in separately learning a linear classifier. The classifier uses the same hierarchical Bayesian model as the dictionary, which we present along with the analytical inference solution for Gibbs sampling. For classification, a test instance is first sparsely encoded over the learned dictionary and the codes are fed to the classifier. We performed experiments for face and action recognition; and object and scene-category classification using five public datasets and compared the results with state-of-the-art discriminative sparse representation approaches. Experiments show that the proposed Bayesian approach consistently outperforms the existing approaches.
[ "['Naveed Akhtar' 'Faisal Shafait' 'Ajmal Mian']", "Naveed Akhtar, Faisal Shafait, Ajmal Mian" ]
cs.DC cs.LG
null
1503.08169
null
null
http://arxiv.org/pdf/1503.08169v2
2016-10-27T14:29:44Z
2015-03-27T18:02:51Z
RankMap: A Platform-Aware Framework for Distributed Learning from Dense Datasets
This paper introduces RankMap, a platform-aware end-to-end framework for efficient execution of a broad class of iterative learning algorithms for massive and dense datasets. Our framework exploits the structure of the data to factorize it into an ensemble of lower-rank subspaces. The factorization creates sparse low-dimensional representations of the data, a property which is leveraged to devise effective mapping and scheduling of iterative learning algorithms on the distributed computing machines. We provide two APIs, one matrix-based and one graph-based, which facilitate automated adoption of the framework for performing several contemporary learning applications. To demonstrate the utility of RankMap, we solve sparse recovery and power iteration problems on various real-world datasets with up to 1.8 billion non-zeros. Our evaluations are performed on Amazon EC2 and IBM iDataPlex servers using up to 244 cores. The results demonstrate up to two orders of magnitude improvements in memory usage, execution speed, and bandwidth compared with the best reported prior work, while achieving the same level of learning accuracy.
[ "Azalia Mirhoseini, Eva L. Dyer, Ebrahim.M. Songhori, Richard G.\n Baraniuk, Farinaz Koushanfar", "['Azalia Mirhoseini' 'Eva L. Dyer' 'Ebrahim. M. Songhori'\n 'Richard G. Baraniuk' 'Farinaz Koushanfar']" ]
cs.LG
null
1503.08316
null
null
http://arxiv.org/pdf/1503.08316v4
2015-06-09T19:24:03Z
2015-03-28T15:51:48Z
A Variance Reduced Stochastic Newton Method
Quasi-Newton methods are widely used in practice for convex loss minimization problems. These methods exhibit good empirical performance on a wide variety of tasks and enjoy super-linear convergence to the optimal solution. For large-scale learning problems, stochastic Quasi-Newton methods have been recently proposed. However, these typically only achieve sub-linear convergence rates and have not been shown to consistently perform well in practice since noisy Hessian approximations can exacerbate the effect of high-variance stochastic gradient estimates. In this work we propose Vite, a novel stochastic Quasi-Newton algorithm that uses an existing first-order technique to reduce this variance. Without exploiting the specific form of the approximate Hessian, we show that Vite reaches the optimum at a geometric rate with a constant step-size when dealing with smooth strongly convex functions. Empirically, we demonstrate improvements over existing stochastic Quasi-Newton and variance reduced stochastic gradient methods.
[ "['Aurelien Lucchi' 'Brian McWilliams' 'Thomas Hofmann']", "Aurelien Lucchi, Brian McWilliams, Thomas Hofmann" ]
stat.ML cs.LG
null
1503.08329
null
null
http://arxiv.org/pdf/1503.08329v2
2015-07-28T20:08:16Z
2015-03-28T17:19:49Z
Risk Bounds for the Majority Vote: From a PAC-Bayesian Analysis to a Learning Algorithm
We propose an extensive analysis of the behavior of majority votes in binary classification. In particular, we introduce a risk bound for majority votes, called the C-bound, that takes into account the average quality of the voters and their average disagreement. We also propose an extensive PAC-Bayesian analysis that shows how the C-bound can be estimated from various observations contained in the training data. The analysis intends to be self-contained and can be used as introductory material to PAC-Bayesian statistical learning theory. It starts from a general PAC-Bayesian perspective and ends with uncommon PAC-Bayesian bounds. Some of these bounds contain no Kullback-Leibler divergence and others allow kernel functions to be used as voters (via the sample compression setting). Finally, out of the analysis, we propose the MinCq learning algorithm that basically minimizes the C-bound. MinCq reduces to a simple quadratic program. Aside from being theoretically grounded, MinCq achieves state-of-the-art performance, as shown in our extensive empirical comparison with both AdaBoost and the Support Vector Machine.
[ "Pascal Germain, Alexandre Lacasse, Fran\\c{c}ois Laviolette, Mario\n Marchand, Jean-Francis Roy", "['Pascal Germain' 'Alexandre Lacasse' 'François Laviolette'\n 'Mario Marchand' 'Jean-Francis Roy']" ]
stat.ML cs.LG stat.ME
null
1503.08348
null
null
http://arxiv.org/pdf/1503.08348v1
2015-03-28T21:03:32Z
2015-03-28T21:03:32Z
Sparse Linear Regression With Missing Data
This paper proposes a fast and accurate method for sparse regression in the presence of missing data. The underlying statistical model encapsulates the low-dimensional structure of the incomplete data matrix and the sparsity of the regression coefficients, and the proposed algorithm jointly learns the low-dimensional structure of the data and a linear regressor with sparse coefficients. The proposed stochastic optimization method, Sparse Linear Regression with Missing Data (SLRM), performs an alternating minimization procedure and scales well with the problem size. Large deviation inequalities shed light on the impact of the various problem-dependent parameters on the expected squared loss of the learned regressor. Extensive simulations on both synthetic and real datasets show that SLRM performs better than competing algorithms in a variety of contexts.
[ "Ravi Ganti and Rebecca M. Willett", "['Ravi Ganti' 'Rebecca M. Willett']" ]
stat.ML cs.AI cs.LG
null
1503.08363
null
null
http://arxiv.org/pdf/1503.08363v1
2015-03-28T22:54:12Z
2015-03-28T22:54:12Z
Active Model Aggregation via Stochastic Mirror Descent
We consider the problem of learning a convex aggregation of models that is as good as the best convex aggregation, for the binary classification problem. Working in the stream-based active learning setting, where the active learner has to decide on-the-fly whether to query for the label of the point currently seen in the stream, we propose a stochastic mirror descent algorithm, called SMD-AMA, with entropy regularization. We establish an excess risk bound for the loss of the convex aggregate returned by SMD-AMA of the order of $O\left(\sqrt{\frac{\log(M)}{{T^{1-\mu}}}}\right)$, where $\mu\in [0,1)$ is an algorithm-dependent parameter that trades off the number of labels queried against the excess risk.
[ "Ravi Ganti", "['Ravi Ganti']" ]
cs.LG
null
1503.08370
null
null
http://arxiv.org/pdf/1503.08370v3
2018-03-21T07:43:48Z
2015-03-29T00:16:58Z
Global Bandits
Multi-armed bandits (MAB) model sequential decision making problems, in which a learner sequentially chooses arms with unknown reward distributions in order to maximize its cumulative reward. Most of the prior work on MAB assumes that the reward distributions of each arm are independent. But in a wide variety of decision problems -- from drug dosage to dynamic pricing -- the expected rewards of different arms are correlated, so that selecting one arm provides information about the expected rewards of other arms as well. We propose and analyze a class of models of such decision problems, which we call {\em global bandits}. In the case in which rewards of all arms are deterministic functions of a single unknown parameter, we construct a greedy policy that achieves {\em bounded regret}, with a bound that depends on the single true parameter of the problem. Hence, this policy selects suboptimal arms only finitely many times with probability one. For this case we also obtain a bound on regret that is {\em independent of the true parameter}; this bound is sub-linear, with an exponent that depends on the informativeness of the arms. We also propose a variant of the greedy policy that achieves $\tilde{\mathcal{O}}(\sqrt{T})$ worst-case and $\mathcal{O}(1)$ parameter dependent regret. Finally, we perform experiments on dynamic pricing and show that the proposed algorithms achieve significant gains with respect to the well-known benchmarks.
[ "['Onur Atan' 'Cem Tekin' 'Mihaela van der Schaar']", "Onur Atan, Cem Tekin, Mihaela van der Schaar" ]
cs.LG cs.AI
null
1503.08381
null
null
http://arxiv.org/pdf/1503.08381v4
2018-11-19T11:11:36Z
2015-03-29T03:41:03Z
Towards Easier and Faster Sequence Labeling for Natural Language Processing: A Search-based Probabilistic Online Learning Framework (SAPO)
There are two major approaches to sequence labeling. One is the probabilistic gradient-based family, such as conditional random fields (CRF) and neural networks (e.g., RNNs), which has high accuracy but drawbacks: slow training, and no support for search-based optimization (which is important in many cases). The other is the search-based learning family, such as the structured perceptron and the margin infused relaxed algorithm (MIRA), which has fast training but also drawbacks: low accuracy, no probabilistic information, and non-convergence on real-world tasks. We propose a novel and "easy" solution, a search-based probabilistic online learning method, to address most of those issues. The method is "easy" because the optimization algorithm at the training stage is as simple as the decoding algorithm at the test stage. It searches the output candidates, derives probabilities, and conducts efficient online learning. We show that this method, which trains quickly, comes with a theoretical guarantee of convergence, and is easy to implement, supports search-based optimization and attains top accuracy. Experiments on well-known tasks show that our method achieves better accuracy than CRF and BiLSTM\footnote{The SAPO code is released at \url{https://github.com/lancopku/SAPO}.}.
[ "Xu Sun, Shuming Ma, Yi Zhang, Xuancheng Ren", "['Xu Sun' 'Shuming Ma' 'Yi Zhang' 'Xuancheng Ren']" ]
cs.LG
null
1503.08395
null
null
http://arxiv.org/pdf/1503.08395v6
2016-12-10T04:20:46Z
2015-03-29T07:25:32Z
Towards More Efficient SPSD Matrix Approximation and CUR Matrix Decomposition
Symmetric positive semi-definite (SPSD) matrix approximation methods have been extensively used to speed up large-scale eigenvalue computation and kernel learning methods. The standard sketch based method, which we call the prototype model, produces relatively accurate approximations, but is inefficient on large square matrices. The Nystr\"om method is highly efficient, but can only achieve low accuracy. In this paper we propose a novel model that we call the {\it fast SPSD matrix approximation model}. The fast model is nearly as efficient as the Nystr\"om method and as accurate as the prototype model. We show that the fast model can potentially solve eigenvalue problems and kernel learning problems in linear time with respect to the matrix size $n$ to achieve $1+\epsilon$ relative-error, whereas both the prototype model and the Nystr\"om method cost at least quadratic time to attain a comparable error bound. Empirical comparisons among the prototype model, the Nystr\"om method, and our fast model demonstrate the superiority of the fast model. We also contribute new understandings of the Nystr\"om method: the Nystr\"om method is a special instance of our fast model and an approximation to the prototype model. Our technique can be applied straightforwardly to make the CUR matrix decomposition more efficient to compute without much affecting its accuracy.
[ "['Shusen Wang' 'Zhihua Zhang' 'Tong Zhang']", "Shusen Wang and Zhihua Zhang and Tong Zhang" ]
stat.ML cs.LG
null
1503.08471
null
null
http://arxiv.org/pdf/1503.08471v3
2015-12-14T02:03:47Z
2015-03-29T18:21:22Z
Cross-validation of matching correlation analysis by resampling matching weights
The strength of association between a pair of data vectors is represented by a nonnegative real number, called a matching weight. For dimensionality reduction, we consider a linear transformation of data vectors, and define a matching error as the weighted sum of squared distances between transformed vectors with respect to the matching weights. Given data vectors and matching weights, the optimal linear transformation minimizing the matching error is solved by the spectral graph embedding of Yan et al. (2007). This method is a generalization of canonical correlation analysis, and will be called matching correlation analysis (MCA). In this paper, we consider a novel sampling scheme where the observed matching weights are randomly sampled from underlying true matching weights with small probability, whereas the data vectors are treated as constants. We then investigate a cross-validation by resampling the matching weights. Our asymptotic theory shows that the cross-validation, if rescaled properly, computes an unbiased estimate of the matching error with respect to the true matching weights. Existing ideas of cross-validation for resampling data vectors, instead of resampling matching weights, are not applicable here. MCA can be used for data vectors from multiple domains with different dimensions via an embarrassingly simple idea of coding the data vectors. This method will be called cross-domain matching correlation analysis (CDMCA), and an interesting connection to the classical associative memory model of neural networks is also discussed.
[ "Hidetoshi Shimodaira", "['Hidetoshi Shimodaira']" ]
cs.SI cs.LG
null
1503.08528
null
null
http://arxiv.org/pdf/1503.08528v6
2015-06-26T09:39:52Z
2015-03-30T03:50:27Z
Average Distance Queries through Weighted Samples in Graphs and Metric Spaces: High Scalability with Tight Statistical Guarantees
The average distance from a node to all other nodes in a graph, or from a query point in a metric space to a set of points, is a fundamental quantity in data analysis. The inverse of the average distance, known as the (classic) closeness centrality of a node, is a popular importance measure in the study of social networks. We develop novel structural insights on the sparsifiability of the distance relation via weighted sampling. Based on that, we present highly practical algorithms with strong statistical guarantees for fundamental problems. We show that the average distance (and hence the centrality) for all nodes in a graph can be estimated using $O(\epsilon^{-2})$ single-source distance computations. For a set $V$ of $n$ points in a metric space, we show that after preprocessing which uses $O(n)$ distance computations we can compute a weighted sample $S\subset V$ of size $O(\epsilon^{-2})$ such that the average distance from any query point $v$ to $V$ can be estimated from the distances from $v$ to $S$. Finally, we show that for a set of points $V$ in a metric space, we can estimate the average pairwise distance using $O(n+\epsilon^{-2})$ distance computations. The estimate is based on a weighted sample of $O(\epsilon^{-2})$ pairs of points, which is computed using $O(n)$ distance computations. Our estimates are unbiased with normalized mean square error (NRMSE) of at most $\epsilon$. Increasing the sample size by a $O(\log n)$ factor ensures that the probability that the relative error exceeds $\epsilon$ is polynomially small.
[ "Shiri Chechik and Edith Cohen and Haim Kaplan", "['Shiri Chechik' 'Edith Cohen' 'Haim Kaplan']" ]
stat.ML cs.IR cs.LG
null
1503.08535
null
null
http://arxiv.org/pdf/1503.08535v1
2015-03-30T05:03:37Z
2015-03-30T05:03:37Z
Infinite Author Topic Model based on Mixed Gamma-Negative Binomial Process
Incorporating the side information of a text corpus, i.e., authors, time stamps, and emotional tags, into traditional text mining models has gained significant interest in the areas of information retrieval, statistical natural language processing, and machine learning. One branch of these works is the so-called Author Topic Model (ATM), which incorporates authors' interests as side information into the classical topic model. However, the existing ATM needs to predefine the number of topics, which is difficult and inappropriate in many real-world settings. In this paper, we propose an Infinite Author Topic (IAT) model to resolve this issue. Instead of assigning a discrete probability on a fixed number of topics, we use a stochastic process to determine the number of topics from the data itself. To be specific, we extend a gamma-negative binomial process to three levels in order to capture the author-document-keyword hierarchical structure. Furthermore, each document is assigned a mixed gamma process that accounts for the multiple authors' contributions towards this document. An efficient Gibbs sampling inference algorithm, with each conditional distribution being closed-form, is developed for the IAT model. Experiments on several real-world datasets show the capabilities of our IAT model to learn the hidden topics, authors' interests on these topics and the number of topics simultaneously.
[ "['Junyu Xuan' 'Jie Lu' 'Guangquan Zhang' 'Richard Yi Da Xu'\n 'Xiangfeng Luo']", "Junyu Xuan, Jie Lu, Guangquan Zhang, Richard Yi Da Xu, Xiangfeng Luo" ]
stat.ML cs.CL cs.IR cs.LG
null
1503.08542
null
null
http://arxiv.org/pdf/1503.08542v1
2015-03-30T05:40:41Z
2015-03-30T05:40:41Z
Nonparametric Relational Topic Models through Dependent Gamma Processes
Traditional Relational Topic Models provide a way to discover the hidden topics from a document network. Many theoretical and practical tasks, such as dimensionality reduction, document clustering, and link prediction, benefit from this revealed knowledge. However, existing relational topic models are based on an assumption that the number of hidden topics is known in advance, which is impractical in many real-world applications. Therefore, in order to relax this assumption, we propose a nonparametric relational topic model in this paper. Instead of using fixed-dimensional probability distributions in its generative model, we use stochastic processes. Specifically, a gamma process is assigned to each document, which represents the topic interest of this document. Although this method provides an elegant solution, it brings additional challenges when mathematically modeling the inherent network structure of a typical document network, i.e., two spatially closer documents tend to have more similar topics. Furthermore, we require that the topics be shared by all the documents. In order to resolve these challenges, we use a subsampling strategy to assign each document a different gamma process from the global gamma process, and the subsampling probabilities of documents are assigned with a Markov Random Field constraint that inherits the document network structure. Through the designed posterior inference algorithm, we can discover the hidden topics and their number simultaneously. Experimental results on both synthetic and real-world network datasets demonstrate the capabilities of learning the hidden topics and, more importantly, the number of topics.
[ "['Junyu Xuan' 'Jie Lu' 'Guangquan Zhang' 'Richard Yi Da Xu'\n 'Xiangfeng Luo']", "Junyu Xuan, Jie Lu, Guangquan Zhang, Richard Yi Da Xu, Xiangfeng Luo" ]
cs.IR cs.CL cs.LG
null
1503.08581
null
null
http://arxiv.org/pdf/1503.08581v1
2015-03-30T08:03:47Z
2015-03-30T08:03:47Z
LSHTC: A Benchmark for Large-Scale Text Classification
LSHTC is a series of challenges which aims to assess the performance of classification systems in large-scale classification with a large number of classes (up to hundreds of thousands). This paper describes the datasets that have been released along the LSHTC series. The paper details the construction of the datasets and the design of the tracks, as well as the evaluation measures that we implemented, and gives a quick overview of the results. All of these datasets are available online, and runs may still be submitted to the online server of the challenges.
[ "['Ioannis Partalas' 'Aris Kosmopoulos' 'Nicolas Baskiotis'\n 'Thierry Artieres' 'George Paliouras' 'Eric Gaussier'\n 'Ion Androutsopoulos' 'Massih-Reza Amini' 'Patrick Galinari']", "Ioannis Partalas, Aris Kosmopoulos, Nicolas Baskiotis, Thierry\n Artieres, George Paliouras, Eric Gaussier, Ion Androutsopoulos, Massih-Reza\n Amini, Patrick Galinari" ]
cs.LG cs.SY
null
1503.08639
null
null
http://arxiv.org/pdf/1503.08639v1
2015-03-30T11:11:57Z
2015-03-30T11:11:57Z
Sparse plus low-rank autoregressive identification in neuroimaging time series
This paper considers the problem of identifying multivariate autoregressive (AR) sparse plus low-rank graphical models. Based on the corresponding problem formulation recently presented, we use the alternating direction method of multipliers (ADMM) to efficiently solve it and scale it to sizes encountered in neuroimaging applications. We apply this decomposition on synthetic and real neuroimaging datasets with a specific focus on the information encoded in the low-rank structure of our model. In particular, we illustrate that this information captures the spatio-temporal structure of the original data, generalizing classical component analysis approaches.
[ "Rapha\\\"el Li\\'egeois, Bamdev Mishra, Mattia Zorzi, Rodolphe Sepulchre", "['Raphaël Liégeois' 'Bamdev Mishra' 'Mattia Zorzi' 'Rodolphe Sepulchre']" ]
stat.ME cs.LG
10.1007/s11222-016-9649-y
1503.08650
null
null
http://arxiv.org/abs/1503.08650v4
2016-03-23T11:28:15Z
2015-03-30T11:58:20Z
Comparison of Bayesian predictive methods for model selection
The goal of this paper is to compare several widely used Bayesian model selection methods in practical model selection problems, highlight their differences and give recommendations about the preferred approaches. We focus on the variable subset selection for regression and classification and perform several numerical experiments using both simulated and real world data. The results show that the optimization of a utility estimate such as the cross-validation (CV) score is liable to finding overfitted models due to relatively high variance in the utility estimates when the data is scarce. This can also lead to substantial selection induced bias and optimism in the performance evaluation for the selected model. From a predictive viewpoint, best results are obtained by accounting for model uncertainty by forming the full encompassing model, such as the Bayesian model averaging solution over the candidate models. If the encompassing model is too complex, it can be robustly simplified by the projection method, in which the information of the full model is projected onto the submodels. This approach is substantially less prone to overfitting than selection based on CV-score. Overall, the projection method appears to outperform also the maximum a posteriori model and the selection of the most probable variables. The study also demonstrates that the model selection can greatly benefit from using cross-validation outside the searching process both for guiding the model size selection and assessing the predictive performance of the finally selected model.
[ "['Juho Piironen' 'Aki Vehtari']", "Juho Piironen, Aki Vehtari" ]
cs.CY cs.LG
null
1503.08818
null
null
http://arxiv.org/pdf/1503.08818v1
2015-03-29T09:35:17Z
2015-03-29T09:35:17Z
Founding Digital Currency on Imprecise Commodity
Current digital currency schemes provide instantaneous exchange on precise commodities, where "precise" means a buyer can possibly verify the function of the commodity without error. However, imprecise commodities, e.g. statistical data, for which error exists, are abundant in the digital world. Existing digital currency schemes do not offer a mechanism to help the buyer make a payment decision based on the precision of the commodity, which may lead the buyer to a dilemma between having to buy and being unconfident. In this paper, we design a currency scheme, IDCS, for imprecise digital commodities. IDCS completes a trade in three stages of handshake between a buyer and providers. We present an IDCS prototype implementation that assigns weights to the trustworthiness of the providers, and calculates a confidence level for the buyer to decide the quality of an imprecise commodity. In experiments, we characterize the performance of the IDCS prototype under varying impact factors.
[ "Zimu Yuan, Zhiwei Xu", "['Zimu Yuan' 'Zhiwei Xu']" ]
math.OC cs.IT cs.LG cs.MA cs.SY math.IT stat.ML
null
1503.08855
null
null
http://arxiv.org/pdf/1503.08855v1
2015-03-30T21:18:38Z
2015-03-30T21:18:38Z
Decentralized learning for wireless communications and networking
This chapter deals with decentralized learning algorithms for in-network processing of graph-valued data. A generic learning problem is formulated and recast into a separable form, which is iteratively minimized using the alternating-direction method of multipliers (ADMM) so as to gain the desired degree of parallelization. Without exchanging elements from the distributed training sets and keeping inter-node communications at affordable levels, the local (per-node) learners consent to the desired quantity inferred globally, meaning the one obtained if the entire training data set were centrally available. The impact of the decentralized learning framework on contemporary wireless communications and networking tasks is illustrated through case studies including target tracking using wireless sensor networks, unveiling Internet traffic anomalies, power system state estimation, as well as spectrum cartography for wireless cognitive radio networks.
[ "['Georgios B. Giannakis' 'Qing Ling' 'Gonzalo Mateos' 'Ioannis D. Schizas'\n 'Hao Zhu']", "Georgios B. Giannakis, Qing Ling, Gonzalo Mateos, Ioannis D. Schizas,\n and Hao Zhu" ]
cs.LG
null
1503.08873
null
null
http://arxiv.org/pdf/1503.08873v1
2015-03-30T23:29:46Z
2015-03-30T23:29:46Z
Fast Label Embeddings for Extremely Large Output Spaces
Many modern multiclass and multilabel problems are characterized by increasingly large output spaces. For these problems, label embeddings have been shown to be a useful primitive that can improve computational and statistical efficiency. In this work we utilize a correspondence between rank constrained estimation and low dimensional label embeddings that uncovers a fast label embedding algorithm which works in both the multiclass and multilabel settings. The result is a randomized algorithm for partial least squares, whose running time is exponentially faster than that of naive algorithms. We demonstrate our techniques on two large-scale public datasets, from the Large Scale Hierarchical Text Challenge and the Open Directory Project, where we obtain state-of-the-art results.
[ "Paul Mineiro and Nikos Karampatziakis", "['Paul Mineiro' 'Nikos Karampatziakis']" ]
stat.ML cs.LG
null
1503.09022
null
null
http://arxiv.org/pdf/1503.09022v3
2017-07-18T12:12:49Z
2015-03-31T12:29:13Z
Multi-label Classification using Labels as Hidden Nodes
Competitive methods for multi-label classification typically invest in learning labels together. To do so in a beneficial way, analysis of label dependence is often seen as a fundamental step, separate from and prior to constructing a classifier. Some methods invest up to hundreds of times more computational effort in building dependency models than in training the final classifier itself. We extend some recent discussion in the literature and provide a deeper analysis, namely, developing the view that label dependence is often introduced by an inadequate base classifier, rather than being inherent to the data or underlying concept; and showing how even an exhaustive analysis of label dependence may not lead to an optimal classification structure. Viewing labels as additional features (a transformation of the input), we create neural-network-inspired novel methods that remove the emphasis on a prior dependency structure. Our methods have an important advantage particular to multi-label data: they leverage labels to create effective units in middle layers, rather than learning these units from scratch in an unsupervised fashion with gradient-based methods. Results are promising: the methods we propose perform competitively and also scale well.
[ "Jesse Read and Jaakko Hollm\\'en", "['Jesse Read' 'Jaakko Hollmén']" ]
cs.LG cs.LO
null
1503.09025
null
null
http://arxiv.org/pdf/1503.09025v3
2015-11-09T12:02:25Z
2015-03-31T12:41:01Z
Learning Definite Horn Formulas from Closure Queries
A definite Horn theory is a set of n-dimensional Boolean vectors whose characteristic function is expressible as a definite Horn formula, that is, as a conjunction of definite Horn clauses. The class of definite Horn theories is known to be learnable under different query learning settings, such as learning from membership and equivalence queries or learning from entailment. We propose yet a different type of query: the closure query. Closure queries are a natural extension of membership queries and also a variant, appropriate in the context of definite Horn formulas, of the so-called correction queries. We present an algorithm that learns conjunctions of definite Horn clauses in polynomial time, using closure and equivalence queries, and show how it relates to the canonical Guigues-Duquenne basis for implicational systems. We also show how the different query models mentioned relate to each other, by either showing full-fledged reductions by means of query simulation (where possible), or by showing their connections in the context of particular algorithms that use them for learning definite Horn formulas.
[ "['Marta Arias' 'José L. Balcázar' 'Cristina Tîrnăucă']", "Marta Arias, Jos\\'e L. Balc\\'azar, Cristina T\\^irn\\u{a}uc\\u{a}" ]
cs.LG
null
1503.09082
null
null
http://arxiv.org/pdf/1503.09082v11
2016-01-15T01:26:03Z
2015-03-31T15:17:57Z
Generalized Categorization Axioms
Categorization axioms have been proposed for axiomatizing clustering results, which offers a hint at bridging the difference between the human recognition system and machine learning through an intuitive observation: an object should be assigned to its most similar category. However, categorization axioms cannot be generalized to a general machine learning system, as categorization axioms become trivial when the number of categories becomes one. In order to generalize categorization axioms to general cases, categorization input and categorization output are reinterpreted via inner and outer category representations. According to this categorization reinterpretation, two category representation axioms are presented. Category representation axioms and categorization axioms can be combined into a generalized categorization axiomatic framework, which accurately delimits the theoretical categorization constraints and overcomes the shortcoming of categorization axioms. The proposed axiomatic framework not only discusses the categorization test issue but also reinterprets many results in machine learning in a unified way, such as dimensionality reduction, density estimation, regression, clustering and classification.
[ "['Jian Yu']" ]
cs.CV cs.LG
null
1504.00028
null
null
http://arxiv.org/pdf/1504.00028v1
2015-03-31T20:30:00Z
2015-03-31T20:30:00Z
Real-World Font Recognition Using Deep Network and Domain Adaptation
We address a challenging fine-grain classification problem: recognizing a font style from an image of text. In this task, it is very easy to generate lots of rendered font examples but very hard to obtain real-world labeled images. This real-to-synthetic domain gap caused poor generalization to new real data in previous methods (Chen et al. (2014)). In this paper, we turn to Convolutional Neural Networks, and use an adaptation technique based on a Stacked Convolutional Auto-Encoder that exploits unlabeled real-world images combined with synthetic data. The proposed method achieves an accuracy higher than 80% (top-5) on a real-world dataset.
[ "Zhangyang Wang, Jianchao Yang, Hailin Jin, Eli Shechtman, Aseem\n Agarwala, Jonathan Brandt, Thomas S. Huang", "['Zhangyang Wang' 'Jianchao Yang' 'Hailin Jin' 'Eli Shechtman'\n 'Aseem Agarwala' 'Jonathan Brandt' 'Thomas S. Huang']" ]
stat.ML cs.IT cs.LG math.IT math.PR
null
1504.00052
null
null
http://arxiv.org/pdf/1504.00052v1
2015-03-31T21:48:56Z
2015-03-31T21:48:56Z
Improved Error Bounds Based on Worst Likely Assignments
Error bounds based on worst likely assignments use permutation tests to validate classifiers. Worst likely assignments can produce effective bounds even for data sets with 100 or fewer training examples. This paper introduces a statistic for use in the permutation tests of worst likely assignments that improves error bounds, especially for accurate classifiers, which are typically the classifiers of interest.
[ "['Eric Bax']", "Eric Bax" ]
stat.ML cs.LG
null
1504.00064
null
null
http://arxiv.org/pdf/1504.00064v1
2015-03-31T23:27:03Z
2015-03-31T23:27:03Z
Crowdsourcing Feature Discovery via Adaptively Chosen Comparisons
We introduce an unsupervised approach to efficiently discover the underlying features in a data set via crowdsourcing. Our queries ask crowd members to articulate a feature common to two out of three displayed examples. In addition, we ask the crowd to provide binary labels for the remaining examples based on the discovered features. The triples are chosen adaptively based on the labels of the previously discovered features on the data set. In two natural models of features, hierarchical and independent, we show that a simple adaptive algorithm, using "two-out-of-three" similarity queries, recovers all features with less labor than any nonadaptive algorithm. Experimental results validate the theoretical findings.
[ "James Y. Zou, Kamalika Chaudhuri, Adam Tauman Kalai", "['James Y. Zou' 'Kamalika Chaudhuri' 'Adam Tauman Kalai']" ]
stat.ML cs.LG
null
1504.00083
null
null
http://arxiv.org/pdf/1504.00083v1
2015-04-01T02:31:55Z
2015-04-01T02:31:55Z
A Theory of Feature Learning
Feature Learning aims to extract relevant information contained in data sets in an automated fashion. It is the driving force behind the current deep learning trend, a set of methods that have had widespread empirical success. What is lacking is a theoretical understanding of different feature learning schemes. This work provides a theoretical framework for feature learning and then characterizes when features can be learnt in an unsupervised fashion. We also provide means to judge the quality of features via rate-distortion theory and its generalizations.
[ "['Brendan van Rooyen' 'Robert C. Williamson']", "Brendan van Rooyen, Robert C. Williamson" ]
stat.ML cs.LG
null
1504.00091
null
null
http://arxiv.org/pdf/1504.00091v2
2015-07-04T14:09:14Z
2015-04-01T02:54:38Z
Learning in the Presence of Corruption
In supervised learning one wishes to identify a pattern present in a joint distribution $P$ over instance-label pairs, by providing a function $f$ from instances to labels that has low risk $\mathbb{E}_{P}\ell(y,f(x))$. To do so, the learner is given access to $n$ iid samples drawn from $P$. In many real world problems clean samples are not available. Rather, the learner is given access to samples from a corrupted distribution $\tilde{P}$ from which to learn, while the goal of predicting the clean pattern remains. There are many different types of corruption one can consider, and as of yet there is no general means to compare the relative ease of learning under these different corruption processes. In this paper we develop a general framework for tackling such problems, as well as introducing upper and lower bounds on the risk for learning in the presence of corruption. Our ultimate goal is to be able to make informed economic decisions in regards to the acquisition of data sets. For a certain subclass of corruption processes (those that are \emph{reconstructible}) we achieve this goal in a particular sense. Our lower bounds are in terms of the coefficient of ergodicity, a simple-to-calculate property of stochastic matrices. Our upper bounds proceed via a generalization of the method of unbiased estimators appearing in recent work of Natarajan et al. and implicit in the earlier work of Kearns.
[ "['Brendan van Rooyen' 'Robert C. Williamson']", "Brendan van Rooyen, Robert C. Williamson" ]
cs.LG cs.AI
null
1504.00110
null
null
http://arxiv.org/pdf/1504.00110v1
2015-04-01T06:05:40Z
2015-04-01T06:05:40Z
The Libra Toolkit for Probabilistic Models
The Libra Toolkit is a collection of algorithms for learning and inference with discrete probabilistic models, including Bayesian networks, Markov networks, dependency networks, and sum-product networks. Compared to other toolkits, Libra places a greater emphasis on learning the structure of tractable models in which exact inference is efficient. It also includes a variety of algorithms for learning graphical models in which inference is potentially intractable, and for performing exact and approximate inference. Libra is released under a 2-clause BSD license to encourage broad use in academia and industry.
[ "['Daniel Lowd' 'Amirmohammad Rooshenas']", "Daniel Lowd, Amirmohammad Rooshenas" ]
cs.LG stat.ML
null
1504.00284
null
null
http://arxiv.org/pdf/1504.00284v3
2015-12-22T16:11:15Z
2015-04-01T16:39:26Z
A New Vision of Collaborative Active Learning
Active learning (AL) is a learning paradigm where an active learner has to train a model (e.g., a classifier) that is in principle trained in a supervised way, but in AL this has to be done by means of a data set with initially unlabeled samples. To get labels for these samples, the active learner has to ask an oracle (e.g., a human expert) for labels. The goal is to maximize the performance of the model and to minimize the number of queries at the same time. In this article, we first briefly discuss the state of the art and our own preliminary work in the field of AL. Then, we propose the concept of collaborative active learning (CAL). With CAL, we will overcome some of the harsh limitations of current AL. In particular, we envision scenarios where an expert may be wrong for various reasons, where there might be several or even many experts with different expertise, where the experts may label not only samples but also knowledge at a higher level such as rules, and where the labeling costs depend on many conditions. Moreover, in a CAL process human experts will profit by improving their own knowledge, too.
[ "['Adrian Calma' 'Tobias Reitmaier' 'Bernhard Sick' 'Paul Lukowicz'\n 'Mark Embrechts']", "Adrian Calma, Tobias Reitmaier, Bernhard Sick, Paul Lukowicz, Mark\n Embrechts" ]
stat.ML cs.LG
null
1504.00377
null
null
http://arxiv.org/pdf/1504.00377v1
2015-04-01T20:35:33Z
2015-04-01T20:35:33Z
Bayesian Clustering of Shapes of Curves
Unsupervised clustering of curves according to their shapes is an important problem with broad scientific applications. The existing model-based clustering techniques either rely on simple probability models (e.g., Gaussian) that are not generally valid for shape analysis or assume the number of clusters is known. We develop an efficient Bayesian method to cluster curve data using an elastic shape metric that is based on joint registration and comparison of shapes of curves. The elastic-inner product matrix obtained from the data is modeled using a Wishart distribution whose parameters are assigned carefully chosen prior distributions to allow for automatic inference on the number of clusters. The posterior is sampled through an efficient Markov chain Monte Carlo procedure based on the Chinese restaurant process to infer (1) the posterior distribution on the number of clusters, and (2) the clustering configuration of shapes. This method is demonstrated on a variety of synthetic data and real data examples on protein structure analysis, cell shape analysis in microscopy images, and clustering of shapes from the MPEG7 database.
[ "['Zhengwu Zhang' 'Debdeep Pati' 'Anuj Srivastava']", "Zhengwu Zhang, Debdeep Pati, Anuj Srivastava" ]
cond-mat.stat-mech cs.IT cs.LG math.IT stat.ML
null
1504.00386
null
null
http://arxiv.org/pdf/1504.00386v1
2015-04-01T20:55:10Z
2015-04-01T20:55:10Z
Signatures of Infinity: Nonergodicity and Resource Scaling in Prediction, Complexity, and Learning
We introduce a simple analysis of the structural complexity of infinite-memory processes built from random samples of stationary, ergodic finite-memory component processes. Such processes are familiar from the well-known multi-armed bandit problem. We contrast our analysis with computation-theoretic and statistical inference approaches to understanding their complexity. The result is an alternative view of the relationship between predictability, complexity, and learning that highlights the distinct ways in which informational and correlational divergences arise in complex ergodic and nonergodic processes. We draw out consequences for the resource divergences that delineate the structural hierarchy of ergodic processes and for processes that are themselves hierarchical.
[ "James P. Crutchfield and Sarah Marzen", "['James P. Crutchfield' 'Sarah Marzen']" ]
cs.LG cs.CV
null
1504.00430
null
null
http://arxiv.org/pdf/1504.00430v1
2015-04-02T02:16:39Z
2015-04-02T02:16:39Z
Direct l_(2,p)-Norm Learning for Feature Selection
In this paper, we propose a novel sparse learning based feature selection method that directly optimizes the sparsity of a large-margin linear classification model with the l_(2,p)-norm (0 < p < 1) subject to data-fitting constraints, rather than using the sparsity as a regularization term. To solve this direct sparsity optimization problem, which is non-smooth and non-convex when 0 < p < 1, we provide an efficient iterative algorithm with proved convergence by converting it to a convex and smooth optimization problem at every iteration step. The proposed algorithm has been evaluated on publicly available datasets, and extensive comparison experiments have demonstrated that our algorithm achieves feature selection performance competitive with state-of-the-art algorithms.
[ "['Hanyang Peng' 'Yong Fan']", "Hanyang Peng, Yong Fan" ]
quant-ph cs.CV cs.LG
10.20904/271001
1504.00580
null
null
http://arxiv.org/abs/1504.00580v1
2015-04-02T14:53:51Z
2015-04-02T14:53:51Z
Quantum image classification using principal component analysis
We present a novel quantum algorithm for the classification of images. The algorithm is constructed using principal component analysis and von Neumann quantum measurements. In order to apply the algorithm, we present a new quantum representation of grayscale images.
[ "Mateusz Ostaszewski and Przemys{\\l}aw Sadowski and Piotr Gawron", "['Mateusz Ostaszewski' 'Przemysław Sadowski' 'Piotr Gawron']" ]
stat.ML cs.CV cs.LG cs.NE
null
1504.00641
null
null
http://arxiv.org/pdf/1504.00641v1
2015-04-02T18:38:38Z
2015-04-02T18:38:38Z
A Probabilistic Theory of Deep Learning
A grand challenge in machine learning is the development of computational algorithms that match or outperform humans in perceptual inference tasks that are complicated by nuisance variation. For instance, visual object recognition involves the unknown object position, orientation, and scale, while speech recognition involves the unknown voice pronunciation, pitch, and speed. Recently, a new breed of deep learning algorithms has emerged for high-nuisance inference tasks that routinely yield pattern recognition systems with near- or super-human capabilities. But a fundamental question remains: why do they work? Intuitions abound, but a coherent framework for understanding, analyzing, and synthesizing deep learning architectures has remained elusive. We answer this question by developing a new probabilistic framework for deep learning based on the Deep Rendering Model: a generative probabilistic model that explicitly captures latent nuisance variation. By relaxing the generative model to a discriminative one, we can recover two of the current leading deep learning systems, deep convolutional neural networks and random decision forests, providing insights into their successes and shortcomings, as well as a principled route to their improvement.
[ "['Ankit B. Patel' 'Tan Nguyen' 'Richard G. Baraniuk']", "Ankit B. Patel, Tan Nguyen and Richard G. Baraniuk" ]
cs.LG cs.CV cs.RO
null
1504.00702
null
null
http://arxiv.org/pdf/1504.00702v5
2016-04-19T01:33:13Z
2015-04-02T22:23:51Z
End-to-End Training of Deep Visuomotor Policies
Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a partially observed guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.
[ "['Sergey Levine' 'Chelsea Finn' 'Trevor Darrell' 'Pieter Abbeel']", "Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel" ]
cs.LG
null
1504.00736
null
null
http://arxiv.org/pdf/1504.00736v1
2015-04-03T03:21:15Z
2015-04-03T03:21:15Z
Unsupervised Feature Selection with Adaptive Structure Learning
The problem of feature selection has raised considerable interest in the past decade. Traditional unsupervised methods select the features which can faithfully preserve the intrinsic structures of data, where the intrinsic structures are estimated using all the input features of the data. However, the estimated intrinsic structures are unreliable/inaccurate when redundant and noisy features are not removed. Therefore, we face a dilemma here: one needs the true structures of the data to identify the informative features, and one needs the informative features to accurately estimate the true structures of the data. To address this, we propose a unified learning framework which performs structure learning and feature selection simultaneously. The structures are adaptively learned from the results of feature selection, and the informative features are reselected to preserve the refined structures of the data. By leveraging the interactions between these two essential tasks, we are able to capture accurate structures and select more informative features. Experimental results on many benchmark data sets demonstrate that the proposed method outperforms many state-of-the-art unsupervised feature selection methods.
[ "['Liang Du' 'Yi-Dong Shen']", "Liang Du, Yi-Dong Shen" ]
cs.LG stat.ML
null
1504.00757
null
null
http://arxiv.org/pdf/1504.00757v1
2015-04-03T07:02:49Z
2015-04-03T07:02:49Z
Learning Mixed Membership Mallows Models from Pairwise Comparisons
We propose a novel parameterized family of Mixed Membership Mallows Models (M4) to account for variability in pairwise comparisons generated by a heterogeneous population of noisy and inconsistent users. M4 models individual preferences as a user-specific probabilistic mixture of shared latent Mallows components. Our key algorithmic insight for estimation is to establish a statistical connection between M4 and topic models by viewing pairwise comparisons as words, and users as documents. This key insight leads us to explore Mallows components with a separable structure and leverage recent advances in separable topic discovery. While separability appears to be overly restrictive, we nevertheless show that it is an inevitable outcome of a relatively small number of latent Mallows components in a world with a large number of items. We then develop an algorithm based on robust extreme-point identification of convex polygons to learn the reference rankings, which is provably consistent with polynomial sample complexity guarantees. We demonstrate that our new model is empirically competitive with the current state-of-the-art approaches in predicting real-world preferences.
[ "Weicong Ding, Prakash Ishwar, Venkatesh Saligrama", "['Weicong Ding' 'Prakash Ishwar' 'Venkatesh Saligrama']" ]
cs.LG stat.CO stat.ME stat.ML
null
1504.00781
null
null
http://arxiv.org/pdf/1504.00781v1
2015-04-03T08:42:44Z
2015-04-03T08:42:44Z
The Gram-Charlier A Series based Extended Rule-of-Thumb for Bandwidth Selection in Univariate and Multivariate Kernel Density Estimations
The article derives a novel Gram-Charlier A (GCA) Series based Extended Rule-of-Thumb (ExROT) for bandwidth selection in Kernel Density Estimation (KDE). Various existing bandwidth selection rules achieve minimization of the Asymptotic Mean Integrated Square Error (AMISE) between the estimated probability density function (PDF) and the actual PDF. The rules differ in the way they estimate the integral of the squared second-order derivative of an unknown PDF $(f(\cdot))$, identified as the roughness $R(f''(\cdot))$. The simplest Rule-of-Thumb (ROT) estimates $R(f''(\cdot))$ under the assumption that the density being estimated is Gaussian. Intuitively, better estimates of $R(f''(\cdot))$, and consequently better bandwidth selection rules, can be derived if the unknown PDF is approximated through an infinite series expansion based on a more generalized density assumption. As a demonstration and verification of this concept, the ExROT derived in the article uses the extended assumption that the density being estimated is near Gaussian. This allows the use of the GCA expansion as an approximation to the unknown near-Gaussian PDF. The ExROT for univariate KDE is extended to multivariate KDE. The required multivariate AMISE criterion is re-derived using elementary calculus of several variables, instead of tensor calculus. The derivation uses the Kronecker product and the vector differential operator to obtain the AMISE expression in vector notation. An ExROT for kernel-based density derivative estimators is also derived.
[ "['Dharmani Bhaveshkumar C']", "Dharmani Bhaveshkumar C" ]
math.OC cs.CV cs.LG cs.SY
null
1504.00905
null
null
http://arxiv.org/pdf/1504.00905v2
2015-05-30T15:58:36Z
2015-04-03T18:20:36Z
Robust Anomaly Detection Using Semidefinite Programming
This paper presents a new approach, based on polynomial optimization and the method of moments, to the problem of anomaly detection. The proposed technique only requires information about the statistical moments of the normal-state distribution of the features of interest and compares favorably with existing approaches (such as Parzen windows and 1-class SVM). In addition, it provides a succinct description of the normal state. Thus, it leads to a substantial simplification of the the anomaly detection problem when working with higher dimensional datasets.
[ "['Jose A. Lopez' 'Octavia Camps' 'Mario Sznaier']", "Jose A. Lopez, Octavia Camps, Mario Sznaier" ]
cs.CL cs.CV cs.LG cs.NE stat.ML
null
1504.00923
null
null
http://arxiv.org/pdf/1504.00923v1
2015-04-03T19:57:06Z
2015-04-03T19:57:06Z
A Unified Deep Neural Network for Speaker and Language Recognition
Learned feature representations and sub-phoneme posteriors from Deep Neural Networks (DNNs) have been used separately to produce significant performance gains for speaker and language recognition tasks. In this work we show how these gains are possible using a single DNN for both speaker and language recognition. The unified DNN approach is shown to yield substantial performance improvements on the 2013 Domain Adaptation Challenge speaker recognition task (55% reduction in EER for the out-of-domain condition) and on the NIST 2011 Language Recognition Evaluation (48% reduction in EER for the 30s test condition).
[ "['Fred Richardson' 'Douglas Reynolds' 'Najim Dehak']", "Fred Richardson, Douglas Reynolds, Najim Dehak" ]
cs.NE cs.LG
null
1504.00941
null
null
http://arxiv.org/pdf/1504.00941v2
2015-04-07T22:39:18Z
2015-04-03T21:22:52Z
A Simple Way to Initialize Recurrent Networks of Rectified Linear Units
Learning long-term dependencies in recurrent networks is difficult due to vanishing and exploding gradients. To overcome this difficulty, researchers have developed sophisticated optimization techniques and network architectures. In this paper, we propose a simpler solution that uses recurrent neural networks composed of rectified linear units. Key to our solution is the use of the identity matrix or its scaled version to initialize the recurrent weight matrix. We find that our solution is comparable to LSTM on our four benchmarks: two toy problems involving long-range temporal structures, a large language modeling problem and a benchmark speech recognition problem.
[ "Quoc V. Le, Navdeep Jaitly, Geoffrey E. Hinton", "['Quoc V. Le' 'Navdeep Jaitly' 'Geoffrey E. Hinton']" ]
cs.LG
10.1145/2783258.2783340
1504.00948
null
null
http://arxiv.org/abs/1504.00948v3
2015-08-01T02:41:40Z
2015-04-03T22:04:05Z
The Child is Father of the Man: Foresee the Success at the Early Stage
Understanding the dynamic mechanisms that drive the high-impact scientific work (e.g., research papers, patents) is a long-debated research topic and has many important implications, ranging from personal career development and recruitment search, to the jurisdiction of research resources. Recent advances in characterizing and modeling scientific success have made it possible to forecast the long-term impact of scientific work, where data mining techniques, supervised learning in particular, play an essential role. Despite much progress, several key algorithmic challenges in relation to predicting long-term scientific impact have largely remained open. In this paper, we propose a joint predictive model to forecast the long-term scientific impact at the early stage, which simultaneously addresses a number of these open challenges, including the scholarly feature design, the non-linearity, the domain-heterogeneity and dynamics. In particular, we formulate it as a regularized optimization problem and propose effective and scalable algorithms to solve it. We perform extensive empirical evaluations on large, real scholarly data sets to validate the effectiveness and the efficiency of our method.
[ "['Liangyue Li' 'Hanghang Tong']", "Liangyue Li, Hanghang Tong" ]
cs.LG math.OC
null
1504.00981
null
null
http://arxiv.org/pdf/1504.00981v2
2015-11-30T05:48:56Z
2015-04-04T04:40:48Z
ELM-Based Distributed Cooperative Learning Over Networks
This paper investigates distributed cooperative learning algorithms for data processing in a network setting. Specifically, the extreme learning machine (ELM) is introduced to train a set of data distributed across several components, and each component runs a program on a subset of the entire data. In this scheme, there is no requirement for a fusion center in the network due to e.g., practical limitations, security, or privacy reasons. We first reformulate the centralized ELM training problem into a separable form among nodes with consensus constraints. Then, we solve the equivalent problem using distributed optimization tools. A new distributed cooperative learning algorithm based on ELM, called DC-ELM, is proposed. The architecture of this algorithm differs from that of some existing parallel/distributed ELMs based on MapReduce or cloud computing. We also present an online version of the proposed algorithm that can learn data sequentially in a one-by-one or chunk-by-chunk mode. The novel algorithm is well suited for potential applications such as artificial intelligence, computational biology, finance, wireless sensor networks, and so on, involving datasets that are often extremely large, high-dimensional and located on distributed data sources. We show simulation results on both synthetic and real-world data sets.
[ "Wu Ai and Weisheng Chen", "['Wu Ai' 'Weisheng Chen']" ]
cs.DS cs.GT cs.LG
null
1504.01033
null
null
http://arxiv.org/pdf/1504.01033v2
2015-11-18T03:34:42Z
2015-04-04T18:13:15Z
Watch and Learn: Optimizing from Revealed Preferences Feedback
A Stackelberg game is played between a leader and a follower. The leader first chooses an action, then the follower plays his best response. The goal of the leader is to pick the action that will maximize his payoff given the follower's best response. In this paper we present an approach to solving for the leader's optimal strategy in certain Stackelberg games where the follower's utility function (and thus the subsequent best response of the follower) is unknown. Stackelberg games capture, for example, the following interaction between a producer and a consumer. The producer chooses the prices of the goods he produces, and then a consumer chooses to buy a utility maximizing bundle of goods. The goal of the seller here is to set prices to maximize his profit---his revenue, minus the production cost of the purchased bundle. It is quite natural that the seller in this example should not know the buyer's utility function. However, he does have access to revealed preference feedback---he can set prices, and then observe the purchased bundle and his own profit. We give algorithms for efficiently solving, in terms of both computational and query complexity, a broad class of Stackelberg games in which the follower's utility function is unknown, using only "revealed preference" access to it. This class includes in particular the profit maximization problem, as well as the optimal tolling problem in nonatomic congestion games, when the latency functions are unknown. Surprisingly, we are able to solve these problems even though the optimization problems are non-convex in the leader's actions.
[ "['Aaron Roth' 'Jonathan Ullman' 'Zhiwei Steven Wu']", "Aaron Roth, Jonathan Ullman, Zhiwei Steven Wu" ]
stat.ML cs.LG
null
1504.01044
null
null
http://arxiv.org/pdf/1504.01044v2
2015-05-03T22:11:21Z
2015-04-04T19:55:35Z
Concept Drift Detection for Streaming Data
Common statistical prediction models often require and assume stationarity in the data. However, in many practical applications, changes in the relationship of the response and predictor variables are regularly observed over time, resulting in the deterioration of the predictive performance of these models. This paper presents Linear Four Rates (LFR), a framework for detecting these concept drifts and subsequently identifying the data points that belong to the new concept (for relearning the model). Unlike conventional concept drift detection approaches, LFR can be applied to both batch and stream data; is not limited by the distribution properties of the response variable (e.g., datasets with imbalanced labels); is independent of the underlying statistical model; and uses user-specified parameters that are intuitively comprehensible. The performance of LFR is compared to benchmark approaches using both simulated and commonly used public datasets that span the gamut of concept drift types. The results show LFR significantly outperforms benchmark approaches in terms of recall, accuracy and delay in detection of concept drifts across datasets.
[ "Heng Wang and Zubin Abraham", "['Heng Wang' 'Zubin Abraham']" ]
stat.ML cs.LG
null
1504.01046
null
null
http://arxiv.org/pdf/1504.01046v2
2016-04-11T15:30:48Z
2015-04-04T20:05:17Z
Graph Connectivity in Noisy Sparse Subspace Clustering
Subspace clustering is the problem of clustering data points into a union of low-dimensional linear/affine subspaces. It is the mathematical abstraction of many important problems in computer vision, image processing and machine learning. A line of recent work (4, 19, 24, 20) provided strong theoretical guarantees for sparse subspace clustering (4), the state-of-the-art algorithm for subspace clustering, on both noiseless and noisy data sets. It was shown that under mild conditions, with high probability no two points from different subspaces are clustered together. Such a guarantee, however, is not sufficient for the clustering to be correct, due to the notorious "graph connectivity problem" (15). In this paper, we investigate the graph connectivity problem for noisy sparse subspace clustering and show that a simple post-processing procedure is capable of delivering consistent clustering under certain "general position" or "restricted eigenvalue" assumptions. We also show that our condition is almost tight with adversarial noise perturbation by constructing a counter-example. These results provide the first exact clustering guarantee of noisy SSC for subspaces of dimension greater than 3.
[ "Yining Wang, Yu-Xiang Wang and Aarti Singh", "['Yining Wang' 'Yu-Xiang Wang' 'Aarti Singh']" ]
cs.LG cs.SY
null
1504.01050
null
null
http://arxiv.org/pdf/1504.01050v1
2015-04-04T20:37:52Z
2015-04-04T20:37:52Z
An Online Approach to Dynamic Channel Access and Transmission Scheduling
Making judicious channel access and transmission scheduling decisions is essential for improving performance as well as energy and spectral efficiency in multichannel wireless systems. This problem has been a subject of extensive study in the past decade, and the resulting dynamic and opportunistic channel access schemes can bring potentially significant improvement over traditional schemes. However, a common and severe limitation of these dynamic schemes is that they almost always require some form of a priori knowledge of the channel statistics. A natural remedy is a learning framework, which has also been extensively studied in the same context, but a typical learning algorithm in this literature seeks only the best static policy, with performance measured by weak regret, rather than learning a good dynamic channel access policy. There is thus a clear disconnect between what an optimal channel access policy can achieve with known channel statistics that actively exploits temporal, spatial and spectral diversity, and what a typical existing learning algorithm aims for, which is the static use of a single channel devoid of diversity gain. In this paper we bridge this gap by designing learning algorithms that track known optimal or sub-optimal dynamic channel access and transmission scheduling policies, thereby yielding performance measured by a form of strong regret, the accumulated difference between the reward returned by an optimal solution when a priori information is available and that by our online algorithm. We do so in the context of two specific algorithms that appeared in [1] and [2], respectively, the former for a multiuser single-channel setting and the latter for a single-user multichannel setting. In both cases we show that our algorithms achieve sub-linear regret uniform in time and outperform the standard weak-regret learning algorithms.
[ "['Yang Liu' 'Mingyan Liu']", "Yang Liu and Mingyan Liu" ]
cs.LG cs.SI math.OC stat.ML
null
1504.01070
null
null
http://arxiv.org/pdf/1504.01070v1
2015-04-05T01:40:35Z
2015-04-05T01:40:35Z
Sync-Rank: Robust Ranking, Constrained Ranking and Rank Aggregation via Eigenvector and Semidefinite Programming Synchronization
We consider the classic problem of establishing a statistical ranking of a set of n items given a set of inconsistent and incomplete pairwise comparisons between such items. Instantiations of this problem occur in numerous applications in data analysis (e.g., ranking teams in sports data), computer vision, and machine learning. We formulate the above problem of ranking with incomplete noisy information as an instance of the group synchronization problem over the group SO(2) of planar rotations, whose usefulness has been demonstrated in numerous applications in recent years. Its least squares solution can be approximated by either a spectral or a semidefinite programming (SDP) relaxation, followed by a rounding procedure. We perform extensive numerical simulations on both synthetic and real-world data sets, showing that our proposed method compares favorably to other algorithms from the recent literature. Existing theoretical guarantees on the group synchronization problem imply lower bounds on the largest amount of noise permissible in the ranking data while still achieving exact recovery. We propose a similar synchronization-based algorithm for the rank-aggregation problem, which integrates in a globally consistent ranking pairwise comparisons given by different rating systems on the same set of items. We also discuss the problem of semi-supervised ranking when there is available information on the ground truth rank of a subset of players, and propose an algorithm based on SDP which recovers the ranks of the remaining players. Finally, synchronization-based ranking, combined with a spectral technique for the densest subgraph problem, allows one to extract locally-consistent partial rankings, in other words, to identify the rank of a small subset of players whose pairwise comparisons are less noisy than the rest of the data, which other methods are not able to identify.
[ "['Mihai Cucuringu']", "Mihai Cucuringu" ]
cs.LG
null
1504.01072
null
null
http://arxiv.org/pdf/1504.01072v1
2015-04-05T02:20:55Z
2015-04-05T02:20:55Z
EM-Based Channel Estimation from Crowd-Sourced RSSI Samples Corrupted by Noise and Interference
We propose a method for estimating channel parameters from RSSI measurements and the lost packet count, which works in the presence of losses due to both interference and signal attenuation below the noise floor. This is especially important in wireless networks, such as vehicular networks, where the propagation model changes with the density of nodes. The method is based on Stochastic Expectation Maximization, where the received data is modeled as a mixture of distributions (no/low interference and strong interference), incomplete (censored) due to packet losses. The PDFs in the mixture are Gamma, according to the commonly accepted model for wireless signal and interference power. This approach leverages the loss count as additional information and hence outperforms maximum likelihood estimation, which does not use this information (denoted ML-), for a small number of received RSSI samples. It thus allows inexpensive on-line channel estimation from ad-hoc collected data. The method also outperforms ML- on uncensored data mixtures, as ML- assumes that samples are drawn from a single-mode PDF.
[ "['Silvija Kokalj-Filipovic' 'Larry Greenstein']", "Silvija Kokalj-Filipovic and Larry Greenstein" ]
cs.CL cs.LG cs.NE
null
1504.01106
null
null
http://arxiv.org/pdf/1504.01106v5
2015-06-02T05:56:06Z
2015-04-05T10:18:32Z
Discriminative Neural Sentence Modeling by Tree-Based Convolution
This paper proposes a tree-based convolutional neural network (TBCNN) for discriminative sentence modeling. Our models leverage either constituency trees or dependency trees of sentences. The tree-based convolution process extracts sentences' structural features, and these features are aggregated by max pooling. Such an architecture allows short propagation paths between the output layer and underlying feature detectors, which enables effective structural feature learning and extraction. We evaluate our models on two tasks: sentiment analysis and question classification. In both experiments, TBCNN outperforms previous state-of-the-art results, including existing neural networks and dedicated feature/rule engineering. We also make efforts to visualize the tree-based convolution process, shedding light on how our models work.
[ "['Lili Mou' 'Hao Peng' 'Ge Li' 'Yan Xu' 'Lu Zhang' 'Zhi Jin']", "Lili Mou, Hao Peng, Ge Li, Yan Xu, Lu Zhang, Zhi Jin" ]
q-bio.GN cs.CE cs.LG
null
1504.01142
null
null
http://arxiv.org/pdf/1504.01142v1
2015-04-05T17:15:38Z
2015-04-05T17:15:38Z
Ultra-large alignments using Phylogeny-aware Profiles
Many biological questions, including the estimation of deep evolutionary histories and the detection of remote homology between protein sequences, rely upon multiple sequence alignments (MSAs) and phylogenetic trees of large datasets. However, accurate large-scale multiple sequence alignment is very difficult, especially when the dataset contains fragmentary sequences. We present UPP, an MSA method that uses a new machine learning technique - the Ensemble of Hidden Markov Models - that we propose here. UPP produces highly accurate alignments for both nucleotide and amino acid sequences, even on ultra-large datasets or datasets containing fragmentary sequences. UPP is available at https://github.com/smirarab/sepp.
[ "['Nam-phuong Nguyen' 'Siavash Mirarab' 'Keerthana Kumar' 'Tandy Warnow']", "Nam-phuong Nguyen, Siavash Mirarab, Keerthana Kumar, Tandy Warnow" ]
stat.ML cs.LG
null
1504.01169
null
null
http://arxiv.org/pdf/1504.01169v1
2015-04-05T23:20:47Z
2015-04-05T23:20:47Z
Efficient Dictionary Learning via Very Sparse Random Projections
Performing signal processing tasks on compressive measurements of data has received great attention in recent years. In this paper, we extend previous work on compressive dictionary learning by showing that more general random projections may be used, including sparse ones. More precisely, we examine compressive K-means clustering as a special case of compressive dictionary learning and give theoretical guarantees for its performance for a very general class of random projections. We then propose a memory and computation efficient dictionary learning algorithm, specifically designed for analyzing large volumes of high-dimensional data, which learns the dictionary from very sparse random projections. Experimental results demonstrate that our approach allows for reduction of computational complexity and memory/data access, with controllable loss in accuracy.
[ "['Farhad Pourkamali-Anaraki' 'Stephen Becker' 'Shannon M. Hughes']", "Farhad Pourkamali-Anaraki, Stephen Becker, Shannon M. Hughes" ]
stat.ML cs.CL cs.LG
null
1504.01255
null
null
http://arxiv.org/pdf/1504.01255v3
2015-11-01T15:26:16Z
2015-04-06T10:42:07Z
Semi-supervised Convolutional Neural Networks for Text Categorization via Region Embedding
This paper presents a new semi-supervised framework with convolutional neural networks (CNNs) for text categorization. Unlike the previous approaches that rely on word embeddings, our method learns embeddings of small text regions from unlabeled data for integration into a supervised CNN. The proposed scheme for embedding learning is based on the idea of two-view semi-supervised learning, which is intended to be useful for the task of interest even though the training is done on unlabeled data. Our models achieve better results than previous approaches on sentiment classification and topic classification tasks.
[ "Rie Johnson and Tong Zhang", "['Rie Johnson' 'Tong Zhang']" ]
math.ST cs.LG math.OC stat.ML stat.TH
null
1504.01294
null
null
http://arxiv.org/pdf/1504.01294v2
2016-04-22T21:58:42Z
2015-04-06T14:49:13Z
A Probabilistic $\ell_1$ Method for Clustering High Dimensional Data
In general, the clustering problem is NP-hard, and global optimality cannot be established for non-trivial instances. For high-dimensional data, distance-based methods for clustering or classification face an additional difficulty: the unreliability of distances in very high-dimensional spaces. We propose a distance-based iterative method for clustering data in very high-dimensional space, using the $\ell_1$-metric, which is less sensitive to high dimensionality than the Euclidean distance. For $K$ clusters in $\mathbb{R}^n$, the problem decomposes into $K$ problems coupled by probabilities, and an iteration reduces to finding $Kn$ weighted medians of points on a line. The complexity of the algorithm is linear in the dimension of the data space, and its performance was observed to improve significantly as the dimension increases.
[ "['Tsvetan Asamov' 'Adi Ben-Israel']", "Tsvetan Asamov and Adi Ben-Israel" ]
stat.ML cs.LG
null
1504.01344
null
null
http://arxiv.org/pdf/1504.01344v1
2015-04-06T18:19:45Z
2015-04-06T18:19:45Z
Early Stopping is Nonparametric Variational Inference
We show that unconverged stochastic gradient descent can be interpreted as a procedure that samples from a nonparametric variational approximate posterior distribution. This distribution is implicitly defined as the transformation of an initial distribution by a sequence of optimization updates. By tracking the change in entropy over this sequence of transformations during optimization, we form a scalable, unbiased estimate of the variational lower bound on the log marginal likelihood. We can use this bound to optimize hyperparameters instead of using cross-validation. This Bayesian interpretation of SGD suggests improved, overfitting-resistant optimization procedures, and gives a theoretical foundation for popular tricks such as early stopping and ensembling. We investigate the properties of this marginal likelihood estimator on neural network models.
[ "['Dougal Maclaurin' 'David Duvenaud' 'Ryan P. Adams']", "Dougal Maclaurin, David Duvenaud, Ryan P. Adams" ]
cs.LG
null
1504.01365
null
null
http://arxiv.org/pdf/1504.01365v1
2015-04-06T19:25:47Z
2015-04-06T19:25:47Z
PASSCoDe: Parallel ASynchronous Stochastic dual Co-ordinate Descent
Stochastic Dual Coordinate Descent (SDCD) has become one of the most efficient ways to solve the family of $\ell_2$-regularized empirical risk minimization problems, including linear SVM, logistic regression, and many others. A vanilla implementation of SDCD is quite slow; however, by maintaining primal variables while updating dual variables, its time complexity can be significantly reduced. Such a strategy forms the core algorithm in the widely-used LIBLINEAR package. In this paper, we parallelize the SDCD algorithms in LIBLINEAR. In recent research, several synchronized parallel SDCD algorithms have been proposed; however, they fail to achieve good speedup in the shared memory multi-core setting. We propose a family of asynchronous stochastic dual coordinate descent algorithms (ASDCD). Each thread repeatedly selects a random dual variable and conducts coordinate updates using the primal variables that are stored in the shared memory. We analyze the convergence properties when different locking/atomic mechanisms are applied. For the implementation with atomic operations, we show linear convergence under mild conditions. For the implementation without any atomic operations or locking, we present the first {\it backward error analysis} for ASDCD in the multi-core environment, showing that the converged solution is the exact solution of a primal problem with a perturbed regularizer. Experimental results show that our methods are much faster than previous parallel coordinate descent solvers.
[ "['Cho-Jui Hsieh' 'Hsiang-Fu Yu' 'Inderjit S. Dhillon']", "Cho-Jui Hsieh and Hsiang-Fu Yu and Inderjit S. Dhillon" ]
cs.IT cs.DM cs.LG math.IT math.ST stat.ML stat.TH
null
1504.01369
null
null
http://arxiv.org/pdf/1504.01369v4
2016-05-06T03:18:52Z
2015-04-06T19:47:01Z
Information Recovery from Pairwise Measurements
This paper is concerned with jointly recovering $n$ node-variables $\left\{ x_{i}\right\}_{1\leq i\leq n}$ from a collection of pairwise difference measurements. Imagine we acquire a few observations taking the form of $x_{i}-x_{j}$; the observation pattern is represented by a measurement graph $\mathcal{G}$ with an edge set $\mathcal{E}$ such that $x_{i}-x_{j}$ is observed if and only if $(i,j)\in\mathcal{E}$. To account for noisy measurements in a general manner, we model the data acquisition process by a set of channels with given input/output transition measures. Employing information-theoretic tools applied to channel decoding problems, we develop a \emph{unified} framework to characterize the fundamental recovery criterion, which accommodates general graph structures, alphabet sizes, and channel transition measures. In particular, our results isolate a family of \emph{minimum} \emph{channel divergence measures} to characterize the degree of measurement corruption, which together with the size of the minimum cut of $\mathcal{G}$ dictates the feasibility of exact information recovery. For various homogeneous graphs, the recovery condition depends almost exclusively on the edge sparsity of the measurement graph, irrespective of other graphical metrics; alternatively, the minimum sample complexity required for these graphs scales like \[ \text{minimum sample complexity }\asymp\frac{n\log n}{\mathsf{Hel}_{1/2}^{\min}} \] for a certain information metric $\mathsf{Hel}_{1/2}^{\min}$ defined in the main text, as long as the alphabet size is not super-polynomial in $n$. We apply our general theory to three concrete applications, including the stochastic block model, the outlier model, and the haplotype assembly problem. Our theory leads to order-wise tight recovery conditions for all these scenarios.
[ "Yuxin Chen, Changho Suh, Andrea J. Goldsmith", "['Yuxin Chen' 'Changho Suh' 'Andrea J. Goldsmith']" ]
cs.LG quant-ph
null
1504.01446
null
null
http://arxiv.org/pdf/1504.01446v1
2015-04-07T00:50:30Z
2015-04-07T00:50:30Z
Totally Corrective Boosting with Cardinality Penalization
We propose a totally corrective boosting algorithm with explicit cardinality regularization. The resulting combinatorial optimization problems are not known to be efficiently solvable with existing classical methods, but emerging quantum optimization technology gives hope for achieving sparser models in practice. In order to demonstrate the utility of our algorithm, we use a distributed classical heuristic optimizer as a stand-in for quantum hardware. Even though this evaluation methodology incurs large time and resource costs on classical computing machinery, it allows us to gauge the potential gains in generalization performance and sparsity of the resulting boosted ensembles. Our experimental results on public data sets commonly used for benchmarking of boosting algorithms decidedly demonstrate the existence of such advantages. If actual quantum optimization were to be used with this algorithm in the future, we would expect equivalent or superior results at much smaller time and energy costs during training. Moreover, studying cardinality-penalized boosting also sheds light on why unregularized boosting algorithms with early stopping often yield better results than their counterparts with explicit convex regularization: Early stopping performs suboptimal cardinality regularization. The results that we present here indicate it is beneficial to explicitly solve the combinatorial problem still left open at early termination.
[ "['Vasil S. Denchev' 'Nan Ding' 'Shin Matsushima' 'S. V. N. Vishwanathan'\n 'Hartmut Neven']", "Vasil S. Denchev, Nan Ding, Shin Matsushima, S.V.N. Vishwanathan,\n Hartmut Neven" ]
cs.LG cs.CL cs.NE stat.ML
null
1504.01482
null
null
http://arxiv.org/pdf/1504.01482v1
2015-04-07T06:12:14Z
2015-04-07T06:12:14Z
Deep Recurrent Neural Networks for Acoustic Modelling
We present a novel deep Recurrent Neural Network (RNN) model for acoustic modelling in Automatic Speech Recognition (ASR). We term our contribution the TC-DNN-BLSTM-DNN model: it combines a Deep Neural Network (DNN) with Time Convolution (TC), followed by a Bidirectional Long Short-Term Memory (BLSTM), and a final DNN. The first DNN acts as a feature processor for our model, the BLSTM then generates a context from the acoustic signal sequence, and the final DNN takes the context and models the posterior probabilities of the acoustic states. We achieve a 3.47 WER on the Wall Street Journal (WSJ) eval92 task, or more than 8% relative improvement over the baseline DNN models.
[ "['William Chan' 'Ian Lane']", "William Chan, Ian Lane" ]
cs.LG cs.CL cs.NE stat.ML
null
1504.01483
null
null
http://arxiv.org/pdf/1504.01483v1
2015-04-07T06:15:44Z
2015-04-07T06:15:44Z
Transferring Knowledge from a RNN to a DNN
Deep Neural Network (DNN) acoustic models have yielded many state-of-the-art results in Automatic Speech Recognition (ASR) tasks. More recently, Recurrent Neural Network (RNN) models have been shown to outperform their DNN counterparts. However, state-of-the-art DNN and RNN models tend to be impractical to deploy on embedded systems with limited computational capacity. Traditionally, the approach for embedded platforms is to either train a small DNN directly, or to train a small DNN that learns the output distribution of a large DNN. In this paper, we utilize a state-of-the-art RNN to transfer knowledge to a small DNN. We use the RNN model to generate soft alignments and train the small DNN to minimize the Kullback-Leibler divergence against them. The small DNN trained on the soft RNN alignments achieved a 3.93 WER on the Wall Street Journal (WSJ) eval92 task, compared to a baseline 4.54 WER, or more than 13% relative improvement.
[ "['William Chan' 'Nan Rosemary Ke' 'Ian Lane']", "William Chan and Nan Rosemary Ke and Ian Lane" ]
cs.CV cs.LG stat.ML
10.1109/CVPR.2015.7298942
1504.01492
null
null
http://arxiv.org/abs/1504.01492v1
2015-04-07T06:43:50Z
2015-04-07T06:43:50Z
Efficient SDP Inference for Fully-connected CRFs Based on Low-rank Decomposition
Conditional Random Fields (CRF) have been widely used in a variety of computer vision tasks. Conventional CRFs typically define edges on neighboring image pixels, resulting in a sparse graph such that efficient inference can be performed. However, these CRFs fail to model long-range contextual relationships. Fully-connected CRFs have thus been proposed. While there are efficient approximate inference methods for such CRFs, usually they are sensitive to initialization and make strong assumptions. In this work, we develop an efficient, yet general algorithm for inference on fully-connected CRFs. The algorithm is based on a scalable SDP algorithm and the low-rank approximation of the similarity/kernel matrix. The core of the proposed algorithm is a tailored quasi-Newton method that takes advantage of the low-rank matrix approximation when solving the specialized SDP dual problem. Experiments demonstrate that our method can be applied on fully-connected CRFs that cannot be solved previously, such as pixel-level image co-segmentation.
[ "Peng Wang, Chunhua Shen, Anton van den Hengel", "['Peng Wang' 'Chunhua Shen' 'Anton van den Hengel']" ]
cs.LG cs.NE
null
1504.01575
null
null
http://arxiv.org/pdf/1504.01575v3
2015-11-02T07:46:24Z
2015-04-07T12:21:03Z
Bidirectional Recurrent Neural Networks as Generative Models - Reconstructing Gaps in Time Series
Bidirectional recurrent neural networks (RNN) are trained to predict both in the positive and negative time directions simultaneously. They have not been used commonly in unsupervised tasks, because a probabilistic interpretation of the model has been difficult. Recently, two different frameworks, GSN and NADE, provide a connection between reconstruction and probabilistic modeling, which makes the interpretation possible. As far as we know, neither GSN nor NADE has been studied in the context of time series before. As an example of an unsupervised task, we study the problem of filling in gaps in high-dimensional time series with complex dynamics. Although unidirectional RNNs have recently been trained successfully to model such time series, inference in the negative time direction is non-trivial. We propose two probabilistic interpretations of bidirectional RNNs that can be used to reconstruct missing gaps efficiently. Our experiments on text data show that both proposed methods are much more accurate than unidirectional reconstructions, although a bit less accurate than a computationally complex bidirectional Bayesian inference on the unidirectional RNN. We also provide results on music data for which the Bayesian inference is computationally infeasible, demonstrating the scalability of the proposed methods.
[ "['Mathias Berglund' 'Tapani Raiko' 'Mikko Honkala' 'Leo Kärkkäinen'\n 'Akos Vetek' 'Juha Karhunen']", "Mathias Berglund, Tapani Raiko, Mikko Honkala, Leo K\\\"arkk\\\"ainen,\n Akos Vetek, Juha Karhunen" ]
cs.LG stat.ML
null
1504.01697
null
null
http://arxiv.org/pdf/1504.01697v1
2015-04-07T18:21:37Z
2015-04-07T18:21:37Z
Tensor machines for learning target-specific polynomial features
Recent years have demonstrated that using random feature maps can significantly decrease the training and testing times of kernel-based algorithms without significantly lowering their accuracy. Regrettably, because random features are target-agnostic, typically thousands of such features are necessary to achieve acceptable accuracies. In this work, we consider the problem of learning a small number of explicit polynomial features. Our approach, named Tensor Machines, finds a parsimonious set of features by optimizing over the hypothesis class introduced by Kar and Karnick for random feature maps in a target-specific manner. Exploiting a natural connection between polynomials and tensors, we provide bounds on the generalization error of Tensor Machines. Empirically, Tensor Machines behave favorably on several real-world datasets compared to other state-of-the-art techniques for learning polynomial features, and deliver significantly more parsimonious models.
[ "Jiyan Yang and Alex Gittens", "['Jiyan Yang' 'Alex Gittens']" ]
cs.LG
null
1504.01840
null
null
http://arxiv.org/pdf/1504.01840v1
2015-04-08T06:22:44Z
2015-04-08T06:22:44Z
Autonomous CRM Control via CLV Approximation with Deep Reinforcement Learning in Discrete and Continuous Action Space
The paper outlines a framework for autonomous control of a CRM (customer relationship management) system. First, it explores how a modified version of the widely accepted Recency-Frequency-Monetary Value system of metrics can be used to define the state space of clients or donors. Second, it describes a procedure to determine the optimal direct marketing action, in discrete and continuous action space, for a given individual based on their position in the state space. The procedure involves the use of model-free Q-learning to train a deep neural network that relates a client's position in the state space to rewards associated with possible marketing actions. The estimated value function over the client state space can be interpreted as customer lifetime value, and thus allows for a quick plug-in estimation of CLV for a given client. Experimental results are presented, based on the KDD Cup 1998 mailing dataset of donation solicitations.
[ "Yegor Tkachenko", "['Yegor Tkachenko']" ]