categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: sequence
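A minimal sketch of loading records with the schema above, assuming they are stored one per line in a JSON Lines file; the file name and field handling are hypothetical, not part of the original dump:

```python
import json
from collections import Counter

# Hypothetical file name; each line is assumed to hold one JSON record with the
# fields listed above (categories, doi, id, year, venue, link, updated,
# published, title, abstract, authors).
with open("arxiv_cs_lg.jsonl", "r", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

# Basic inspection: record count and the most frequent primary categories.
primary = Counter(r["categories"].split()[0] for r in records)
print(len(records), "records")
for cat, n in primary.most_common(5):
    print(cat, n)
```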
cs.LG cs.AI
10.1016/j.bspc.2013.06.004
1212.2262
null
null
http://arxiv.org/abs/1212.2262v1
2012-12-11T00:49:27Z
2012-12-11T00:49:27Z
Bag-of-Words Representation for Biomedical Time Series Classification
Automatic analysis of biomedical time series such as electroencephalogram (EEG) and electrocardiographic (ECG) signals has attracted great interest in the biomedical engineering community due to its important applications in medicine. In this work, a simple yet effective bag-of-words representation that is able to capture both local and global structure similarity information is proposed for biomedical time series representation. In particular, similar to the bag-of-words model used in the text document domain, the proposed method treats a time series as a text document and extracts local segments from the time series as words. The biomedical time series is then represented as a histogram of codewords, each entry of which counts the occurrences of a codeword in the time series. Although the temporal order of the local segments is ignored, the bag-of-words representation is able to capture high-level structural information because both local and global structural information are well utilized. The performance of the bag-of-words model is validated on three datasets extracted from real EEG and ECG signals. The experimental results demonstrate that the proposed method is not only insensitive to parameters of the bag-of-words model such as local segment length and codebook size, but also robust to noise.
[ "Jin Wang, Ping Liu, Mary F.H.She, Saeid Nahavandi and and Abbas\n Kouzani", "['Jin Wang' 'Ping Liu' 'Mary F. H. She' 'Saeid Nahavandi'\n 'and Abbas Kouzani']" ]
cs.DB cs.IR cs.LG
null
1212.2287
null
null
http://arxiv.org/pdf/1212.2287v2
2013-04-26T16:33:08Z
2012-12-11T03:20:46Z
Runtime Optimizations for Prediction with Tree-Based Models
Tree-based models have proven to be an effective solution for web ranking as well as other problems in diverse domains. This paper focuses on optimizing the runtime performance of applying such models to make predictions, given an already-trained model. Although exceedingly simple conceptually, most implementations of tree-based models do not efficiently utilize modern superscalar processor architectures. By laying out data structures in memory in a more cache-conscious fashion, removing branches from the execution flow using a technique called predication, and micro-batching predictions using a technique called vectorization, we are able to better exploit modern processor architectures and significantly improve the speed of tree-based models over hard-coded if-else blocks. Our work contributes to the exploration of architecture-conscious runtime implementations of machine learning algorithms.
[ "Nima Asadi, Jimmy Lin, and Arjen P. de Vries", "['Nima Asadi' 'Jimmy Lin' 'Arjen P. de Vries']" ]
stat.ML cs.LG
null
1212.2340
null
null
http://arxiv.org/pdf/1212.2340v1
2012-12-11T09:03:17Z
2012-12-11T09:03:17Z
PAC-Bayesian Learning and Domain Adaptation
In machine learning, Domain Adaptation (DA) arises when the distribution generating the test (target) data differs from the one generating the learning (source) data. It is well known that DA is a hard task even under strong assumptions, among which is the covariate shift, where the source and target distributions diverge only in their marginals, i.e. they have the same labeling function. Another popular approach is to consider a hypothesis class that brings the two distributions closer while implying a low error for both tasks. This is a VC-dim approach that restricts the complexity of a hypothesis class in order to get good generalization. Instead, we propose a PAC-Bayesian approach that seeks suitable weights to be given to each hypothesis in order to build a majority vote. We prove a new DA bound in the PAC-Bayesian context. This leads us to design the first DA-PAC-Bayesian algorithm based on the minimization of the proposed bound. Doing so, we seek a $\rho$-weighted majority vote that takes into account a trade-off between three quantities. The first two quantities are, as usual in the PAC-Bayesian approach, (a) the complexity of the majority vote (measured by a Kullback-Leibler divergence) and (b) its empirical risk (measured by the $\rho$-average errors on the source sample). The third quantity is (c) the capacity of the majority vote to distinguish some structural difference between the source and target samples.
[ "['Pascal Germain' 'Amaury Habrard' 'François Laviolette' 'Emilie Morvant']", "Pascal Germain, Amaury Habrard (LAHC), Fran\\c{c}ois Laviolette, Emilie\n Morvant (LIF)" ]
cs.CL cs.LG
null
1212.2390
null
null
http://arxiv.org/pdf/1212.2390v1
2012-12-11T11:35:30Z
2012-12-11T11:35:30Z
On the complexity of learning a language: An improvement of Block's algorithm
Language learning is thought to be a highly complex process. One of the hurdles in learning a language is to learn the rules of syntax of the language. Rules of syntax are often ordered in that before one rule can be applied one must apply another. It has been thought that to learn the order of n rules one must go through all n! permutations. Thus to learn the order of 27 rules would require 27! steps, or about 1.08889x10^{28} steps. This number is much greater than the number of seconds since the beginning of the universe! In an insightful analysis, the linguist Block ([Block 86], pp. 62-63, p. 238) showed that with the assumption of transitivity this vast number of learning steps reduces to a mere 377 steps. We present a mathematical analysis of the complexity of Block's algorithm. The algorithm has a complexity of order n^2 given n rules. In addition, we improve Block's results exponentially, by introducing an algorithm that has complexity of order less than n log n.
[ "Eric Werner", "['Eric Werner']" ]
cs.CR cs.LG
10.5121/ijnsa
1212.2414
null
null
http://arxiv.org/abs/1212.2414v1
2012-12-11T13:14:42Z
2012-12-11T13:14:42Z
Mining Techniques in Network Security to Enhance Intrusion Detection Systems
In intrusion detection systems, classifiers still suffer from several drawbacks such as data dimensionality and dominance, different network feature types, and data impact on the classification. In this paper two significant enhancements are presented to solve these drawbacks. The first enhancement is an improved feature selection using sequential backward search and information gain. This, in turn, extracts valuable features that positively enhance the detection rate and reduce the false positive rate. The second enhancement is converting nominal network features to numeric ones by exploiting the discrete random variable and the probability mass function to solve the problem of different feature types, the problem of data dominance, and data impact on the classification. The latter is combined with known normalization methods to achieve a significant hybrid normalization approach. Finally, an intensive and comparative study confirms the efficiency of these enhancements and shows better performance compared to other proposed methods.
[ "['Maher Salem' 'Ulrich Buehler']", "Maher Salem and Ulrich Buehler" ]
cs.LG cs.CV
null
1212.2415
null
null
http://arxiv.org/pdf/1212.2415v1
2012-12-11T13:19:54Z
2012-12-11T13:19:54Z
Robust Face Recognition using Local Illumination Normalization and Discriminant Feature Point Selection
Face recognition systems must be robust to the variation of various factors such as facial expression, illumination, head pose and aging. In particular, robustness against illumination variation is one of the most important problems to be solved for the practical use of face recognition systems. The Gabor wavelet is widely used in face detection and recognition because it makes it possible to simulate the function of the human visual system. In this paper, we propose a method for extracting Gabor wavelet features which is stable under the variation of local illumination and show experimental results demonstrating its effectiveness.
[ "['Song Han' 'Jinsong Kim' 'Cholhun Kim' 'Jongchol Jo' 'Sunam Han']", "Song Han, Jinsong Kim, Cholhun Kim, Jongchol Jo, and Sunam Han" ]
cs.IR cs.LG stat.ML
null
1212.2442
null
null
http://arxiv.org/pdf/1212.2442v1
2012-10-19T15:04:12Z
2012-10-19T15:04:12Z
Active Collaborative Filtering
Collaborative filtering (CF) allows the preferences of multiple users to be pooled to make recommendations regarding unseen products. We consider in this paper the problem of online and interactive CF: given the current ratings associated with a user, what queries (new ratings) would most improve the quality of the recommendations made? We cast this in terms of expected value of information (EVOI), but the online computational cost of computing optimal queries is prohibitive. We show how offline prototyping and computation of bounds on EVOI can be used to dramatically reduce the required online computation. The framework we develop is general, but we focus on derivations and empirical study in the specific case of the multiple-cause vector quantization model.
[ "Craig Boutilier, Richard S. Zemel, Benjamin Marlin", "['Craig Boutilier' 'Richard S. Zemel' 'Benjamin Marlin']" ]
cs.LG stat.ML
null
1212.2447
null
null
http://arxiv.org/pdf/1212.2447v1
2012-10-19T15:03:51Z
2012-10-19T15:03:51Z
Bayesian Hierarchical Mixtures of Experts
The Hierarchical Mixture of Experts (HME) is a well-known tree-based model for regression and classification, based on soft probabilistic splits. In its original formulation it was trained by maximum likelihood, and is therefore prone to over-fitting. Furthermore, the maximum likelihood framework offers no natural metric for optimizing the complexity and structure of the tree. Previous attempts to provide a Bayesian treatment of the HME model have relied either on ad-hoc local Gaussian approximations or have dealt with related models representing the joint distribution of both input and output variables. In this paper we describe a fully Bayesian treatment of the HME model based on variational inference. By combining local and global variational methods we obtain a rigorous lower bound on the marginal probability of the data under the model. This bound is optimized during the training phase, and its resulting value can be used for model order selection. We present results using this approach for a data set describing robot arm kinematics.
[ "Christopher M. Bishop, Markus Svensen", "['Christopher M. Bishop' 'Markus Svensen']" ]
cs.LG stat.ML
null
1212.2460
null
null
http://arxiv.org/pdf/1212.2460v1
2012-10-19T15:05:02Z
2012-10-19T15:05:02Z
The Information Bottleneck EM Algorithm
Learning with hidden variables is a central challenge in probabilistic graphical models that has important implications for many real-life problems. The classical approach is using the Expectation Maximization (EM) algorithm. This algorithm, however, can get trapped in local maxima. In this paper we explore a new approach that is based on the Information Bottleneck principle. In this approach, we view the learning problem as a tradeoff between two information theoretic objectives. The first is to make the hidden variables uninformative about the identity of specific instances. The second is to make the hidden variables informative about the observed attributes. By exploring different tradeoffs between these two objectives, we can gradually converge on a high-scoring solution. As we show, the resulting Information Bottleneck Expectation Maximization (IB-EM) algorithm manages to find solutions that are superior to standard EM methods.
[ "['Gal Elidan' 'Nir Friedman']", "Gal Elidan, Nir Friedman" ]
stat.ME cs.LG stat.ML
null
1212.2462
null
null
http://arxiv.org/pdf/1212.2462v1
2012-10-19T15:04:52Z
2012-10-19T15:04:52Z
A New Algorithm for Maximum Likelihood Estimation in Gaussian Graphical Models for Marginal Independence
Graphical models with bi-directed edges (<->) represent marginal independence: the absence of an edge between two vertices indicates that the corresponding variables are marginally independent. In this paper, we consider maximum likelihood estimation in the case of continuous variables with a Gaussian joint distribution, sometimes termed a covariance graph model. We present a new fitting algorithm which exploits standard regression techniques and establish its convergence properties. Moreover, we contrast our procedure to existing estimation methods.
[ "['Mathias Drton' 'Thomas S. Richardson']", "Mathias Drton, Thomas S. Richardson" ]
cs.AI cs.LG stat.ML
null
1212.2464
null
null
http://arxiv.org/pdf/1212.2464v1
2012-10-19T15:04:44Z
2012-10-19T15:04:44Z
A Robust Independence Test for Constraint-Based Learning of Causal Structure
Constraint-based (CB) learning is a formalism for learning a causal network with a database D by performing a series of conditional-independence tests to infer structural information. This paper considers a new test of independence that combines ideas from Bayesian learning, Bayesian network inference, and classical hypothesis testing to produce a more reliable and robust test. The new test can be calculated in the same asymptotic time and space required for the standard tests such as the chi-squared test, but it allows the specification of a prior distribution over parameters and can be used when the database is incomplete. We prove that the test is correct, and we demonstrate empirically that, when used with a CB causal discovery algorithm with noninformative priors, it recovers structural features more reliably and it produces networks with smaller KL-Divergence, especially as the number of nodes increases or the number of records decreases. Another benefit is the dramatic reduction in the probability that a CB algorithm will stall during the search, providing a remedy for an annoying problem plaguing CB learning when the database is small.
[ "['Denver Dash' 'Marek J. Druzdzel']", "Denver Dash, Marek J. Druzdzel" ]
cs.LG stat.ML
null
1212.2466
null
null
http://arxiv.org/pdf/1212.2466v1
2012-10-19T15:04:36Z
2012-10-19T15:04:36Z
On Information Regularization
We formulate a principle for classification with the knowledge of the marginal distribution over the data points (unlabeled data). The principle is cast in terms of Tikhonov style regularization where the regularization penalty articulates the way in which the marginal density should constrain otherwise unrestricted conditional distributions. Specifically, the regularization penalty penalizes any information introduced between the examples and labels beyond what is provided by the available labeled examples. The work extends Szummer and Jaakkola's information regularization (NIPS 2002) to multiple dimensions, providing a regularizer independent of the covering of the space used in the derivation. We show in addition how the information regularizer can be used as a measure of complexity of the classification task with unlabeled data and prove a relevant sample-complexity bound. We illustrate the regularization principle in practice by restricting the class of conditional distributions to be logistic regression models and constructing the regularization penalty from a finite set of unlabeled examples.
[ "['Adrian Corduneanu' 'Tommi S. Jaakkola']", "Adrian Corduneanu, Tommi S. Jaakkola" ]
cs.LG cs.AI stat.ML
null
1212.2468
null
null
http://arxiv.org/pdf/1212.2468v1
2012-10-19T15:04:28Z
2012-10-19T15:04:28Z
Large-Sample Learning of Bayesian Networks is NP-Hard
In this paper, we provide new complexity results for algorithms that learn discrete-variable Bayesian networks from data. Our results apply whenever the learning algorithm uses a scoring criterion that favors the simplest model able to represent the generative distribution exactly. Our results therefore hold whenever the learning algorithm uses a consistent scoring criterion and is applied to a sufficiently large dataset. We show that identifying high-scoring structures is hard, even when we are given an independence oracle, an inference oracle, and/or an information oracle. Our negative results also apply to the learning of discrete-variable Bayesian networks in which each node has at most k parents, for all k > 3.
[ "David Maxwell Chickering, Christopher Meek, David Heckerman", "['David Maxwell Chickering' 'Christopher Meek' 'David Heckerman']" ]
cs.LG cs.AI stat.ML
null
1212.2470
null
null
http://arxiv.org/pdf/1212.2470v1
2012-10-19T15:04:17Z
2012-10-19T15:04:17Z
Reasoning about Bayesian Network Classifiers
Bayesian network classifiers are used in many fields, and one common class of classifiers is naive Bayes classifiers. In this paper, we introduce an approach for reasoning about Bayesian network classifiers in which we explicitly convert them into Ordered Decision Diagrams (ODDs), which are then used to reason about the properties of these classifiers. Specifically, we present an algorithm for converting any naive Bayes classifier into an ODD, and we show theoretically and experimentally that this algorithm can give us an ODD that is tractable in size even given an intractable number of instances. Since ODDs are tractable representations of classifiers, our algorithm allows us to efficiently test the equivalence of two naive Bayes classifiers and characterize discrepancies between them. We also show a number of additional results including a count of distinct classifiers that can be induced by changing some CPT in a naive Bayes classifier, and the range of allowable changes to a CPT which keeps the current classifier unchanged.
[ "Hei Chan, Adnan Darwiche", "['Hei Chan' 'Adnan Darwiche']" ]
cs.LG cs.AI cs.NA
null
1212.2471
null
null
http://arxiv.org/pdf/1212.2471v1
2012-10-19T15:06:41Z
2012-10-19T15:06:41Z
Monte Carlo Matrix Inversion Policy Evaluation
Forsythe and Leibler (1950) introduced a statistical technique for finding the inverse of a matrix by characterizing the elements of the matrix inverse as expected values of a sequence of random walks. Barto and Duff (1994) subsequently showed relations between this technique and standard dynamic programming and temporal differencing methods. The advantage of the Monte Carlo matrix inversion (MCMI) approach is that it scales better with respect to state-space size than alternative techniques. In this paper, we introduce an algorithm for performing reinforcement learning policy evaluation using MCMI. We demonstrate that MCMI improves on runtime over a maximum likelihood model-based policy evaluation approach and on both runtime and accuracy over the temporal differencing (TD) policy evaluation approach. We further improve on MCMI policy evaluation by adding an importance sampling technique to our algorithm to reduce the variance of our estimator. Lastly, we illustrate techniques for scaling up MCMI to large state spaces in order to perform policy improvement.
[ "Fletcher Lu, Dale Schuurmans", "['Fletcher Lu' 'Dale Schuurmans']" ]
cs.LG stat.ML
null
1212.2472
null
null
http://arxiv.org/pdf/1212.2472v1
2012-10-19T15:06:36Z
2012-10-19T15:06:36Z
Budgeted Learning of Naive-Bayes Classifiers
Frequently, acquiring training data has an associated cost. We consider the situation where the learner may purchase data during training, subject to a budget. In particular, we examine the case where each feature label has an associated cost, and the total cost of all feature labels acquired during training must not exceed the budget. This paper compares methods for choosing which feature label to purchase next, given the budget and the current belief state of naive Bayes model parameters. Whereas active learning has traditionally focused on myopic (greedy) strategies for query selection, this paper presents a tractable method for incorporating knowledge of the budget into the decision making process, which improves performance.
[ "Daniel J. Lizotte, Omid Madani, Russell Greiner", "['Daniel J. Lizotte' 'Omid Madani' 'Russell Greiner']" ]
cs.LG stat.ML
null
1212.2474
null
null
http://arxiv.org/pdf/1212.2474v1
2012-10-19T15:06:27Z
2012-10-19T15:06:27Z
Learning Riemannian Metrics
We propose a solution to the problem of estimating a Riemannian metric associated with a given differentiable manifold. The metric learning problem is based on minimizing the relative volume of a given set of points. We derive the details for a family of metrics on the multinomial simplex. The resulting metric has applications in text classification and bears some similarity to TFIDF representation of text documents.
[ "Guy Lebanon", "['Guy Lebanon']" ]
cs.LG cs.SY
null
1212.2475
null
null
http://arxiv.org/pdf/1212.2475v1
2012-10-19T15:06:23Z
2012-10-19T15:06:23Z
Efficient Gradient Estimation for Motor Control Learning
The task of estimating the gradient of a function in the presence of noise is central to several forms of reinforcement learning, including policy search methods. We present two techniques for reducing gradient estimation errors in the presence of observable input noise applied to the control signal. The first method extends the idea of a reinforcement baseline by fitting a local linear model to the function whose gradient is being estimated; we show how to find the linear model that minimizes the variance of the gradient estimate, and how to estimate the model from data. The second method improves this further by discounting components of the gradient vector that have high variance. These methods are applied to the problem of motor control learning, where actuator noise has a significant influence on behavior. In particular, we apply the techniques to learn locally optimal controllers for a dart-throwing task using a simulated three-link arm; we demonstrate that proposed methods significantly improve the reward function gradient estimate and, consequently, the learning curve, over existing methods.
[ "Gregory Lawrence, Noah Cowan, Stuart Russell", "['Gregory Lawrence' 'Noah Cowan' 'Stuart Russell']" ]
cs.LG cs.AI stat.ML
null
1212.2480
null
null
http://arxiv.org/pdf/1212.2480v1
2012-10-19T15:06:00Z
2012-10-19T15:06:00Z
Approximate Inference and Constrained Optimization
Loopy and generalized belief propagation are popular algorithms for approximate inference in Markov random fields and Bayesian networks. Fixed points of these algorithms correspond to extrema of the Bethe and Kikuchi free energy. However, belief propagation does not always converge, which explains the need for approaches that explicitly minimize the Kikuchi/Bethe free energy, such as CCCP and UPS. Here we describe a class of algorithms that solves this typically nonconvex constrained minimization of the Kikuchi free energy through a sequence of convex constrained minimizations of upper bounds on the Kikuchi free energy. Intuitively one would expect tighter bounds to lead to faster algorithms, which is indeed convincingly demonstrated in our simulations. Several ideas are applied to obtain tight convex bounds that yield dramatic speed-ups over CCCP.
[ "Tom Heskes, Kees Albers, Hilbert Kappen", "['Tom Heskes' 'Kees Albers' 'Hilbert Kappen']" ]
cs.LG stat.ML
null
1212.2483
null
null
http://arxiv.org/pdf/1212.2483v1
2012-10-19T15:05:46Z
2012-10-19T15:05:46Z
Sufficient Dimensionality Reduction with Irrelevant Statistics
The problem of finding a reduced dimensionality representation of categorical variables while preserving their most relevant characteristics is fundamental for the analysis of complex data. Specifically, given a co-occurrence matrix of two variables, one often seeks a compact representation of one variable which preserves information about the other variable. We have recently introduced 'Sufficient Dimensionality Reduction' [GT-2003], a method that extracts continuous reduced dimensional features whose measurements (i.e., expectation values) capture maximal mutual information among the variables. However, such measurements often capture information that is irrelevant for a given task. Widely known examples are illumination conditions, which are irrelevant as features for face recognition, writing style, which is irrelevant as a feature for content classification, and intonation, which is irrelevant as a feature for speech recognition. Such irrelevance cannot be deduced a priori, since it depends on the details of the task, and is thus inherently ill-defined in the purely unsupervised case. Separating relevant from irrelevant features can be achieved using additional side data that contains such irrelevant structures. This approach was taken in [CT-2002], extending the information bottleneck method, which uses clustering to compress the data. Here we use this side-information framework to identify features whose measurements are maximally informative for the original data set, but carry as little information as possible on a side data set. In statistical terms this can be understood as extracting statistics which are maximally sufficient for the original dataset, while simultaneously maximally ancillary for the side dataset. We formulate this tradeoff as a constrained optimization problem and characterize its solutions. We then derive a gradient descent algorithm for this problem, which is based on the Generalized Iterative Scaling method for finding maximum entropy distributions. The method is demonstrated on synthetic data, as well as on real face recognition datasets, and is shown to outperform standard methods such as oriented PCA.
[ "Amir Globerson, Gal Chechik, Naftali Tishby", "['Amir Globerson' 'Gal Chechik' 'Naftali Tishby']" ]
cs.LG stat.ML
null
1212.2487
null
null
http://arxiv.org/pdf/1212.2487v1
2012-10-19T15:05:29Z
2012-10-19T15:05:29Z
Locally Weighted Naive Bayes
Despite its simplicity, the naive Bayes classifier has surprised machine learning researchers by exhibiting good performance on a variety of learning problems. Encouraged by these results, researchers have looked to overcome naive Bayes' primary weakness - attribute independence - and improve the performance of the algorithm. This paper presents a locally weighted version of naive Bayes that relaxes the independence assumption by learning local models at prediction time. Experimental results show that locally weighted naive Bayes rarely degrades accuracy compared to standard naive Bayes and, in many cases, improves accuracy dramatically. The main advantage of this method compared to other techniques for enhancing naive Bayes is its conceptual and computational simplicity.
[ "['Eibe Frank' 'Mark Hall' 'Bernhard Pfahringer']", "Eibe Frank, Mark Hall, Bernhard Pfahringer" ]
cs.LG stat.ML
null
1212.2488
null
null
http://arxiv.org/pdf/1212.2488v1
2012-10-19T15:05:25Z
2012-10-19T15:05:25Z
A Distance-Based Branch and Bound Feature Selection Algorithm
There is no known efficient method for selecting k Gaussian features from n which achieve the lowest Bayesian classification error. We show an example of how greedy algorithms faced with this task are led to give results that are not optimal. This motivates us to propose a more robust approach. We present a Branch and Bound algorithm for finding a subset of k independent Gaussian features which minimizes the naive Bayesian classification error. Our algorithm uses additive monotonic distance measures to produce bounds for the Bayesian classification error in order to exclude many feature subsets from evaluation, while still returning an optimal solution. We test our method on synthetic data as well as data obtained from gene expression profiling.
[ "['Ari Frank' 'Dan Geiger' 'Zohar Yakhini']", "Ari Frank, Dan Geiger, Zohar Yakhini" ]
cs.LG stat.ML
null
1212.2490
null
null
http://arxiv.org/pdf/1212.2490v1
2012-10-19T15:07:56Z
2012-10-19T15:07:56Z
On the Convergence of Bound Optimization Algorithms
Many practitioners who use the EM algorithm complain that it is sometimes slow. When does this happen, and what can be done about it? In this paper, we study the general class of bound optimization algorithms - including Expectation-Maximization, Iterative Scaling and CCCP - and their relationship to direct optimization algorithms such as gradient-based methods for parameter learning. We derive a general relationship between the updates performed by bound optimization methods and those of gradient and second-order methods and identify analytic conditions under which bound optimization algorithms exhibit quasi-Newton behavior, and conditions under which they possess poor, first-order convergence. Based on this analysis, we consider several specific algorithms, interpret and analyze their convergence properties and provide some recipes for preprocessing input to these algorithms to yield faster convergence behavior. We report empirical results supporting our analysis and showing that simple data preprocessing can result in dramatically improved performance of bound optimizers in practice.
[ "Ruslan R Salakhutdinov, Sam T Roweis, Zoubin Ghahramani", "['Ruslan R Salakhutdinov' 'Sam T Roweis' 'Zoubin Ghahramani']" ]
cs.LG stat.ML
null
1212.2491
null
null
http://arxiv.org/pdf/1212.2491v1
2012-10-19T15:07:51Z
2012-10-19T15:07:51Z
Automated Analytic Asymptotic Evaluation of the Marginal Likelihood for Latent Models
We present and implement two algorithms for analytic asymptotic evaluation of the marginal likelihood of data given a Bayesian network with hidden nodes. As shown by previous work, this evaluation is particularly hard for latent Bayesian network models, namely networks that include hidden variables, where asymptotic approximation deviates from the standard BIC score. Our algorithms solve two central difficulties in asymptotic evaluation of marginal likelihood integrals, namely, evaluation of regular dimensionality drop for latent Bayesian network models and computation of non-standard approximation formulas for singular statistics for these models. The presented algorithms are implemented in Matlab and Maple and their usage is demonstrated for marginal likelihood approximations for Bayesian networks with hidden variables.
[ "['Dmitry Rusakov' 'Dan Geiger']", "Dmitry Rusakov, Dan Geiger" ]
cs.LG stat.ML
null
1212.2494
null
null
http://arxiv.org/pdf/1212.2494v1
2012-10-19T15:07:42Z
2012-10-19T15:07:42Z
Learning Generative Models of Similarity Matrices
We describe a probabilistic (generative) view of affinity matrices along with inference algorithms for a subclass of problems associated with data clustering. This probabilistic view is helpful in understanding different models and algorithms that are based on affinity functions of the data. In particular, we show how (greedy) inference for a specific probabilistic model is equivalent to the spectral clustering algorithm. It also provides a framework for developing new algorithms and extended models. As one case, we present new generative data clustering models that allow us to infer the underlying distance measure suitable for the clustering problem at hand. These models seem to perform well in a larger class of problems for which other clustering algorithms (including spectral clustering) usually fail. Experimental evaluation was performed on a variety of point data sets, showing excellent performance.
[ "Romer Rosales, Brendan J. Frey", "['Romer Rosales' 'Brendan J. Frey']" ]
cs.LG stat.ML
null
1212.2498
null
null
http://arxiv.org/pdf/1212.2498v1
2012-10-19T15:07:23Z
2012-10-19T15:07:23Z
Learning Continuous Time Bayesian Networks
Continuous time Bayesian networks (CTBNs) describe structured stochastic processes with finitely many states that evolve over continuous time. A CTBN is a directed (possibly cyclic) dependency graph over a set of variables, each of which represents a finite state continuous time Markov process whose transition model is a function of its parents. We address the problem of learning parameters and structure of a CTBN from fully observed data. We define a conjugate prior for CTBNs, and show how it can be used both for Bayesian parameter estimation and as the basis of a Bayesian score for structure learning. Because acyclicity is not a constraint in CTBNs, we can show that the structure learning problem is significantly easier, both in theory and in practice, than structure learning for dynamic Bayesian networks (DBNs). Furthermore, as CTBNs can tailor the parameters and dependency structure to the different time granularities of the evolution of different variables, they can provide a better fit to continuous-time processes than DBNs with a fixed time granularity.
[ "['Uri Nodelman' 'Christian R. Shelton' 'Daphne Koller']", "Uri Nodelman, Christian R. Shelton, Daphne Koller" ]
cs.LG cs.AI stat.ML
null
1212.2500
null
null
http://arxiv.org/pdf/1212.2500v1
2012-10-19T15:07:12Z
2012-10-19T15:07:12Z
On Local Optima in Learning Bayesian Networks
This paper proposes and evaluates the k-greedy equivalence search algorithm (KES) for learning Bayesian networks (BNs) from complete data. The main characteristic of KES is that it allows a trade-off between greediness and randomness, thus exploring different good local optima. When greediness is set at maximum, KES corresponds to the greedy equivalence search algorithm (GES). When greediness is kept at minimum, we prove that under mild assumptions KES asymptotically returns any inclusion optimal BN with nonzero probability. Experimental results for both synthetic and real data are reported showing that KES often finds a better local optimum than GES. Moreover, we use KES to experimentally confirm that the number of different local optima is often huge.
[ "Jens D. Nielsen, Tomas Kocka, Jose M. Pena", "['Jens D. Nielsen' 'Tomas Kocka' 'Jose M. Pena']" ]
cs.LG stat.ML
null
1212.2504
null
null
http://arxiv.org/pdf/1212.2504v1
2012-10-19T15:06:52Z
2012-10-19T15:06:52Z
Efficiently Inducing Features of Conditional Random Fields
Conditional Random Fields (CRFs) are undirected graphical models, a special case of which correspond to conditionally-trained finite state machines. A key advantage of these models is their great flexibility to include a wide array of overlapping, multi-granularity, non-independent features of the input. In the face of this freedom, an important question that remains is, what features should be used? This paper presents a feature induction method for CRFs. Founded on the principle of constructing only those feature conjunctions that significantly increase log-likelihood, the approach is based on that of Della Pietra et al. [1997], but altered to work with conditional rather than joint probabilities, and with additional modifications for providing tractability specifically for a sequence model. In comparison with traditional approaches, automated feature induction offers both improved accuracy and more than an order of magnitude reduction in feature count; it enables the use of richer, higher-order Markov models, and offers more freedom to liberally guess about which atomic features may be relevant to a task. The induction method applies to linear-chain CRFs, as well as to more arbitrary CRF structures, also known as Relational Markov Networks [Taskar & Koller, 2002]. We present experimental results on a named entity extraction task.
[ "Andrew McCallum", "['Andrew McCallum']" ]
cs.LG cs.IR stat.ML
null
1212.2508
null
null
http://arxiv.org/pdf/1212.2508v1
2012-10-19T15:08:51Z
2012-10-19T15:08:51Z
Collaborative Ensemble Learning: Combining Collaborative and Content-Based Information Filtering via Hierarchical Bayes
Collaborative filtering (CF) and content-based filtering (CBF) have widely been used in information filtering applications. Both approaches have their strengths and weaknesses, which is why researchers have developed hybrid systems. This paper proposes a novel approach to unify CF and CBF in a probabilistic framework, named collaborative ensemble learning. It uses probabilistic SVMs to model each user's profile (as CBF does). At the prediction phase, it combines a society of user profiles, represented by their respective SVM models, to predict an active user's preferences (the CF idea). The combination scheme is embedded in a probabilistic framework and retains an intuitive explanation. Moreover, collaborative ensemble learning does not require a global training stage and thus can incrementally incorporate new data. We report results based on two data sets. For the Reuters-21578 text data set, we simulate user ratings under the assumption that each user is interested in only one category. In the second experiment, we use users' opinions on a set of 642 art images that were collected through a web-based survey. For both data sets, collaborative ensemble achieved excellent performance in terms of recommendation accuracy.
[ "Kai Yu, Anton Schwaighofer, Volker Tresp, Wei-Ying Ma, HongJiang Zhang", "['Kai Yu' 'Anton Schwaighofer' 'Volker Tresp' 'Wei-Ying Ma'\n 'HongJiang Zhang']" ]
cs.LG stat.ML
null
1212.2510
null
null
http://arxiv.org/pdf/1212.2510v1
2012-10-19T15:08:42Z
2012-10-19T15:08:42Z
Markov Random Walk Representations with Continuous Distributions
Representations based on random walks can exploit discrete data distributions for clustering and classification. We extend such representations from discrete to continuous distributions. Transition probabilities are now calculated using a diffusion equation with a diffusion coefficient that inversely depends on the data density. We relate this diffusion equation to a path integral and derive the corresponding path probability measure. The framework is useful for incorporating continuous data densities and prior knowledge.
[ "['Chen-Hsiang Yeang' 'Martin Szummer']", "Chen-Hsiang Yeang, Martin Szummer" ]
cs.LG stat.ML
null
1212.2511
null
null
http://arxiv.org/pdf/1212.2511v1
2012-10-19T15:08:38Z
2012-10-19T15:08:38Z
Stochastic complexity of Bayesian networks
Bayesian networks are now being used in a wide range of fields, for example, system diagnosis, data mining, clustering and so on. In spite of their wide range of applications, their statistical properties have not yet been clarified, because the models are nonidentifiable and non-regular. In a Bayesian network, the set of parameters for a smaller model is an analytic set with singularities in the parameter space of larger ones. Because of these singularities, the Fisher information matrices are not positive definite. In other words, the mathematical foundation for learning has not been constructed. In recent years, however, we have developed a method to analyze non-regular models using algebraic geometry. This method revealed the relation between a model's singularities and its statistical properties. In this paper, applying this method to Bayesian networks with latent variables, we clarify the order of the stochastic complexities. Our result claims that their upper bound is smaller than the dimension of the parameter space. This means that the Bayesian generalization error is also far smaller than that of regular models, and that Schwarz's model selection criterion BIC needs to be improved for Bayesian networks.
[ "Keisuke Yamazaki, Sumio Watanbe", "['Keisuke Yamazaki' 'Sumio Watanbe']" ]
cs.LG stat.ML
null
1212.2512
null
null
http://arxiv.org/pdf/1212.2512v1
2012-10-19T15:08:33Z
2012-10-19T15:08:33Z
A Generalized Mean Field Algorithm for Variational Inference in Exponential Families
Mean field methods, which entail approximating intractable probability distributions variationally with distributions from a tractable family, enjoy high efficiency, guaranteed convergence, and provide lower bounds on the true likelihood. But due to the requirement of model-specific derivation of the optimization equations and unclear inference quality in various models, they are not widely used as a generic approximate inference algorithm. In this paper, we discuss a generalized mean field theory on variational approximation to a broad class of intractable distributions using a rich set of tractable distributions via constrained optimization over distribution spaces. We present a class of generalized mean field (GMF) algorithms for approximate inference in complex exponential family models, which entails limiting the optimization to the class of cluster-factorizable distributions. GMF is a generic method requiring no model-specific derivations. It factors a complex model into a set of disjoint variable clusters, and uses a set of canonical fixed-point equations to iteratively update the cluster distributions, converging to locally optimal cluster marginals that preserve the original dependency structure within each cluster, hence fully decomposing the overall inference problem. We empirically analyzed the effect of different tractable families (clusters of different granularity) on inference quality, and compared GMF with BP on several canonical models. Possible extension to higher-order MF approximation is also discussed.
[ "['Eric P. Xing' 'Michael I. Jordan' 'Stuart Russell']", "Eric P. Xing, Michael I. Jordan, Stuart Russell" ]
cs.LG stat.ML
null
1212.2513
null
null
http://arxiv.org/pdf/1212.2513v1
2012-10-19T15:08:28Z
2012-10-19T15:08:28Z
Efficient Parametric Projection Pursuit Density Estimation
Product models of low dimensional experts are a powerful way to avoid the curse of dimensionality. We present the 'under-complete product of experts' (UPoE), where each expert models a one dimensional projection of the data. The UPoE is fully tractable and may be interpreted as a parametric probabilistic model for projection pursuit. Its ML learning rules are identical to the approximate learning rules proposed before for under-complete ICA. We also derive an efficient sequential learning algorithm and discuss its relationship to projection pursuit density estimation and feature induction algorithms for additive random field models.
[ "Max Welling, Richard S. Zemel, Geoffrey E. Hinton", "['Max Welling' 'Richard S. Zemel' 'Geoffrey E. Hinton']" ]
cs.LG stat.ML
null
1212.2514
null
null
http://arxiv.org/pdf/1212.2514v1
2012-10-19T15:08:24Z
2012-10-19T15:08:24Z
Boltzmann Machine Learning with the Latent Maximum Entropy Principle
We present a new statistical learning paradigm for Boltzmann machines based on a new inference principle we have proposed: the latent maximum entropy principle (LME). LME is different both from Jaynes' maximum entropy principle and from standard maximum likelihood estimation. We demonstrate the LME principle by deriving new algorithms for Boltzmann machine parameter estimation, and show how a robust and fast new variant of the EM algorithm can be developed. Our experiments show that estimation based on LME generally yields better results than maximum likelihood estimation, particularly when inferring hidden units from small amounts of data.
[ "['Shaojun Wang' 'Dale Schuurmans' 'Fuchun Peng' 'Yunxin Zhao']", "Shaojun Wang, Dale Schuurmans, Fuchun Peng, Yunxin Zhao" ]
cs.LG stat.ML
null
1212.2516
null
null
http://arxiv.org/pdf/1212.2516v1
2012-10-19T15:08:15Z
2012-10-19T15:08:15Z
Learning Measurement Models for Unobserved Variables
Observed associations in a database may be due in whole or part to variations in unrecorded (latent) variables. Identifying such variables and their causal relationships with one another is a principal goal in many scientific and practical domains. Previous work shows that, given a partition of observed variables such that members of a class share only a single latent common cause, standard search algorithms for causal Bayes nets can infer structural relations between latent variables. We introduce an algorithm for discovering such partitions when they exist. Uniquely among available procedures, the algorithm is (asymptotically) correct under standard assumptions in causal Bayes net search algorithms, requires no prior knowledge of the number of latent variables, and does not depend on the mathematical form of the relationships among the latent variables. We evaluate the algorithm on a variety of simulated data sets.
[ "['Ricardo Silva' 'Richard Scheines' 'Clark Glymour' 'Peter L. Spirtes']", "Ricardo Silva, Richard Scheines, Clark Glymour, Peter L. Spirtes" ]
cs.LG cs.CE stat.ML
null
1212.2517
null
null
http://arxiv.org/pdf/1212.2517v1
2012-10-19T15:08:06Z
2012-10-19T15:08:06Z
Learning Module Networks
Methods for learning Bayesian network structure can discover dependency structure between observed variables, and have been shown to be useful in many applications. However, in domains that involve a large number of variables, the space of possible network structures is enormous, making it difficult, for both computational and statistical reasons, to identify a good model. In this paper, we consider a solution to this problem, suitable for domains where many variables have similar behavior. Our method is based on a new class of models, which we call module networks. A module network explicitly represents the notion of a module - a set of variables that have the same parents in the network and share the same conditional probability distribution. We define the semantics of module networks, and describe an algorithm that learns a module network from data. The algorithm learns both the partitioning of the variables into modules and the dependency structure between the variables. We evaluate our algorithm on synthetic data, and on real data in the domains of gene expression and the stock market. Our results show that module networks generalize better than Bayesian networks, and that the learned module network structure reveals regularities that are obscured in learned Bayesian networks.
[ "Eran Segal, Dana Pe'er, Aviv Regev, Daphne Koller, Nir Friedman", "['Eran Segal' \"Dana Pe'er\" 'Aviv Regev' 'Daphne Koller' 'Nir Friedman']" ]
cs.LG cs.DS stat.ML
null
1212.2573
null
null
http://arxiv.org/pdf/1212.2573v1
2012-12-11T18:22:31Z
2012-12-11T18:22:31Z
Convex Relaxations for Learning Bounded Treewidth Decomposable Graphs
We consider the problem of learning the structure of undirected graphical models with bounded treewidth, within the maximum likelihood framework. This is an NP-hard problem and most approaches consider local search techniques. In this paper, we pose it as a combinatorial optimization problem, which is then relaxed to a convex optimization problem that involves searching over the forest and hyperforest polytopes with special structures, independently. A supergradient method is used to solve the dual problem, with a run-time complexity of $O(k^3 n^{k+2} \log n)$ for each iteration, where $n$ is the number of variables and $k$ is a bound on the treewidth. We compare our approach to state-of-the-art methods on synthetic datasets and classical benchmarks, showing the gains of the novel convex approach.
[ "['K. S. Sesh Kumar' 'Francis Bach']", "K. S. Sesh Kumar (LIENS, INRIA Paris - Rocquencourt), Francis Bach\n (LIENS, INRIA Paris - Rocquencourt)" ]
q-bio.QM cs.LG stat.AP
null
1212.2617
null
null
http://arxiv.org/pdf/1212.2617v1
2012-12-11T20:33:16Z
2012-12-11T20:33:16Z
Optimal diagnostic tests for sporadic Creutzfeldt-Jakob disease based on support vector machine classification of RT-QuIC data
In this work we study numerical construction of optimal clinical diagnostic tests for detecting sporadic Creutzfeldt-Jakob disease (sCJD). A cerebrospinal fluid sample (CSF) from a suspected sCJD patient is subjected to a process which initiates the aggregation of a protein present only in cases of sCJD. This aggregation is indirectly observed in real-time at regular intervals, so that a longitudinal set of data is constructed that is then analysed for evidence of this aggregation. The best existing test is based solely on the final value of this set of data, which is compared against a threshold to conclude whether or not aggregation, and thus sCJD, is present. This test criterion was decided upon by analysing data from a total of 108 sCJD and non-sCJD samples, but this was done subjectively and there is no supporting mathematical analysis declaring this criterion to be exploiting the available data optimally. This paper addresses this deficiency, seeking to validate or improve the test primarily via support vector machine (SVM) classification. Besides this, we address a number of additional issues such as i) early stopping of the measurement process, ii) the possibility of detecting the particular type of sCJD and iii) the incorporation of additional patient data such as age, sex, disease duration and timing of CSF sampling into the construction of the test.
[ "William Hulme, Peter Richt\\'arik, Lynne McGuire and Alison Green", "['William Hulme' 'Peter Richtárik' 'Lynne McGuire' 'Alison Green']" ]
stat.ML cs.LG
null
1212.2686
null
null
http://arxiv.org/pdf/1212.2686v1
2012-12-12T01:59:27Z
2012-12-12T01:59:27Z
Joint Training of Deep Boltzmann Machines
We introduce a new method for training deep Boltzmann machines jointly. Prior methods require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks.
[ "Ian Goodfellow, Aaron Courville, Yoshua Bengio", "['Ian Goodfellow' 'Aaron Courville' 'Yoshua Bengio']" ]
stat.ML cond-mat.stat-mech cs.LG
null
1212.2767
null
null
http://arxiv.org/pdf/1212.2767v1
2012-12-12T10:55:27Z
2012-12-12T10:55:27Z
Bayesian one-mode projection for dynamic bipartite graphs
We propose a Bayesian methodology for one-mode projecting a bipartite network that is being observed across a series of discrete time steps. The resulting one mode network captures the uncertainty over the presence/absence of each link and provides a probability distribution over its possible weight values. Additionally, the incorporation of prior knowledge over previous states makes the resulting network less sensitive to noise and missing observations that usually take place during the data collection process. The methodology consists of computationally inexpensive update rules and is scalable to large problems, via an appropriate distributed implementation.
[ "Ioannis Psorakis, Iead Rezek, Zach Frankel, Stephen J. Roberts", "['Ioannis Psorakis' 'Iead Rezek' 'Zach Frankel' 'Stephen J. Roberts']" ]
cs.LG math.OC stat.ML
null
1212.2834
null
null
http://arxiv.org/pdf/1212.2834v2
2013-06-10T09:31:45Z
2012-12-12T15:02:20Z
Dictionary Subselection Using an Overcomplete Joint Sparsity Model
Many natural signals exhibit a sparse representation whenever a suitable describing model is given. Here, a linear generative model is considered, on which many sparsity-based signal processing techniques rely. As this model is often unknown for many classes of signals, we need to select such a model based on domain knowledge or using some exemplar signals. This paper presents a new exemplar-based approach for the selection of the linear model (called the dictionary) for such sparse inverse problems. The problem of dictionary selection, which has also been called dictionary learning in this setting, is first reformulated as a joint sparsity model. The joint sparsity model here differs from the standard joint sparsity model as it considers an overcompleteness in the representation of each signal, within the range of selected subspaces. The new dictionary selection paradigm is examined with some synthetic and realistic simulations.
[ "['Mehrdad Yaghoobi' 'Laurent Daudet' 'Michael E. Davies']", "Mehrdad Yaghoobi, Laurent Daudet, Michael E. Davies" ]
cs.LG
null
1212.3185
null
null
http://arxiv.org/pdf/1212.3185v3
2013-06-03T02:42:35Z
2012-12-13T14:31:58Z
Cost-Sensitive Feature Selection of Data with Errors
In data mining applications, feature selection is an essential process since it reduces a model's complexity. The cost of obtaining the feature values must be taken into consideration in many domains. In this paper, we study the cost-sensitive feature selection problem on numerical data with measurement errors, test costs and misclassification costs. The major contributions of this paper are four-fold. First, a new data model is built to address test costs and misclassification costs as well as error boundaries. Second, a covering-based rough set with measurement errors is constructed. Given a confidence interval, the neighborhood is an ellipse in a two-dimensional space, an ellipsoid in a three-dimensional space, etc. Third, a new cost-sensitive feature selection problem is defined on this covering-based rough set. Fourth, both backtracking and heuristic algorithms are proposed to deal with this new problem. The algorithms are tested on six UCI (University of California - Irvine) data sets. Experimental results show that (1) the pruning techniques of the backtracking algorithm help reduce the number of operations significantly, and (2) the heuristic algorithm usually obtains optimal results. This study is a step toward realistic applications of cost-sensitive learning.
[ "['Hong Zhao' 'Fan Min' 'William Zhu']", "Hong Zhao, Fan Min and William Zhu" ]
stat.ML cs.LG
null
1212.3276
null
null
http://arxiv.org/pdf/1212.3276v3
2016-04-18T09:17:36Z
2012-12-13T19:20:21Z
Learning Sparse Low-Threshold Linear Classifiers
We consider the problem of learning a non-negative linear classifier with a $1$-norm of at most $k$, and a fixed threshold, under the hinge-loss. This problem generalizes the problem of learning a $k$-monotone disjunction. We prove that we can learn efficiently in this setting, at a rate which is linear in both $k$ and the size of the threshold, and that this is the best possible rate. We provide an efficient online learning algorithm that achieves the optimal rate, and show that in the batch case, empirical risk minimization achieves this rate as well. The rates we show are tighter than the uniform convergence rate, which grows with $k^2$.
[ "['Sivan Sabato' 'Shai Shalev-Shwartz' 'Nathan Srebro' 'Daniel Hsu'\n 'Tong Zhang']", "Sivan Sabato and Shai Shalev-Shwartz and Nathan Srebro and Daniel Hsu\n and Tong Zhang" ]
cs.LG cs.IR
null
1212.3390
null
null
http://arxiv.org/pdf/1212.3390v1
2012-12-14T04:12:21Z
2012-12-14T04:12:21Z
Know Your Personalization: Learning Topic level Personalization in Online Services
Online service platforms (OSPs), such as search engines, news websites, ad providers, etc., serve highly personalized content to the user, based on the profile extracted from his history with the OSP. Although personalization (generally) leads to a better user experience, it also raises privacy concerns for the user---he does not know what is present in his profile and, more importantly, what is being used to personalize content for him. In this paper, we capture an OSP's personalization for a user in a new data structure called the personalization vector ($\eta$), which is a weighted vector over a set of topics, and present techniques to compute it for users of an OSP. Our approach treats OSPs as black-boxes, and extracts $\eta$ by mining only their output, specifically, the personalized (for a user) and vanilla (without any user information) contents served, and the differences in these contents. We formulate a new model called Latent Topic Personalization (LTP) that captures the personalization vector in a learning framework and present efficient inference algorithms for it. We perform extensive experiments on search result personalization using both data from real Google users and synthetic datasets. Our results show high accuracy (R-pre = 84%) of LTP in finding personalized topics. For Google data, our qualitative results show how LTP can also identify evidence---queries for results on a topic with high $\eta$ value were re-ranked. Finally, we show how our approach can be used to build a new privacy evaluation framework focused on end-user privacy on commercial OSPs.
[ "['Anirban Majumder' 'Nisheeth Shrivastava']", "Anirban Majumder and Nisheeth Shrivastava" ]
cs.LO cs.FL cs.LG cs.SE
10.4204/EPTCS.103
1212.3454
null
null
http://arxiv.org/abs/1212.3454v1
2012-12-14T12:38:37Z
2012-12-14T12:38:37Z
Proceedings Quantities in Formal Methods
This volume contains the proceedings of the Workshop on Quantities in Formal Methods, QFM 2012, held in Paris, France on 28 August 2012. The workshop was affiliated with the 18th Symposium on Formal Methods, FM 2012. The focus of the workshop was on quantities in modeling, verification, and synthesis. Modern applications of formal methods require reasoning formally about quantities such as time, resources, or probabilities. Standard formal methods and tools have gotten very good at modeling (and verifying) qualitative properties: whether or not certain events will occur. During the last years, these methods and tools have been extended to also cover quantitative aspects, notably leading to tools such as UPPAAL (for real-time systems), PRISM (for probabilistic systems), and PHAVer (for hybrid systems). A lot of work remains to be done, however, before these tools can be used in the industrial applications at which they are aiming.
[ "Uli Fahrenberg (Irisa / INRIA Rennes, France), Axel Legay (Irisa /\n INRIA Rennes, France), Claus Thrane (Aalborg University, Denmark)", "['Uli Fahrenberg' 'Axel Legay' 'Claus Thrane']" ]
cs.AI cs.LG cs.LO
10.4204/EPTCS.118.2
1212.3618
null
null
http://arxiv.org/abs/1212.3618v2
2013-07-08T05:19:38Z
2012-12-14T21:06:34Z
Machine Learning in Proof General: Interfacing Interfaces
We present ML4PG - a machine learning extension for Proof General. It allows users to gather proof statistics related to shapes of goals, sequences of applied tactics, and proof tree structures from the libraries of interactive higher-order proofs written in Coq and SSReflect. The gathered data is clustered using the state-of-the-art machine learning algorithms available in MATLAB and Weka. ML4PG provides automated interfacing between Proof General and MATLAB/Weka. The results of clustering are used by ML4PG to provide proof hints in the process of interactive proof development.
[ "Ekaterina Komendantskaya (School of Computing, University of Dundee),\n J\\'onathan Heras (School of Computing, University of Dundee), Gudmund Grov\n (School of Mathematical and Computer Sciences, Heriot-Watt University)", "['Ekaterina Komendantskaya' 'Jónathan Heras' 'Gudmund Grov']" ]
cs.LG
null
1212.3631
null
null
http://arxiv.org/pdf/1212.3631v1
2012-12-14T22:50:44Z
2012-12-14T22:50:44Z
Learning efficient sparse and low rank models
Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow parsimonious models to be naturally extended to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speedup compared to the exact optimization algorithms.
[ "['Pablo Sprechmann' 'Alex M. Bronstein' 'Guillermo Sapiro']", "Pablo Sprechmann, Alex M. Bronstein and Guillermo Sapiro" ]
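The learned pursuit architectures described in the abstract of 1212.3631 above are obtained by unrolling proximal descent iterations. As background only, here is a minimal numpy sketch of the plain iterative soft-thresholding (ISTA) step for the sparse-coding case that such architectures truncate and parameterize; the step size, regularization weight and iteration count are illustrative assumptions, and this is not the paper's learned encoders.

import numpy as np

def soft_threshold(v, tau):
    # proximal operator of the L1 norm
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, lam=0.1, n_iter=100):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by proximal gradient descent (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # gradient of the quadratic data term
        x = soft_threshold(x - grad / L, lam / L)
    return x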
cs.SE cs.LG
null
1212.3669
null
null
http://arxiv.org/pdf/1212.3669v2
2014-07-21T21:11:36Z
2012-12-15T09:53:16Z
A metric for software vulnerabilities classification
Vulnerability discovery and exploit detection are two broad areas of study in software engineering. This preliminary work tries to combine existing methods with machine learning techniques to define a metric classification of vulnerable computer programs. First, a feature set is defined and then two models are tested against real-world vulnerabilities. A relation between the classifier choice and the features is also outlined.
[ "['Gabriele Modena']", "Gabriele Modena" ]
cs.LG cs.NE q-bio.NC
10.1109/TCSI.2012.2206463
1212.3765
null
null
http://arxiv.org/abs/1212.3765v1
2012-12-16T09:05:02Z
2012-12-16T09:05:02Z
Biologically Inspired Spiking Neurons : Piecewise Linear Models and Digital Implementation
There has been a strong push recently to examine biological scale simulations of neuromorphic algorithms to achieve stronger inference capabilities. This paper presents a set of piecewise linear spiking neuron models, which can reproduce different behaviors, similar to the biological neuron, both for a single neuron as well as a network of neurons. The proposed models are investigated, in terms of digital implementation feasibility and costs, targeting large scale hardware implementation. Hardware synthesis and physical implementations on FPGA show that the proposed models can produce precise neural behaviors with higher performance and considerably lower implementation costs compared with the original model. Accordingly, a compact structure of the models which can be trained with supervised and unsupervised learning algorithms has been developed. Using this structure and based on a spike rate coding, a character recognition case study has been implemented and tested.
[ "['Hamid Soleimani' 'Arash Ahmadi' 'Mohammad Bavandpour']", "Hamid Soleimani, Arash Ahmadi and Mohammad Bavandpour" ]
cs.IT cs.LG math.IT stat.ML
null
1212.3850
null
null
http://arxiv.org/pdf/1212.3850v1
2012-12-16T23:22:56Z
2012-12-16T23:22:56Z
Belief Propagation for Continuous State Spaces: Stochastic Message-Passing with Quantitative Guarantees
The sum-product or belief propagation (BP) algorithm is a widely used message-passing technique for computing approximate marginals in graphical models. We introduce a new technique, called stochastic orthogonal series message-passing (SOSMP), for computing the BP fixed point in models with continuous random variables. It is based on a deterministic approximation of the messages via orthogonal series expansion, and a stochastic approximation via Monte Carlo estimates of the integral updates of the basis coefficients. We prove that the SOSMP iterates converge to a \delta-neighborhood of the unique BP fixed point for any tree-structured graph, and for graphs with cycles in which the BP updates satisfy a contractivity condition. In addition, we demonstrate how to choose the number of basis coefficients as a function of the desired approximation accuracy \delta and smoothness of the compatibility functions. We illustrate our theory with simulated examples and with an application to optical flow estimation.
[ "['Nima Noorshams' 'Martin J. Wainwright']", "Nima Noorshams and Martin J. Wainwright" ]
cs.LG cs.LO cs.SE
10.4204/EPTCS.103.6
1212.3873
null
null
http://arxiv.org/abs/1212.3873v1
2012-12-17T03:40:47Z
2012-12-17T03:40:47Z
Learning Markov Decision Processes for Model Checking
Constructing an accurate system model for formal model verification can be both resource-demanding and time-consuming. To alleviate this shortcoming, algorithms have been proposed for automatically learning system models based on observed system behaviors. In this paper we extend algorithms for learning probabilistic automata to reactive systems, where the observed system behavior is in the form of alternating sequences of inputs and outputs. We propose an algorithm for automatically learning a deterministic labeled Markov decision process model from the observed behavior of a reactive system. The proposed learning algorithm is adapted from algorithms for learning deterministic probabilistic finite automata, and extended to include both probabilistic and nondeterministic transitions. The algorithm is empirically analyzed and evaluated by learning system models of slot machines. The evaluation is performed by analyzing the probabilistic linear temporal logic properties of the system as well as by analyzing the schedulers, in particular the optimal schedulers, induced by the learned models.
[ "Hua Mao (AAU), Yingke Chen (AAU), Manfred Jaeger (AAU), Thomas D.\n Nielsen (AAU), Kim G. Larsen (AAU), Brian Nielsen (AAU)", "['Hua Mao' 'Yingke Chen' 'Manfred Jaeger' 'Thomas D. Nielsen'\n 'Kim G. Larsen' 'Brian Nielsen']" ]
stat.ML cs.LG
null
1212.3900
null
null
http://arxiv.org/pdf/1212.3900v2
2012-12-21T19:55:53Z
2012-12-17T06:49:14Z
A Tutorial on Probabilistic Latent Semantic Analysis
In this tutorial, I will discuss the details about how Probabilistic Latent Semantic Analysis (PLSA) is formalized and how different learning algorithms are proposed to learn the model.
[ "['Liangjie Hong']", "Liangjie Hong" ]
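As a companion to the PLSA tutorial entry (1212.3900) above, the following is a minimal numpy sketch of the EM updates usually presented for PLSA; the random initialization, smoothing constant and iteration count are illustrative assumptions rather than anything prescribed by the tutorial.

import numpy as np

def plsa_em(counts, n_topics, n_iter=50, seed=0):
    """Minimal EM for PLSA on a document-word count matrix (n_docs x n_words)."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    # P(z|d): topic mixture per document, P(w|z): word distribution per topic
    p_z_d = rng.random((n_docs, n_topics)); p_z_d /= p_z_d.sum(1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words)); p_w_z /= p_w_z.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w) proportional to P(z|d) P(w|z)
        post = p_z_d[:, :, None] * p_w_z[None, :, :]          # shape (d, z, w)
        post /= post.sum(1, keepdims=True) + 1e-12
        weighted = counts[:, None, :] * post                  # n(d,w) * P(z|d,w)
        # M-step: re-estimate both factors from the expected counts
        p_w_z = weighted.sum(0); p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(2); p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12
    return p_z_d, p_w_z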
cs.CV cs.LG
10.1109/TNNLS.2015.2487364
1212.3913
null
null
http://arxiv.org/abs/1212.3913v4
2017-03-12T08:36:27Z
2012-12-17T07:56:15Z
Group Component Analysis for Multiblock Data: Common and Individual Feature Extraction
Very often the data we encounter in practice is a collection of matrices rather than a single matrix. These multi-block data are naturally linked and hence often share some common features and at the same time have their own individual features, due to the background in which they are measured and collected. In this study we propose a new scheme of common and individual feature analysis (CIFA) that processes multi-block data in a linked way aiming at discovering and separating their common and individual features. According to whether the number of common features is given or not, two efficient algorithms are proposed to extract the common basis which is shared by all data. Then feature extraction is performed on the common and the individual spaces separately by incorporating techniques such as dimensionality reduction and blind source separation. We also discuss how the proposed CIFA can significantly improve the performance of classification and clustering tasks by exploiting the common and individual features of samples, respectively. Our experimental results show some encouraging features of the proposed methods in comparison to the state-of-the-art methods on synthetic and real data.
[ "Guoxu Zhou and Andrzej Cichocki and Yu Zhang and Danilo Mandic", "['Guoxu Zhou' 'Andrzej Cichocki' 'Yu Zhang' 'Danilo Mandic']" ]
stat.ML cs.LG math.OC
null
1212.4137
null
null
http://arxiv.org/pdf/1212.4137v2
2020-05-07T00:50:36Z
2012-12-17T20:53:35Z
Alternating Maximization: Unifying Framework for 8 Sparse PCA Formulations and Efficient Parallel Codes
Given a multivariate data set, sparse principal component analysis (SPCA) aims to extract several linear combinations of the variables that together explain the variance in the data as much as possible, while controlling the number of nonzero loadings in these combinations. In this paper we consider 8 different optimization formulations for computing a single sparse loading vector; these are obtained by combining the following factors: we employ two norms for measuring variance (L2, L1) and two sparsity-inducing norms (L0, L1), which are used in two different ways (constraint, penalty). Three of our formulations, notably the one with L0 constraint and L1 variance, have not been considered in the literature. We give a unifying reformulation which we propose to solve via a natural alternating maximization (AM) method. We show that the AM method is nontrivially equivalent to GPower (Journ\'{e}e et al; JMLR 11:517--553, 2010) for all our formulations. Besides this, we provide 24 efficient parallel SPCA implementations: 3 codes (multi-core, GPU and cluster) for each of the 8 problems. Parallelism in the methods is aimed at i) speeding up computations (our GPU code can be 100 times faster than an efficient serial code written in C++), ii) obtaining solutions explaining more variance and iii) dealing with big data problems (our cluster code is able to solve a 357 GB problem in about a minute).
[ "['Peter Richtárik' 'Majid Jahani' 'Selin Damla Ahipaşaoğlu' 'Martin Takáč']", "Peter Richt\\'arik, Majid Jahani, Selin Damla Ahipa\\c{s}ao\\u{g}lu,\n Martin Tak\\'a\\v{c}" ]
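To illustrate the alternating maximization idea from the abstract of 1212.4137 above, here is a hedged single-loading sketch for one of the eight formulations (L2 variance with an L0 constraint), written in plain numpy; the hard-thresholding step, initialization and iteration count are assumptions, and this is not the paper's parallel multi-core/GPU/cluster code.

import numpy as np

def sparse_pca_l0(X, k, n_iter=100, seed=0):
    """Alternating maximization for one sparse loading vector:
    maximize ||X w||_2 subject to ||w||_2 = 1 and ||w||_0 <= k."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w = rng.standard_normal(p); w /= np.linalg.norm(w)
    for _ in range(n_iter):
        y = X @ w
        y /= np.linalg.norm(y) + 1e-12           # maximize over the auxiliary unit vector
        g = X.T @ y                               # ascent direction for the loading vector
        idx = np.argsort(np.abs(g))[-k:]          # keep the k largest entries (hard threshold)
        w = np.zeros(p); w[idx] = g[idx]
        w /= np.linalg.norm(w) + 1e-12
    return w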
stat.ML cs.DC cs.LG math.OC
null
1212.4174
null
null
http://arxiv.org/pdf/1212.4174v1
2012-12-17T21:43:31Z
2012-12-17T21:43:31Z
Feature Clustering for Accelerating Parallel Coordinate Descent
Large-scale L1-regularized loss minimization problems arise in high-dimensional applications such as compressed sensing and high-dimensional supervised learning, including classification and regression problems. High-performance algorithms and implementations are critical to efficiently solving these problems. Building upon previous work on coordinate descent algorithms for L1-regularized problems, we introduce a novel family of algorithms called block-greedy coordinate descent that includes, as special cases, several existing algorithms such as SCD, Greedy CD, Shotgun, and Thread-Greedy. We give a unified convergence analysis for the family of block-greedy algorithms. The analysis suggests that block-greedy coordinate descent can better exploit parallelism if features are clustered so that the maximum inner product between features in different blocks is small. Our theoretical convergence analysis is supported with experimental results using data from diverse real-world applications. We hope that the algorithmic approaches and convergence analysis we provide will not only advance the field, but will also encourage researchers to systematically explore the design space of algorithms for solving large-scale L1-regularization problems.
[ "Chad Scherrer, Ambuj Tewari, Mahantesh Halappanavar, David Haglin", "['Chad Scherrer' 'Ambuj Tewari' 'Mahantesh Halappanavar' 'David Haglin']" ]
cs.LG stat.ML
null
1212.4347
null
null
http://arxiv.org/pdf/1212.4347v1
2012-12-18T13:35:38Z
2012-12-18T13:35:38Z
Bayesian Group Nonnegative Matrix Factorization for EEG Analysis
We propose a generative model of a group EEG analysis, based on appropriate kernel assumptions on EEG data. We derive the variational inference update rule using various approximation techniques. The proposed model outperforms the current state-of-the-art algorithms in terms of common pattern extraction. The validity of the proposed model is tested on the BCI competition dataset.
[ "Bonggun Shin, Alice Oh", "['Bonggun Shin' 'Alice Oh']" ]
stat.ML cs.LG cs.NA
null
1212.4507
null
null
http://arxiv.org/pdf/1212.4507v2
2012-12-20T18:49:18Z
2012-12-18T21:06:10Z
Variational Optimization
We discuss a general technique that can be used to form a differentiable bound on the optima of non-differentiable or discrete objective functions. We form a unified description of these methods and consider under which circumstances the bound is concave. In particular we consider two concrete applications of the method, namely sparse learning and support vector classification.
[ "['Joe Staines' 'David Barber']", "Joe Staines and David Barber" ]
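A minimal sketch of the idea in 1212.4507 above: bound the minimum of a possibly non-differentiable objective by its expectation under a parameterized distribution and descend that differentiable bound. The Gaussian proposal with fixed variance, the mean baseline, and all step sizes are illustrative assumptions.

import numpy as np

def variational_optimize(f, dim, sigma=0.5, lr=0.1, n_samples=64, n_iter=200, seed=0):
    """Minimize f by descending the differentiable bound U(mu) = E_{x~N(mu, sigma^2 I)}[f(x)],
    using the score-function gradient grad U = E[f(x) (x - mu)] / sigma^2."""
    rng = np.random.default_rng(seed)
    mu = np.zeros(dim)
    for _ in range(n_iter):
        x = mu + sigma * rng.standard_normal((n_samples, dim))
        fx = np.array([f(xi) for xi in x])
        fx = fx - fx.mean()                       # subtract a baseline to reduce gradient variance
        grad = (fx[:, None] * (x - mu)).mean(0) / sigma**2
        mu -= lr * grad
    return mu

# usage sketch with a non-differentiable objective
print(variational_optimize(lambda x: np.abs(x).sum(), dim=3))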
cs.CV cs.IR cs.LG cs.MM
null
1212.4522
null
null
http://arxiv.org/pdf/1212.4522v2
2013-09-02T19:14:58Z
2012-12-18T22:02:43Z
A Multi-View Embedding Space for Modeling Internet Images, Tags, and their Semantics
This paper investigates the problem of modeling Internet images and associated text or tags for tasks such as image-to-image search, tag-to-image search, and image-to-tag search (image annotation). We start with canonical correlation analysis (CCA), a popular and successful approach for mapping visual and textual features to the same latent space, and incorporate a third view capturing high-level image semantics, represented either by a single category or multiple non-mutually-exclusive concepts. We present two ways to train the three-view embedding: supervised, with the third view coming from ground-truth labels or search keywords; and unsupervised, with semantic themes automatically obtained by clustering the tags. To ensure high accuracy for retrieval tasks while keeping the learning process scalable, we combine multiple strong visual features and use explicit nonlinear kernel mappings to efficiently approximate kernel CCA. To perform retrieval, we use a specially designed similarity function in the embedded space, which substantially outperforms the Euclidean distance. The resulting system produces compelling qualitative results and outperforms a number of two-view baselines on retrieval tasks on three large-scale Internet image datasets.
[ "['Yunchao Gong' 'Qifa Ke' 'Michael Isard' 'Svetlana Lazebnik']", "Yunchao Gong and Qifa Ke and Michael Isard and Svetlana Lazebnik" ]
cs.LG
null
1212.4675
null
null
http://arxiv.org/pdf/1212.4675v1
2012-12-18T20:17:56Z
2012-12-18T20:17:56Z
Analysis of Large-scale Traffic Dynamics using Non-negative Tensor Factorization
In this paper, we present our work on clustering and prediction of temporal dynamics of global congestion configurations in large-scale road networks. Instead of looking into temporal traffic state variation of individual links, or of small areas, we focus on spatial congestion configurations of the whole network. In our work, we aim at describing the typical temporal dynamic patterns of this network-level traffic state and achieving long-term prediction of the large-scale traffic dynamics, in a unified data-mining framework. To this end, we formulate this joint task using Non-negative Tensor Factorization (NTF), which has been shown to be a useful decomposition tool for multivariate data sequences. Clustering and prediction are performed based on the compact tensor factorization results. Experiments on large-scale simulated data illustrate the interest of our method with promising results for long-term forecasting of traffic evolution.
[ "Yufei Han (INRIA Rocquencourt), Fabien Moutarde (CAOR)", "['Yufei Han' 'Fabien Moutarde']" ]
cs.CR cs.LG stat.ML
null
1212.4775
null
null
http://arxiv.org/pdf/1212.4775v3
2013-01-04T22:24:15Z
2012-12-19T18:12:34Z
Role Mining with Probabilistic Models
Role mining tackles the problem of finding a role-based access control (RBAC) configuration, given an access-control matrix assigning users to access permissions as input. Most role mining approaches work by constructing a large set of candidate roles and use a greedy selection strategy to iteratively pick a small subset such that the differences between the resulting RBAC configuration and the access control matrix are minimized. In this paper, we advocate an alternative approach that recasts role mining as an inference problem rather than a lossy compression problem. Instead of using combinatorial algorithms to minimize the number of roles needed to represent the access-control matrix, we derive probabilistic models to learn the RBAC configuration that most likely underlies the given matrix. Our models are generative in that they reflect the way that permissions are assigned to users in a given RBAC configuration. We additionally model how user-permission assignments that conflict with an RBAC configuration emerge and we investigate the influence of constraints on role hierarchies and on the number of assignments. In experiments with access-control matrices from real-world enterprises, we compare our proposed models with other role mining methods. Our results show that our probabilistic models infer roles that generalize well to new system users for a wide variety of data, while other models' generalization abilities depend on the dataset given.
[ "Mario Frank, Joachim M. Buhmann, David Basin", "['Mario Frank' 'Joachim M. Buhmann' 'David Basin']" ]
cs.LG cs.DS stat.ML
null
1212.4777
null
null
http://arxiv.org/pdf/1212.4777v1
2012-12-19T18:14:51Z
2012-12-19T18:14:51Z
A Practical Algorithm for Topic Modeling with Provable Guarantees
Topic models provide a useful method for dimensionality reduction and exploratory data analysis in large text corpora. Most approaches to topic model inference have been based on a maximum likelihood objective. Efficient algorithms exist that approximate this objective, but they have no provable guarantees. Recently, algorithms have been introduced that provide provable bounds, but these algorithms are not practical because they are inefficient and not robust to violations of model assumptions. In this paper we present an algorithm for topic model inference that is both provable and practical. The algorithm produces results comparable to the best MCMC implementations while running orders of magnitude faster.
[ "Sanjeev Arora, Rong Ge, Yoni Halpern, David Mimno, Ankur Moitra, David\n Sontag, Yichen Wu, Michael Zhu", "['Sanjeev Arora' 'Rong Ge' 'Yoni Halpern' 'David Mimno' 'Ankur Moitra'\n 'David Sontag' 'Yichen Wu' 'Michael Zhu']" ]
cs.LG cs.SD
null
1212.5091
null
null
http://arxiv.org/pdf/1212.5091v1
2012-12-19T17:40:07Z
2012-12-19T17:40:07Z
Maximally Informative Observables and Categorical Perception
We formulate the problem of perception in the framework of information theory, and prove that categorical perception is equivalent to the existence of an observable that has the maximum possible information on the target of perception. We call such an observable maximally informative. Regardless of whether categorical perception is real, maximally informative observables can form the basis of a theory of perception. We conclude with the implications of such a theory for the problem of speech perception.
[ "Elaine Tsiang", "['Elaine Tsiang']" ]
cs.LG
null
1212.5101
null
null
http://arxiv.org/pdf/1212.5101v1
2012-12-20T15:53:43Z
2012-12-20T15:53:43Z
Hybrid Fuzzy-ART based K-Means Clustering Methodology to Cellular Manufacturing Using Operational Time
This paper presents a new hybrid Fuzzy-ART based K-Means clustering technique to solve the part-machine grouping problem in cellular manufacturing systems considering operational time. The performance of the proposed technique is tested with problems from the open literature and the results are compared to existing clustering models, such as the simple K-means algorithm and the modified ART1 algorithm, using an efficient modified performance measure known as modified grouping efficiency (MGE) as found in the literature. The results support the better performance of the proposed algorithm. The novelty of this study lies in the simple and efficient methodology that produces quick solutions for shop-floor managers with the least computational effort and time.
[ "Sourav Sengupta, Tamal Ghosh, Pranab K Dan, Manojit Chattopadhyay", "['Sourav Sengupta' 'Tamal Ghosh' 'Pranab K Dan' 'Manojit Chattopadhyay']" ]
math.ST cs.LG stat.ML stat.TH
10.1214/14-AOS1218
1212.5156
null
null
http://arxiv.org/abs/1212.5156v3
2014-08-28T08:28:48Z
2012-12-20T17:41:23Z
Nonparametric ridge estimation
We study the problem of estimating the ridges of a density function. Ridge estimation is an extension of mode finding and is useful for understanding the structure of a density. It can also be used to find hidden structure in point cloud data. We show that, under mild regularity conditions, the ridges of the kernel density estimator consistently estimate the ridges of the true density. When the data are noisy measurements of a manifold, we show that the ridges are close and topologically similar to the hidden manifold. To find the estimated ridges in practice, we adapt the modified mean-shift algorithm proposed by Ozertem and Erdogmus [J. Mach. Learn. Res. 12 (2011) 1249-1286]. Some numerical experiments verify that the algorithm is accurate.
[ "['Christopher R. Genovese' 'Marco Perone-Pacifico' 'Isabella Verdinelli'\n 'Larry Wasserman']", "Christopher R. Genovese, Marco Perone-Pacifico, Isabella Verdinelli,\n Larry Wasserman" ]
cs.LG cs.CE
null
1212.5359
null
null
http://arxiv.org/pdf/1212.5359v1
2012-12-21T08:43:05Z
2012-12-21T08:43:05Z
Fuzzy soft rough K-Means clustering approach for gene expression data
Clustering is one of the widely used data mining techniques for medical diagnosis. Clustering can be considered the most important unsupervised learning technique. Most of the clustering methods group data based on distance and few methods cluster data based on similarity. The clustering algorithms classify gene expression data into clusters and the functionally related genes are grouped together in an efficient manner. The groupings are constructed such that the degree of relationship is strong among members of the same cluster and weak among members of different clusters. In this work, we focus on a similarity relationship among genes with similar expression patterns so that a consequential and simple analytical decision can be made from the proposed Fuzzy Soft Rough K-Means algorithm. The algorithm is developed based on Fuzzy Soft sets and Rough sets. Comparative analysis of the proposed work is made with benchmark algorithms such as K-Means and Rough K-Means, and the efficiency of the proposed algorithm is illustrated in this work by using various cluster validity measures such as the DB index and the Xie-Beni index.
[ "['K. Dhanalakshmi' 'H. Hannah Inbarani']", "K. Dhanalakshmi, H. Hannah Inbarani" ]
cs.LG cs.CE
null
1212.5391
null
null
http://arxiv.org/pdf/1212.5391v1
2012-12-21T10:46:24Z
2012-12-21T10:46:24Z
Soft Set Based Feature Selection Approach for Lung Cancer Images
Lung cancer is the deadliest type of cancer for both men and women. Feature selection plays a vital role in cancer classification. This paper investigates the feature selection process in Computed Tomographic (CT) lung cancer images using soft set theory. We propose a new soft set based unsupervised feature selection algorithm. Nineteen features are extracted from the segmented lung images using the gray level co-occurrence matrix (GLCM) and the gray level difference matrix (GLDM). In this paper, an efficient Unsupervised Soft Set based Quick Reduct (SSUSQR) algorithm is presented. This method is used to select features from the data set and compared with existing rough set based unsupervised feature selection methods. Then K-Means and Self Organizing Map (SOM) clustering algorithms are used to cluster the data. The performance of the feature selection algorithms is evaluated based on the performance of the clustering techniques. The results show that the proposed method effectively removes redundant features.
[ "G. Jothi, H. Hannah Inbarani", "['G. Jothi' 'H. Hannah Inbarani']" ]
cs.SY cs.LG
10.1109/TCYB.2014.2343194
1212.5524
null
null
http://arxiv.org/abs/1212.5524v2
2013-08-22T16:16:31Z
2012-12-21T16:57:28Z
Reinforcement learning for port-Hamiltonian systems
Passivity-based control (PBC) for port-Hamiltonian systems provides an intuitive way of achieving stabilization by rendering a system passive with respect to a desired storage function. However, in most instances the control law is obtained without any performance considerations and it has to be calculated by solving a complex partial differential equation (PDE). In order to address these issues we introduce a reinforcement learning approach into the energy-balancing passivity-based control (EB-PBC) method, which is a form of PBC in which the closed-loop energy is equal to the difference between the stored and supplied energies. We propose a technique to parameterize EB-PBC that preserves the system's PDE matching conditions, does not require the specification of a global desired Hamiltonian, includes performance criteria, and is robust to extra non-linearities such as control input saturation. The parameters of the control law are found using actor-critic reinforcement learning, enabling the learning of near-optimal control policies satisfying a desired closed-loop energy landscape. The advantages are that near-optimal controllers can be generated using standard energy shaping techniques and that the solutions learned can be interpreted in terms of energy shaping and damping injection, which makes it possible to numerically assess stability using passivity theory. From the reinforcement learning perspective, our proposal allows for the class of port-Hamiltonian systems to be incorporated in the actor-critic framework, speeding up the learning thanks to the resulting parameterization of the policy. The method has been successfully applied to the pendulum swing-up problem in simulations and real-life experiments.
[ "['Olivier Sprangers' 'Gabriel A. D. Lopes' 'Robert Babuska']", "Olivier Sprangers and Gabriel A. D. Lopes and Robert Babuska" ]
cs.LG stat.ML
null
1212.5637
null
null
http://arxiv.org/pdf/1212.5637v1
2012-12-21T23:51:21Z
2012-12-21T23:51:21Z
Random Spanning Trees and the Prediction of Weighted Graphs
We investigate the problem of sequentially predicting the binary labels on the nodes of an arbitrary weighted graph. We show that, under a suitable parametrization of the problem, the optimal number of prediction mistakes can be characterized (up to logarithmic factors) by the cutsize of a random spanning tree of the graph. The cutsize is induced by the unknown adversarial labeling of the graph nodes. In deriving our characterization, we obtain a simple randomized algorithm achieving in expectation the optimal mistake bound on any polynomially connected weighted graph. Our algorithm draws a random spanning tree of the original graph and then predicts the nodes of this tree in constant expected amortized time and linear space. Experiments on real-world datasets show that our method compares well to both global (Perceptron) and local (label propagation) methods, while being generally faster in practice.
[ "Nicolo' Cesa-Bianchi, Claudio Gentile, Fabio Vitale, Giovanni Zappella", "[\"Nicolo' Cesa-Bianchi\" 'Claudio Gentile' 'Fabio Vitale'\n 'Giovanni Zappella']" ]
cs.LG
null
1212.5701
null
null
http://arxiv.org/pdf/1212.5701v1
2012-12-22T15:46:49Z
2012-12-22T15:46:49Z
ADADELTA: An Adaptive Learning Rate Method
We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.
[ "Matthew D. Zeiler", "['Matthew D. Zeiler']" ]
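A minimal numpy sketch of the per-dimension ADADELTA update described in 1212.5701 above; the decay rate rho and the constant eps are the commonly used illustrative defaults, and the toy quadratic at the end is only a usage example.

import numpy as np

def adadelta_update(grad, state, rho=0.95, eps=1e-6):
    """One ADADELTA step: per-dimension step sizes derived from running averages of
    squared gradients and squared updates; no global learning rate to tune."""
    Eg2, Edx2 = state                                  # running averages E[g^2] and E[dx^2]
    Eg2 = rho * Eg2 + (1 - rho) * grad**2
    dx = -np.sqrt(Edx2 + eps) / np.sqrt(Eg2 + eps) * grad
    Edx2 = rho * Edx2 + (1 - rho) * dx**2
    return dx, (Eg2, Edx2)

# usage sketch on f(x) = ||x||^2, starting from zero accumulators
x = np.array([5.0, -3.0]); state = (np.zeros(2), np.zeros(2))
for _ in range(500):
    grad = 2 * x                                       # gradient of the toy quadratic
    dx, state = adadelta_update(grad, state)
    x += dx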
cs.LG cs.IT math.IT
10.1016/j.camwa.2012.12.009
1212.5841
null
null
http://arxiv.org/abs/1212.5841v2
2013-01-02T00:00:40Z
2012-12-23T23:20:14Z
Data complexity measured by principal graphs
How to measure the complexity of a finite set of vectors embedded in a multidimensional space? This is a non-trivial question which can be approached in many different ways. Here we suggest a set of data complexity measures using universal approximators, principal cubic complexes. Principal cubic complexes generalise the notion of principal manifolds for datasets with non-trivial topologies. The type of the principal cubic complex is determined by its dimension and a grammar of elementary graph transformations. The simplest grammar produces principal trees. We introduce three natural types of data complexity: 1) geometric (deviation of the data's approximator from some "idealized" configuration, such as deviation from harmonicity); 2) structural (how many elements of a principal graph are needed to approximate the data), and 3) construction complexity (how many applications of elementary graph transformations are needed to construct the principal object starting from the simplest one). We compute these measures for several simulated and real-life data distributions and show them in the "accuracy-complexity" plots, helping to optimize the accuracy/complexity ratio. We discuss various issues connected with measuring data complexity. Software for computing data complexity measures from principal cubic complexes is provided as well.
[ "['Andrei Zinovyev' 'Evgeny Mirkes']", "Andrei Zinovyev and Evgeny Mirkes" ]
math.ST cs.LG stat.TH
null
1212.5860
null
null
http://arxiv.org/pdf/1212.5860v1
2012-12-24T03:31:15Z
2012-12-24T03:31:15Z
A short note on the tail bound of Wishart distribution
We study the tail bound of the empirical covariance of the multivariate normal distribution. Following the work of Gittens & Tropp (2011), we provide a tail bound with a small constant.
[ "Shenghuo Zhu", "['Shenghuo Zhu']" ]
cs.LG cs.NE math.OC stat.ML
null
1212.5921
null
null
http://arxiv.org/pdf/1212.5921v1
2012-12-24T14:45:25Z
2012-12-24T14:45:25Z
Distributed optimization of deeply nested systems
In science and engineering, intelligent processing of complex signals such as images, sound or language is often performed by a parameterized hierarchy of nonlinear processing layers, sometimes biologically inspired. Hierarchical systems (or, more generally, nested systems) offer a way to generate complex mappings using simple stages. Each layer performs a different operation and achieves an ever more sophisticated representation of the input, as, for example, in a deep artificial neural network, an object recognition cascade in computer vision or speech front-end processing. Joint estimation of the parameters of all the layers and selection of an optimal architecture is widely considered to be a difficult numerical nonconvex optimization problem, difficult to parallelize for execution in a distributed computation environment, and requiring significant human expert effort, which leads to suboptimal systems in practice. We describe a general mathematical strategy to learn the parameters and, to some extent, the architecture of nested systems, called the method of auxiliary coordinates (MAC). This replaces the original problem involving a deeply nested function with a constrained problem involving a different function in an augmented space without nesting. The constrained problem may be solved with penalty-based methods using alternating optimization over the parameters and the auxiliary coordinates. MAC has provable convergence, is easy to implement reusing existing algorithms for single layers, can be parallelized trivially and massively, applies even when parameter derivatives are not available or not desirable, and is competitive with state-of-the-art nonlinear optimizers even in the serial computation setting, often providing reasonable models within a few iterations.
[ "Miguel \\'A. Carreira-Perpi\\~n\\'an and Weiran Wang", "['Miguel Á. Carreira-Perpiñán' 'Weiran Wang']" ]
q-bio.QM cs.CE cs.LG q-bio.GN stat.AP stat.ML
10.1093/nar/gkt229
1212.5932
null
null
http://arxiv.org/abs/1212.5932v2
2012-12-27T11:23:39Z
2012-12-24T16:41:08Z
Fully scalable online-preprocessing algorithm for short oligonucleotide microarray atlases
Accumulation of standardized data collections is opening up novel opportunities for holistic characterization of genome function. The limited scalability of current preprocessing techniques has, however, formed a bottleneck for full utilization of contemporary microarray collections. While short oligonucleotide arrays constitute a major source of genome-wide profiling data, scalable probe-level preprocessing algorithms have been available only for few measurement platforms based on pre-calculated model parameters from restricted reference training sets. To overcome these key limitations, we introduce a fully scalable online-learning algorithm that provides tools to process large microarray atlases including tens of thousands of arrays. Unlike the alternatives, the proposed algorithm scales up in linear time with respect to sample size and is readily applicable to all short oligonucleotide platforms. This is the only available preprocessing algorithm that can learn probe-level parameters based on sequential hyperparameter updates at small, consecutive batches of data, thus circumventing the extensive memory requirements of the standard approaches and opening up novel opportunities to take full advantage of contemporary microarray data collections. Moreover, using the most comprehensive data collections to estimate probe-level effects can assist in pinpointing individual probes affected by various biases and provide new tools to guide array design and quality control. The implementation is freely available in R/Bioconductor at http://www.bioconductor.org/packages/devel/bioc/html/RPA.html
[ "['Leo Lahti' 'Aurora Torrente' 'Laura L. Elo' 'Alvis Brazma' 'Johan Rung']", "Leo Lahti, Aurora Torrente, Laura L. Elo, Alvis Brazma, Johan Rung" ]
stat.ML cs.LG stat.AP
10.1016/j.patrec.2011.08.019
1212.6018
null
null
http://arxiv.org/abs/1212.6018v1
2012-12-25T11:01:48Z
2012-12-25T11:01:48Z
Exponentially Weighted Moving Average Charts for Detecting Concept Drift
Classifying streaming data requires the development of methods which are computationally efficient and able to cope with changes in the underlying distribution of the stream, a phenomenon known in the literature as concept drift. We propose a new method for detecting concept drift which uses an Exponentially Weighted Moving Average (EWMA) chart to monitor the misclassification rate of a streaming classifier. Our approach is modular and can hence be run in parallel with any underlying classifier to provide an additional layer of concept drift detection. Moreover our method is computationally efficient with overhead O(1) and works in a fully online manner with no need to store data points in memory. Unlike many existing approaches to concept drift detection, our method allows the rate of false positive detections to be controlled and kept constant over time.
[ "Gordon J. Ross, Niall M. Adams, Dimitris K. Tasoulis, David J. Hand", "['Gordon J. Ross' 'Niall M. Adams' 'Dimitris K. Tasoulis' 'David J. Hand']" ]
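A hedged sketch of the EWMA monitoring idea from 1212.6018 above: feed the 0/1 error stream of any online classifier into an EWMA chart and signal drift when the smoothed error exceeds a control limit. The fixed limit multiplier L is an assumption; the paper instead calibrates the limit to control the false positive rate.

import numpy as np

class EWMADriftDetector:
    """Monitor a stream of 0/1 misclassification indicators with an EWMA chart."""
    def __init__(self, lam=0.2, L=3.0):
        self.lam, self.L = lam, L
        self.t, self.mean, self.z = 0, 0.0, 0.0
    def update(self, error):                               # error in {0, 1}
        self.t += 1
        self.mean += (error - self.mean) / self.t          # running estimate of the error rate
        self.z = (1 - self.lam) * self.z + self.lam * error    # EWMA statistic
        # standard deviation of the EWMA statistic under a Bernoulli(mean) error stream
        var = self.mean * (1 - self.mean)
        sd = np.sqrt(var * self.lam / (2 - self.lam) * (1 - (1 - self.lam) ** (2 * self.t)))
        return self.z > self.mean + self.L * sd            # True => drift signalled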
cs.LG
null
1212.6031
null
null
http://arxiv.org/pdf/1212.6031v1
2012-12-25T12:12:57Z
2012-12-25T12:12:57Z
Tangent Bundle Manifold Learning via Grassmann&Stiefel Eigenmaps
One of the ultimate goals of Manifold Learning (ML) is to reconstruct an unknown nonlinear low-dimensional manifold embedded in a high-dimensional observation space from a given set of data points sampled from the manifold. We derive a local lower bound for the maximum reconstruction error in a small neighborhood of an arbitrary point. The lower bound is defined in terms of the distance between tangent spaces to the original manifold and the estimated manifold at the considered point and reconstructed point, respectively. We propose an amplification of the ML, called Tangent Bundle ML, in which proximity is required not only between the original manifold and its estimator but also between their tangent spaces. We present a new algorithm that solves this problem and also provides a new solution to the ML problem itself.
[ "Alexander V. Bernstein and Alexander P. Kuleshov", "['Alexander V. Bernstein' 'Alexander P. Kuleshov']" ]
cs.LG cs.IR stat.ML
null
1212.6110
null
null
http://arxiv.org/pdf/1212.6110v1
2012-12-26T02:14:41Z
2012-12-26T02:14:41Z
Hyperplane Arrangements and Locality-Sensitive Hashing with Lift
Locality-sensitive hashing converts high-dimensional feature vectors, such as image and speech, into bit arrays and allows high-speed similarity calculation with the Hamming distance. There is a hashing scheme that maps feature vectors to bit arrays depending on the signs of the inner products between feature vectors and the normal vectors of hyperplanes placed in the feature space. This hashing can be seen as a discretization of the feature space by hyperplanes. If labels for data are given, one can determine the hyperplanes by using learning algorithms. However, many proposed learning methods do not consider the hyperplanes' offsets. Not doing so decreases the number of partitioned regions, and the correlation between Hamming distances and Euclidean distances becomes small. In this paper, we propose a lift map that converts learning algorithms without the offsets to the ones that take into account the offsets. With this method, the learning methods without the offsets give the discretizations of spaces as if it takes into account the offsets. For the proposed method, we input several high-dimensional feature data sets and studied the relationship between the statistical characteristics of data, the number of hyperplanes, and the effect of the proposed method.
[ "Makiko Konoshima and Yui Noma", "['Makiko Konoshima' 'Yui Noma']" ]
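To make the affine hashing discussed in 1212.6110 above concrete, here is a minimal numpy sketch of hyperplane hashing with offsets, sign(Wx + b); random projections stand in for learned hyperplanes, and the lift map itself is not shown.

import numpy as np

def random_hyperplane_hash(X, n_bits=16, use_offsets=True, seed=0):
    """Hash feature vectors to bit arrays via signs of affine hyperplanes sign(Wx + b);
    with use_offsets=False this reduces to the usual offset-free hyperplane LSH."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.standard_normal((n_bits, d))
    b = rng.standard_normal(n_bits) if use_offsets else np.zeros(n_bits)
    return (X @ W.T + b > 0).astype(np.uint8)          # (n_samples, n_bits) bit array

def hamming(a, b):
    # Hamming distance between two bit arrays, used as the fast similarity proxy
    return int(np.count_nonzero(a != b))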
cs.LG cs.CE
null
1212.6167
null
null
http://arxiv.org/pdf/1212.6167v1
2012-12-26T12:03:26Z
2012-12-26T12:03:26Z
Transfer Learning Using Logistic Regression in Credit Scoring
Credit scoring risk management is a fast-growing field due to consumers' credit requests. Credit requests, of new and existing customers, are often evaluated by classical discrimination rules based on customer information. However, these kinds of strategies have serious limits and do not take into account the differences in characteristics between current customers and future ones. The aim of this paper is to measure creditworthiness for non-customer borrowers and to model potential risk given a heterogeneous population formed by borrowers who are customers of the bank and others who are not. We build on previous work on generalized Gaussian discrimination and transpose it to the logistic model to derive efficient discrimination rules for the non-customer subpopulation. We thereby obtain several simple models connecting the parameters of the two logistic models associated with the two subpopulations. The German credit data set is selected to test and compare these models. Experimental results show that the use of links between the two subpopulations improves the classification accuracy for new loan applicants.
[ "['Farid Beninel' 'Waad Bouaguel' 'Ghazi Belmufti']", "Farid Beninel, Waad Bouaguel, Ghazi Belmufti" ]
stat.ML cs.LG
null
1212.6246
null
null
http://arxiv.org/pdf/1212.6246v1
2012-12-26T20:45:48Z
2012-12-26T20:45:48Z
Gaussian Process Regression with Heteroscedastic or Non-Gaussian Residuals
Gaussian Process (GP) regression models typically assume that residuals are Gaussian and have the same variance for all observations. However, applications with input-dependent noise (heteroscedastic residuals) frequently arise in practice, as do applications in which the residuals do not have a Gaussian distribution. In this paper, we propose a GP Regression model with a latent variable that serves as an additional unobserved covariate for the regression. This model (which we call GPLC) allows for heteroscedasticity since it allows the function to have a changing partial derivative with respect to this unobserved covariate. With a suitable covariance function, our GPLC model can handle (a) Gaussian residuals with input-dependent variance, or (b) non-Gaussian residuals with input-dependent variance, or (c) Gaussian residuals with constant variance. We compare our model, using synthetic datasets, with a model proposed by Goldberg, Williams and Bishop (1998), which we refer to as GPLV, which only deals with case (a), as well as a standard GP model which can handle only case (c). Markov Chain Monte Carlo methods are developed for both models. Experiments show that when the data is heteroscedastic, both GPLC and GPLV give better results (smaller mean squared error and negative log-probability density) than standard GP regression. In addition, when the residuals are Gaussian, our GPLC model is generally nearly as good as GPLV, while when the residuals are non-Gaussian, our GPLC model is better than GPLV.
[ "Chunyi Wang and Radford M. Neal", "['Chunyi Wang' 'Radford M. Neal']" ]
cs.NE cs.AI cs.LG
10.1109/CCNC.2013.6488435
1212.6276
null
null
http://arxiv.org/abs/1212.6276v1
2012-12-26T22:31:13Z
2012-12-26T22:31:13Z
Echo State Queueing Network: a new reservoir computing learning tool
In the last decade, a new computational paradigm was introduced in the field of Machine Learning, under the name of Reservoir Computing (RC). RC models are neural networks with a recurrent part (the reservoir) that does not participate in the learning process, while the rest of the system contains no recurrence (no neural circuit). This approach has grown rapidly due to its success in solving learning tasks and other computational applications. Some success was also observed with another recently proposed neural network designed using Queueing Theory, the Random Neural Network (RandNN). Both approaches have good properties and identified drawbacks. In this paper, we propose a new RC model called Echo State Queueing Network (ESQN), where we use ideas coming from RandNNs for the design of the reservoir. ESQNs consist of ESNs where the reservoir has a new dynamics inspired by recurrent RandNNs. The paper positions ESQNs in the global Machine Learning area, and provides examples of their use and performances. We show on largely used benchmarks that ESQNs are very accurate tools, and we illustrate how they compare with standard ESNs.
[ "['Sebastián Basterrech' 'Gerardo Rubino']", "Sebasti\\'an Basterrech and Gerardo Rubino" ]
stat.ML cs.LG
null
1212.6316
null
null
http://arxiv.org/pdf/1212.6316v1
2012-12-27T07:07:06Z
2012-12-27T07:07:06Z
On-line relational SOM for dissimilarity data
In some applications and in order to address real world situations better, data may be more complex than simple vectors. In some examples, they can be known through their pairwise dissimilarities only. Several variants of the Self Organizing Map algorithm were introduced to generalize the original algorithm to this framework. Whereas median SOM is based on a rough representation of the prototypes, relational SOM allows representing these prototypes by a virtual combination of all elements in the data set. However, this latter approach suffers from two main drawbacks. First, its complexity can be large. Second, only a batch version of this algorithm has been studied so far and it often provides results having a bad topographic organization. In this article, an on-line version of relational SOM is described and justified. The algorithm is tested on several datasets, including categorical data and graphs, and compared with the batch version and with other SOM algorithms for non vector data.
[ "Madalina Olteanu (SAMM), Nathalie Villa-Vialaneix (SAMM), Marie\n Cottrell (SAMM)", "['Madalina Olteanu' 'Nathalie Villa-Vialaneix' 'Marie Cottrell']" ]
stat.ML cs.AI cs.LG
null
1212.6659
null
null
http://arxiv.org/pdf/1212.6659v1
2012-12-29T20:23:48Z
2012-12-29T20:23:48Z
Focus of Attention for Linear Predictors
We present a method to stop the evaluation of a prediction process when the result of the full evaluation is obvious. This trait is highly desirable in prediction tasks where a predictor evaluates all its features for every example in large datasets. We observe that some examples are easier to classify than others, a phenomenon which is characterized by the event when most of the features agree on the class of an example. By stopping the feature evaluation when encountering an easy-to-classify example, the predictor can achieve substantial gains in computation. Our method provides a natural attention mechanism for linear predictors where the predictor concentrates most of its computation on hard-to-classify examples and quickly discards easy-to-classify ones. By modifying a linear prediction algorithm such as an SVM or AdaBoost to include our attentive method we prove that the average number of features computed is O(sqrt(n log 1/sqrt(delta))) where n is the original number of features, and delta is the error rate incurred due to early stopping. We demonstrate the effectiveness of Attentive Prediction on MNIST, Real-sim, Gisette, and synthetic datasets.
[ "['Raphael Pelossof' 'Zhiliang Ying']", "Raphael Pelossof and Zhiliang Ying" ]
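A hedged, simplified sketch of the early-stopping idea in 1212.6659 above: evaluate a linear score feature by feature and stop once the remaining features can no longer flip the sign. The uniform per-feature bound is an assumption; the paper's attentive method instead allows a controlled error rate delta.

import numpy as np

def attentive_predict(w, x, feature_bound=1.0):
    """Evaluate the linear score w.x one feature at a time and stop early once the sign
    is decided, assuming every unseen contribution satisfies |w_i * x_i| <= feature_bound."""
    order = np.argsort(-np.abs(w))                 # look at the heaviest features first
    partial, n_used = 0.0, 0
    for count, i in enumerate(order, start=1):
        partial += w[i] * x[i]
        n_used = count
        budget = feature_bound * (len(w) - count)  # largest possible remaining change
        if abs(partial) > budget:                  # no remaining features can flip the sign
            break
    return np.sign(partial), n_used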
cs.DS cs.AI cs.CC cs.LG stat.ML
null
1212.6846
null
null
http://arxiv.org/pdf/1212.6846v2
2013-01-10T21:20:45Z
2012-12-31T09:32:51Z
Maximizing a Nonnegative, Monotone, Submodular Function Constrained to Matchings
Submodular functions have many applications. Matchings have many applications. The bitext word alignment problem can be modeled as the problem of maximizing a nonnegative, monotone, submodular function constrained to matchings in a complete bipartite graph where each vertex corresponds to a word in the two input sentences and each edge represents a potential word-to-word translation. We propose a more general problem of maximizing a nonnegative, monotone, submodular function defined on the edge set of a complete graph constrained to matchings; we call this problem the CSM-Matching problem. CSM-Matching also generalizes the maximum-weight matching problem, which has a polynomial-time algorithm; however, we show that it is NP-hard to approximate CSM-Matching within a factor of e/(e-1) by reducing the max k-cover problem to it. Our main result is a simple, greedy, 3-approximation algorithm for CSM-Matching. Then we reduce CSM-Matching to maximizing a nonnegative, monotone, submodular function over two matroids, i.e., CSM-2-Matroids. CSM-2-Matroids has a (2+epsilon)-approximation algorithm - called LSV2. We show that we can find a (4+epsilon)-approximate solution to CSM-Matching using LSV2. We extend this approach to similar problems.
[ "['Sagar Kale']", "Sagar Kale" ]
cs.NE cs.LG
null
1212.6922
null
null
http://arxiv.org/pdf/1212.6922v1
2012-12-31T16:40:50Z
2012-12-31T16:40:50Z
Training a Functional Link Neural Network Using an Artificial Bee Colony for Solving a Classification Problems
Artificial Neural Networks have emerged as an important tool for classification and have been widely used to classify non-linearly separable patterns. The most popular artificial neural network model is the Multilayer Perceptron (MLP), as it is able to perform classification tasks with significant success. However, the complexity of the MLP structure, together with problems such as local minima trapping, overfitting and weight interference, has made neural network training difficult. Thus, an easy way to avoid these problems is to remove the hidden layers. This paper presents the ability of the Functional Link Neural Network (FLNN) to overcome the complex structure of the MLP by using a single-layer architecture, and proposes an Artificial Bee Colony (ABC) optimization for training the FLNN. The proposed technique is expected to provide a better learning scheme for a classifier and hence more accurate classification results.
[ "['Yana Mazwin Mohmad Hassim' 'Rozaida Ghazali']", "Yana Mazwin Mohmad Hassim and Rozaida Ghazali" ]
cs.LG math.OC
null
1212.6958
null
null
http://arxiv.org/pdf/1212.6958v1
2012-12-31T20:13:23Z
2012-12-31T20:13:23Z
Fast Solutions to Projective Monotone Linear Complementarity Problems
We present a new interior-point potential-reduction algorithm for solving monotone linear complementarity problems (LCPs) that have a particular special structure: their matrix $M\in{\mathbb R}^{n\times n}$ can be decomposed as $M=\Phi U + \Pi_0$, where the rank of $\Phi$ is $k<n$, and $\Pi_0$ denotes Euclidean projection onto the nullspace of $\Phi^\top$. We call such LCPs projective. Our algorithm solves a monotone projective LCP to relative accuracy $\epsilon$ in $O(\sqrt n \ln(1/\epsilon))$ iterations, with each iteration requiring $O(nk^2)$ flops. This complexity compares favorably with interior-point algorithms for general monotone LCPs: these algorithms also require $O(\sqrt n \ln(1/\epsilon))$ iterations, but each iteration needs to solve an $n\times n$ system of linear equations, a much higher cost than our algorithm when $k\ll n$. Our algorithm works even though the solution to a projective LCP is not restricted to lie in any low-rank subspace.
[ "['Geoffrey J. Gordon']", "Geoffrey J. Gordon" ]
cs.LG stat.ML
null
1301.0015
null
null
http://arxiv.org/pdf/1301.0015v1
2012-12-31T21:07:21Z
2012-12-31T21:07:21Z
Bethe Bounds and Approximating the Global Optimum
Inference in general Markov random fields (MRFs) is NP-hard, though identifying the maximum a posteriori (MAP) configuration of pairwise MRFs with submodular cost functions is efficiently solvable using graph cuts. Marginal inference, however, even for this restricted class, is in #P. We prove new formulations of derivatives of the Bethe free energy, provide bounds on the derivatives and bracket the locations of stationary points, introducing a new technique called Bethe bound propagation. Several results apply to pairwise models whether associative or not. Applying these to discretized pseudo-marginals in the associative case we present a polynomial time approximation scheme for global optimization provided the maximum degree is $O(\log n)$, and discuss several extensions.
[ "Adrian Weller and Tony Jebara", "['Adrian Weller' 'Tony Jebara']" ]
math.OC cs.DC cs.LG cs.SI physics.soc-ph
10.1016/j.neucom.2012.12.043
1301.0047
null
null
http://arxiv.org/abs/1301.0047v1
2013-01-01T02:02:51Z
2013-01-01T02:02:51Z
On Distributed Online Classification in the Midst of Concept Drifts
In this work, we analyze the generalization ability of distributed online learning algorithms under stationary and non-stationary environments. We derive bounds for the excess-risk attained by each node in a connected network of learners and study the performance advantage that diffusion strategies have over individual non-cooperative processing. We conduct extensive simulations to illustrate the results.
[ "['Zaid J. Towfic' 'Jianshu Chen' 'Ali H. Sayed']", "Zaid J. Towfic, Jianshu Chen, Ali H. Sayed" ]
cs.LG cs.DC
null
1301.0082
null
null
http://arxiv.org/pdf/1301.0082v1
2013-01-01T13:20:27Z
2013-01-01T13:20:27Z
CloudSVM : Training an SVM Classifier in Cloud Computing Systems
In conventional methods, distributed support vector machine (SVM) algorithms are trained over pre-configured intranet/internet environments to find an optimal classifier. These methods are very complicated and costly for large datasets. Hence, we propose a method, referred to as the Cloud SVM training mechanism (CloudSVM), that operates in a cloud computing environment with the MapReduce technique for distributed machine learning applications. Accordingly, (i) the SVM algorithm is trained in distributed cloud storage servers that work concurrently; (ii) all support vectors from every trained cloud node are merged; and (iii) these two steps are iterated until the SVM converges to the optimal classifier function. Large-scale data sets cannot be trained with the SVM algorithm on a single computer. The results of this study are important for training large-scale data sets for machine learning applications. We show that iterative training of the split data set in a cloud computing environment using SVM converges to a global optimal classifier in a finite number of iterations.
[ "F. Ozgur Catak and M. Erdal Balaban", "['F. Ozgur Catak' 'M. Erdal Balaban']" ]
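A hedged sketch of the iterative scheme described in 1301.0082 above, with in-memory partitions standing in for cloud nodes and scikit-learn's SVC standing in for the distributed trainers; it assumes every partition contains both classes, and the MapReduce plumbing is omitted.

import numpy as np
from sklearn.svm import SVC

def cloud_svm(X, y, n_nodes=4, n_rounds=5, seed=0):
    """Train an SVM per data partition, pool all support vectors, and retrain until the
    pooled support-vector set stops changing."""
    rng = np.random.default_rng(seed)
    parts = np.array_split(rng.permutation(len(y)), n_nodes)
    global_idx = np.array([], dtype=int)
    for _ in range(n_rounds):
        sv_idx = []
        for p in parts:
            idx = np.union1d(p, global_idx)                  # local data plus current global SVs
            clf = SVC(kernel='linear', C=1.0).fit(X[idx], y[idx])
            sv_idx.append(idx[clf.support_])                 # map support vectors back to global indices
        new_global = np.unique(np.concatenate(sv_idx))
        if np.array_equal(new_global, global_idx):           # pooled SV set converged
            break
        global_idx = new_global
    return SVC(kernel='linear', C=1.0).fit(X[global_idx], y[global_idx])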
cs.LG stat.ML
null
1301.0104
null
null
http://arxiv.org/pdf/1301.0104v1
2013-01-01T16:25:17Z
2013-01-01T16:25:17Z
Policy Evaluation with Variance Related Risk Criteria in Markov Decision Processes
In this paper we extend temporal difference policy evaluation algorithms to performance criteria that include the variance of the cumulative reward. Such criteria are useful for risk management, and are important in domains such as finance and process control. We propose both TD(0) and LSTD(lambda) variants with linear function approximation, prove their convergence, and demonstrate their utility in a 4-dimensional continuous state space problem.
[ "['Aviv Tamar' 'Dotan Di Castro' 'Shie Mannor']", "Aviv Tamar, Dotan Di Castro, Shie Mannor" ]
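A hedged tabular sketch related to 1301.0104 above: TD(0)-style policy evaluation of both the expected return V and the second moment M of the return, from which a variance estimate follows as M - V^2. The paper works with linear function approximation and an LSTD(lambda) variant; the tabular form, step size and sweep count here are simplifying assumptions.

import numpy as np

def td0_value_and_variance(transitions, n_states, gamma=0.95, alpha=0.05, n_sweeps=100):
    """Tabular TD(0)-style evaluation of the value V and second moment M of the return.
    `transitions` is a list of (s, r, s_next, done) tuples sampled under the evaluated policy."""
    V = np.zeros(n_states)
    M = np.zeros(n_states)
    for _ in range(n_sweeps):
        for s, r, s2, done in transitions:
            v_next = 0.0 if done else V[s2]
            m_next = 0.0 if done else M[s2]
            V[s] += alpha * (r + gamma * v_next - V[s])
            # second-moment Bellman backup: M(s) = E[r^2 + 2*gamma*r*V(s') + gamma^2*M(s')]
            M[s] += alpha * (r**2 + 2 * gamma * r * v_next + gamma**2 * m_next - M[s])
    return V, np.maximum(M - V**2, 0.0)              # value estimate and variance estimate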
stat.ML cs.LG
null
1301.0142
null
null
http://arxiv.org/pdf/1301.0142v1
2013-01-01T22:52:22Z
2013-01-01T22:52:22Z
Semi-Supervised Domain Adaptation with Non-Parametric Copulas
A new framework based on the theory of copulas is proposed to address semi-supervised domain adaptation problems. The presented method factorizes any multivariate density into a product of marginal distributions and bivariate copula functions. Therefore, changes in each of these factors can be detected and corrected to adapt a density model across different learning domains. Importantly, we introduce a novel vine copula model, which allows for this factorization in a non-parametric manner. Experimental results on regression problems with real-world data illustrate the efficacy of the proposed approach when compared to state-of-the-art techniques.
[ "David Lopez-Paz, Jos\\'e Miguel Hern\\'andez-Lobato, Bernhard\n Sch\\\"olkopf", "['David Lopez-Paz' 'José Miguel Hernández-Lobato' 'Bernhard Schölkopf']" ]
cs.LG
null
1301.0179
null
null
http://arxiv.org/pdf/1301.0179v1
2013-01-02T07:13:19Z
2013-01-02T07:13:19Z
A Novel Design Specification Distance (DSD) Based K-Mean Clustering Performance Evaluation on Engineering Materials Database
Organizing data into semantically meaningful groups is one of the fundamental modes of understanding and learning. Cluster analysis is a formal study of methods for understanding and algorithms for learning. The K-means clustering algorithm is one of the most fundamental and simple clustering algorithms. When there is no prior knowledge about the distribution of the data sets, K-means is the first choice for clustering with an initial number of clusters. In this paper a novel distance metric called the Design Specification (DS) distance measure function is integrated with the K-means clustering algorithm to improve cluster accuracy. The K-means algorithm with the proposed distance measure maximizes the cluster accuracy to 99.98% at P = 1.525, which is determined through an iterative procedure. The performance of the Design Specification (DS) distance measure function with the K-means algorithm is compared with the performance of other standard distance functions such as the Euclidean, squared Euclidean, City Block, and Chebyshev similarity measures deployed with the K-means algorithm. The proposed method is evaluated on an engineering materials database. The experiments on cluster analysis and outlier profiling show that there is an excellent improvement in the performance of the proposed method.
[ "['Doreswamy' 'K. S. Hemanth']", "Doreswamy, K. S. Hemanth" ]
cs.LG stat.ML
null
1301.0534
null
null
http://arxiv.org/pdf/1301.0534v2
2013-01-17T10:03:03Z
2013-01-03T19:49:14Z
Follow the Leader If You Can, Hedge If You Must
Follow-the-Leader (FTL) is an intuitive sequential prediction strategy that guarantees constant regret in the stochastic setting, but has terrible performance for worst-case data. Other hedging strategies have better worst-case guarantees but may perform much worse than FTL if the data are not maximally adversarial. We introduce the FlipFlop algorithm, which is the first method that provably combines the best of both worlds. As part of our construction, we develop AdaHedge, which is a new way of dynamically tuning the learning rate in Hedge without using the doubling trick. AdaHedge refines a method by Cesa-Bianchi, Mansour and Stoltz (2007), yielding slightly improved worst-case guarantees. By interleaving AdaHedge and FTL, the FlipFlop algorithm achieves regret within a constant factor of the FTL regret, without sacrificing AdaHedge's worst-case guarantees. AdaHedge and FlipFlop do not need to know the range of the losses in advance; moreover, unlike earlier methods, both have the intuitive property that the issued weights are invariant under rescaling and translation of the losses. The losses are also allowed to be negative, in which case they may be interpreted as gains.
[ "['Steven de Rooij' 'Tim van Erven' 'Peter D. Grünwald' 'Wouter M. Koolen']", "Steven de Rooij, Tim van Erven, Peter D. Gr\\\"unwald, Wouter M. Koolen" ]
cs.LG cs.RO stat.ML
null
1301.0551
null
null
http://arxiv.org/pdf/1301.0551v1
2012-12-12T15:55:05Z
2012-12-12T15:55:05Z
Learning Hierarchical Object Maps of Non-Stationary Environments with Mobile Robots
Building models, or maps, of robot environments is a highly active research area; however, most existing techniques construct unstructured maps and assume static environments. In this paper, we present an algorithm for learning object models of non-stationary objects found in office-type environments. Our algorithm exploits the fact that many objects found in office environments look alike (e.g., chairs, recycling bins). It does so through a two-level hierarchical representation, which links individual objects with generic shape templates of object classes. We derive an approximate EM algorithm for learning shape parameters at both levels of the hierarchy, using local occupancy grid maps for representing shape. Additionally, we develop a Bayesian model selection algorithm that enables the robot to estimate the total number of objects and object templates in the environment. Experimental results using a real robot equipped with a laser range finder indicate that our approach performs well at learning object-based maps of simple office environments. The approach outperforms a previously developed non-hierarchical algorithm that models objects but lacks class templates.
[ "['Dragomir Anguelov' 'Rahul Biswas' 'Daphne Koller' 'Benson Limketkai'\n 'Sebastian Thrun']", "Dragomir Anguelov, Rahul Biswas, Daphne Koller, Benson Limketkai,\n Sebastian Thrun" ]
cs.LG stat.ML
null
1301.0554
null
null
http://arxiv.org/pdf/1301.0554v1
2012-12-12T15:55:17Z
2012-12-12T15:55:17Z
Tree-dependent Component Analysis
We present a generalization of independent component analysis (ICA), where instead of looking for a linear transform that makes the data components independent, we look for a transform that makes the data components well fit by a tree-structured graphical model. Treating the problem as a semiparametric statistical problem, we show that the optimal transform is found by minimizing a contrast function based on mutual information, a function that directly extends the contrast function used for classical ICA. We provide two approximations of this contrast function, one using kernel density estimation, and another using kernel generalized variance. This tree-dependent component analysis framework leads naturally to an efficient general multivariate density estimation technique where only bivariate density estimation needs to be performed.
[ "Francis R. Bach, Michael I. Jordan", "['Francis R. Bach' 'Michael I. Jordan']" ]
cs.LG cs.IR stat.ML
null
1301.0556
null
null
http://arxiv.org/pdf/1301.0556v1
2012-12-12T15:55:25Z
2012-12-12T15:55:25Z
Learning with Scope, with Application to Information Extraction and Classification
In probabilistic approaches to classification and information extraction, one typically builds a statistical model of words under the assumption that future data will exhibit the same regularities as the training data. In many data sets, however, there are scope-limited features whose predictive power is only applicable to a certain subset of the data. For example, in information extraction from web pages, word formatting may be indicative of extraction category in different ways on different web pages. The difficulty with using such features is capturing and exploiting the new regularities encountered in previously unseen data. In this paper, we propose a hierarchical probabilistic model that uses both local/scope-limited features, such as word formatting, and global features, such as word content. The local regularities are modeled as an unobserved random parameter which is drawn once for each local data set. This random parameter is estimated during the inference process and then used to perform classification with both the local and global features--- a procedure which is akin to automatically retuning the classifier to the local regularities on each newly encountered web page. Exact inference is intractable and we present approximations via point estimates and variational methods. Empirical results on large collections of web data demonstrate that this method significantly improves performance from traditional models of global features alone.
[ "David Blei, J Andrew Bagnell, Andrew McCallum", "['David Blei' 'J Andrew Bagnell' 'Andrew McCallum']" ]
cs.LG stat.ML
null
1301.0562
null
null
http://arxiv.org/pdf/1301.0562v1
2012-12-12T15:55:50Z
2012-12-12T15:55:50Z
Continuation Methods for Mixing Heterogeneous Sources
A number of modern learning tasks involve estimation from heterogeneous information sources. This includes classification with labeled and unlabeled data as well as other problems with analogous structure such as competitive (game theoretic) problems. The associated estimation problems can be typically reduced to solving a set of fixed point equations (consistency conditions). We introduce a general method for combining a preferred information source with another in this setting by evolving continuous paths of fixed points at intermediate allocations. We explicitly identify critical points along the unique paths to either increase the stability of estimation or to ensure a significant departure from the initial source. The homotopy continuation approach is guaranteed to terminate at the second source, and involves no combinatorial effort. We illustrate the power of these ideas both in classification tasks with labeled and unlabeled data, as well as in the context of a competitive (min-max) formulation of DNA sequence motif discovery.
[ "['Adrian Corduneanu' 'Tommi S. Jaakkola']", "Adrian Corduneanu, Tommi S. Jaakkola" ]
cs.LG cs.AI stat.ML
null
1301.0563
null
null
http://arxiv.org/pdf/1301.0563v1
2012-12-12T15:55:54Z
2012-12-12T15:55:54Z
Interpolating Conditional Density Trees
Joint distributions over many variables are frequently modeled by decomposing them into products of simpler, lower-dimensional conditional distributions, such as in sparsely connected Bayesian networks. However, automatically learning such models can be very computationally expensive when there are many datapoints and many continuous variables with complex nonlinear relationships, particularly when no good ways of decomposing the joint distribution are known a priori. In such situations, previous research has generally focused on the use of discretization techniques in which each continuous variable has a single discretization that is used throughout the entire network. In this paper, we present and compare a wide variety of tree-based algorithms for learning and evaluating conditional density estimates over continuous variables. These trees can be thought of as discretizations that vary according to the particular interactions being modeled; however, the density within a given leaf of the tree need not be assumed constant, and we show that such nonuniform leaf densities lead to more accurate density estimation. We have developed Bayesian network structure-learning algorithms that employ these tree-based conditional density representations, and we show that they can be used to practically learn complex joint probability models over dozens of continuous variables from thousands of datapoints. We focus on finding models that are simultaneously accurate, fast to learn, and fast to evaluate once they are learned.
[ "Scott Davies, Andrew Moore", "['Scott Davies' 'Andrew Moore']" ]
cs.LG stat.ML
null
1301.0565
null
null
http://arxiv.org/pdf/1301.0565v1
2012-12-12T15:56:02Z
2012-12-12T15:56:02Z
An Information-Theoretic External Cluster-Validity Measure
In this paper we propose a measure of clustering quality or accuracy that is appropriate in situations where it is desirable to evaluate a clustering algorithm by comparing the clusters it produces with ``ground truth'' consisting of classes assigned to the patterns by manual means or some other means in whose veracity there is confidence. Such measures are referred to as ``external''. Our measure also has the characteristic of allowing clusterings with different numbers of clusters to be compared in a quantitative and principled way. Our evaluation scheme quantitatively measures how useful the cluster labels of the patterns are as predictors of their class labels. In cases where all clusterings to be compared have the same number of clusters, the measure is equivalent to the mutual information between the cluster labels and the class labels. In cases where the numbers of clusters are different, however, it computes the reduction in the number of bits that would be required to encode (compress) the class labels if both the encoder and decoder have free access to the cluster labels. To achieve this encoding, the estimated conditional probabilities of the class labels given the cluster labels must also be encoded. These estimated probabilities can be seen as a model for the class labels and their associated code length as a model cost. (An illustrative sketch of the equal-cluster-count case follows this record.)
[ "Byron E Dom", "['Byron E Dom']" ]
cs.LG cs.AI
null
1301.0567
null
null
http://arxiv.org/pdf/1301.0567v1
2012-12-12T15:56:10Z
2012-12-12T15:56:10Z
The Thing That We Tried Didn't Work Very Well: Deictic Representation in Reinforcement Learning
Most reinforcement learning methods operate on propositional representations of the world state. Such representations are often intractably large and generalize poorly. Using a deictic representation is believed to be a viable alternative: they promise generalization while allowing the use of existing reinforcement-learning methods. Yet, there are few experiments on learning with deictic representations reported in the literature. In this paper we explore the effectiveness of two forms of deictic representation and a na\"{i}ve propositional representation in a simple blocks-world domain. We find, empirically, that the deictic representations actually worsen learning performance. We conclude with a discussion of possible causes of these results and strategies for more effective learning in domains with objects.
[ "Sarah Finney, Natalia Gardiol, Leslie Pack Kaelbling, Tim Oates", "['Sarah Finney' 'Natalia Gardiol' 'Leslie Pack Kaelbling' 'Tim Oates']" ]
cs.LG stat.ML
null
1301.0578
null
null
http://arxiv.org/pdf/1301.0578v1
2012-12-12T15:56:54Z
2012-12-12T15:56:54Z
Dimension Correction for Hierarchical Latent Class Models
Model complexity is an important factor to consider when selecting among graphical models. When all variables are observed, the complexity of a model can be measured by its standard dimension, i.e. the number of independent parameters. When hidden variables are present, however, standard dimension might no longer be appropriate. One should instead use effective dimension (Geiger et al. 1996). This paper is concerned with the computation of effective dimension. First we present an upper bound on the effective dimension of a latent class (LC) model. This bound is tight and its computation is easy. We then consider a generalization of LC models called hierarchical latent class (HLC) models (Zhang 2002). We show that the effective dimension of an HLC model can be obtained from the effective dimensions of some related LC models. We also demonstrate empirically that using effective dimension in place of standard dimension improves the quality of models learned from data.
[ "['Tomas Kocka' 'Nevin Lianwen Zhang']", "Tomas Kocka, Nevin Lianwen Zhang" ]
cs.LG stat.ML
null
1301.0579
null
null
http://arxiv.org/pdf/1301.0579v1
2012-12-12T15:56:58Z
2012-12-12T15:56:58Z
Almost-everywhere algorithmic stability and generalization error
We explore in some detail the notion of algorithmic stability as a viable framework for analyzing the generalization error of learning algorithms. We introduce the new notion of training stability of a learning algorithm and show that, in a general setting, it is sufficient for good bounds on generalization error. In the PAC setting, training stability is both necessary and sufficient for learnability. The approach based on training stability makes no reference to VC dimension or VC entropy. There is no need to prove uniform convergence, and generalization error is bounded directly via an extended McDiarmid inequality. As a result it potentially allows us to deal with a broader class of learning algorithms than Empirical Risk Minimization. We also explore the relationships among VC dimension, generalization error, and various notions of stability. Several examples of learning algorithms are considered.
[ "['Samuel Kutin' 'Partha Niyogi']", "Samuel Kutin, Partha Niyogi" ]