Schema (field: dtype):
categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: sequence
cs.AI cs.LG
null
1101.2320
null
null
http://arxiv.org/pdf/1101.2320v1
2011-01-12T10:49:51Z
2011-01-12T10:49:51Z
Review and Evaluation of Feature Selection Algorithms in Synthetic Problems
The main purpose of Feature Subset Selection is to find a reduced subset of attributes from a data set described by a feature set. The task of a feature selection algorithm (FSA) is to provide a computational solution motivated by a certain definition of relevance or by a reliable evaluation measure. In this paper several fundamental algorithms are studied to assess their performance in a controlled experimental scenario. A measure to evaluate FSAs is devised that computes the degree of matching between the output given by an FSA and the known optimal solutions. An extensive experimental study on synthetic problems is carried out to assess the behaviour of the algorithms in terms of solution accuracy and size as a function of the relevance, irrelevance, redundancy and size of the data samples. The controlled experimental conditions facilitate the derivation of better-supported and meaningful conclusions.
[ "['L. A. Belanche' 'F. F. González']", "L.A. Belanche and F.F. Gonz\\'alez" ]
cs.CV cs.LG
null
1101.2987
null
null
http://arxiv.org/pdf/1101.2987v1
2011-01-15T13:29:12Z
2011-01-15T13:29:12Z
Support vector machines/relevance vector machine for remote sensing classification: A review
Kernel-based machine learning algorithms map data from the original input feature space to a kernel feature space of higher dimensionality in order to solve a linear problem in that space. Over the last decade, kernel-based classification and regression approaches such as support vector machines have been widely used in remote sensing as well as in various civil engineering applications. In spite of their good performance across different datasets, support vector machines still suffer from shortcomings such as the visualization and interpretation of the model, the choice of the kernel and kernel-specific parameters, and the setting of the regularization parameter. Relevance vector machines are another kernel-based approach that has been explored for classification and regression in the last few years. The advantages of relevance vector machines over support vector machines are the availability of probabilistic predictions, the use of arbitrary kernel functions, and the fact that no regularization parameter needs to be set. This paper presents a state-of-the-art review of SVMs and RVMs in remote sensing and provides some details of their use in other civil engineering applications as well.
[ "['Mahesh Pal']", "Mahesh Pal" ]
stat.ME cs.LG stat.ML
null
1101.3594
null
null
http://arxiv.org/pdf/1101.3594v2
2012-01-05T18:04:10Z
2011-01-19T00:41:43Z
Classification under Data Contamination with Application to Remote Sensing Image Mis-registration
This work is motivated by the problem of image mis-registration in remote sensing and we are interested in determining the resulting loss in the accuracy of pattern classification. A statistical formulation is given where we propose to use data contamination to model and understand the phenomenon of image mis-registration. This model is widely applicable to many other types of errors as well, for example measurement errors and gross errors. The impact of data contamination on classification is studied under a statistical learning theoretical framework. A closed-form asymptotic bound is established for the resulting loss in classification accuracy, which is less than $\epsilon/(1-\epsilon)$ for data contamination of an amount of $\epsilon$. Our bound is sharper than similar bounds in the domain adaptation literature and, unlike such bounds, it applies to classifiers with an infinite Vapnik-Chervonenkis (VC) dimension. Extensive simulations have been conducted on both synthetic and real datasets under various types of data contamination, including label flipping, feature swapping and the replacement of feature values with data generated from a random source such as a Gaussian or Cauchy distribution. Our simulation results show that the bound we derive is fairly tight.
[ "Donghui Yan, Peng Gong, Aiyou Chen and Liheng Zhong", "['Donghui Yan' 'Peng Gong' 'Aiyou Chen' 'Liheng Zhong']" ]
cs.AI cs.LG cs.SY math.OC
null
1101.4003
null
null
http://arxiv.org/pdf/1101.4003v3
2011-07-30T09:56:22Z
2011-01-20T19:51:58Z
Dyna-H: a heuristic planning reinforcement learning algorithm applied to role-playing-game strategy decision systems
In a Role-Playing Game, finding optimal trajectories is one of the most important tasks. In fact, the strategy decision system becomes a key component of a game engine. The way in which decisions are taken (online, batch or simulated) and the resources consumed in decision making (e.g. execution time, memory) will influence, to a major degree, the game performance. When classical search algorithms such as A* can be used, they are the first option. Nevertheless, such methods rely on precise and complete models of the search space, and there are many interesting scenarios where their application is not possible. In those cases, model-free methods for sequential decision making under uncertainty are the best choice. In this paper, we propose a heuristic planning strategy that incorporates the ability of heuristic search in path-finding into a Dyna agent. The proposed Dyna-H algorithm, as A* does, selects the branches most likely to produce outcomes. In addition, it has the advantage of being a model-free online reinforcement learning algorithm. The proposal was evaluated against the one-step Q-Learning and Dyna-Q algorithms, obtaining excellent experimental results: Dyna-H significantly outperforms both methods in all experiments. We also suggest a functional analogy between the proposed heuristic of sampling from the worst trajectories and the role of dreams (e.g. nightmares) in human behavior.
[ "['Matilde Santos' 'Jose Antonio Martin H.' 'Victoria Lopez'\n 'Guillermo Botella']", "Matilde Santos, Jose Antonio Martin H., Victoria Lopez and Guillermo\n Botella" ]
cs.LG
null
1101.4170
null
null
http://arxiv.org/pdf/1101.4170v1
2011-01-21T16:11:05Z
2011-01-21T16:11:05Z
The Role of Normalization in the Belief Propagation Algorithm
A large class of problems in statistical physics and computer science can be expressed as the computation of marginal probabilities over a Markov random field. The belief propagation algorithm, which is an exact procedure for computing these marginals when the underlying graph is a tree, has gained popularity as an efficient way to approximate them in the more general case. In this paper, we focus on an aspect of the algorithm that has received relatively little attention in the literature: the effect of the normalization of the messages. We show in particular that, for a large class of normalization strategies, it is possible to focus only on belief convergence. Following this, we express the necessary and sufficient conditions for local stability of a fixed point in terms of the graph structure and the belief values at the fixed point. We also make explicit some connections between the normalization constants and the underlying Bethe free energy.
[ "['Victorin Martin' 'Jean-Marc Lasgouttes' 'Cyril Furtlehner']", "Victorin Martin and Jean-Marc Lasgouttes and Cyril Furtlehner" ]
physics.data-an cond-mat.dis-nn cond-mat.stat-mech cs.LG
10.1088/1742-5468/2011/08/P08009
1101.4227
null
null
http://arxiv.org/abs/1101.4227v3
2011-10-31T03:46:11Z
2011-01-21T20:37:31Z
Statistical Mechanics of Semi-Supervised Clustering in Sparse Graphs
We theoretically study semi-supervised clustering in sparse graphs in the presence of pairwise constraints on the cluster assignments of nodes. We focus on bi-cluster graphs, and study the impact of semi-supervision for varying constraint density and overlap between the clusters. Recent results for unsupervised clustering in sparse graphs indicate that there is a critical ratio of within-cluster and between-cluster connectivities below which clusters cannot be recovered with better than random accuracy. The goal of this paper is to examine the impact of pairwise constraints on the clustering accuracy. Our results suggest that the addition of constraints does not provide automatic improvement over the unsupervised case. When the density of the constraints is sufficiently small, their only impact is to shift the detection threshold while preserving the criticality. Conversely, if the density of (hard) constraints is above the percolation threshold, the criticality is suppressed and the detection threshold disappears.
[ "['Greg Ver Steeg' 'Aram Galstyan' 'Armen E. Allahverdyan']", "Greg Ver Steeg, Aram Galstyan, Armen E. Allahverdyan" ]
stat.ML cs.LG math.FA
10.1016/j.acha.2012.03.009
1101.4388
null
null
http://arxiv.org/abs/1101.4388v3
2012-03-28T18:46:47Z
2011-01-23T16:57:03Z
Reproducing Kernel Banach Spaces with the l1 Norm
Targeting sparse learning, we construct Banach spaces B of functions on an input space X with the following properties: (1) B possesses an l1 norm, in the sense that it is isometrically isomorphic to the Banach space of integrable functions on X with respect to the counting measure; (2) point evaluations are continuous linear functionals on B and are representable through a bilinear form with a kernel function; (3) regularized learning schemes on B satisfy the linear representer theorem. Examples of kernel functions admissible for the construction of such spaces are given.
[ "['Guohui Song' 'Haizhang Zhang' 'Fred J. Hickernell']", "Guohui Song, Haizhang Zhang, Fred J. Hickernell" ]
stat.ML cs.LG math.FA
null
1101.4439
null
null
http://arxiv.org/pdf/1101.4439v2
2011-01-27T14:45:29Z
2011-01-24T03:39:57Z
Reproducing Kernel Banach Spaces with the l1 Norm II: Error Analysis for Regularized Least Square Regression
A typical approach in estimating the learning rate of a regularized learning scheme is to bound the approximation error by the sum of the sampling error, the hypothesis error and the regularization error. Using a reproducing kernel space that satisfies the linear representer theorem brings the advantage of discarding the hypothesis error from the sum automatically. Following this direction, we illustrate how reproducing kernel Banach spaces with the l1 norm can be applied to improve the learning rate estimate of l1-regularization in machine learning.
[ "['Guohui Song' 'Haizhang Zhang']", "Guohui Song, Haizhang Zhang" ]
cs.LG
null
1101.4681
null
null
http://arxiv.org/pdf/1101.4681v6
2013-06-27T00:48:11Z
2011-01-24T22:12:37Z
Close the Gaps: A Learning-while-Doing Algorithm for a Class of Single-Product Revenue Management Problems
We consider a retailer selling a single product with limited on-hand inventory over a finite selling season. Customer demand arrives according to a Poisson process, the rate of which is influenced by a single action taken by the retailer (such as price adjustment, sales commission, advertisement intensity, etc.). The relationship between the action and the demand rate is not known in advance. However, the retailer is able to learn the optimal action "on the fly" as she maximizes her total expected revenue based on the observed demand reactions. Using the pricing problem as an example, we propose a dynamic "learning-while-doing" algorithm that only involves function value estimation to achieve a near-optimal performance. Our algorithm employs a series of shrinking price intervals and iteratively tests prices within the current interval using a set of carefully chosen parameters. We prove that the convergence rate of our algorithm is among the fastest of all possible algorithms in terms of asymptotic "regret" (the relative loss compared to the full-information optimal solution). Our result closes the performance gaps between parametric and non-parametric learning and between a posted-price mechanism and a customer-bidding mechanism. An important managerial insight from this research is that the value of information on both the parametric form of the demand function and each customer's exact reservation price is less important than the prior literature suggests. Our results also suggest that firms would be better off performing dynamic learning and action concurrently rather than sequentially.
[ "['Zizhuo Wang' 'Shiming Deng' 'Yinyu Ye']", "Zizhuo Wang, Shiming Deng and Yinyu Ye" ]
cs.CV cs.LG
10.1117/1.3595426
1101.4749
null
null
http://arxiv.org/abs/1101.4749v1
2011-01-25T09:11:49Z
2011-01-25T09:11:49Z
Online Adaptive Decision Fusion Framework Based on Entropic Projections onto Convex Sets with Application to Wildfire Detection in Video
In this paper, an entropy-functional-based online Adaptive Decision Fusion (EADF) framework is developed for image analysis and computer vision applications. In this framework, it is assumed that the compound algorithm consists of several sub-algorithms, each of which yields its own decision as a real number centered around zero, representing the confidence level of that particular sub-algorithm. The decision values are linearly combined with weights which are updated online according to an active fusion method based on performing entropic projections onto convex sets describing the sub-algorithms. It is assumed that there is an oracle, usually a human operator, providing feedback to the decision fusion method. A video-based wildfire detection system is developed to evaluate the performance of the algorithm in handling problems where data arrives sequentially. In this case, the oracle is the security guard of the forest lookout tower verifying the decision of the combined algorithm. Simulation results are presented. The EADF framework is also tested with a standard dataset.
[ "Osman Gunay and Behcet Ugur Toreyin and Kivanc Kose and A. Enis Cetin", "['Osman Gunay' 'Behcet Ugur Toreyin' 'Kivanc Kose' 'A. Enis Cetin']" ]
cs.LG math.OC
null
1101.4752
null
null
http://arxiv.org/pdf/1101.4752v3
2012-04-02T22:59:53Z
2011-01-25T09:18:46Z
A Primal-Dual Convergence Analysis of Boosting
Boosting combines weak learners into a predictor with low empirical risk. Its dual constructs a high entropy distribution upon which weak learners and training labels are uncorrelated. This manuscript studies this primal-dual relationship under a broad family of losses, including the exponential loss of AdaBoost and the logistic loss, revealing:
- Weak learnability aids the whole loss family: for any $\epsilon > 0$, $O(\ln(1/\epsilon))$ iterations suffice to produce a predictor with empirical risk $\epsilon$-close to the infimum;
- The circumstances granting the existence of an empirical risk minimizer may be characterized in terms of the primal and dual problems, yielding a new proof of the known rate $O(\ln(1/\epsilon))$;
- Arbitrary instances may be decomposed into the above two, granting rate $O(1/\epsilon)$, with a matching lower bound provided for the logistic loss.
[ "['Matus Telgarsky']", "Matus Telgarsky" ]
cs.LG cs.AI cs.CV
null
1101.4918
null
null
http://arxiv.org/pdf/1101.4918v1
2011-01-25T20:24:25Z
2011-01-25T20:24:25Z
Using Feature Weights to Improve Performance of Neural Networks
Different features have different relevance to a particular learning problem: some features are less relevant, while others are very important. Instead of selecting the most relevant features using feature selection, an algorithm can be given this knowledge of feature importance based on expert opinion or prior learning. Learning can be faster and more accurate if learners take feature importance into account. We present Correlation-Aided Neural Networks (CANN), one such algorithm. CANN treats feature importance as the correlation coefficient between the target attribute and the features, and modifies a standard feed-forward neural network to fit both the correlation values and the training data. Empirical evaluation shows that CANN is faster and more accurate than the two-step approach of feature selection followed by a standard learning algorithm.
[ "['Ridwan Al Iqbal']", "Ridwan Al Iqbal" ]
cs.LG cs.AI cs.CV
null
1101.4924
null
null
http://arxiv.org/pdf/1101.4924v1
2011-01-25T20:42:01Z
2011-01-25T20:42:01Z
A Generalized Method for Integrating Rule-based Knowledge into Inductive Methods Through Virtual Sample Creation
Hybrid learning methods use theoretical knowledge of a domain together with a set of classified examples to develop a classification method. Methods that use domain knowledge have been shown to perform better than purely inductive learners. However, there is no general way to include domain knowledge in an arbitrary inductive learning algorithm, as existing hybrid methods are highly specialized for a particular algorithm. We present an algorithm that takes domain knowledge in the form of propositional rules, generates artificial examples from the rules, and also removes instances likely to be flawed. The enriched dataset can then be used by any learning algorithm. Experimental results across different scenarios demonstrate that this method is more effective than simple inductive learning.
[ "['Ridwan Al Iqbal']", "Ridwan Al Iqbal" ]
cs.LG
null
1101.5039
null
null
http://arxiv.org/pdf/1101.5039v1
2011-01-26T12:21:13Z
2011-01-26T12:21:13Z
A Novel Template-Based Learning Model
This article presents a model which is capable of learning and abstracting new concepts by comparing observations and finding the resemblance between them. In the model, new observations are compared with templates which have been derived from previous experiences. In the first stage, the objects are represented through a geometric description, which is used for finding the object boundaries, and a descriptor inspired by the human visual system, and then they are fed into the model. Next, the new observations are identified through comparison with the previously learned templates and are used for producing new templates. The comparisons are made based on measures like the Euclidean or correlation distance. A new template is created by applying an onion-peeling algorithm, which consecutively uses the convex hulls formed by the points representing the objects. If a new observation is remarkably similar to one of the observed categories, it is no longer utilized in creating a new template. The existing templates are used to provide a description of the new observation. This description is provided in the template space: each template represents a dimension of the feature space, and the degree of resemblance each template bears to each object indicates the value associated with the object in that dimension of the template space. In this way, the description of a new observation becomes more accurate and detailed as time passes and the experiences increase. We have used this model for learning and recognizing new polygons in the polygon space. Representing the polygons was made possible through a geometric method and a method inspired by the human visual system. Various implementations of the model have been compared. The evaluation results of the model prove its efficiency in learning and deriving new templates.
[ "Mohammadreza Abolghasemi-Dahaghani, Farzad Didehvar (1), Alireza\n Nowroozi", "['Mohammadreza Abolghasemi-Dahaghani' 'Farzad Didehvar' 'Alireza Nowroozi']" ]
cs.SI cs.LG physics.soc-ph
null
1101.5097
null
null
http://arxiv.org/pdf/1101.5097v1
2011-01-26T16:15:22Z
2011-01-26T16:15:22Z
Infinite Multiple Membership Relational Modeling for Complex Networks
Learning latent structure in complex networks has become an important problem fueled by many types of networked data originating from practically all fields of science. In this paper, we propose a new non-parametric Bayesian multiple-membership latent feature model for networks. Contrary to existing multiple-membership models, which scale quadratically in the number of vertices, the proposed model scales linearly in the number of links, admitting multiple-membership analysis in large-scale networks. We demonstrate a connection between the single-membership relational model and multiple-membership models, and show on "real"-size benchmark network data that accounting for multiple memberships improves the learning of latent structure as measured by link prediction, while explicitly accounting for multiple memberships results in a more compact representation of the latent structure of networks.
[ "['Morten Mørup' 'Mikkel N. Schmidt' 'Lars Kai Hansen']", "Morten M{\\o}rup, Mikkel N. Schmidt, Lars Kai Hansen" ]
physics.data-an cs.LG cs.SI physics.soc-ph
null
1101.5141
null
null
http://arxiv.org/pdf/1101.5141v1
2011-01-26T19:58:58Z
2011-01-26T19:58:58Z
A Complex Networks Approach for Data Clustering
Many methods have been developed for data clustering, such as k-means, expectation maximization and algorithms based on graph theory. In this latter case, graphs are generally constructed by taking the Euclidean distance as a similarity measure, and partitioned using spectral methods. However, these methods are not accurate when the clusters are not well separated. In addition, it is not possible to automatically determine the number of clusters. These limitations can be overcome by taking into account network community identification algorithms. In this work, we propose a methodology for data clustering based on complex network theory. We compare different metrics for quantifying the similarity between objects and take into account three community finding techniques. This approach is applied to two real-world databases and to two sets of artificially generated data. By comparing our method with traditional clustering approaches, we verify that the proximity measures given by the Chebyshev and Manhattan distances are the most suitable metrics to quantify the similarity between objects. In addition, the community identification method based on greedy optimization provides the smallest misclassification rates.
[ "['Francisco A. Rodrigues' 'Guilherme Ferraz de Arruda'\n 'Luciano da Fontoura Costa']", "Francisco A. Rodrigues, Guilherme Ferraz de Arruda, Luciano da\n Fontoura Costa" ]
cs.LG cs.AI cs.MA cs.RO
null
1101.5632
null
null
http://arxiv.org/pdf/1101.5632v1
2011-01-28T21:27:31Z
2011-01-28T21:27:31Z
Active Markov Information-Theoretic Path Planning for Robotic Environmental Sensing
Recent research in multi-robot exploration and mapping has focused on sampling environmental fields, which are typically modeled using the Gaussian process (GP). Existing information-theoretic exploration strategies for learning GP-based environmental field maps adopt the non-Markovian problem structure and consequently scale poorly with the length of history of observations. Hence, it becomes computationally impractical to use these strategies for in situ, real-time active sampling. To ease this computational burden, this paper presents a Markov-based approach to efficient information-theoretic path planning for active sampling of GP-based fields. We analyze the time complexity of solving the Markov-based path planning problem, and demonstrate analytically that it scales better than that of deriving the non-Markovian strategies with increasing length of planning horizon. For a class of exploration tasks called the transect sampling task, we provide theoretical guarantees on the active sampling performance of our Markov-based policy, from which ideal environmental field conditions and sampling task settings can be established to limit its performance degradation due to violation of the Markov assumption. Empirical evaluation on real-world temperature and plankton density field data shows that our Markov-based policy can generally achieve active sampling performance comparable to that of the widely-used non-Markovian greedy policies under less favorable realistic field conditions and task settings while enjoying significant computational gain over them.
[ "Kian Hsiang Low, John M. Dolan, and Pradeep Khosla", "['Kian Hsiang Low' 'John M. Dolan' 'Pradeep Khosla']" ]
cs.IT cs.LG math.IT
null
1101.5672
null
null
http://arxiv.org/pdf/1101.5672v1
2011-01-29T06:06:56Z
2011-01-29T06:06:56Z
On the Local Correctness of L^1 Minimization for Dictionary Learning
The idea that many important classes of signals can be well-represented by linear combinations of a small set of atoms selected from a given dictionary has had dramatic impact on the theory and practice of signal processing. For practical problems in which an appropriate sparsifying dictionary is not known ahead of time, a very popular and successful heuristic is to search for a dictionary that minimizes an appropriate sparsity surrogate over a given set of sample data. While this idea is appealing, the behavior of these algorithms is largely a mystery; although there is a body of empirical evidence suggesting they do learn very effective representations, there is little theory to guarantee when they will behave correctly, or when the learned dictionary can be expected to generalize. In this paper, we take a step towards such a theory. We show that under mild hypotheses, the dictionary learning problem is locally well-posed: the desired solution is indeed a local minimum of the $\ell^1$ norm. Namely, if $\mathbf{A} \in \mathbb{R}^{m \times n}$ is an incoherent (and possibly overcomplete) dictionary, and the coefficients $\mathbf{X} \in \mathbb{R}^{n \times p}$ follow a random sparse model, then with high probability $(\mathbf{A},\mathbf{X})$ is a local minimum of the $\ell^1$ norm over the manifold of factorizations $(\mathbf{A}',\mathbf{X}')$ satisfying $\mathbf{A}'\mathbf{X}' = \mathbf{Y}$, provided the number of samples $p = \Omega(n^3 k)$. For overcomplete $\mathbf{A}$, this is the first result showing that the dictionary learning problem is locally solvable. Our analysis draws on tools developed for the problem of completing a low-rank matrix from a small subset of its entries, which allow us to overcome a number of technical obstacles; in particular, the absence of the restricted isometry property.
[ "Quan Geng and Huan Wang and John Wright", "['Quan Geng' 'Huan Wang' 'John Wright']" ]
cs.CV cs.LG
10.1109/TSP.2011.2168521
1101.5785
null
null
http://arxiv.org/abs/1101.5785v1
2011-01-30T17:16:55Z
2011-01-30T17:16:55Z
Statistical Compressed Sensing of Gaussian Mixture Models
A novel framework of compressed sensing, namely statistical compressed sensing (SCS), is introduced; it aims at efficiently sampling a collection of signals that follow a statistical distribution and achieving accurate reconstruction on average. SCS based on Gaussian models is investigated in depth. For signals that follow a single Gaussian model, with Gaussian or Bernoulli sensing matrices of O(k) measurements (considerably smaller than the O(k log(N/k)) required by conventional CS based on sparse models, where N is the signal dimension) and with an optimal decoder implemented via linear filtering (significantly faster than the pursuit decoders applied in conventional CS), the error of SCS is shown to be tightly upper bounded by a constant times the best k-term approximation error, with overwhelming probability. The failure probability is also significantly smaller than that of conventional sparsity-oriented CS. Stronger yet simpler results further show that for any sensing matrix, the error of Gaussian SCS is upper bounded by a constant times the best k-term approximation with probability one, and the bound constant can be efficiently calculated. For Gaussian mixture models (GMMs), which assume multiple Gaussian distributions where each signal follows one of them with an unknown index, a piecewise linear estimator is introduced to decode SCS. The accuracy of model selection, at the heart of the piecewise linear decoder, is analyzed in terms of the properties of the Gaussian distributions and the number of sensing measurements. A maximum a posteriori expectation-maximization algorithm that iteratively estimates the Gaussian model parameters, the signals' model selection, and decodes the signals is presented for GMM-based SCS. In real image sensing applications, GMM-based SCS is shown to lead to improved results compared to conventional CS, at a considerably lower computational cost.
[ "Guoshen Yu and Guillermo Sapiro", "['Guoshen Yu' 'Guillermo Sapiro']" ]
cs.DB cs.DS cs.LG
null
1101.5805
null
null
http://arxiv.org/pdf/1101.5805v3
2011-08-11T20:56:03Z
2011-01-30T19:13:43Z
The VC-Dimension of Queries and Selectivity Estimation Through Sampling
We develop a novel method, based on the statistical concept of the Vapnik-Chervonenkis dimension, to evaluate the selectivity (output cardinality) of SQL queries - a crucial step in optimizing the execution of large scale database and data-mining operations. The major theoretical contribution of this work, which is of independent interest, is an explicit bound on the VC-dimension of a range space defined by all possible outcomes of a collection (class) of queries. We prove that the VC-dimension is a function of the maximum number of Boolean operations in the selection predicate and of the maximum number of select and join operations in any individual query in the collection, but it is neither a function of the number of queries in the collection nor of the size (number of tuples) of the database. We leverage this result and develop a method that, given a class of queries, builds a concise random sample of a database, such that with high probability the execution of any query in the class on the sample provides an accurate estimate for the selectivity of the query on the original large database. The error probability holds simultaneously for the selectivity estimates of all queries in the collection, thus the same sample can be used to evaluate the selectivity of multiple queries, and the sample needs to be refreshed only following major changes in the database. The sample representation computed by our method is typically sufficiently small to be stored in main memory. We present extensive experimental results, validating our theoretical analysis and demonstrating the advantage of our technique when compared to complex selectivity estimation techniques used in PostgreSQL and the Microsoft SQL Server.
[ "Matteo Riondato, Mert Akdere, Ugur Cetintemel, Stanley B. Zdonik, Eli\n Upfal", "['Matteo Riondato' 'Mert Akdere' 'Ugur Cetintemel' 'Stanley B. Zdonik'\n 'Eli Upfal']" ]
cs.LG cs.CG cs.DB
null
1102.0026
null
null
http://arxiv.org/pdf/1102.0026v1
2011-01-31T22:21:19Z
2011-01-31T22:21:19Z
Spatially-Aware Comparison and Consensus for Clusterings
This paper proposes a new distance metric between clusterings that incorporates information about the spatial distribution of points and clusters. Our approach builds on the idea of a Hilbert space-based representation of clusters as a combination of the representations of their constituent points. We use this representation and the underlying metric to design a spatially-aware consensus clustering procedure. This consensus procedure is implemented via a novel reduction to Euclidean clustering, and is both simple and efficient. All of our results apply to both soft and hard clusterings. We accompany these algorithms with a detailed experimental evaluation that demonstrates the efficiency and quality of our techniques.
[ "Parasaran Raman, Jeff M. Phillips and Suresh Venkatasubramanian", "['Parasaran Raman' 'Jeff M. Phillips' 'Suresh Venkatasubramanian']" ]
stat.ME cs.CE cs.CV cs.LG q-bio.QM
10.1214/12-AOAS543
1102.0059
null
null
http://arxiv.org/abs/1102.0059v2
2012-10-01T09:20:39Z
2011-02-01T02:08:00Z
Statistical methods for tissue array images - algorithmic scoring and co-training
Recent advances in tissue microarray technology have allowed immunohistochemistry to become a powerful medium-to-high throughput analysis tool, particularly for the validation of diagnostic and prognostic biomarkers. However, as study size grows, the manual evaluation of these assays becomes a prohibitive limitation; it vastly reduces throughput and greatly increases variability and expense. We propose an algorithm - Tissue Array Co-Occurrence Matrix Analysis (TACOMA) - for quantifying cellular phenotypes based on textural regularity summarized by local inter-pixel relationships. The algorithm can be easily trained for any staining pattern, is absent of sensitive tuning parameters and has the ability to report salient pixels in an image that contribute to its score. Pathologists' input via informative training patches is an important aspect of the algorithm that allows the training for any specific marker or cell type. With co-training, the error rate of TACOMA can be reduced substantially for a very small training sample (e.g., with size 30). We give theoretical insights into the success of co-training via thinning of the feature set in a high-dimensional setting when there is "sufficient" redundancy among the features. TACOMA is flexible, transparent and provides a scoring process that can be evaluated with clarity and confidence. In a study based on an estrogen receptor (ER) marker, we show that TACOMA is comparable to, or outperforms, pathologists' performance in terms of accuracy and repeatability.
[ "Donghui Yan, Pei Wang, Michael Linden, Beatrice Knudsen, Timothy\n Randolph", "['Donghui Yan' 'Pei Wang' 'Michael Linden' 'Beatrice Knudsen'\n 'Timothy Randolph']" ]
cs.LG
null
1102.0836
null
null
http://arxiv.org/pdf/1102.0836v2
2011-02-08T04:03:50Z
2011-02-04T04:40:07Z
EigenNet: A Bayesian hybrid of generative and conditional models for sparse learning
It is a challenging task to select correlated variables in a high dimensional space. To address this challenge, the elastic net has been developed and successfully applied to many applications. Despite its great success, the elastic net does not explicitly use correlation information embedded in data to select correlated variables. To overcome this limitation, we present a novel Bayesian hybrid model, the EigenNet, that uses the eigenstructures of data to guide variable selection. Specifically, it integrates a sparse conditional classification model with a generative model capturing variable correlations in a principled Bayesian framework. We reparameterize the hybrid model in the eigenspace to avoid overfitting and to increase the computational efficiency of its MCMC sampler. Furthermore, we provide an alternative view of the EigenNet from a regularization perspective: the EigenNet has an adaptive eigenspace-based composite regularizer, which naturally generalizes the $l_{1/2}$ regularizer used by the elastic net. Experiments on synthetic and real data show that the EigenNet significantly outperforms the lasso, the elastic net, and the Bayesian lasso in terms of prediction accuracy, especially when the number of training samples is smaller than the number of variables.
[ "Yuan Qi, Feng Yan", "['Yuan Qi' 'Feng Yan']" ]
cs.AI cs.CV cs.LG math.NA math.PR
10.5121/ijaia.2011.2101
1102.0899
null
null
http://arxiv.org/abs/1102.0899v1
2011-02-04T13:00:06Z
2011-02-04T13:00:06Z
Evidence Feed Forward Hidden Markov Model: A New Type of Hidden Markov Model
The ability to predict the intentions of people based solely on their visual actions is a skill possessed only by humans and animals. The intelligence of current computer algorithms has not reached this level of complexity, but several research efforts are working towards it. With the number of classification algorithms available, it is hard to determine which algorithm works best for a particular situation. In the classification of visual human-intent data, Hidden Markov Models (HMMs) and their variants are leading candidates. The inability of HMMs to provide a probability for the observation-to-observation linkages is a major shortcoming of this classification technique. If a person is visually identifying the action of another person, they monitor patterns in the observations. By estimating the next observation, people are able to summarize the actions and thus determine, with good accuracy, the intention of the person performing the action. These visual cues and linkages are important in creating intelligent algorithms for determining human actions based on visual observations. The Evidence Feed Forward Hidden Markov Model is a newly developed algorithm which provides observation-to-observation linkages. The following research addresses the theory behind Evidence Feed Forward HMMs, provides mathematical proofs of their ability to learn these parameters so as to optimize the likelihood of observations, which is important in any computational intelligence algorithm, and gives comparative examples with standard HMMs in the classification of both visual action data and measurement data, thus providing a strong base for Evidence Feed Forward HMMs in the classification of many types of problems.
[ "['Michael DelRose' 'Christian Wagner' 'Philip Frederick']", "Michael DelRose, Christian Wagner, Philip Frederick" ]
cs.IR cs.AI cs.LG nlin.AO q-bio.OT
10.1007/s12065-011-0052-5
1102.1027
null
null
http://arxiv.org/abs/1102.1027v1
2011-02-04T22:10:45Z
2011-02-04T22:10:45Z
Collective Classification of Textual Documents by Guided Self-Organization in T-Cell Cross-Regulation Dynamics
We present and study an agent-based model of T-cell cross-regulation in the adaptive immune system, which we apply to binary classification. Our method expands an existing analytical model of T-cell cross-regulation (Carneiro et al., Immunol Rev 216(1):48-68, 2007) that was used to study the self-organizing dynamics of a single population of T-cells in interaction with an idealized antigen-presenting cell capable of presenting a single antigen. With agent-based modeling we are able to study the self-organizing dynamics of multiple populations of distinct T-cells which interact via antigen-presenting cells that present hundreds of distinct antigens. Moreover, we show that such self-organizing dynamics can be guided to produce an effective binary classification of antigens, which is competitive with existing machine learning methods when applied to biomedical text classification. More specifically, we test our model on a dataset of publicly available full-text biomedical articles provided by the BioCreative challenge (Krallinger, The BioCreative II.5 Challenge Overview, p. 19, 2009). We study the robustness of our model's parameter configurations, and show that it leads to encouraging results comparable to state-of-the-art classifiers. Our results help us understand both T-cell cross-regulation as a general principle of guided self-organization and its applicability to document classification. We therefore show that our bio-inspired algorithm is a promising novel method for biomedical article classification and for binary document classification in general.
[ "Alaa Abi-Haidar and Luis M. Rocha", "['Alaa Abi-Haidar' 'Luis M. Rocha']" ]
cond-mat.stat-mech cs.LG cs.SI physics.soc-ph
10.1103/PhysRevLett.107.065701
1102.1182
null
null
http://arxiv.org/abs/1102.1182v1
2011-02-06T18:43:03Z
2011-02-06T18:43:03Z
Phase transition in the detection of modules in sparse networks
We present an asymptotically exact analysis of the problem of detecting communities in sparse random networks. Our results are also applicable to detection of functional modules, partitions, and colorings in noisy planted models. Using a cavity method analysis, we unveil a phase transition from a region where the original group assignment is undetectable to one where detection is possible. In some cases, the detectable region splits into an algorithmically hard region and an easy one. Our approach naturally translates into a practical algorithm for detecting modules in sparse networks, and learning the parameters of the underlying model.
[ "Aurelien Decelle, Florent Krzakala, Cristopher Moore and Lenka\n Zdeborov\\'a", "['Aurelien Decelle' 'Florent Krzakala' 'Cristopher Moore'\n 'Lenka Zdeborová']" ]
cs.LG math.FA
null
1102.1324
null
null
http://arxiv.org/pdf/1102.1324v1
2011-02-07T14:41:30Z
2011-02-07T14:41:30Z
Refinement of Operator-valued Reproducing Kernels
This paper studies the construction of a refinement kernel for a given operator-valued reproducing kernel such that the vector-valued reproducing kernel Hilbert space of the refinement kernel contains that of the given kernel as a subspace. The study is motivated by the need to update the current operator-valued reproducing kernel in multi-task learning when underfitting or overfitting occurs. Numerical simulations confirm that the established refinement kernel method is able to meet this need. Various characterizations are provided based on feature maps and vector-valued integral representations of operator-valued reproducing kernels. Concrete examples of refining translation-invariant and finite Hilbert-Schmidt operator-valued reproducing kernels are provided. Other examples include the refinement of Hessians of scalar-valued translation-invariant kernels and transformation kernels. The existence and properties of operator-valued reproducing kernels preserved during the refinement process are also investigated.
[ "Yuesheng Xu, Haizhang Zhang, Qinghui Zhang", "['Yuesheng Xu' 'Haizhang Zhang' 'Qinghui Zhang']" ]
stat.ML cs.LG math.ST stat.TH
null
1102.1465
null
null
http://arxiv.org/pdf/1102.1465v6
2012-07-09T19:24:19Z
2011-02-07T23:25:47Z
An Introduction to Artificial Prediction Markets for Classification
Prediction markets are used in real life to predict outcomes of interest such as presidential elections. This paper presents a mathematical theory of artificial prediction markets for the supervised learning of conditional probability estimators. The artificial prediction market is a novel method for fusing the prediction information of features or trained classifiers, where the fusion result is the contract price on the possible outcomes. The market can be trained online by updating the participants' budgets using training examples. Inspired by real prediction markets, the equations that govern the market are derived from simple and reasonable assumptions, and efficient numerical algorithms are presented for solving them. The obtained artificial prediction market is shown to be a maximum likelihood estimator. It generalizes linear aggregation, as present in boosting and random forests, as well as logistic regression and some kernel methods. Furthermore, the market mechanism allows the aggregation of specialized classifiers that participate only on specific instances. Experimental comparisons show that the artificial prediction markets often outperform random forests and implicit online learning on synthetic data and real UCI datasets. Moreover, an extensive evaluation for pelvic and abdominal lymph node detection in CT data shows that the prediction market improves AdaBoost's detection rate from 79.6% to 81.2% at 3 false positives per volume.
[ "['Adrian Barbu' 'Nathan Lay']", "Adrian Barbu, Nathan Lay" ]
cs.AI cs.LG
null
1102.1808
null
null
http://arxiv.org/pdf/1102.1808v3
2011-02-11T05:10:57Z
2011-02-09T08:25:36Z
From Machine Learning to Machine Reasoning
A plausible definition of "reasoning" could be "algebraically manipulating previously acquired knowledge in order to answer a new question". This definition covers first-order logical inference or probabilistic inference. It also includes much simpler manipulations commonly used to build large learning systems. For instance, we can build an optical character recognition system by first training a character segmenter, an isolated character recognizer, and a language model, using appropriate labeled training sets. Adequately concatenating these modules and fine tuning the resulting system can be viewed as an algebraic operation in a space of models. The resulting model answers a new question, that is, converting the image of a text page into a computer readable text. This observation suggests a conceptual continuity between algebraically rich inference systems, such as logical or probabilistic inference, and simple manipulations, such as the mere concatenation of trainable learning systems. Therefore, instead of trying to bridge the gap between machine learning systems and sophisticated "all-purpose" inference mechanisms, we can instead algebraically enrich the set of manipulations applicable to training systems, and build reasoning capabilities from the ground up.
[ "Leon Bottou", "['Leon Bottou']" ]
cs.LG cs.IT math.IT
null
1102.2467
null
null
http://arxiv.org/pdf/1102.2467v1
2011-02-12T01:34:52Z
2011-02-12T01:34:52Z
Universal Learning Theory
This encyclopedic article gives a mini-introduction into the theory of universal learning, founded by Ray Solomonoff in the 1960s and significantly developed and extended in the last decade. It explains the spirit of universal learning, but necessarily glosses over technical subtleties.
[ "Marcus Hutter", "['Marcus Hutter']" ]
math.ST cs.LG cs.SY math.OC stat.TH
null
1102.2490
null
null
http://arxiv.org/pdf/1102.2490v5
2013-08-29T15:37:53Z
2011-02-12T10:03:21Z
The KL-UCB Algorithm for Bounded Stochastic Bandits and Beyond
This paper presents a finite-time analysis of the KL-UCB algorithm, an online, horizon-free index policy for stochastic bandit problems. We prove two distinct results: first, for arbitrary bounded rewards, the KL-UCB algorithm satisfies a uniformly better regret bound than UCB or UCB2; second, in the special case of Bernoulli rewards, it reaches the lower bound of Lai and Robbins. Furthermore, we show that simple adaptations of the KL-UCB algorithm are also optimal for specific classes of (possibly unbounded) rewards, including those generated from exponential families of distributions. A large-scale numerical study comparing KL-UCB with its main competitors (UCB, UCB2, UCB-Tuned, UCB-V, DMED) shows that KL-UCB is remarkably efficient and stable, including for short time horizons. KL-UCB is also the only method that always performs better than the basic UCB policy. Our regret bounds rely on deviation results of independent interest, which are stated and proved in the Appendix. As a by-product, we also obtain an improved regret bound for the standard UCB algorithm.
[ "['Aurélien Garivier' 'Olivier Cappé']", "Aur\\'elien Garivier and Olivier Capp\\'e" ]
cs.CV cs.AI cs.LG cs.NE
null
1102.2739
null
null
http://arxiv.org/pdf/1102.2739v1
2011-02-14T11:40:08Z
2011-02-14T11:40:08Z
A General Framework for Development of the Cortex-like Visual Object Recognition System: Waves of Spikes, Predictive Coding and Universal Dictionary of Features
This study focuses on the development of a cortex-like visual object recognition system. We propose a general framework which consists of three hierarchical levels (modules). These modules functionally correspond to the V1, V4 and IT areas. Both bottom-up and top-down connections between the hierarchical levels V4 and IT are employed. The higher the degree of matching between the input and the preferred stimulus, the shorter the response time of the neuron, so information about a single stimulus is distributed in time and is transmitted by waves of spikes. The reciprocal connections and waves of spikes implement predictive coding: an initial hypothesis is generated on the basis of information delivered by the first wave of spikes and is tested with the information carried by the consecutive waves. Development is treated as the extraction and accumulation of features in V4 and objects in IT. Once stored, a feature can be discarded if it is rarely activated, which causes an update of the feature repository; consequently, objects in IT are also updated. This illustrates the growing process and the dynamic change of the topological structures of V4, IT and the connections between these areas.
[ "['Sergey S. Tarasenko']", "Sergey S. Tarasenko" ]
cs.LG
10.1109/TNNLS.2012.2198240
1102.2808
null
null
http://arxiv.org/abs/1102.2808v5
2012-09-03T02:17:30Z
2011-02-14T15:53:06Z
Transductive Ordinal Regression
Ordinal regression is commonly formulated as a multi-class problem with ordinal constraints. The challenge of designing accurate classifiers for ordinal regression generally increases with the number of classes involved, due to the large number of labeled patterns that are needed. The availability of ordinal class labels, however, is often costly to calibrate or difficult to obtain. Unlabeled patterns, on the other hand, often exist in much greater abundance and are freely available. To benefit from this abundance of unlabeled patterns, we present a novel transductive learning paradigm for ordinal regression in this paper, namely Transductive Ordinal Regression (TOR). The key challenge of the present study lies in the precise estimation of both the ordinal class labels of the unlabeled data and the decision functions of the ordinal classes, simultaneously. The core elements of the proposed TOR include an objective function that caters to several commonly used loss functions cast in transductive settings, for general ordinal regression, and a label swapping scheme that facilitates a strictly monotonic decrease in the objective function value. Extensive numerical studies on commonly used benchmark datasets, including a real-world sentiment prediction problem, are then presented to showcase the characteristics and efficacy of the proposed transductive ordinal regression. Further, comparisons to recent state-of-the-art ordinal regression methods demonstrate that the introduced transductive learning paradigm leads to robust and improved performance.
[ "['Chun-Wei Seah' 'Ivor W. Tsang' 'Yew-Soon Ong']", "Chun-Wei Seah, Ivor W. Tsang, Yew-Soon Ong" ]
math.OC cs.LG cs.SY math.PR
null
1102.2975
null
null
http://arxiv.org/pdf/1102.2975v1
2011-02-15T06:12:44Z
2011-02-15T06:12:44Z
Decentralized Restless Bandit with Multiple Players and Unknown Dynamics
We consider decentralized restless multi-armed bandit problems with unknown dynamics and multiple players. The reward state of each arm transits according to an unknown Markovian rule when it is played and evolves according to an arbitrary unknown random process when it is passive. Players activating the same arm at the same time collide and suffer from reward loss. The objective is to maximize the long-term reward by designing a decentralized arm selection policy to address unknown reward models and collisions among players. A decentralized policy is constructed that achieves a regret with logarithmic order when an arbitrary nontrivial bound on certain system parameters is known. When no knowledge about the system is available, we extend the policy to achieve a regret arbitrarily close to the logarithmic order. The result finds applications in communication networks, financial investment, and industrial engineering.
[ "['Haoyang Liu' 'Keqin Liu' 'Qing Zhao']", "Haoyang Liu, Keqin Liu, Qing Zhao" ]
cs.IT cs.LG math.IT stat.ML
10.1109/ISIT.2011.6033687
1102.3176
null
null
http://arxiv.org/abs/1102.3176v3
2011-06-08T10:08:51Z
2011-02-15T20:49:37Z
Selecting the rank of truncated SVD by Maximum Approximation Capacity
Truncated Singular Value Decomposition (SVD) calculates the closest rank-$k$ approximation of a given input matrix. Selecting the appropriate rank $k$ defines a critical model order choice in most applications of SVD. To obtain a principled cut-off criterion for the spectrum, we convert the underlying optimization problem into a noisy channel coding problem. The optimal approximation capacity of this channel controls the appropriate strength of regularization to suppress noise. In simulation experiments, this information-theoretic method for determining the optimal rank competes with state-of-the-art model selection techniques.
[ "Mario Frank and Joachim M. Buhmann", "['Mario Frank' 'Joachim M. Buhmann']" ]
physics.data-an cond-mat.stat-mech cs.LG q-bio.NC q-bio.QM
10.1103/PhysRevLett.106.090601
1102.3260
null
null
http://arxiv.org/abs/1102.3260v1
2011-02-16T08:15:42Z
2011-02-16T08:15:42Z
Adaptive Cluster Expansion for Inferring Boltzmann Machines with Noisy Data
We introduce a procedure to infer the interactions among a set of binary variables, based on their sampled frequencies and pairwise correlations. The algorithm builds the clusters of variables contributing most to the entropy of the inferred Ising model, and rejects the small contributions due to the sampling noise. Our procedure successfully recovers benchmark Ising models even at criticality and in the low temperature phase, and is applied to neurobiological data.
[ "['Simona Cocco' 'Rémi Monasson']", "Simona Cocco (LPS), R\\'emi Monasson (LPTENS)" ]
math.OC cs.LG
10.1109/TIT.2012.2198613
1102.3508
null
null
http://arxiv.org/abs/1102.3508v1
2011-02-17T07:08:37Z
2011-02-17T07:08:37Z
Online Learning of Rested and Restless Bandits
In this paper we study the online learning problem involving rested and restless multiarmed bandits with multiple plays. The system consists of a single player/user and a set of K finite-state discrete-time Markov chains (arms) with unknown state spaces and statistics. At each time step the player can play M arms. The objective of the user is to decide for each step which M of the K arms to play over a sequence of trials so as to maximize its long term reward. The restless multiarmed bandit is particularly relevant to the application of opportunistic spectrum access (OSA), where a (secondary) user has access to a set of K channels, each of time-varying condition as a result of random fading and/or certain primary users' activities.
[ "Cem Tekin and Mingyan Liu", "['Cem Tekin' 'Mingyan Liu']" ]
cs.IT cs.LG math.IT stat.ML
null
1102.3887
null
null
http://arxiv.org/pdf/1102.3887v1
2011-02-18T19:05:49Z
2011-02-18T19:05:49Z
Active Clustering: Robust and Efficient Hierarchical Clustering using Adaptively Selected Similarities
Hierarchical clustering based on pairwise similarities is a common tool used in a broad range of scientific applications. However, in many problems it may be expensive to obtain or compute similarities between the items to be clustered. This paper investigates the hierarchical clustering of N items based on a small subset of pairwise similarities, significantly fewer than the complete set of N(N-1)/2 similarities. First, we show that if the intracluster similarities exceed intercluster similarities, then it is possible to correctly determine the hierarchical clustering from as few as 3N log N similarities. We demonstrate that achieving this order-of-magnitude savings in the number of pairwise similarities requires sequentially selecting which similarities to obtain in an adaptive fashion, rather than picking them at random. We then propose an active clustering method that is robust to a limited fraction of anomalous similarities, and show that even in the presence of these noisy similarity values we can resolve the hierarchical clustering using only O(N log^2 N) pairwise similarities.
[ "Brian Eriksson, Gautam Dasarathy, Aarti Singh, Robert Nowak", "['Brian Eriksson' 'Gautam Dasarathy' 'Aarti Singh' 'Robert Nowak']" ]
q-bio.GN cs.AI cs.LG q-bio.MN
null
1102.3919
null
null
http://arxiv.org/pdf/1102.3919v1
2011-02-18T21:01:38Z
2011-02-18T21:01:38Z
Inferring Disease and Gene Set Associations with Rank Coherence in Networks
A computational challenge to validate the candidate disease genes identified in a high-throughput genomic study is to elucidate the associations between the set of candidate genes and disease phenotypes. The conventional gene set enrichment analysis often fails to reveal associations between disease phenotypes and the gene sets with a short list of poorly annotated genes, because the existing annotations of disease causative genes are incomplete. We propose a network-based computational approach called rcNet to discover the associations between gene sets and disease phenotypes. Assuming coherent associations between the genes ranked by their relevance to the query gene set, and the disease phenotypes ranked by their relevance to the hidden target disease phenotypes of the query gene set, we formulate a learning framework maximizing the rank coherence with respect to the known disease phenotype-gene associations. An efficient algorithm coupling ridge regression with label propagation, and two variants are introduced to find the optimal solution of the framework. We evaluated the rcNet algorithms and existing baseline methods with both leave-one-out cross-validation and a task of predicting recently discovered disease-gene associations in OMIM. The experiments demonstrated that the rcNet algorithms achieved the best overall rankings compared to the baselines. To further validate the reproducibility of the performance, we applied the algorithms to identify the target diseases of novel candidate disease genes obtained from recent studies of GWAS, DNA copy number variation analysis, and gene expression profiling. The algorithms ranked the target disease of the candidate genes at the top of the rank list in many cases across all the three case studies. The rcNet algorithms are available as a webtool for disease and gene set association analysis at http://compbio.cs.umn.edu/dgsa_rcNet.
[ "['TaeHyun Hwang' 'Wei Zhang' 'Maoqiang Xie' 'Rui Kuang']", "TaeHyun Hwang, Wei Zhang, Maoqiang Xie, Rui Kuang" ]
cs.LG stat.ML
null
1102.3923
null
null
http://arxiv.org/pdf/1102.3923v2
2011-05-26T19:26:27Z
2011-02-18T21:26:16Z
Concentration-Based Guarantees for Low-Rank Matrix Reconstruction
We consider the problem of approximately reconstructing a partially-observed, approximately low-rank matrix. This problem has received much attention lately, mostly using the trace-norm as a surrogate to the rank. Here we study low-rank matrix reconstruction using both the trace-norm and the less-studied max-norm, and present reconstruction guarantees based on existing analysis of the Rademacher complexity of the unit balls of these norms. We show how these are superior in several ways to recently published guarantees based on specialized analysis.
[ "['Rina Foygel' 'Nathan Srebro']", "Rina Foygel, Nathan Srebro" ]
stat.ML cs.LG
10.1109/JSTSP.2011.2159773
1102.3949
null
null
http://arxiv.org/abs/1102.3949v2
2011-08-17T00:03:36Z
2011-02-19T01:41:35Z
Sparse Signal Recovery with Temporally Correlated Source Vectors Using Sparse Bayesian Learning
We address the sparse signal recovery problem in the context of multiple measurement vectors (MMV) when elements in each nonzero row of the solution matrix are temporally correlated. Existing algorithms do not consider such temporal correlations, and thus their performance degrades significantly in their presence. In this work, we propose a block sparse Bayesian learning framework that models the temporal correlations. Within this framework we derive two sparse Bayesian learning (SBL) algorithms, which have superior recovery performance compared to existing algorithms, especially in the presence of high temporal correlation. Furthermore, our algorithms are better at handling highly underdetermined problems and require less row-sparsity of the solution matrix. We also provide an analysis of the global and local minima of their cost function, and show that the SBL cost function has the very desirable property that the global minimum is at the sparsest solution to the MMV problem. Extensive experiments also provide some interesting results that motivate future theoretical research on the MMV model.
[ "Zhilin Zhang and Bhaskar D. Rao", "['Zhilin Zhang' 'Bhaskar D. Rao']" ]
cs.LG cs.CR
null
1102.4021
null
null
http://arxiv.org/pdf/1102.4021v2
2011-09-18T05:37:43Z
2011-02-19T20:40:56Z
Privacy Preserving Spam Filtering
Email is a private medium of communication, and the inherent privacy constraints form a major obstacle in developing effective spam filtering methods, which require access to a large amount of email data belonging to multiple users. To mitigate this problem, we envision a privacy preserving spam filtering system, where the server is able to train and evaluate a logistic regression based spam classifier on the combined email data of all users without being able to observe any emails, using primitives such as homomorphic encryption and randomization. We analyze the protocols for correctness and security, and perform experiments with a prototype system on a large scale spam filtering task. State of the art spam filters often use character n-grams as features, which result in a large, sparse data representation that is not feasible to use directly with our training and evaluation protocols. We explore various data-independent dimensionality reduction techniques that decrease the running time of the protocols, making them feasible to use in practice while achieving high accuracy.
[ "['Manas A. Pathak' 'Mehrbod Sharifi' 'Bhiksha Raj']", "Manas A. Pathak, Mehrbod Sharifi, Bhiksha Raj" ]
cs.LG cs.DS
null
1102.4240
null
null
http://arxiv.org/pdf/1102.4240v1
2011-02-21T14:48:20Z
2011-02-21T14:48:20Z
Sparse neural networks with large learning diversity
Coded recurrent neural networks with three levels of sparsity are introduced. The first level is related to the size of messages, which is much smaller than the number of available neurons. The second one is provided by a particular coding rule, acting as a local constraint on the neural activity. The third one is a characteristic of the low final connection density of the network after the learning phase. Though the proposed network is very simple, being based on binary neurons and binary connections, it is able to learn a large number of messages and recall them, even in the presence of strong erasures. The performance of the network is assessed as a classifier and as an associative memory.
[ "Vincent Gripon and Claude Berrou", "['Vincent Gripon' 'Claude Berrou']" ]
cs.CR cs.LG
null
1102.4374
null
null
http://arxiv.org/pdf/1102.4374v1
2011-02-22T00:11:14Z
2011-02-22T00:11:14Z
Link Prediction by De-anonymization: How We Won the Kaggle Social Network Challenge
This paper describes the winning entry to the IJCNN 2011 Social Network Challenge run by Kaggle.com. The goal of the contest was to promote research on real-world link prediction, and the dataset was a graph obtained by crawling the popular Flickr social photo sharing website, with user identities scrubbed. By de-anonymizing much of the competition test set using our own Flickr crawl, we were able to effectively game the competition. Our attack represents a new application of de-anonymization to gaming machine learning contests, suggesting changes in how future competitions should be run. We introduce a new simulated annealing-based weighted graph matching algorithm for the seeding step of de-anonymization. We also show how to combine de-anonymization with link prediction---the latter is required to achieve good performance on the portion of the test set not de-anonymized---for example by training the predictor on the de-anonymized portion of the test set, and combining probabilistic predictions from de-anonymization and link prediction.
[ "['Arvind Narayanan' 'Elaine Shi' 'Benjamin I. P. Rubinstein']", "Arvind Narayanan, Elaine Shi, Benjamin I. P. Rubinstein" ]
cs.LG cs.GT math.OC
null
1102.4442
null
null
http://arxiv.org/pdf/1102.4442v1
2011-02-22T09:56:28Z
2011-02-22T09:56:28Z
Internal Regret with Partial Monitoring. Calibration-Based Optimal Algorithms
We provide consistent randomized algorithms for sequential decision making under partial monitoring, i.e. when the decision maker does not observe the outcomes but instead receives random feedback signals. These algorithms have no internal regret in the sense that, on the set of stages where the decision maker chose his action according to a given law, the average payoff could not have been improved on average by using any other fixed law. They are based on a generalization of calibration, no longer defined in terms of a Voronoi diagram but instead in terms of a Laguerre diagram (a more general concept). This allows us to bound, for the first time in this general framework, the expected average internal regret, as well as the usual external regret, at stage $n$ by $O(n^{-1/3})$, which is known to be optimal.
[ "['Vianney Perchet']", "Vianney Perchet" ]
stat.ML cs.IT cs.LG math.IT
10.1214/12-AOS1000
1102.4807
null
null
http://arxiv.org/abs/1102.4807v3
2012-03-06T06:59:59Z
2011-02-23T18:02:53Z
Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions
We analyze a class of estimators based on convex relaxation for solving high-dimensional matrix decomposition problems. The observations are noisy realizations of a linear transformation $\mathfrak{X}$ of the sum of an (approximately) low rank matrix $\Theta^\star$ with a second matrix $\Gamma^\star$ endowed with a complementary form of low-dimensional structure; this set-up includes many statistical models of interest, including factor analysis, multi-task regression, and robust covariance estimation. We derive a general theorem that bounds the Frobenius norm error for an estimate of the pair $(\Theta^\star, \Gamma^\star)$ obtained by solving a convex optimization problem that combines the nuclear norm with a general decomposable regularizer. Our results utilize a "spikiness" condition that is related to but milder than singular vector incoherence. We specialize our general result to two cases that have been studied in past work: low rank plus an entrywise sparse matrix, and low rank plus a columnwise sparse matrix. For both models, our theory yields non-asymptotic Frobenius error bounds for both deterministic and stochastic noise matrices, and applies to matrices $\Theta^\star$ that can be exactly or approximately low rank, and matrices $\Gamma^\star$ that can be exactly or approximately sparse. Moreover, for the case of stochastic noise matrices and the identity observation operator, we establish matching lower bounds on the minimax error. The sharpness of our predictions is confirmed by numerical simulations.
[ "['Alekh Agarwal' 'Sahand N. Negahban' 'Martin J. Wainwright']", "Alekh Agarwal and Sahand N. Negahban and Martin J. Wainwright" ]
stat.ML cs.LG cs.SY math.OC stat.AP
null
1102.5288
null
null
http://arxiv.org/pdf/1102.5288v2
2011-09-09T19:06:10Z
2011-02-25T17:13:00Z
Sparse Bayesian Methods for Low-Rank Matrix Estimation
Recovery of low-rank matrices has recently seen significant activity in many areas of science and engineering, motivated by recent theoretical results for exact reconstruction guarantees and interesting practical applications. A number of methods have been developed for this recovery problem. However, a principled method for choosing the unknown target rank is generally not provided. In this paper, we present novel recovery algorithms for estimating low-rank matrices in matrix completion and robust principal component analysis based on sparse Bayesian learning (SBL) principles. Starting from a matrix factorization formulation and enforcing the low-rank constraint in the estimates as a sparsity constraint, we develop an approach that is very effective in determining the correct rank while providing high recovery performance. We provide connections with existing methods in other similar problems and empirical results and comparisons with current state-of-the-art methods that illustrate the effectiveness of this approach.
[ "S. Derin Babacan, Martin Luessi, Rafael Molina, Aggelos K. Katsaggelos", "['S. Derin Babacan' 'Martin Luessi' 'Rafael Molina'\n 'Aggelos K. Katsaggelos']" ]
cond-mat.stat-mech cs.IT cs.LG math.IT
null
1102.5396
null
null
http://arxiv.org/pdf/1102.5396v1
2011-02-26T09:31:25Z
2011-02-26T09:31:25Z
Deformed Statistics Free Energy Model for Source Separation using Unsupervised Learning
A generalized-statistics variational principle for source separation is formulated by recourse to Tsallis' entropy subjected to the additive duality and employing constraints described by normal averages. The variational principle is amalgamated with Hopfield-like learning rules resulting in an unsupervised learning model. The update rules are formulated with the aid of q-deformed calculus. Numerical examples exemplify the efficacy of this model.
[ "R. C. Venkatesan and A. Plastino", "['R. C. Venkatesan' 'A. Plastino']" ]
cs.AI cs.LG
null
1102.5561
null
null
http://arxiv.org/pdf/1102.5561v2
2011-03-01T07:35:59Z
2011-02-27T23:47:13Z
Decision Making Agent Searching for Markov Models in Near-Deterministic World
Reinforcement learning has solid foundations, but becomes inefficient in partially observed (non-Markovian) environments. Thus, a learning agent (born with a representation and a policy) might wish to investigate to what extent the Markov property holds. We propose a learning architecture that utilizes combinatorial policy optimization to overcome non-Markovity and to develop efficient behaviors, which are easy to inherit, tests the Markov property of the behavioral states, and corrects against non-Markovity by running a deterministic factored Finite State Model, which can be learned. We illustrate the properties of the architecture in the near-deterministic Ms. Pac-Man game. We analyze the architecture from the point of view of evolutionary, individual, and social learning.
[ "['Gabor Matuz' 'Andras Lorincz']", "Gabor Matuz and Andras Lorincz" ]
cs.IT cs.LG math.IT
10.1109/WCNC.2011.5779375
1102.5593
null
null
http://arxiv.org/abs/1102.5593v2
2011-03-03T13:10:48Z
2011-02-28T04:16:24Z
Low Complexity Kolmogorov-Smirnov Modulation Classification
The Kolmogorov-Smirnov (K-S) test, a non-parametric method for measuring the goodness of fit, is applied to automatic modulation classification (AMC) in this paper. The basic procedure involves computing the empirical cumulative distribution function (ECDF) of some decision statistic derived from the received signal, and comparing it with the CDFs of the signal under each candidate modulation format. The K-S-based modulation classifier is first developed for the AWGN channel, then it is applied to OFDM-SDMA systems to cancel multiuser interference. Regarding the complexity of K-S modulation classification, we propose a low-complexity method based on the robustness of the K-S classifier. Extensive simulation results demonstrate that, compared with the traditional cumulant-based classifiers, the proposed K-S classifier offers superior classification performance and requires fewer signal samples (and is thus fast).
[ "Fanggang Wang, Rongtao Xu, Zhangdui Zhong", "['Fanggang Wang' 'Rongtao Xu' 'Zhangdui Zhong']" ]
cs.NA cs.LG
null
1102.5597
null
null
http://arxiv.org/pdf/1102.5597v1
2011-02-28T05:26:58Z
2011-02-28T05:26:58Z
Fast and Faster: A Comparison of Two Streamed Matrix Decomposition Algorithms
With the explosion in the size of digital datasets, the limiting factor for decomposition algorithms is the \emph{number of passes} over the input, as the input is often stored out-of-core or even off-site. Moreover, we are only interested in algorithms that operate in \emph{constant memory} with respect to the input size, so that arbitrarily large inputs can be processed. In this paper, we present a practical comparison of two such algorithms: a distributed method that operates in a single pass over the input vs. a streamed two-pass stochastic algorithm. The experiments track the effect of distributed computing, oversampling and memory trade-offs on the accuracy and performance of the two algorithms. To ensure meaningful results, we choose the input to be a real dataset, namely the whole of the English Wikipedia, in the application setting of Latent Semantic Analysis.
[ "['Radim Řeh{ů}řek']", "Radim \\v{R}eh{\\r{u}}\\v{r}ek" ]
cs.IR cs.LG
10.5121/ijmit.2011.3104
1102.5728
null
null
http://arxiv.org/abs/1102.5728v1
2011-02-28T18:33:09Z
2011-02-28T18:33:09Z
Named Entity Recognition Using Web Document Corpus
This paper introduces a named entity recognition approach for textual corpora. A Named Entity (NE) can be a location, person, organization, date, time, etc., characterized by instances. A NE is found in texts accompanied by contexts: words to the left or right of the NE. The work mainly aims at identifying contexts that indicate the nature of the NE. For instance, the occurrence of the word "President" in a text means that this word or context may be followed by the name of a president, as in President "Obama". Likewise, a word preceded by the string "footballer" is likely the name of a footballer. NE recognition may be viewed as a classification task, where every word is assigned to a NE class based on its context. The aim of this study is then to identify and classify the contexts that are most relevant for recognizing a NE, namely those that are frequently found with the NE. A learning approach using a training corpus of web documents, constructed from learning examples, is then proposed. Frequency representations and modified tf-idf representations are used to calculate context weights associated with context frequency, learning example frequency, and document frequency in the corpus.
[ "['Wahiba Ben Abdessalem Karaa']", "Wahiba Ben Abdessalem Karaa" ]
stat.ML cs.LG math.ST stat.TH
null
1102.5750
null
null
http://arxiv.org/pdf/1102.5750v1
2011-02-28T19:31:41Z
2011-02-28T19:31:41Z
Neyman-Pearson classification, convexity and stochastic constraints
Motivated by problems of anomaly detection, this paper implements the Neyman-Pearson paradigm to deal with asymmetric errors in binary classification with a convex loss. Given a finite collection of classifiers, we combine them and obtain a new classifier that simultaneously satisfies the following two properties with high probability: (i) its probability of type I error is below a pre-specified level, and (ii) its probability of type II error is close to the minimum possible. The proposed classifier is obtained by solving an optimization problem with an empirical objective and an empirical constraint. New techniques to handle such problems are developed and have consequences for chance constrained programming.
[ "Philippe Rigollet and Xin Tong", "['Philippe Rigollet' 'Xin Tong']" ]
cs.DC cs.CR cs.LG
null
1103.0086
null
null
http://arxiv.org/pdf/1103.0086v1
2011-03-01T06:03:15Z
2011-03-01T06:03:15Z
A generic trust framework for large-scale open systems using machine learning
In many large scale distributed systems and on the web, agents need to interact with other unknown agents to carry out tasks or transactions. The ability to reason about and assess the potential risks in carrying out such transactions is essential for providing a safe and reliable environment. A traditional approach to reasoning about the trustworthiness of a transaction is to determine the trustworthiness of the specific agent involved, derived from the history of its behavior. As a departure from such traditional trust models, we propose a generic, machine learning based trust framework in which an agent uses its own previous transactions (with other agents) to build a knowledge base, and utilizes it to assess the trustworthiness of a transaction based on associated features, which are capable of distinguishing successful transactions from unsuccessful ones. These features are harnessed using appropriate machine learning algorithms to extract relationships between the potential transaction and previous transactions. Trace-driven experiments using a real auction dataset show that this approach provides good accuracy and is highly efficient compared to other trust mechanisms, especially when historical information about the specific agent is rare, incomplete or inaccurate.
[ "Xin Liu and Gilles Tredan and Anwitaman Datta", "['Xin Liu' 'Gilles Tredan' 'Anwitaman Datta']" ]
cs.LG stat.ML
null
1103.0102
null
null
http://arxiv.org/pdf/1103.0102v2
2011-03-03T00:00:13Z
2011-03-01T08:15:28Z
Multi-label Learning via Structured Decomposition and Group Sparsity
In multi-label learning, each sample is associated with several labels. Existing works indicate that exploring correlations between labels improves the prediction performance. However, embedding the label correlations into the training process significantly increases the problem size. Moreover, the mapping of the label structure in the feature space is not clear. In this paper, we propose a novel multi-label learning method, "Structured Decomposition + Group Sparsity (SDGS)". In SDGS, we learn a feature subspace for each label from the structured decomposition of the training data, and predict the labels of a new sample from its group sparse representation on the multi-subspace obtained from the structured decomposition. In particular, in the training stage, we decompose the data matrix $X\in R^{n\times p}$ as $X=\sum_{i=1}^kL^i+S$, wherein the rows of $L^i$ associated with samples that belong to label $i$ are nonzero and form a low-rank matrix, while the other rows are all zeros; the residual $S$ is a sparse matrix. The row space of $L^i$ is the feature subspace corresponding to label $i$. This decomposition can be efficiently obtained via randomized optimization. In the prediction stage, we estimate the group sparse representation of a new sample on the multi-subspace via group \emph{lasso}. The nonzero representation coefficients tend to concentrate on the subspaces of the labels that the sample belongs to, and thus an effective prediction can be obtained. We evaluate SDGS on several real datasets and compare it with popular methods. Results verify the effectiveness and efficiency of SDGS.
[ "Tianyi Zhou and Dacheng Tao", "['Tianyi Zhou' 'Dacheng Tao']" ]
cs.LG cs.CL
null
1103.0398
null
null
http://arxiv.org/pdf/1103.0398v1
2011-03-02T11:34:50Z
2011-03-02T11:34:50Z
Natural Language Processing (almost) from Scratch
We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including: part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.
[ "['Ronan Collobert' 'Jason Weston' 'Leon Bottou' 'Michael Karlen'\n 'Koray Kavukcuoglu' 'Pavel Kuksa']", "Ronan Collobert, Jason Weston, Leon Bottou, Michael Karlen, Koray\n Kavukcuoglu, Pavel Kuksa" ]
cs.LG
null
1103.0598
null
null
http://arxiv.org/pdf/1103.0598v1
2011-03-03T02:46:51Z
2011-03-03T02:46:51Z
Learning transformed product distributions
We consider the problem of learning an unknown product distribution $X$ over $\{0,1\}^n$ using samples $f(X)$ where $f$ is a \emph{known} transformation function. Each choice of a transformation function $f$ specifies a learning problem in this framework. Information-theoretic arguments show that for every transformation function $f$ the corresponding learning problem can be solved to accuracy $\epsilon$, using $\tilde{O}(n/\epsilon^2)$ examples, by a generic algorithm whose running time may be exponential in $n$. We show that this learning problem can be computationally intractable even for constant $\epsilon$ and rather simple transformation functions. Moreover, the above sample complexity bound is nearly optimal for the general problem, as we give a simple explicit linear transformation function $f(x)=w \cdot x$ with integer weights $w_i \leq n$ and prove that the corresponding learning problem requires $\Omega(n)$ samples. As our main positive result we give a highly efficient algorithm for learning a sum of independent unknown Bernoulli random variables, corresponding to the transformation function $f(x)= \sum_{i=1}^n x_i$. Our algorithm learns to $\epsilon$-accuracy in $\mathrm{poly}(n)$ time, using a surprising $\mathrm{poly}(1/\epsilon)$ number of samples that is independent of $n$. We also give an efficient algorithm that uses $\log n \cdot \mathrm{poly}(1/\epsilon)$ samples but has running time that is only $\mathrm{poly}(\log n, 1/\epsilon)$.
[ "['Constantinos Daskalakis' 'Ilias Diakonikolas' 'Rocco A. Servedio']", "Constantinos Daskalakis, Ilias Diakonikolas, Rocco A. Servedio" ]
cs.LG cs.IT math.IT stat.ML
10.1109/TSP.2011.2165952
1103.0769
null
null
http://arxiv.org/abs/1103.0769v2
2011-09-07T00:03:45Z
2011-03-03T20:21:28Z
Sparse Volterra and Polynomial Regression Models: Recoverability and Estimation
Volterra and polynomial regression models play a major role in nonlinear system identification and inference tasks. Exciting applications ranging from neuroscience to genome-wide association analysis build on these models with the additional requirement of parsimony. This requirement has high interpretative value, but unfortunately cannot be met by least-squares based or kernel regression methods. To this end, compressed sampling (CS) approaches, already successful in linear regression settings, can offer a viable alternative. The viability of CS for sparse Volterra and polynomial models is the core theme of this work. A common sparse regression task is initially posed for the two models. Building on (weighted) Lasso-based schemes, an adaptive RLS-type algorithm is developed for sparse polynomial regressions. The identifiability of polynomial models is critically challenged by dimensionality. However, following the CS principle, when these models are sparse, they could be recovered by far fewer measurements. To quantify the sufficient number of measurements for a given level of sparsity, restricted isometry properties (RIP) are investigated in commonly met polynomial regression settings, generalizing known results for their linear counterparts. The merits of the novel (weighted) adaptive CS algorithms to sparse polynomial modeling are verified through synthetic as well as real data tests for genotype-phenotype analysis.
[ "['Vassilis Kekatos' 'Georgios B. Giannakis']", "Vassilis Kekatos and Georgios B. Giannakis" ]
cs.LG cs.CL
null
1103.0890
null
null
http://arxiv.org/pdf/1103.0890v2
2013-05-04T13:57:32Z
2011-03-04T13:08:59Z
Efficient Multi-Template Learning for Structured Prediction
Conditional random fields (CRFs) and structural support vector machines (structural SVMs) are two state-of-the-art methods for structured prediction that capture the interdependencies among output variables. The success of these methods is attributed to the fact that their discriminative models are able to account for overlapping features on the whole input observations. These features are usually generated by applying a given set of templates on labeled data, but improper templates may lead to degraded performance. To alleviate this issue, in this paper, we propose a novel multiple template learning paradigm to learn structured prediction and the importance of each template simultaneously, so that hundreds of arbitrary templates can be added to the learning model without degrading performance. This paradigm can be formulated as a special multiple kernel learning problem with an exponential number of constraints. We then introduce an efficient cutting plane algorithm to solve this problem in the primal and present its convergence. We also evaluate the proposed learning paradigm on two widely-studied structured prediction tasks, \emph{i.e.} sequence labeling and dependency parsing. Extensive experimental results show that the proposed method outperforms CRFs and structural SVMs by exploiting the importance of each template. Our complexity analysis and empirical results also show that our proposed method is more efficient than OnlineMKL on very sparse and high-dimensional data. We further extend this paradigm to structured prediction using generalized $p$-block norm regularization with $p>1$, and experiments show competitive performance when $p \in [1,2)$.
[ "['Qi Mao' 'Ivor W. Tsang']", "Qi Mao, Ivor W. Tsang" ]
stat.ML cs.LG math.PR
null
1103.0941
null
null
http://arxiv.org/pdf/1103.0941v1
2011-03-04T16:29:04Z
2011-03-04T16:29:04Z
Estimating $\beta$-mixing coefficients
The literature on statistical learning for time series assumes the asymptotic independence or ``mixing'' of the data-generating process. These mixing assumptions are never tested, nor are there methods for estimating mixing rates from data. We give an estimator for the $\beta$-mixing rate based on a single stationary sample path and show that it is $L_1$-risk consistent.
[ "Daniel J. McDonald, Cosma Rohilla Shalizi, Mark Schervish (Carnegie\n Mellon University)", "['Daniel J. McDonald' 'Cosma Rohilla Shalizi' 'Mark Schervish']" ]
stat.ML cs.LG
null
1103.0942
null
null
http://arxiv.org/pdf/1103.0942v2
2011-06-03T19:08:19Z
2011-03-04T16:38:55Z
Generalization error bounds for stationary autoregressive models
We derive generalization error bounds for stationary univariate autoregressive (AR) models. We show that imposing stationarity is enough to control the Gaussian complexity without further regularization. This lets us use structural risk minimization for model selection. We demonstrate our methods by predicting interest rate movements.
[ "Daniel J. McDonald, Cosma Rohilla Shalizi, Mark Schervish (Carnegie\n Mellon University)", "['Daniel J. McDonald' 'Cosma Rohilla Shalizi' 'Mark Schervish']" ]
stat.ML cs.LG physics.data-an stat.ME
null
1103.0949
null
null
http://arxiv.org/pdf/1103.0949v2
2011-06-28T23:25:41Z
2011-03-04T17:04:20Z
Adapting to Non-stationarity with Growing Expert Ensembles
When dealing with time series with complex non-stationarities, low retrospective regret on individual realizations is a more appropriate goal than low prospective risk in expectation. Online learning algorithms provide powerful guarantees of this form, and have often been proposed for use with non-stationary processes because of their ability to switch between different forecasters or ``experts''. However, existing methods assume that the set of experts whose forecasts are to be combined are all given at the start, which is not plausible when dealing with a genuinely historical or evolutionary system. We show how to modify the ``fixed shares'' algorithm for tracking the best expert to cope with a steadily growing set of experts, obtained by fitting new models to new data as it becomes available, and obtain regret bounds for the growing ensemble.
[ "Cosma Rohilla Shalizi, Abigail Z. Jacobs, Kristina Lisa Klinkner,\n Aaron Clauset", "['Cosma Rohilla Shalizi' 'Abigail Z. Jacobs' 'Kristina Lisa Klinkner'\n 'Aaron Clauset']" ]
cs.LG
10.1109/TPAMI.2012.266
1103.1013
null
null
http://arxiv.org/abs/1103.1013v2
2013-05-04T14:48:06Z
2011-03-05T07:10:41Z
A Feature Selection Method for Multivariate Performance Measures
Feature selection with specific multivariate performance measures is the key to the success of many applications, such as image retrieval and text classification. The existing feature selection methods are usually designed for classification error. In this paper, we propose a generalized sparse regularizer. Based on the proposed regularizer, we present a unified feature selection framework for general loss functions. In particular, we study the novel feature selection paradigm by optimizing multivariate performance measures. The resultant formulation is a challenging problem for high-dimensional data. Hence, a two-layer cutting plane algorithm is proposed to solve this problem, and the convergence is presented. In addition, we adapt the proposed method to optimize multivariate measures for multiple instance learning problems. The analyses by comparing with the state-of-the-art feature selection methods show that the proposed method is superior to others. Extensive experiments on large-scale and high-dimensional real world datasets show that the proposed method outperforms $l_1$-SVM and SVM-RFE when choosing a small subset of features, and achieves significantly improved performances over SVM$^{perf}$ in terms of $F_1$-score.
[ "['Qi Mao' 'Ivor W. Tsang']", "Qi Mao, Ivor W. Tsang" ]
math.ST cs.LG cs.SY math.OC math.PR stat.TH
10.1007/s10208-012-9129-5
1103.1417
null
null
http://arxiv.org/abs/1103.1417v4
2012-11-21T02:31:50Z
2011-03-08T02:34:40Z
Localization from Incomplete Noisy Distance Measurements
We consider the problem of positioning a cloud of points in the Euclidean space $\mathbb{R}^d$, using noisy measurements of a subset of pairwise distances. This task has applications in various areas, such as sensor network localization and reconstruction of protein conformations from NMR measurements. Also, it is closely related to dimensionality reduction problems and manifold learning, where the goal is to learn the underlying global geometry of a data set using local (or partial) metric information. Here we propose a reconstruction algorithm based on semidefinite programming. For a random geometric graph model and uniformly bounded noise, we provide a precise characterization of the algorithm's performance: In the noiseless case, we find a radius $r_0$ beyond which the algorithm reconstructs the exact positions (up to rigid transformations). In the presence of noise, we obtain upper and lower bounds on the reconstruction error that match up to a factor that depends only on the dimension $d$, and the average degree of the nodes in the graph.
[ "Adel Javanmard, Andrea Montanari", "['Adel Javanmard' 'Andrea Montanari']" ]
cs.CG cs.LG
null
1103.1625
null
null
http://arxiv.org/pdf/1103.1625v2
2011-03-09T23:22:09Z
2011-03-08T20:50:55Z
A Gentle Introduction to the Kernel Distance
This document reviews the definition of the kernel distance, providing a gentle introduction tailored to a reader with background in theoretical computer science, but limited exposure to technology more common to machine learning, functional analysis and geometric measure theory. The key aspect of the kernel distance developed here is its interpretation as an L_2 distance between probability measures or various shapes (e.g. point sets, curves, surfaces) embedded in a vector space (specifically an RKHS). This structure enables several elegant and efficient solutions to data analysis problems. We conclude with a glimpse into the mathematical underpinnings of this measure, highlighting its recent independent evolution in two separate fields.
[ "['Jeff M. Phillips' 'Suresh Venkatasubramanian']", "Jeff M. Phillips, Suresh Venkatasubramanian" ]
cs.IT cs.LG math.IT math.ST q-fin.ST stat.ML stat.TH
null
1103.1689
null
null
http://arxiv.org/pdf/1103.1689v1
2011-03-09T02:03:17Z
2011-03-09T02:03:17Z
Information Theoretic Limits on Learning Stochastic Differential Equations
Consider the problem of learning the drift coefficient of a stochastic differential equation from a sample path. In this paper, we assume that the drift is parametrized by a high dimensional vector. We address the question of how long the system needs to be observed in order to learn this vector of parameters. We prove a general lower bound on this time complexity by using a characterization of mutual information as time integral of conditional variance, due to Kadota, Zakai, and Ziv. This general lower bound is applied to specific classes of linear and non-linear stochastic differential equations. In the linear case, the problem under consideration is the one of learning a matrix of interaction coefficients. We evaluate our lower bound for ensembles of sparse and dense random matrices. The resulting estimates match the qualitative behavior of upper bounds achieved by computationally efficient procedures.
[ "['José Bento' 'Morteza Ibrahimi' 'Andrea Montanari']", "Jos\\'e Bento and Morteza Ibrahimi and Andrea Montanari" ]
cs.LG cs.DC stat.ML
10.1109/ICDM.2011.39
1103.2068
null
null
http://arxiv.org/abs/1103.2068v2
2011-09-08T16:20:45Z
2011-03-10T16:15:42Z
COMET: A Recipe for Learning and Using Large Ensembles on Massive Data
COMET is a single-pass MapReduce algorithm for learning on large-scale data. It builds multiple random forest ensembles on distributed blocks of data and merges them into a mega-ensemble. This approach is appropriate when learning from massive-scale data that is too large to fit on a single machine. To get the best accuracy, IVoting should be used instead of bagging to generate the training subset for each decision tree in the random forest. Experiments with two large datasets (5GB and 50GB compressed) show that COMET compares favorably (in both accuracy and training time) to learning on a subsample of data using a serial algorithm. Finally, we propose a new Gaussian approach for lazy ensemble evaluation which dynamically decides how many ensemble members to evaluate per data point; this can reduce evaluation cost by 100X or more.
[ "['Justin D. Basilico' 'M. Arthur Munson' 'Tamara G. Kolda'\n 'Kevin R. Dixon' 'W. Philip Kegelmeyer']", "Justin D. Basilico and M. Arthur Munson and Tamara G. Kolda and Kevin\n R. Dixon and W. Philip Kegelmeyer" ]
cs.LG cs.GT cs.SY math.OC
null
1103.2491
null
null
http://arxiv.org/pdf/1103.2491v1
2011-03-13T03:18:55Z
2011-03-13T03:18:55Z
Heterogeneous Learning in Zero-Sum Stochastic Games with Incomplete Information
Learning algorithms are essential for the applications of game theory in a networking environment. In dynamic and decentralized settings where the traffic, topology and channel states may vary over time and the communication between agents is impractical, it is important to formulate and study games of incomplete information and fully distributed learning algorithms which for each agent requires a minimal amount of information regarding the remaining agents. In this paper, we address this major challenge and introduce heterogeneous learning schemes in which each agent adopts a distinct learning pattern in the context of games with incomplete information. We use stochastic approximation techniques to show that the heterogeneous learning schemes can be studied in terms of their deterministic ordinary differential equation (ODE) counterparts. Depending on the learning rates of the players, these ODEs could be different from the standard replicator dynamics, (myopic) best response (BR) dynamics, logit dynamics, and fictitious play dynamics. We apply the results to a class of security games in which the attacker and the defender adopt different learning schemes due to differences in their rationality levels and the information they acquire.
[ "Quanyan Zhu, Hamidou Tembine and Tamer Basar", "['Quanyan Zhu' 'Hamidou Tembine' 'Tamer Basar']" ]
cs.LG cs.IR cs.SD
null
1103.2832
null
null
http://arxiv.org/pdf/1103.2832v1
2011-03-15T02:39:31Z
2011-03-15T02:39:31Z
Autotagging music with conditional restricted Boltzmann machines
This paper describes two applications of conditional restricted Boltzmann machines (CRBMs) to the task of autotagging music. The first consists of training a CRBM to predict the tags that a user would apply to a clip of a song based on tags already applied by other users. By learning the relationships between tags, this model is able to pre-process training data to significantly improve the performance of a support vector machine (SVM) autotagger. The second is the use of a discriminative RBM, a type of CRBM, to autotag music. By simultaneously exploiting the relationships among tags and between tags and audio-based features, this model is able to significantly outperform SVMs, logistic regression, and multi-layer perceptrons. In order to be applied to this problem, the discriminative RBM was generalized to the multi-label setting, and four different learning algorithms for it were evaluated, the first such in-depth analysis of which we are aware.
[ "['Michael Mandel' 'Razvan Pascanu' 'Hugo Larochelle' 'Yoshua Bengio']", "Michael Mandel, Razvan Pascanu, Hugo Larochelle and Yoshua Bengio" ]
cs.LG stat.ML
null
1103.3095
null
null
http://arxiv.org/pdf/1103.3095v1
2011-03-16T04:54:58Z
2011-03-16T04:54:58Z
A note on active learning for smooth problems
We show that the disagreement coefficient of certain smooth hypothesis classes is $O(m)$, where $m$ is the dimension of the hypothesis space, thereby answering a question posed in \cite{friedman09}.
[ "Satyaki Mahalanabis", "['Satyaki Mahalanabis']" ]
cs.GT cs.LG cs.NI
10.1109/JSAC.2012.1201xx
1103.3541
null
null
http://arxiv.org/abs/1103.3541v2
2011-11-16T13:34:16Z
2011-03-18T00:40:42Z
Distributed Learning Policies for Power Allocation in Multiple Access Channels
We analyze the problem of distributed power allocation for orthogonal multiple access channels by considering a continuous non-cooperative game whose strategy space represents the users' distribution of transmission power over the network's channels. When the channels are static, we find that this game admits an exact potential function and this allows us to show that it has a unique equilibrium almost surely. Furthermore, using the game's potential property, we derive a modified version of the replicator dynamics of evolutionary game theory which applies to this continuous game, and we show that if the network's users employ a distributed learning scheme based on these dynamics, then they converge to equilibrium exponentially quickly. On the other hand, a major challenge occurs if the channels do not remain static but fluctuate stochastically over time, following a stationary ergodic process. In that case, the associated ergodic game still admits a unique equilibrium, but the learning analysis becomes much more complicated because the replicator dynamics are no longer deterministic. Nonetheless, by employing results from the theory of stochastic approximation, we show that users still converge to the game's unique equilibrium. Our analysis hinges on a game-theoretical result which is of independent interest: in finite player games which admit a (possibly nonlinear) convex potential function, the replicator dynamics (suitably modified to account for nonlinear payoffs) converge to an $\epsilon$-neighborhood of an equilibrium in time of order $O(\log(1/\epsilon))$.
[ "['Panayotis Mertikopoulos' 'Elena V. Belmega' 'Aris L. Moustakas'\n 'Samson Lasaulce']", "Panayotis Mertikopoulos and Elena V. Belmega and Aris L. Moustakas and\n Samson Lasaulce" ]
cs.IR cs.AI cs.LG
null
1103.3735
null
null
http://arxiv.org/pdf/1103.3735v1
2011-03-19T00:08:45Z
2011-03-19T00:08:45Z
Refining Recency Search Results with User Click Feedback
Traditional machine-learned ranking systems for web search are often trained to capture stationary relevance of documents to queries, which has limited ability to track non-stationary user intention in a timely manner. In recency search, for instance, the relevance of documents to a query on breaking news often changes significantly over time, requiring effective adaptation to user intention. In this paper, we focus on recency search and study a number of algorithms to improve ranking results by leveraging user click feedback. Our contributions are three-fold. First, we use real search sessions collected in a random exploration bucket for \emph{reliable} offline evaluation of these algorithms, which provides an unbiased comparison across algorithms without online bucket tests. Second, we propose a re-ranking approach to improve search results for recency queries using user clicks. Third, our empirical comparison of a dozen algorithms on real-life search data suggests importance of a few algorithmic choices in these applications, including generalization across different query-document pairs, specialization to popular queries, and real-time adaptation of user clicks.
[ "['Taesup Moon' 'Wei Chu' 'Lihong Li' 'Zhaohui Zheng' 'Yi Chang']", "Taesup Moon and Wei Chu and Lihong Li and Zhaohui Zheng and Yi Chang" ]
cond-mat.dis-nn cs.LG physics.bio-ph
10.1088/1742-6596/297/1/012012
1103.3787
null
null
http://arxiv.org/abs/1103.3787v1
2011-03-19T14:57:03Z
2011-03-19T14:57:03Z
Pattern-recalling processes in quantum Hopfield networks far from saturation
As a mathematical model of associative memories, the Hopfield model is now well-established, and many studies have been carried out from various approaches to reveal its pattern-recalling process. As is well known, a single neuron is itself an uncertain, noisy unit with a finite, non-negligible error in the input-output relation. To model this situation artificially, a kind of 'heat bath' that surrounds the neurons is introduced. The heat bath, which is a source of noise, is specified by a 'temperature'. Several studies concerning the pattern-recalling processes of the Hopfield model governed by Glauber dynamics at finite temperature have already been reported. However, we may extend the 'thermal noise' to its quantum-mechanical variant. In this paper, in terms of the stochastic process of the quantum-mechanical Markov chain Monte Carlo method (quantum MCMC), we analytically derive macroscopically deterministic equations for order parameters such as the 'overlap' in a quantum-mechanical variant of the Hopfield neural network (the "quantum Hopfield model" or "quantum Hopfield network"). For the case in which a non-extensive number $p$ of patterns is embedded via asymmetric Hebbian connections, namely $p/N \to 0$ as the number of neurons $N \to \infty$ ('far from saturation'), we evaluate the recalling processes for one of the built-in patterns under the influence of quantum-mechanical noise.
[ "['Jun-ichi Inoue']", "Jun-ichi Inoue" ]
q-bio.QM cs.CL cs.IR cs.LG
null
1103.4090
null
null
http://arxiv.org/pdf/1103.4090v2
2011-04-22T17:46:37Z
2011-03-21T17:33:32Z
A Linear Classifier Based on Entity Recognition Tools and a Statistical Approach to Method Extraction in the Protein-Protein Interaction Literature
We participated in the Article Classification and the Interaction Method subtasks (ACT and IMT, respectively) of the Protein-Protein Interaction task of the BioCreative III Challenge. For the ACT, we pursued extensive testing of available Named Entity Recognition and dictionary tools, and used the most promising ones to extend our Variable Trigonometric Threshold linear classifier. For the IMT, we experimented with a primarily statistical approach, as opposed to employing a deeper natural language processing strategy. Finally, we also studied the benefits of integrating the method extraction approach that we have used for the IMT into the ACT pipeline. For the ACT, our linear article classifier leads to a ranking and classification performance significantly higher than all the reported submissions. For the IMT, our results are comparable to those of other systems, which took very different approaches. For the ACT, we show that the use of named entity recognition tools leads to a substantial improvement in the ranking and classification of articles relevant to protein-protein interaction. Thus, we show that our substantially expanded linear classifier is a very competitive classifier in this domain. Moreover, this classifier produces interpretable surfaces that can be understood as "rules" for human understanding of the classification. In terms of the IMT task, in contrast to other participants, our approach focused on identifying sentences that are likely to bear evidence for the application of a PPI detection method, rather than on classifying a document as relevant to a method. As BioCreative III did not perform an evaluation of the evidence provided by the system, we have conducted a separate assessment; the evaluators agree that our tool is indeed effective in detecting relevant evidence for PPI detection methods.
[ "An\\'alia Louren\\c{c}o, Michael Conover, Andrew Wong, Azadeh\n Nematzadeh, Fengxia Pan, Hagit Shatkay, Luis M. Rocha", "['Anália Lourenço' 'Michael Conover' 'Andrew Wong' 'Azadeh Nematzadeh'\n 'Fengxia Pan' 'Hagit Shatkay' 'Luis M. Rocha']" ]
cs.LG
null
1103.4204
null
null
http://arxiv.org/pdf/1103.4204v1
2011-03-22T04:54:35Z
2011-03-22T04:54:35Z
Parallel Online Learning
In this work we study parallelization of online learning, a core primitive in machine learning. In a parallel environment all known approaches for parallel online learning lead to delayed updates, where the model is updated using out-of-date information. In the worst case, or when examples are temporally correlated, delay can have a very adverse effect on the learning algorithm. Here, we analyze and present preliminary empirical results on a set of learning architectures based on a feature sharding approach that present various tradeoffs between delay, degree of parallelism, representation power and empirical performance.
[ "Daniel Hsu, Nikos Karampatziakis, John Langford, Alex Smola", "['Daniel Hsu' 'Nikos Karampatziakis' 'John Langford' 'Alex Smola']" ]
cs.LG stat.ML
null
1103.4480
null
null
http://arxiv.org/pdf/1103.4480v1
2011-03-23T10:20:14Z
2011-03-23T10:20:14Z
Clustered regression with unknown clusters
We consider a collection of prediction experiments, which are clustered in the sense that groups of experiments exhibit similar relationship between the predictor and response variables. The experiment clusters as well as the regression relationships are unknown. The regression relationships define the experiment clusters, and in general, the predictor and response variables may not exhibit any clustering. We call this prediction problem clustered regression with unknown clusters (CRUC) and in this paper we focus on linear regression. We study and compare several methods for CRUC, demonstrate their applicability to the Yahoo Learning-to-rank Challenge (YLRC) dataset, and investigate an associated mathematical model. CRUC is at the crossroads of many prior works and we study several prediction algorithms with diverse origins: an adaptation of the expectation-maximization algorithm, an approach inspired by K-means clustering, the singular value thresholding approach to matrix rank minimization under quadratic constraints, an adaptation of the Curds and Whey method in multiple regression, and a local regression (LoR) scheme reminiscent of neighborhood methods in collaborative filtering. Based on empirical evaluation on the YLRC dataset as well as simulated data, we identify the LoR method as a good practical choice: it yields best or near-best prediction performance at a reasonable computational load, and it is less sensitive to the choice of the algorithm parameter. We also provide some analysis of the LoR method for an associated mathematical model, which sheds light on optimal parameter choice and prediction performance.
[ "['Kishor Barman' 'Onkar Dabeer']", "Kishor Barman, Onkar Dabeer" ]
cs.LG cs.AI cs.CV cs.NE
null
1103.4487
null
null
http://arxiv.org/pdf/1103.4487v1
2011-03-23T10:38:50Z
2011-03-23T10:38:50Z
Handwritten Digit Recognition with a Committee of Deep Neural Nets on GPUs
The competitive MNIST handwritten digit recognition benchmark has a long history of broken records since 1998. The most recent substantial improvement by others dates back 7 years (error rate 0.4%). Recently we were able to significantly improve this result, using graphics cards to greatly speed up the training of simple but deep MLPs, which achieved 0.35%, outperforming all previous, more complex methods. Here we report another substantial improvement: 0.31% obtained using a committee of MLPs.
[ "Dan C. Cire\\c{s}an, Ueli Meier, Luca M. Gambardella and J\\\"urgen\n Schmidhuber", "['Dan C. Cireşan' 'Ueli Meier' 'Luca M. Gambardella' 'Jürgen Schmidhuber']" ]
cs.LG cs.AI cs.RO stat.AP stat.ML
null
1103.4601
null
null
http://arxiv.org/pdf/1103.4601v2
2011-05-06T02:38:18Z
2011-03-23T19:37:45Z
Doubly Robust Policy Evaluation and Learning
We study decision making in environments where the reward is only partially observed, but can be modeled as a function of an action and an observed context. This setting, known as contextual bandits, encompasses a wide variety of applications including health-care policy and Internet advertising. A central task is evaluation of a new policy given historic data consisting of contexts, actions and received rewards. The key challenge is that the past data typically does not faithfully represent proportions of actions taken by a new policy. Previous approaches rely either on models of rewards or models of the past policy. The former are plagued by a large bias whereas the latter have a large variance. In this work, we leverage the strength and overcome the weaknesses of the two approaches by applying the doubly robust technique to the problems of policy evaluation and optimization. We prove that this approach yields accurate value estimates when we have either a good (but not necessarily consistent) model of rewards or a good (but not necessarily consistent) model of past policy. Extensive empirical comparison demonstrates that the doubly robust approach uniformly improves over existing techniques, achieving both lower variance in value estimation and better policies. As such, we expect the doubly robust approach to become common practice.
[ "['Miroslav Dudik' 'John Langford' 'Lihong Li']", "Miroslav Dudik and John Langford and Lihong Li" ]
cs.LG stat.ML
null
1103.4896
null
null
http://arxiv.org/pdf/1103.4896v1
2011-03-25T02:33:27Z
2011-03-25T02:33:27Z
Classification of Sets using Restricted Boltzmann Machines
We consider the problem of classification when inputs correspond to sets of vectors. This setting occurs in many problems such as the classification of pieces of mail containing several pages, of web sites with several sections or of images that have been pre-segmented into smaller regions. We propose generalizations of the restricted Boltzmann machine (RBM) that are appropriate in this context and explore how to incorporate different assumptions about the relationship between the input sets and the target class within the RBM. In experiments on standard multiple-instance learning datasets, we demonstrate the competitiveness of approaches based on RBMs and apply the proposed variants to the problem of incoming mail classification.
[ "J\\'er\\^ome Louradour and Hugo Larochelle", "['Jérôme Louradour' 'Hugo Larochelle']" ]
cs.LG cs.CC cs.NE
null
1103.4904
null
null
http://arxiv.org/pdf/1103.4904v1
2011-03-25T04:34:42Z
2011-03-25T04:34:42Z
Distribution-Independent Evolvability of Linear Threshold Functions
Valiant's (2007) model of evolvability models the evolutionary process of acquiring useful functionality as a restricted form of learning from random examples. Linear threshold functions and their various subclasses, such as conjunctions and decision lists, play a fundamental role in learning theory, and hence their evolvability has been the primary focus of research on Valiant's framework (2007). One of the main open problems regarding the model is whether conjunctions are evolvable distribution-independently (Feldman and Valiant, 2008). We show that the answer is negative. Our proof is based on a new combinatorial parameter of a concept class that lower-bounds the complexity of learning from correlations. We contrast the lower bound with a proof that linear threshold functions having a non-negligible margin on the data points are evolvable distribution-independently via a simple mutation algorithm. Our algorithm relies on a non-linear loss function being used to select the hypotheses instead of the 0-1 loss in Valiant's (2007) original definition. The proof of evolvability requires that the loss function satisfy several mild conditions that are, for example, satisfied by the quadratic loss function studied in several other works (Michael, 2007; Feldman, 2009; Valiant, 2010). An important property of our evolution algorithm is monotonicity, that is, the algorithm guarantees evolvability without any decreases in performance. Previously, monotone evolvability was only shown for conjunctions with quadratic loss (Feldman, 2009) or when the distribution on the domain is severely restricted (Michael, 2007; Feldman, 2009; Kanade et al., 2010).
[ "['Vitaly Feldman']", "Vitaly Feldman" ]
cs.IT cs.LG math.IT
null
1103.5985
null
null
http://arxiv.org/pdf/1103.5985v1
2011-03-30T16:30:27Z
2011-03-30T16:30:27Z
On Empirical Entropy
We propose a compression-based version of the empirical entropy of a finite string over a finite alphabet. Whereas previously one considers the naked entropy of (possibly higher order) Markov processes, we consider the sum of the description of the random variable involved plus the entropy it induces. We assume only that the distribution involved is computable. To test the new notion we compare the Normalized Information Distance (the similarity metric) with a related measure based on Mutual Information in Shannon's framework. This way the similarities and differences of the last two concepts are exposed.
[ "['Paul M. B. Vitányi']", "Paul M.B. Vit\\'anyi (CWI and University of Amsterdam)" ]
cs.LG cs.NI math.PR
null
1104.0111
null
null
http://arxiv.org/pdf/1104.0111v1
2011-04-01T08:48:54Z
2011-04-01T08:48:54Z
Decentralized Online Learning Algorithms for Opportunistic Spectrum Access
The fundamental problem of multiple secondary users contending for opportunistic spectrum access over multiple channels in cognitive radio networks has been formulated recently as a decentralized multi-armed bandit (D-MAB) problem. In a D-MAB problem there are $M$ users and $N$ arms (channels) that each offer i.i.d. stochastic rewards with unknown means so long as they are accessed without collision. The goal is to design a decentralized online learning policy that incurs minimal regret, defined as the difference between the total expected rewards accumulated by a model-aware genie, and that obtained by all users applying the policy. We make two contributions in this paper. First, we consider the setting where the users have a prioritized ranking, such that it is desired for the $K$-th-ranked user to learn to access the arm offering the $K$-th highest mean reward. For this problem, we present the first distributed policy that yields regret that is uniformly logarithmic over time without requiring any prior assumption about the mean rewards. Second, we consider the case when a fair access policy is required, i.e., it is desired for all users to experience the same mean reward. For this problem, we present a distributed policy that yields order-optimal regret scaling with respect to the number of users and arms, better than previously proposed policies in the literature. Both of our distributed policies make use of an innovative modification of the well known UCB1 policy for the classic multi-armed bandit problem that allows a single user to learn how to play the arm that yields the $K$-th largest mean reward.
[ "['Yi Gai' 'Bhaskar Krishnamachari']", "Yi Gai and Bhaskar Krishnamachari" ]
cs.LG
null
1104.0235
null
null
http://arxiv.org/pdf/1104.0235v1
2011-04-01T19:33:05Z
2011-04-01T19:33:05Z
Gaussian Robust Classification
Supervised learning is all about the ability to generalize knowledge. Specifically, the goal of learning is to train a classifier using training data, in such a way that it will be capable of classifying new unseen data correctly. In order to achieve this goal, it is important to carefully design the learner so it will not overfit the training data. The latter is usually done by adding a regularization term. Statistical learning theory explains the success of this method by claiming that it restricts the complexity of the learned model. This explanation, however, is rather abstract and does not have a geometric intuition. The generalization error of a classifier may be thought of as correlated with its robustness to perturbations of the data: a classifier that copes with disturbances is expected to generalize well. Indeed, Xu et al. [2009] have shown that the SVM formulation is equivalent to a robust optimization (RO) formulation, in which an adversary displaces the training and testing points within a ball of pre-determined radius. In this work we explore a different kind of robustness, namely replacing each data point with a Gaussian cloud centered at the sample. Loss is evaluated as the expectation of an underlying loss function on the cloud. This setup fits the fact that in many applications the data is sampled along with noise. We develop an RO framework in which the adversary chooses the covariance of the noise. In our algorithm, named GURU, the tuning parameter is a spectral bound on the noise, so it can be estimated using physical or applicative considerations. Our experiments show that this framework performs as well as SVM and even slightly better in some cases. Generalizations for Mercer kernels and for the multiclass case are presented as well. We also show that our framework may be further generalized, using the technique of convex perspective functions.
[ "['Ido Ginodi' 'Amir Globerson']", "Ido Ginodi, Amir Globerson" ]
cs.LG
null
1104.0651
null
null
http://arxiv.org/pdf/1104.0651v3
2011-07-19T14:39:35Z
2011-04-04T19:04:25Z
Meaningful Clustered Forest: an Automatic and Robust Clustering Algorithm
We propose a new clustering technique that can be regarded as a numerical method to compute the proximity gestalt. The method analyzes edge length statistics in the MST of the dataset and provides an a contrario cluster detection criterion. The approach is fully parametric on the chosen distance and can detect arbitrarily shaped clusters. The method is also automatic, in the sense that only a single parameter is left to the user. This parameter has an intuitive interpretation as it controls the expected number of false detections. We show that the iterative application of our method can (1) provide robustness to noise and (2) solve a masking phenomenon in which a highly populated and salient cluster dominates the scene and inhibits the detection of less-populated, but still salient, clusters.
[ "['Mariano Tepper' 'Pablo Musé' 'Andrés Almansa']", "Mariano Tepper, Pablo Mus\\'e, Andr\\'es Almansa" ]
cs.LG stat.ML
null
1104.0729
null
null
http://arxiv.org/pdf/1104.0729v4
2011-06-16T15:40:28Z
2011-04-05T04:28:51Z
Online and Batch Learning Algorithms for Data with Missing Features
We introduce new online and batch algorithms that are robust to data with missing features, a situation that arises in many practical applications. In the online setup, we allow for the comparison hypothesis to change as a function of the subset of features that is observed on any given round, extending the standard setting where the comparison hypothesis is fixed throughout. In the batch setup, we present a convex relaxation of a non-convex problem to jointly estimate an imputation function, used to fill in the values of missing features, along with the classification hypothesis. We prove regret bounds in the online setting and Rademacher complexity bounds for the batch i.i.d. setting. The algorithms are tested on several UCI datasets, showing superior performance over baselines.
[ "['Afshin Rostamizadeh' 'Alekh Agarwal' 'Peter Bartlett']", "Afshin Rostamizadeh, Alekh Agarwal, Peter Bartlett" ]
cs.LG math.OC stat.ME stat.ML
null
1104.1436
null
null
http://arxiv.org/pdf/1104.1436v1
2011-04-07T20:05:48Z
2011-04-07T20:05:48Z
Efficient First Order Methods for Linear Composite Regularizers
A wide class of regularization problems in machine learning and statistics employ a regularization term which is obtained by composing a simple convex function \omega with a linear transformation. This setting includes Group Lasso methods, the Fused Lasso and other total variation methods, multi-task learning methods and many more. In this paper, we present a general approach for computing the proximity operator of this class of regularizers, under the assumption that the proximity operator of the function \omega is known in advance. Our approach builds on a recent line of research on optimal first order optimization methods and uses fixed point iterations for numerically computing the proximity operator. It is more general than current approaches and, as we show with numerical simulations, computationally more efficient than available first order methods which do not achieve the optimal rate. In particular, our method outperforms state of the art O(1/T) methods for overlapping Group Lasso and matches optimal O(1/T^2) methods for the Fused Lasso and tree structured Group Lasso.
[ "Andreas Argyriou, Charles A. Micchelli, Massimiliano Pontil, Lixin\n Shen, Yuesheng Xu", "['Andreas Argyriou' 'Charles A. Micchelli' 'Massimiliano Pontil'\n 'Lixin Shen' 'Yuesheng Xu']" ]
math.ST cs.LG stat.TH
null
1104.1450
null
null
http://arxiv.org/pdf/1104.1450v2
2011-11-02T03:20:06Z
2011-04-07T21:54:09Z
Plug-in Approach to Active Learning
We present a new active learning algorithm based on nonparametric estimators of the regression function. Our investigation provides probabilistic bounds for the rates of convergence of the generalization error achievable by the proposed method over a broad class of underlying distributions. We also prove minimax lower bounds which show that the obtained rates are almost tight.
[ "Stanislav Minsker", "['Stanislav Minsker']" ]
math.PR cs.LG stat.ML
null
1104.1672
null
null
http://arxiv.org/pdf/1104.1672v3
2011-04-16T04:17:23Z
2011-04-09T04:25:04Z
Dimension-free tail inequalities for sums of random matrices
We derive exponential tail inequalities for sums of random matrices with no dependence on the explicit matrix dimensions. These are similar to the matrix versions of the Chernoff bound and Bernstein inequality except with the explicit matrix dimensions replaced by a trace quantity that can be small even when the dimension is large or infinite. Some applications to principal component analysis and approximate matrix multiplication are given to illustrate the utility of the new bounds.
[ "['Daniel Hsu' 'Sham M. Kakade' 'Tong Zhang']", "Daniel Hsu, Sham M. Kakade, Tong Zhang" ]
math.OC cs.LG stat.ML
null
1104.1872
null
null
http://arxiv.org/pdf/1104.1872v3
2011-09-16T05:26:00Z
2011-04-11T08:04:59Z
Convex and Network Flow Optimization for Structured Sparsity
We consider a class of learning problems regularized by a structured sparsity-inducing norm defined as the sum of l_2- or l_infinity-norms over groups of variables. Whereas much effort has been put in developing fast optimization techniques when the groups are disjoint or embedded in a hierarchy, we address here the case of general overlapping groups. To this end, we present two different strategies: On the one hand, we show that the proximal operator associated with a sum of l_infinity-norms can be computed exactly in polynomial time by solving a quadratic min-cost flow problem, allowing the use of accelerated proximal gradient methods. On the other hand, we use proximal splitting techniques, and address an equivalent formulation with non-overlapping groups, but in higher dimension and with additional constraints. We propose efficient and scalable algorithms exploiting these two strategies, which are significantly faster than alternative approaches. We illustrate these methods with several problems such as CUR matrix factorization, multi-task learning of tree-structured dictionaries, background subtraction in video sequences, image denoising with wavelets, and topographic dictionary learning of natural image patches.
[ "Julien Mairal, Rodolphe Jenatton (LIENS, INRIA Paris - Rocquencourt),\n Guillaume Obozinski (LIENS, INRIA Paris - Rocquencourt), Francis Bach (LIENS,\n INRIA Paris - Rocquencourt)", "['Julien Mairal' 'Rodolphe Jenatton' 'Guillaume Obozinski' 'Francis Bach']" ]
cs.LG stat.ML
10.1007/s10618-012-0302-x
1104.1990
null
null
http://arxiv.org/abs/1104.1990v3
2013-02-19T16:17:49Z
2011-04-11T16:38:50Z
Adaptive Evolutionary Clustering
In many practical applications of clustering, the objects to be clustered evolve over time, and a clustering result is desired at each time step. In such applications, evolutionary clustering typically outperforms traditional static clustering by producing clustering results that reflect long-term trends while being robust to short-term variations. Several evolutionary clustering algorithms have recently been proposed, often by adding a temporal smoothness penalty to the cost function of a static clustering method. In this paper, we introduce a different approach to evolutionary clustering by accurately tracking the time-varying proximities between objects followed by static clustering. We present an evolutionary clustering framework that adaptively estimates the optimal smoothing parameter using shrinkage estimation, a statistical approach that improves a naive estimate using additional information. The proposed framework can be used to extend a variety of static clustering algorithms, including hierarchical, k-means, and spectral clustering, into evolutionary clustering algorithms. Experiments on synthetic and real data sets indicate that the proposed framework outperforms static clustering and existing evolutionary clustering algorithms in many scenarios.
[ "Kevin S. Xu, Mark Kliger, Alfred O. Hero III", "['Kevin S. Xu' 'Mark Kliger' 'Alfred O. Hero III']" ]
cs.AI cs.LG stat.ML
null
1104.2018
null
null
http://arxiv.org/pdf/1104.2018v1
2011-04-11T18:24:01Z
2011-04-11T18:24:01Z
Efficient Learning of Generalized Linear and Single Index Models with Isotonic Regression
Generalized Linear Models (GLMs) and Single Index Models (SIMs) provide powerful generalizations of linear regression, where the target variable is assumed to be a (possibly unknown) 1-dimensional function of a linear predictor. In general, these problems entail non-convex estimation procedures, and, in practice, iterative local search heuristics are often used. Kalai and Sastry (2009) recently provided the first provably efficient method for learning SIMs and GLMs, under the assumptions that the data are in fact generated under a GLM and under certain monotonicity and Lipschitz constraints. However, to obtain provable performance, the method requires a fresh sample every iteration. In this paper, we provide algorithms for learning GLMs and SIMs, which are both computationally and statistically efficient. We also provide an empirical study, demonstrating their feasibility in practice.
[ "['Sham Kakade' 'Adam Tauman Kalai' 'Varun Kanade' 'Ohad Shamir']", "Sham Kakade and Adam Tauman Kalai and Varun Kanade and Ohad Shamir" ]
cs.LG
null
1104.2097
null
null
http://arxiv.org/pdf/1104.2097v1
2011-04-12T01:15:03Z
2011-04-12T01:15:03Z
PAC learnability versus VC dimension: a footnote to a basic result of statistical learning
A fundamental result of statistical learning theory states that a concept class is PAC learnable if and only if it is a uniform Glivenko-Cantelli class, if and only if the VC dimension of the class is finite. However, the theorem is only valid under special assumptions of measurability of the class, in which case the PAC learnability even becomes consistent. Otherwise, there is a classical example, constructed under the Continuum Hypothesis by Dudley and Durst and further adapted by Blumer, Ehrenfeucht, Haussler, and Warmuth, of a concept class of VC dimension one which is neither uniform Glivenko-Cantelli nor consistently PAC learnable. We show that, rather surprisingly, under an additional set-theoretic hypothesis which is much milder than the Continuum Hypothesis (Martin's Axiom), PAC learnability is equivalent to finite VC dimension for every concept class.
[ "['Vladimir Pestov']", "Vladimir Pestov" ]
cs.CV cs.CG cs.GR cs.LG
null
1104.2580
null
null
http://arxiv.org/pdf/1104.2580v2
2011-08-15T19:31:24Z
2011-04-13T18:59:52Z
Hypothesize and Bound: A Computational Focus of Attention Mechanism for Simultaneous N-D Segmentation, Pose Estimation and Classification Using Shape Priors
Given the ever-increasing bandwidth of the visual information available to many intelligent systems, it is becoming essential to endow them with a sense of what is worthy of their attention and what can be safely disregarded. This article presents a general mathematical framework to efficiently allocate the available computational resources to process the parts of the input that are relevant to solve a given perceptual problem. By this we mean to find the hypothesis H (i.e., the state of the world) that maximizes a function L(H), representing how well each hypothesis "explains" the input. Given the large bandwidth of the sensory input, fully evaluating L(H) for each hypothesis H is computationally infeasible (e.g., because it would imply checking a large number of pixels). To address this problem we propose a mathematical framework with two key ingredients. The first one is a Bounding Mechanism (BM) to compute lower and upper bounds of L(H), for a given computational budget. These bounds are much cheaper to compute than L(H) itself, can be refined at any time by increasing the budget allocated to a hypothesis, and are often sufficient to discard a hypothesis. To compute these bounds, we develop a novel theory of shapes and shape priors. The second ingredient is a Focus of Attention Mechanism (FoAM) to select which hypothesis' bounds should be refined next, with the goal of discarding non-optimal hypotheses with the least amount of computation. The proposed framework: 1) is very efficient since most hypotheses are discarded with minimal computation; 2) is parallelizable; 3) is guaranteed to find the globally optimal hypothesis; and 4) its running time depends on the problem at hand, not on the bandwidth of the input. We instantiate the proposed framework for the problem of simultaneously estimating the class, pose, and a noiseless version of a 2D shape in a 2D image.
[ "['Diego Rother' 'Simon Schütz' 'René Vidal']", "Diego Rother, Simon Sch\\\"utz, Ren\\'e Vidal" ]
stat.ME cs.LG stat.ML
10.1016/j.csda.2013.04.010
1104.2930
null
null
http://arxiv.org/abs/1104.2930v3
2013-05-23T21:17:26Z
2011-04-14T21:29:10Z
Cluster Forests
With inspiration from Random Forests (RF) in the context of classification, a new clustering ensemble method, Cluster Forests (CF), is proposed. Geometrically, CF randomly probes a high-dimensional data cloud to obtain "good local clusterings" and then aggregates via spectral clustering to obtain cluster assignments for the whole dataset. The search for good local clusterings is guided by a cluster quality measure kappa. CF progressively improves each local clustering in a fashion that resembles tree growth in RF. Empirical studies on several real-world datasets under two different performance metrics show that CF compares favorably to its competitors. Theoretical analysis reveals that the kappa measure makes it possible to grow the local clustering in a desirable way: it is "noise-resistant". A closed-form expression is obtained for the mis-clustering rate of spectral clustering under a perturbation model, which yields new insights into some aspects of spectral clustering.
[ "Donghui Yan, Aiyou Chen, Michael I. Jordan", "['Donghui Yan' 'Aiyou Chen' 'Michael I. Jordan']" ]
astro-ph.IM cs.LG physics.data-an
10.1016/j.nima.2010.11.016
1104.3248
null
null
http://arxiv.org/abs/1104.3248v1
2011-04-16T17:35:20Z
2011-04-16T17:35:20Z
Signal Classification for Acoustic Neutrino Detection
This article focuses on signal classification for deep-sea acoustic neutrino detection. In the deep sea, the background of transient signals is very diverse. Approaches like matched filtering are not sufficient to distinguish between neutrino-like signals and other transient signals with similar signatures, which form the acoustic background for neutrino detection in the deep-sea environment. A classification system based on machine learning algorithms is analysed with the goal of finding a robust and effective way to perform this task. For a well-trained model, a testing error on the level of one percent is achieved for strong classifiers like Random Forest and Boosting Trees, using the extracted features of the signal as input and utilising dense clusters of sensors instead of single sensors.
[ "['M. Neff' 'G. Anton' 'A. Enzenhöfer' 'K. Graf' 'J. Hößl' 'U. Katz'\n 'R. Lahmann' 'C. Richardt']", "M. Neff, G. Anton, A. Enzenh\\\"ofer, K. Graf, J. H\\\"o{\\ss}l, U. Katz,\n R. Lahmann and C. Richardt" ]
stat.ML cs.LG math.NA
null
1104.3792
null
null
http://arxiv.org/pdf/1104.3792v1
2011-04-19T16:19:03Z
2011-04-19T16:19:03Z
A sufficient condition on monotonic increase of the number of nonzero entries in the optimizer of the L1 norm penalized least-squares problem
The $\ell_1$-norm based optimization is widely used in signal processing, especially in recent compressed sensing theory. This paper studies the solution path of the $\ell_1$-norm penalized least-squares problem, whose constrained form is known as the Least Absolute Shrinkage and Selection Operator (LASSO). A solution path is the set of all the optimizers with respect to the evolution of the hyperparameter (Lagrange multiplier). The study of the solution path is of great significance in viewing and understanding the profile of the tradeoff between the approximation and regularization terms. If the solution path of a given problem is known, it can help us to find the optimal hyperparameter under a given criterion such as the Akaike Information Criterion. In this paper we present a sufficient condition on the $\ell_1$-norm penalized least-squares problem. Under this sufficient condition, the number of nonzero entries in the optimizer or solution vector increases monotonically when the hyperparameter decreases. We also generalize the result to the often-used total variation case, where the $\ell_1$ norm is taken over the first-order derivative of the solution vector. We prove that the proposed condition has intrinsic connections with the condition given by Donoho et al. \cite{Donoho08} and the positive cone condition by Efron et al. \cite{Efron04}. However, the proposed condition does not need to assume the sparsity level of the signal as required by Donoho et al.'s condition, and is easier to verify than Efron et al.'s positive cone condition when being used for practical applications.
[ "J. Duan, Charles Soussen, David Brie, Jerome Idier and Y.-P. Wang", "['J. Duan' 'Charles Soussen' 'David Brie' 'Jerome Idier' 'Y. -P. Wang']" ]
cs.AI cs.LG
null
1104.3929
null
null
http://arxiv.org/pdf/1104.3929v1
2011-04-20T02:49:59Z
2011-04-20T02:49:59Z
Understanding Exhaustive Pattern Learning
Pattern learning is an important problem in Natural Language Processing (NLP). Some exhaustive pattern learning (EPL) methods (Bod, 1992) were proved to be flawed (Johnson, 2002), while similar algorithms (Och and Ney, 2004) showed great advantages on other tasks, such as machine translation. In this article, we first formalize EPL, and then show that the probability given by an EPL model is a constant-factor approximation of the probability given by an ensemble method that integrates an exponential number of models obtained with various segmentations of the training data. This work for the first time provides theoretical justification for the widely used EPL algorithm in NLP, which was previously viewed as a flawed heuristic method. Better understanding of EPL may lead to improved pattern learning algorithms in the future.
[ "Libin Shen", "['Libin Shen']" ]
stat.ME cs.CV cs.LG
null
1104.4376
null
null
http://arxiv.org/pdf/1104.4376v1
2011-04-22T02:27:45Z
2011-04-22T02:27:45Z
Intent Inference and Syntactic Tracking with GMTI Measurements
In conventional target tracking systems, human operators use the estimated target tracks to make higher level inference of the target behaviour/intent. This paper develops syntactic filtering algorithms that assist human operators by extracting spatial patterns from target tracks to identify suspicious/anomalous spatial trajectories. The targets' spatial trajectories are modeled by a stochastic context free grammar (SCFG) and a switched mode state space model. Bayesian filtering algorithms for stochastic context free grammars are presented for extracting the syntactic structure and illustrated for a ground moving target indicator (GMTI) radar example. The performance of the algorithms is tested with the experimental data collected using DRDC Ottawa's X-band Wideband Experimental Airborne Radar (XWEAR).
[ "Alex Wang and Vikram Krishnamurthy and Bhashyam Balaji", "['Alex Wang' 'Vikram Krishnamurthy' 'Bhashyam Balaji']" ]
stat.ML cs.LG
10.1109/TSP.2012.2196696
1104.4512
null
null
http://arxiv.org/abs/1104.4512v1
2011-04-22T22:01:14Z
2011-04-22T22:01:14Z
Robust Clustering Using Outlier-Sparsity Regularization
Notwithstanding the popularity of conventional clustering algorithms such as K-means and probabilistic clustering, their clustering results are sensitive to the presence of outliers in the data. Even a few outliers can compromise the ability of these algorithms to identify meaningful hidden structures, rendering their outcome unreliable. This paper develops robust clustering algorithms that not only aim to cluster the data, but also to identify the outliers. The novel approaches rely on the infrequent presence of outliers in the data, which translates to sparsity in a judiciously chosen domain. Capitalizing on the sparsity in the outlier domain, outlier-aware robust K-means and probabilistic clustering approaches are proposed. Their novelty lies in identifying outliers while effecting sparsity in the outlier domain through carefully chosen regularization. A block coordinate descent approach is developed to obtain iterative algorithms with convergence guarantees and small excess computational complexity with respect to their non-robust counterparts. Kernelized versions of the robust clustering algorithms are also developed to efficiently handle high-dimensional data, identify nonlinearly separable clusters, or even cluster objects that are not represented by vectors. Numerical tests on both synthetic and real datasets validate the performance and applicability of the novel algorithms.
[ "Pedro A. Forero, Vassilis Kekatos, Georgios B. Giannakis", "['Pedro A. Forero' 'Vassilis Kekatos' 'Georgios B. Giannakis']" ]
stat.ML cs.DM cs.LG cs.SI physics.soc-ph
null
1104.4605
null
null
http://arxiv.org/pdf/1104.4605v1
2011-04-24T06:06:12Z
2011-04-24T06:06:12Z
Compressive Network Analysis
Modern data acquisition routinely produces massive amounts of network data. Though many methods and models have been proposed to analyze such data, the research on network data is largely disconnected from the classical theory of statistical learning and signal processing. In this paper, we present a new framework for modeling network data, which connects two seemingly different areas: network data analysis and compressed sensing. From a nonparametric perspective, we model an observed network using a large dictionary. In particular, we consider the network clique detection problem and show connections between our formulation and a new algebraic tool, namely Radon basis pursuit in homogeneous spaces. Such a connection allows us to identify rigorous recovery conditions for clique detection problems. Though this paper is mainly conceptual, we also develop practical approximation algorithms for solving empirical problems and demonstrate their usefulness on real-world datasets.
[ "Xiaoye Jiang and Yuan Yao and Han Liu and Leonidas Guibas", "['Xiaoye Jiang' 'Yuan Yao' 'Han Liu' 'Leonidas Guibas']" ]