categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
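The schema above maps onto a flat record type, one instance per entry below. As a minimal sketch (illustration only; the ArxivRecord name and the use of Optional for fields that appear as null in the entries are assumptions, not part of the listing), a record could be modelled in Python as:

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ArxivRecord:
    # Field names and dtypes mirror the schema listed above.
    categories: str            # space-separated arXiv category codes, e.g. "cs.LG cs.IR"
    doi: Optional[str]         # null when no DOI is recorded
    id: str                    # arXiv identifier, e.g. "1502.05491"
    year: Optional[float]      # float64 in the schema; null in the entries shown here
    venue: Optional[str]       # null in the entries shown here
    link: Optional[str]        # arXiv abs/pdf URL
    updated: Optional[str]     # ISO 8601 timestamp of the latest version
    published: Optional[str]   # ISO 8601 timestamp of the first version
    title: str
    abstract: str
    authors: List[str]         # list of author name strings

Fields the schema types as string but that hold timestamps (updated, published) are kept as strings here; parsing them into datetime objects is left to the consumer.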
cs.LG cs.IR
10.1145/2700406
1502.05491
null
null
http://arxiv.org/abs/1502.05491v2
2015-04-15T14:45:59Z
2015-02-19T08:06:54Z
Optimizing Text Quantifiers for Multivariate Loss Functions
We address the problem of \emph{quantification}, a supervised learning task whose goal is, given a class, to estimate the relative frequency (or \emph{prevalence}) of the class in a dataset of unlabelled items. Quantification has several applications in data and text mining, such as estimating the prevalence of positive reviews in a set of reviews of a given product, or estimating the prevalence of a given support issue in a dataset of transcripts of phone calls to tech support. So far, quantification has been addressed by learning a general-purpose classifier, counting the unlabelled items which have been assigned the class, and tuning the obtained counts according to some heuristics. In this paper we depart from the tradition of using general-purpose classifiers, and use instead a supervised learning model for \emph{structured prediction}, capable of generating classifiers directly optimized for the (multivariate and non-linear) function used for evaluating quantification accuracy. The experiments that we have run on 5500 binary high-dimensional datasets (averaging more than 14,000 documents each) show that this method is more accurate, more stable, and more efficient than existing, state-of-the-art quantification methods.
[ "['Andrea Esuli' 'Fabrizio Sebastiani']", "Andrea Esuli and Fabrizio Sebastiani" ]
cs.LG cs.HC
null
1502.05534
null
null
http://arxiv.org/pdf/1502.05534v1
2015-02-19T11:45:37Z
2015-02-19T11:45:37Z
NeuroSVM: A Graphical User Interface for Identification of Liver Patients
Diagnosis of liver infection at a preliminary stage is important for better treatment. In today's scenario, devices like sensors are used for detection of infections. Accurate classification techniques are required for automatic identification of disease samples. In this context, this study utilizes data mining approaches for classification of liver patients from healthy individuals. Four algorithms (Naive Bayes, Bagging, Random Forest and SVM) were implemented for classification using the R platform. Further, to improve the accuracy of classification, a hybrid NeuroSVM model was developed using SVM and a feed-forward artificial neural network (ANN). The hybrid model was tested for its performance using statistical parameters like root mean square error (RMSE) and mean absolute percentage error (MAPE). The model resulted in a prediction accuracy of 98.83%. The results suggested that the development of the hybrid model improved the accuracy of prediction. To serve the medical community in predicting liver disease among patients, a graphical user interface (GUI) has been developed using R. The GUI is deployed as a package in the local repository of the R platform for users to perform prediction.
[ "['Kalyan Nagaraj' 'Amulyashree Sridhar']", "Kalyan Nagaraj and Amulyashree Sridhar" ]
stat.ML cs.LG
null
1502.05556
null
null
http://arxiv.org/pdf/1502.05556v2
2017-06-15T15:19:38Z
2015-02-19T12:50:13Z
Just Sort It! A Simple and Effective Approach to Active Preference Learning
We address the problem of learning a ranking by using adaptively chosen pairwise comparisons. Our goal is to recover the ranking accurately but to sample the comparisons sparingly. If all comparison outcomes are consistent with the ranking, the optimal solution is to use an efficient sorting algorithm, such as Quicksort. But how do sorting algorithms behave if some comparison outcomes are inconsistent with the ranking? We give favorable guarantees for Quicksort for the popular Bradley-Terry model, under natural assumptions on the parameters. Furthermore, we empirically demonstrate that sorting algorithms lead to a very simple and effective active learning strategy: repeatedly sort the items. This strategy performs as well as state-of-the-art methods (and much better than random sampling) at a minuscule fraction of the computational cost.
[ "Lucas Maystre, Matthias Grossglauser", "['Lucas Maystre' 'Matthias Grossglauser']" ]
math.OC cs.LG
null
1502.05577
null
null
http://arxiv.org/pdf/1502.05577v2
2015-08-08T21:04:32Z
2015-02-19T14:11:21Z
Adaptive system optimization using random directions stochastic approximation
We present novel algorithms for simulation optimization using random directions stochastic approximation (RDSA). These include first-order (gradient) as well as second-order (Newton) schemes. We incorporate both continuous-valued as well as discrete-valued perturbations into both our algorithms. The former are chosen to be independent and identically distributed (i.i.d.) symmetric, uniformly distributed random variables (r.v.), while the latter are i.i.d., asymmetric, Bernoulli r.v.s. Our Newton algorithm, with a novel Hessian estimation scheme, requires N-dimensional perturbations and three loss measurements per iteration, whereas the simultaneous perturbation Newton search algorithm of [1] requires 2N-dimensional perturbations and four loss measurements per iteration. We prove the unbiasedness of both gradient and Hessian estimates and asymptotic (strong) convergence for both first-order and second-order schemes. We also provide asymptotic normality results, which in particular establish that the asymmetric Bernoulli variant of Newton RDSA method is better than 2SPSA of [1]. Numerical experiments are used to validate the theoretical results.
[ "Prashanth L.A., Shalabh Bhatnagar, Michael Fu and Steve Marcus", "['Prashanth L. A.' 'Shalabh Bhatnagar' 'Michael Fu' 'Steve Marcus']" ]
cs.LG cs.CC cs.DS math.CO stat.ML
null
1502.05675
null
null
http://arxiv.org/pdf/1502.05675v2
2015-02-20T13:00:17Z
2015-02-19T19:30:46Z
NP-Hardness and Inapproximability of Sparse PCA
We give a reduction from {\sc clique} to establish that sparse PCA is NP-hard. The reduction has a gap which we use to exclude an FPTAS for sparse PCA (unless P=NP). Under weaker complexity assumptions, we also exclude polynomial constant-factor approximation algorithms.
[ "['Malik Magdon-Ismail']", "Malik Magdon-Ismail" ]
cs.GT cs.AI cs.LG cs.MA
null
1502.05696
null
null
http://arxiv.org/pdf/1502.05696v3
2015-09-07T05:21:06Z
2015-02-19T20:42:55Z
Approval Voting and Incentives in Crowdsourcing
The growing need for labeled training data has made crowdsourcing an important part of machine learning. The quality of crowdsourced labels is, however, adversely affected by three factors: (1) the workers are not experts; (2) the incentives of the workers are not aligned with those of the requesters; and (3) the interface does not allow workers to convey their knowledge accurately, by forcing them to make a single choice among a set of options. In this paper, we address these issues by introducing approval voting to utilize the expertise of workers who have partial knowledge of the true answer, and coupling it with a ("strictly proper") incentive-compatible compensation mechanism. We show rigorous theoretical guarantees of optimality of our mechanism together with a simple axiomatic characterization. We also conduct preliminary empirical studies on Amazon Mechanical Turk which validate our approach.
[ "['Nihar B. Shah' 'Dengyong Zhou' 'Yuval Peres']", "Nihar B. Shah, Dengyong Zhou, Yuval Peres" ]
cs.LG math.OC
null
1502.05744
null
null
http://arxiv.org/pdf/1502.05744v2
2015-07-01T20:56:34Z
2015-02-19T23:05:04Z
Scale-Free Algorithms for Online Linear Optimization
We design algorithms for online linear optimization that have optimal regret and at the same time do not need to know any upper or lower bounds on the norm of the loss vectors. We achieve adaptiveness to norms of loss vectors by scale invariance, i.e., our algorithms make exactly the same decisions if the sequence of loss vectors is multiplied by any positive constant. Our algorithms work for any decision set, bounded or unbounded. For unbounded decision sets, these are the first truly adaptive algorithms for online linear optimization.
[ "['Francesco Orabona' 'David Pal']", "Francesco Orabona and David Pal" ]
cs.CV cs.LG stat.ML
null
1502.05752
null
null
http://arxiv.org/pdf/1502.05752v1
2015-02-19T23:59:48Z
2015-02-19T23:59:48Z
Pairwise Constraint Propagation: A Survey
As one of the most important types of (weaker) supervised information in machine learning and pattern recognition, the pairwise constraint, which specifies whether a pair of data points occurs together, has recently received significant attention, especially the problem of pairwise constraint propagation. At least two reasons account for this trend: the first is that, compared to data labels, pairwise constraints are more general and easier to collect, and the second is that, since the available pairwise constraints are usually limited, the constraint propagation problem is important. This paper provides an up-to-date critical survey of pairwise constraint propagation research. There are two underlying motivations for us to write this survey paper: the first is to provide an up-to-date review of the existing literature, and the second is to offer some insights into the studies of pairwise constraint propagation. To provide a comprehensive survey, we not only categorize existing propagation techniques but also present detailed descriptions of representative methods within each category.
[ "Zhenyong Fu and Zhiwu Lu", "['Zhenyong Fu' 'Zhiwu Lu']" ]
cs.SC cs.LG stat.ML
null
1502.05767
null
null
http://arxiv.org/pdf/1502.05767v4
2018-02-05T15:57:57Z
2015-02-20T04:20:47Z
Automatic differentiation in machine learning: a survey
Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply "autodiff", is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs. AD is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization. Until very recently, the fields of machine learning and AD have largely been unaware of each other and, in some cases, have independently discovered each other's results. Despite its relevance, general-purpose AD has been missing from the machine learning toolbox, a situation slowly changing with its ongoing adoption under the names "dynamic computational graphs" and "differentiable programming". We survey the intersection of AD and machine learning, cover applications where AD has direct relevance, and address the main implementation techniques. By precisely defining the main differentiation techniques and their interrelationships, we aim to bring clarity to the usage of the terms "autodiff", "automatic differentiation", and "symbolic differentiation" as these are encountered more and more in machine learning settings.
[ "Atilim Gunes Baydin, Barak A. Pearlmutter, Alexey Andreyevich Radul,\n Jeffrey Mark Siskind", "['Atilim Gunes Baydin' 'Barak A. Pearlmutter' 'Alexey Andreyevich Radul'\n 'Jeffrey Mark Siskind']" ]
cs.GT cs.AI cs.LG stat.ML
10.1145/2764468.2764519
1502.05774
null
null
http://arxiv.org/abs/1502.05774v2
2015-06-06T03:24:36Z
2015-02-20T05:11:44Z
Low-Cost Learning via Active Data Procurement
We design mechanisms for online procurement of data held by strategic agents for machine learning tasks. The challenge is to use past data to actively price future data and give learning guarantees even when an agent's cost for revealing her data may depend arbitrarily on the data itself. We achieve this goal by showing how to convert a large class of no-regret algorithms into online posted-price and learning mechanisms. Our results in a sense parallel classic sample complexity guarantees, but with the key resource being money rather than quantity of data: With a budget constraint $B$, we give robust risk (predictive error) bounds on the order of $1/\sqrt{B}$. Because we use an active approach, we can often guarantee to do significantly better by leveraging correlations between costs and data. Our algorithms and analysis go through a model of no-regret learning with $T$ arriving pairs (cost, data) and a budget constraint of $B$. Our regret bounds for this model are on the order of $T/\sqrt{B}$ and we give lower bounds on the same order.
[ "['Jacob Abernethy' 'Yiling Chen' 'Chien-Ju Ho' 'Bo Waggoner']", "Jacob Abernethy, Yiling Chen, Chien-Ju Ho, Bo Waggoner" ]
cs.NE cs.LG
null
1502.05777
null
null
http://arxiv.org/pdf/1502.05777v1
2015-02-20T05:26:09Z
2015-02-20T05:26:09Z
Spike Event Based Learning in Neural Networks
A scheme is derived for learning connectivity in spiking neural networks. The scheme learns instantaneous firing rates that are conditional on the activity in other parts of the network. The scheme is independent of the choice of neuron dynamics or activation function, and network architecture. It involves two simple, online, local learning rules that are applied only in response to occurrences of spike events. This scheme provides a direct method for transferring ideas between the fields of deep learning and computational neuroscience. This learning scheme is demonstrated using a layered feedforward spiking neural network trained self-supervised on a prediction and classification task for moving MNIST images collected using a Dynamic Vision Sensor.
[ "['James A. Henderson' 'TingTing A. Gibson' 'Janet Wiles']", "James A. Henderson, TingTing A. Gibson, Janet Wiles" ]
cs.LG math.OC
null
1502.05832
null
null
http://arxiv.org/pdf/1502.05832v1
2015-02-20T11:23:58Z
2015-02-20T11:23:58Z
A provably convergent alternating minimization method for mean field inference
Mean-Field is an efficient way to approximate a posterior distribution in complex graphical models and constitutes the most popular class of Bayesian variational approximation methods. In most applications, the mean field distribution parameters are computed using an alternate coordinate minimization. However, the convergence properties of this algorithm remain unclear. In this paper, we show how, by adding an appropriate penalization term, we can guarantee convergence to a critical point, while keeping a closed form update at each step. A convergence rate estimate can also be derived based on recent results in non-convex optimization.
[ "Pierre Baqu\\'e, Jean-Hubert Hours, Fran\\c{c}ois Fleuret, Pascal Fua", "['Pierre Baqué' 'Jean-Hubert Hours' 'François Fleuret' 'Pascal Fua']" ]
cs.SI cs.LG physics.data-an physics.soc-ph
10.1145/2817946.2817949
1502.05886
null
null
http://arxiv.org/abs/1502.05886v1
2015-02-20T14:42:26Z
2015-02-20T14:42:26Z
On predictability of rare events leveraging social media: a machine learning perspective
Information extracted from social media streams has been leveraged to forecast the outcome of a large number of real-world events, from political elections to stock market fluctuations. An increasing amount of studies demonstrates how the analysis of social media conversations provides cheap access to the wisdom of the crowd. However, extents and contexts in which such forecasting power can be effectively leveraged are still unverified at least in a systematic way. It is also unclear how social-media-based predictions compare to those based on alternative information sources. To address these issues, here we develop a machine learning framework that leverages social media streams to automatically identify and predict the outcomes of soccer matches. We focus in particular on matches in which at least one of the possible outcomes is deemed as highly unlikely by professional bookmakers. We argue that sport events offer a systematic approach for testing the predictive power of social media, and allow to compare such power against the rigorous baselines set by external sources. Despite such strict baselines, our framework yields above 8% marginal profit when used to inform simple betting strategies. The system is based on real-time sentiment analysis and exploits data collected immediately before the games, allowing for informed bets. We discuss the rationale behind our approach, describe the learning framework, its prediction performance and the return it provides as compared to a set of betting strategies. To test our framework we use both historical Twitter data from the 2014 FIFA World Cup games, and real-time Twitter data collected by monitoring the conversations about all soccer matches of four major European tournaments (FA Premier League, Serie A, La Liga, and Bundesliga), and the 2014 UEFA Champions League, during the period between Oct. 25th 2014 and Nov. 26th 2014.
[ "['Lei Le' 'Emilio Ferrara' 'Alessandro Flammini']", "Lei Le, Emilio Ferrara, Alessandro Flammini" ]
cs.LG stat.ML
null
1502.05890
null
null
http://arxiv.org/pdf/1502.05890v4
2016-11-04T19:28:07Z
2015-02-20T14:55:41Z
Contextual Semibandits via Supervised Learning Oracles
We study an online decision making problem where on each round a learner chooses a list of items based on some side information, receives a scalar feedback value for each individual item, and a reward that is linearly related to this feedback. These problems, known as contextual semibandits, arise in crowdsourcing, recommendation, and many other domains. This paper reduces contextual semibandits to supervised learning, allowing us to leverage powerful supervised learning methods in this partial-feedback setting. Our first reduction applies when the mapping from feedback to reward is known and leads to a computationally efficient algorithm with near-optimal regret. We show that this algorithm outperforms state-of-the-art approaches on real-world learning-to-rank datasets, demonstrating the advantage of oracle-based algorithms. Our second reduction applies to the previously unstudied setting when the linear mapping from feedback to reward is unknown. Our regret guarantees are superior to prior techniques that ignore the feedback.
[ "Akshay Krishnamurthy, Alekh Agarwal, Miroslav Dudik", "['Akshay Krishnamurthy' 'Alekh Agarwal' 'Miroslav Dudik']" ]
cs.LG cs.CE
null
1502.05911
null
null
http://arxiv.org/pdf/1502.05911v1
2015-02-20T15:47:49Z
2015-02-20T15:47:49Z
A Data Mining framework to model Consumer Indebtedness with Psychological Factors
Modelling Consumer Indebtedness has proven to be a problem of complex nature. In this work we utilise Data Mining techniques and methods to explore the multifaceted aspect of Consumer Indebtedness by examining the contribution of Psychological Factors, like Impulsivity to the analysis of Consumer Debt. Our results confirm the beneficial impact of Psychological Factors in modelling Consumer Indebtedness and suggest a new approach in analysing Consumer Debt, that would take into consideration more Psychological characteristics of consumers and adopt techniques and practices from Data Mining.
[ "['Alexandros Ladas' 'Eamonn Ferguson' 'Uwe Aickelin' 'Jon Garibaldi']", "Alexandros Ladas, Eamonn Ferguson, Uwe Aickelin and Jon Garibaldi" ]
stat.ML cs.LG
null
1502.05925
null
null
http://arxiv.org/pdf/1502.05925v1
2015-02-20T16:42:40Z
2015-02-20T16:42:40Z
Feature-Budgeted Random Forest
We seek decision rules for prediction-time cost reduction, where complete data is available for training, but during prediction-time, each feature can only be acquired for an additional cost. We propose a novel random forest algorithm to minimize prediction error for a user-specified {\it average} feature acquisition budget. While random forests yield strong generalization performance, they do not explicitly account for feature costs and furthermore require low correlation among trees, which amplifies costs. Our random forest grows trees with low acquisition cost and high strength based on greedy minimax cost-weighted-impurity splits. Theoretically, we establish near-optimal acquisition cost guarantees for our algorithm. Empirically, on a number of benchmark datasets we demonstrate superior accuracy-cost curves against state-of-the-art prediction-time algorithms.
[ "Feng Nan, Joseph Wang, Venkatesh Saligrama", "['Feng Nan' 'Joseph Wang' 'Venkatesh Saligrama']" ]
cs.LG
null
1502.05934
null
null
http://arxiv.org/pdf/1502.05934v1
2015-02-20T16:58:36Z
2015-02-20T16:58:36Z
Achieving All with No Parameters: Adaptive NormalHedge
We study the classic online learning problem of predicting with expert advice, and propose a truly parameter-free and adaptive algorithm that achieves several objectives simultaneously without using any prior information. The main component of this work is an improved version of the NormalHedge.DT algorithm (Luo and Schapire, 2014), called AdaNormalHedge. On one hand, this new algorithm ensures small regret when the competitor has small loss and almost constant regret when the losses are stochastic. On the other hand, the algorithm is able to compete with any convex combination of the experts simultaneously, with a regret in terms of the relative entropy of the prior and the competitor. This resolves an open problem proposed by Chaudhuri et al. (2009) and Chernov and Vovk (2010). Moreover, we extend the results to the sleeping expert setting and provide two applications to illustrate the power of AdaNormalHedge: 1) competing with time-varying unknown competitors and 2) predicting almost as well as the best pruning tree. Our results on these applications significantly improve previous work from different aspects, and a special case of the first application resolves another open problem proposed by Warmuth and Koolen (2014) on whether one can simultaneously achieve optimal shifting regret for both adversarial and stochastic losses.
[ "Haipeng Luo and Robert E. Schapire", "['Haipeng Luo' 'Robert E. Schapire']" ]
cs.DB cs.CE cs.LG
10.1109/ICDMW.2014.53
1502.05943
null
null
http://arxiv.org/abs/1502.05943v1
2015-02-20T17:14:17Z
2015-02-20T17:14:17Z
Refining Adverse Drug Reactions using Association Rule Mining for Electronic Healthcare Data
Side effects of prescribed medications are a common occurrence. Electronic healthcare databases present the opportunity to identify new side effects efficiently but currently the methods are limited due to confounding (i.e. when an association between two variables is identified due to them both being associated to a third variable). In this paper we propose a proof of concept method that learns common associations and uses this knowledge to automatically refine side effect signals (i.e. exposure-outcome associations) by removing instances of the exposure-outcome associations that are caused by confounding. This leaves the signal instances that are most likely to correspond to true side effect occurrences. We then calculate a novel measure termed the confounding-adjusted risk value, a more accurate absolute risk value of a patient experiencing the outcome within 60 days of the exposure. Tentative results suggest that the method works. For the four signals (i.e. exposure-outcome associations) investigated we are able to correctly filter the majority of exposure-outcome instances that were unlikely to correspond to true side effects. The method is likely to improve when tuning the association rule mining parameters for specific health outcomes. This paper shows that it may be possible to filter signals at a patient level based on association rules learned from considering patients' medical histories. However, additional work is required to develop a way to automate the tuning of the method's parameters.
[ "['Jenna M. Reps' 'Uwe Aickelin' 'Jiangang Ma' 'Yanchun Zhang']", "Jenna M. Reps, Uwe Aickelin, Jiangang Ma, Yanchun Zhang" ]
cs.LG cs.AI
null
1502.05988
null
null
http://arxiv.org/pdf/1502.05988v1
2014-12-17T12:06:47Z
2014-12-17T12:06:47Z
Deep Learning for Multi-label Classification
In multi-label classification, the main focus has been to develop ways of learning the underlying dependencies between labels, and to take advantage of this at classification time. Developing better feature-space representations has been predominantly employed to reduce complexity, e.g., by eliminating non-helpful feature attributes from the input space prior to (or during) training. This is an important task, since many multi-label methods typically create many different copies or views of the same input data as they transform it, and considerable memory can be saved by taking advantage of redundancy. In this paper, we show that a proper development of the feature space can make labels less interdependent and easier to model and predict at inference time. For this task we use a deep learning approach with restricted Boltzmann machines. We present a deep network that, in an empirical evaluation, outperforms a number of competitive methods from the literature.
[ "Jesse Read, Fernando Perez-Cruz", "['Jesse Read' 'Fernando Perez-Cruz']" ]
stat.ML cs.LG cs.MS
null
1502.06064
null
null
http://arxiv.org/pdf/1502.06064v1
2015-02-21T04:29:41Z
2015-02-21T04:29:41Z
MILJS : Brand New JavaScript Libraries for Matrix Calculation and Machine Learning
MILJS is a collection of state-of-the-art, platform-independent, scalable, fast JavaScript libraries for matrix calculation and machine learning. Our core library offering matrix calculation is called Sushi, which exhibits far better performance than any other leading machine learning library written in JavaScript. In particular, our matrix multiplication is 177 times faster than the fastest JavaScript benchmark. Based on Sushi, a machine learning library called Tempura is provided, which supports various algorithms widely used in machine learning research. We also provide Soba as a visualization library. The implementations of our libraries are clearly written, properly documented and thus easy to get started with, as long as there is a web browser. These libraries are available from http://mil-tokyo.github.io/ under the MIT license.
[ "Ken Miura, Tetsuaki Mano, Atsushi Kanehira, Yuichiro Tsuchiya and\n Tatsuya Harada", "['Ken Miura' 'Tetsuaki Mano' 'Atsushi Kanehira' 'Yuichiro Tsuchiya'\n 'Tatsuya Harada']" ]
cs.CV cs.LG
10.1109/ACCESS.2016.2551727
1502.06105
null
null
http://arxiv.org/abs/1502.06105v2
2016-03-29T04:42:12Z
2015-02-21T14:37:44Z
Regularization and Kernelization of the Maximin Correlation Approach
Robust classification becomes challenging when each class consists of multiple subclasses. Examples include multi-font optical character recognition and automated protein function prediction. In correlation-based nearest-neighbor classification, the maximin correlation approach (MCA) provides the worst-case optimal solution by minimizing the maximum misclassification risk through an iterative procedure. Despite the optimality, the original MCA has drawbacks that have limited its wide applicability in practice. That is, the MCA tends to be sensitive to outliers, cannot effectively handle nonlinearities in datasets, and suffers from having high computational complexity. To address these limitations, we propose an improved solution, named regularized maximin correlation approach (R-MCA). We first reformulate MCA as a quadratically constrained linear programming (QCLP) problem, incorporate regularization by introducing slack variables in the primal problem of the QCLP, and derive the corresponding Lagrangian dual. The dual formulation enables us to apply the kernel trick to R-MCA so that it can better handle nonlinearities. Our experimental results demonstrate that the regularization and kernelization make the proposed R-MCA more robust and accurate for various classification tasks than the original MCA. Furthermore, when the data size or dimensionality grows, R-MCA runs substantially faster by solving either the primal or dual (whichever has a smaller variable dimension) of the QCLP.
[ "Taehoon Lee, Taesup Moon, Seung Jean Kim, Sungroh Yoon", "['Taehoon Lee' 'Taesup Moon' 'Seung Jean Kim' 'Sungroh Yoon']" ]
cs.AI cs.LG cs.RO math.MG
null
1502.06132
null
null
http://arxiv.org/pdf/1502.06132v1
2015-02-21T19:11:23Z
2015-02-21T19:11:23Z
Universal Memory Architectures for Autonomous Machines
We propose a self-organizing memory architecture for perceptual experience, capable of supporting autonomous learning and goal-directed problem solving in the absence of any prior information about the agent's environment. The architecture is simple enough to ensure (1) a quadratic bound (in the number of available sensors) on space requirements, and (2) a quadratic bound on the time-complexity of the update-execute cycle. At the same time, it is sufficiently complex to provide the agent with an internal representation which is (3) minimal among all representations of its class which account for every sensory equivalence class subject to the agent's belief state; (4) capable, in principle, of recovering the homotopy type of the system's state space; (5) learnable with arbitrary precision through a random application of the available actions. The provable properties of an effectively trained memory structure exploit a duality between weak poc sets -- a symbolic (discrete) representation of subset nesting relations -- and non-positively curved cubical complexes, whose rich convexity theory underlies the planning cycle of the proposed architecture.
[ "Dan P. Guralnik and Daniel E. Koditschek", "['Dan P. Guralnik' 'Daniel E. Koditschek']" ]
stat.ML cs.LG math.ST stat.TH
null
1502.06134
null
null
http://arxiv.org/pdf/1502.06134v3
2015-06-15T15:20:08Z
2015-02-21T19:20:44Z
Learning with Square Loss: Localization through Offset Rademacher Complexity
We consider regression with square loss and general classes of functions without the boundedness assumption. We introduce a notion of offset Rademacher complexity that provides a transparent way to study localization both in expectation and in high probability. For any (possibly non-convex) class, the excess loss of a two-step estimator is shown to be upper bounded by this offset complexity through a novel geometric inequality. In the convex case, the estimator reduces to an empirical risk minimizer. The method recovers the results of \citep{RakSriTsy15} for the bounded case while also providing guarantees without the boundedness assumption.
[ "Tengyuan Liang, Alexander Rakhlin, Karthik Sridharan", "['Tengyuan Liang' 'Alexander Rakhlin' 'Karthik Sridharan']" ]
math.ST cs.CC cs.LG stat.TH
null
1502.06144
null
null
http://arxiv.org/pdf/1502.06144v2
2019-03-06T15:07:54Z
2015-02-21T22:14:04Z
Detection of Planted Solutions for Flat Satisfiability Problems
We study the detection problem of finding planted solutions in random instances of flat satisfiability problems, a generalization of boolean satisfiability formulas. We describe the properties of random instances of flat satisfiability, as well as the optimal rates of detection of the associated hypothesis testing problem. We also study the performance of an algorithmically efficient testing procedure. We introduce a modification of our model, the light planting of solutions, and show that it is as hard as the problem of learning parity with noise. This hints strongly at the difficulty of detecting planted flat satisfiability for a wide class of tests.
[ "['Quentin Berthet' 'Jordan S. Ellenberg']", "Quentin Berthet and Jordan S. Ellenberg" ]
cs.CL cs.IR cs.LG stat.ML
null
1502.06161
null
null
http://arxiv.org/pdf/1502.06161v1
2015-02-22T01:30:32Z
2015-02-22T01:30:32Z
Using NLP to measure democracy
This paper uses natural language processing to create the first machine-coded democracy index, which I call Automated Democracy Scores (ADS). The ADS are based on 42 million news articles from 6,043 different sources and cover all independent countries in the 1993-2012 period. Unlike the democracy indices we have today the ADS are replicable and have standard errors small enough to actually distinguish between cases. The ADS are produced with supervised learning. Three approaches are tried: a) a combination of Latent Semantic Analysis and tree-based regression methods; b) a combination of Latent Dirichlet Allocation and tree-based regression methods; and c) the Wordscores algorithm. The Wordscores algorithm outperforms the alternatives, so it is the one on which the ADS are based. There is a web application where anyone can change the training set and see how the results change: democracy-scores.org
[ "['Thiago Marzagão']", "Thiago Marzag\\~ao" ]
cs.LG
null
1502.06177
null
null
http://arxiv.org/pdf/1502.06177v1
2015-02-22T04:42:01Z
2015-02-22T04:42:01Z
SDCA without Duality
Stochastic Dual Coordinate Ascent is a popular method for solving regularized loss minimization for the case of convex losses. In this paper we show how a variant of SDCA can be applied for non-convex losses. We prove linear convergence rate even if individual loss functions are non-convex as long as the expected loss is convex.
[ "Shai Shalev-Shwartz", "['Shai Shalev-Shwartz']" ]
cs.LG
null
1502.06187
null
null
http://arxiv.org/pdf/1502.06187v2
2016-11-24T01:46:11Z
2015-02-22T06:21:28Z
Teaching and compressing for low VC-dimension
In this work we study the quantitative relation between VC-dimension and two other basic parameters related to learning and teaching, namely the quality of sample compression schemes and of teaching sets for classes of low VC-dimension. Let $C$ be a binary concept class of size $m$ and VC-dimension $d$. Prior to this work, the best known upper bounds for both parameters were $\log(m)$, while the best lower bounds are linear in $d$. We present significantly better upper bounds on both as follows. Set $k = O(d 2^d \log \log |C|)$. We show that there always exists a concept $c$ in $C$ with a teaching set (i.e. a list of $c$-labeled examples uniquely identifying $c$ in $C$) of size $k$. This problem was studied by Kuhlmann (1999). Our construction implies that the recursive teaching (RT) dimension of $C$ is at most $k$ as well. The RT-dimension was suggested by Zilles et al. and Doliwa et al. (2010). The same notion (under the name partial-ID width) was independently studied by Wigderson and Yehudayoff (2013). An upper bound on this parameter that depends only on $d$ is known just for the very simple case $d=1$, and is open even for $d=2$. We also make small progress towards this seemingly modest goal. We further construct sample compression schemes of size $k$ for $C$, with additional information of $k \log(k)$ bits. Roughly speaking, given any list of $C$-labelled examples of arbitrary length, we can retain only $k$ labeled examples in a way that allows us to recover the labels of all other examples in the list, using additional $k\log (k)$ information bits. This problem was first suggested by Littlestone and Warmuth (1986).
[ "Shay Moran, Amir Shpilka, Avi Wigderson, and Amir Yehudayoff", "['Shay Moran' 'Amir Shpilka' 'Avi Wigderson' 'Amir Yehudayoff']" ]
stat.ML cs.LG
10.1109/TIT.2016.2621111
1502.06189
null
null
http://arxiv.org/abs/1502.06189v2
2016-10-02T01:09:34Z
2015-02-22T06:44:18Z
Two-stage Sampling, Prediction and Adaptive Regression via Correlation Screening (SPARCS)
This paper proposes a general adaptive procedure for budget-limited predictor design in high dimensions called two-stage Sampling, Prediction and Adaptive Regression via Correlation Screening (SPARCS). SPARCS can be applied to high dimensional prediction problems in experimental science, medicine, finance, and engineering, as illustrated by the following. Suppose one wishes to run a sequence of experiments to learn a sparse multivariate predictor of a dependent variable $Y$ (disease prognosis for instance) based on a $p$ dimensional set of independent variables $\mathbf X=[X_1,\ldots, X_p]^T$ (assayed biomarkers). Assume that the cost of acquiring the full set of variables $\mathbf X$ increases linearly in its dimension. SPARCS breaks the data collection into two stages in order to achieve an optimal tradeoff between sampling cost and predictor performance. In the first stage we collect a few ($n$) expensive samples $\{y_i,\mathbf x_i\}_{i=1}^n$, at the full dimension $p\gg n$ of $\mathbf X$, winnowing the number of variables down to a smaller dimension $l < p$ using a type of cross-correlation or regression coefficient screening. In the second stage we collect a larger number $(t-n)$ of cheaper samples of the $l$ variables that passed the screening of the first stage. At the second stage, a low dimensional predictor is constructed by solving the standard regression problem using all $t$ samples of the selected variables. SPARCS is an adaptive online algorithm that implements false positive control on the selected variables, is well suited to small sample sizes, and is scalable to high dimensions. We establish asymptotic bounds for the Familywise Error Rate (FWER), specify high dimensional convergence rates for support recovery, and establish optimal sample allocation rules to the first and second stages.
[ "['Hamed Firouzi' 'Alfred Hero' 'Bala Rajaratnam']", "Hamed Firouzi, Alfred Hero, Bala Rajaratnam" ]
stat.ME cs.LG math.ST stat.AP stat.TH
null
1502.06197
null
null
http://arxiv.org/pdf/1502.06197v2
2015-03-04T00:39:16Z
2015-02-22T09:07:07Z
On Online Control of False Discovery Rate
Multiple hypotheses testing is a core problem in statistical inference and arises in almost every scientific field. Given a sequence of null hypotheses $\mathcal{H}(n) = (H_1,..., H_n)$, Benjamini and Hochberg \cite{benjamini1995controlling} introduced the false discovery rate (FDR) criterion, which is the expected proportion of false positives among rejected null hypotheses, and proposed a testing procedure that controls FDR below a pre-assigned significance level. They also proposed a different criterion, called mFDR, which does not control a property of the realized set of tests; rather it controls the ratio of the expected number of false discoveries to the expected number of discoveries. In this paper, we propose two procedures for multiple hypotheses testing that we will call "LOND" and "LORD". These procedures control FDR and mFDR in an \emph{online manner}. Concretely, we consider an ordered --possibly infinite-- sequence of null hypotheses $\mathcal{H} = (H_1,H_2,H_3,...)$ where, at each step $i$, the statistician must decide whether to reject hypothesis $H_i$ having access only to the previous decisions. To the best of our knowledge, our work is the first that controls FDR in this setting. This model was introduced by Foster and Stine \cite{alpha-investing}, whose alpha-investing rule only controls mFDR in an online manner. In order to compare different procedures, we develop lower bounds on the total discovery rate under the mixture model and prove that both LOND and LORD have a nearly linear number of discoveries. We further propose an adjustment to LOND to address arbitrary correlation among the $p$-values. Finally, we evaluate the performance of our procedures on both synthetic and real data, comparing them with the alpha-investing rule, the Benjamini-Hochberg method and a Bonferroni procedure.
[ "['Adel Javanmard' 'Andrea Montanari']", "Adel Javanmard and Andrea Montanari" ]
cs.LG cs.CC cs.DS
null
1502.06208
null
null
http://arxiv.org/pdf/1502.06208v1
2015-02-22T10:42:52Z
2015-02-22T10:42:52Z
Nearly optimal classification for semimetrics
We initiate the rigorous study of classification in semimetric spaces, which are point sets with a distance function that is non-negative and symmetric, but need not satisfy the triangle inequality. For metric spaces, the doubling dimension essentially characterizes both the runtime and sample complexity of classification algorithms --- yet we show that this is not the case for semimetrics. Instead, we define the {\em density dimension} and discover that it plays a central role in the statistical and algorithmic feasibility of learning in semimetric spaces. We present nearly optimal sample compression algorithms and use these to obtain generalization guarantees, including fast rates. The latter hold for general sample compression schemes and may be of independent interest.
[ "Lee-Ad Gottlieb and Aryeh Kontorovich", "['Lee-Ad Gottlieb' 'Aryeh Kontorovich']" ]
cs.LG stat.ME
null
1502.06254
null
null
http://arxiv.org/pdf/1502.06254v2
2015-06-28T15:00:20Z
2015-02-22T17:58:05Z
The fundamental nature of the log loss function
The standard loss functions used in the literature on probabilistic prediction are the log loss function, the Brier loss function, and the spherical loss function; however, any computable proper loss function can be used for comparison of prediction algorithms. This note shows that the log loss function is most selective in that any prediction algorithm that is optimal for a given data sequence (in the sense of the algorithmic theory of randomness) under the log loss function will be optimal under any computable proper mixable loss function; on the other hand, there is a data sequence and a prediction algorithm that is optimal for that sequence under either of the two other standard loss functions but not under the log loss function.
[ "Vladimir Vovk", "['Vladimir Vovk']" ]
q-bio.GN cs.CE cs.LG
10.1093/bioinformatics/btv419
1502.06256
null
null
http://arxiv.org/abs/1502.06256v3
2015-07-09T09:47:00Z
2015-02-22T18:30:58Z
Spaced seeds improve k-mer-based metagenomic classification
Metagenomics is a powerful approach to study the genetic content of environmental samples that has been strongly promoted by NGS technologies. To cope with the massive data involved in modern metagenomic projects, recent tools [4, 39] rely on the analysis of k-mers shared between the read to be classified and sampled reference genomes. Within this general framework, we show in this work that spaced seeds provide a significant improvement of classification accuracy as opposed to traditional contiguous k-mers. We support this thesis through a series of different computational experiments, including simulations of large-scale metagenomic projects. Scripts and programs used in this study, as well as supplementary material, are available from http://github.com/gregorykucherov/spaced-seeds-for-metagenomics.
[ "Karel Brinda and Maciej Sykulski and Gregory Kucherov", "['Karel Brinda' 'Maciej Sykulski' 'Gregory Kucherov']" ]
stat.ML cs.CR cs.LG
null
1502.06309
null
null
http://arxiv.org/pdf/1502.06309v3
2016-04-27T19:55:21Z
2015-02-23T03:52:08Z
Learning with Differential Privacy: Stability, Learnability and the Sufficiency and Necessity of ERM Principle
While machine learning has proven to be a powerful data-driven solution to many real-life problems, its use in sensitive domains has been limited due to privacy concerns. A popular approach known as **differential privacy** offers provable privacy guarantees, but it is often observed in practice that it could substantially hamper learning accuracy. In this paper we study the learnability (whether a problem can be learned by any algorithm) under Vapnik's general learning setting with differential privacy constraint, and reveal some intricate relationships between privacy, stability and learnability. In particular, we show that a problem is privately learnable **if and only if** there is a private algorithm that asymptotically minimizes the empirical risk (AERM). In contrast, for non-private learning AERM alone is not sufficient for learnability. This result suggests that when searching for private learning algorithms, we can restrict the search to algorithms that are AERM. In light of this, we propose a conceptual procedure that always finds a universally consistent algorithm whenever the problem is learnable under privacy constraint. We also propose a generic and practical algorithm and show that under very general conditions it privately learns a wide class of learning problems. Lastly, we extend some of the results to the more practical $(\epsilon,\delta)$-differential privacy and establish the existence of a phase-transition on the class of problems that are approximately privately learnable with respect to how small $\delta$ needs to be.
[ "['Yu-Xiang Wang' 'Jing Lei' 'Stephen E. Fienberg']", "Yu-Xiang Wang, Jing Lei, Stephen E. Fienberg" ]
cs.LG stat.ML
null
1502.06354
null
null
http://arxiv.org/pdf/1502.06354v2
2015-06-10T11:51:51Z
2015-02-23T09:12:26Z
First-order regret bounds for combinatorial semi-bandits
We consider the problem of online combinatorial optimization under semi-bandit feedback, where a learner has to repeatedly pick actions from a combinatorial decision set in order to minimize the total losses associated with its decisions. After making each decision, the learner observes the losses associated with its action, but not other losses. For this problem, there are several learning algorithms that guarantee that the learner's expected regret grows as $\widetilde{O}(\sqrt{T})$ with the number of rounds $T$. In this paper, we propose an algorithm that improves this scaling to $\widetilde{O}(\sqrt{{L_T^*}})$, where $L_T^*$ is the total loss of the best action. Our algorithm is among the first to achieve such guarantees in a partial-feedback scheme, and the first one to do so in a combinatorial setting.
[ "['Gergely Neu']", "Gergely Neu" ]
cs.LG
null
1502.06362
null
null
http://arxiv.org/pdf/1502.06362v2
2015-06-13T21:56:25Z
2015-02-23T09:47:54Z
Contextual Dueling Bandits
We consider the problem of learning to choose actions using contextual information when provided with limited feedback in the form of relative pairwise comparisons. We study this problem in the dueling-bandits framework of Yue et al. (2009), which we extend to incorporate context. Roughly, the learner's goal is to find the best policy, or way of behaving, in some space of policies, although "best" is not always so clearly defined. Here, we propose a new and natural solution concept, rooted in game theory, called a von Neumann winner, a randomized policy that beats or ties every other policy. We show that this notion overcomes important limitations of existing solutions, particularly the Condorcet winner which has typically been used in the past, but which requires strong and often unrealistic assumptions. We then present three efficient algorithms for online learning in our setting, and for approximating a von Neumann winner from batch-like data. The first of these algorithms achieves particularly low regret, even when data is adversarial, although its time and space requirements are linear in the size of the policy space. The other two algorithms require time and space only logarithmic in the size of the policy space when provided access to an oracle for solving classification problems on the space.
[ "['Miroslav Dudík' 'Katja Hofmann' 'Robert E. Schapire'\n 'Aleksandrs Slivkins' 'Masrour Zoghi']", "Miroslav Dud\\'ik and Katja Hofmann and Robert E. Schapire and\n Aleksandrs Slivkins and Masrour Zoghi" ]
cs.LG math.OC
null
1502.06398
null
null
http://arxiv.org/pdf/1502.06398v1
2015-02-23T11:54:30Z
2015-02-23T11:54:30Z
Bandit Convex Optimization: sqrt{T} Regret in One Dimension
We analyze the minimax regret of the adversarial bandit convex optimization problem. Focusing on the one-dimensional case, we prove that the minimax regret is $\widetilde\Theta(\sqrt{T})$ and partially resolve a decade-old open problem. Our analysis is non-constructive, as we do not present a concrete algorithm that attains this regret rate. Instead, we use minimax duality to reduce the problem to a Bayesian setting, where the convex loss functions are drawn from a worst-case distribution, and then we solve the Bayesian version of the problem with a variant of Thompson Sampling. Our analysis features a novel use of convexity, formalized as a "local-to-global" property of convex functions, that may be of independent interest.
[ "['Sébastien Bubeck' 'Ofer Dekel' 'Tomer Koren' 'Yuval Peres']", "S\\'ebastien Bubeck, Ofer Dekel, Tomer Koren, Yuval Peres" ]
q-fin.ST cs.CE cs.LG cs.NE
null
1502.06434
null
null
http://arxiv.org/pdf/1502.06434v1
2014-12-17T06:59:18Z
2014-12-17T06:59:18Z
ANN Model to Predict Stock Prices at Stock Exchange Markets
Stock exchanges are considered major players in the financial sectors of many countries. Most stockbrokers, who execute stock trades, use technical, fundamental or time series analysis in trying to predict stock prices, so as to advise clients. However, these strategies do not usually guarantee good returns because they guide on trends and not the most likely price. It is therefore necessary to explore improved methods of prediction. The research proposes the use of an Artificial Neural Network, namely a feedforward multi-layer perceptron with error backpropagation, and develops a model of configuration 5:21:21:1 with 80% training data in 130,000 cycles. The research develops a prototype and tests it on 2008-2012 data from stock markets, e.g. the Nairobi Securities Exchange and the New York Stock Exchange, where prediction results show a MAPE of between 0.71% and 2.77%. Validation done with Encog and Neuroph realized comparable results. The model is thus capable of prediction on typical stock markets.
[ "B. W. Wanjawa and L. Muchemi", "['B. W. Wanjawa' 'L. Muchemi']" ]
cs.LG cs.CV cs.NE stat.ML
null
1502.06464
null
null
http://arxiv.org/pdf/1502.06464v2
2015-06-11T21:27:53Z
2015-02-23T15:44:37Z
Rectified Factor Networks
We propose rectified factor networks (RFNs) to efficiently construct very sparse, non-linear, high-dimensional representations of the input. RFN models identify rare and small events in the input, have a low interference between code units, have a small reconstruction error, and explain the data covariance structure. RFN learning is a generalized alternating minimization algorithm derived from the posterior regularization method which enforces non-negative and normalized posterior means. We prove convergence and correctness of the RFN learning algorithm. On benchmarks, RFNs are compared to other unsupervised methods like autoencoders, RBMs, factor analysis, ICA, and PCA. In contrast to previous sparse coding methods, RFNs yield sparser codes, capture the data's covariance structure more precisely, and have a significantly smaller reconstruction error. We test RFNs as a pretraining technique for deep networks on different vision datasets, where RFNs were superior to RBMs and autoencoders. On gene expression data from two pharmaceutical drug discovery studies, RFNs detected small and rare gene modules that revealed highly relevant new biological insights which were so far missed by other unsupervised methods.
[ "['Djork-Arné Clevert' 'Andreas Mayr' 'Thomas Unterthiner'\n 'Sepp Hochreiter']", "Djork-Arn\\'e Clevert, Andreas Mayr, Thomas Unterthiner, Sepp\n Hochreiter" ]
cs.LG stat.ML
null
1502.06531
null
null
http://arxiv.org/pdf/1502.06531v2
2015-02-24T16:43:05Z
2015-02-23T18:08:07Z
Scalable Variational Inference in Log-supermodular Models
We consider the problem of approximate Bayesian inference in log-supermodular models. These models encompass regular pairwise MRFs with binary variables, but allow to capture high-order interactions, which are intractable for existing approximate inference techniques such as belief propagation, mean field, and variants. We show that a recently proposed variational approach to inference in log-supermodular models -L-FIELD- reduces to the widely-studied minimum norm problem for submodular minimization. This insight allows to leverage powerful existing tools, and hence to solve the variational problem orders of magnitude more efficiently than previously possible. We then provide another natural interpretation of L-FIELD, demonstrating that it exactly minimizes a specific type of R\'enyi divergence measure. This insight sheds light on the nature of the variational approximations produced by L-FIELD. Furthermore, we show how to perform parallel inference as message passing in a suitable factor graph at a linear convergence rate, without having to sum up over all the configurations of the factor. Finally, we apply our approach to a challenging image segmentation task. Our experiments confirm scalability of our approach, high quality of the marginals, and the benefit of incorporating higher-order potentials.
[ "Josip Djolonga and Andreas Krause", "['Josip Djolonga' 'Andreas Krause']" ]
cs.LG cs.AI cs.IT math.IT stat.CO stat.ML
null
1502.06626
null
null
http://arxiv.org/pdf/1502.06626v1
2015-02-23T21:06:39Z
2015-02-23T21:06:39Z
Optimal Sparse Linear Auto-Encoders and Sparse PCA
Principal components analysis (PCA) is the optimal linear auto-encoder of data, and it is often used to construct features. Enforcing sparsity on the principal components can promote better generalization, while improving the interpretability of the features. We study the problem of constructing optimal sparse linear auto-encoders. Two natural questions in such a setting are: i) Given a level of sparsity, what is the best approximation to PCA that can be achieved? ii) Are there low-order polynomial-time algorithms which can asymptotically achieve this optimal tradeoff between the sparsity and the approximation quality? In this work, we answer both questions by giving efficient low-order polynomial-time algorithms for constructing asymptotically \emph{optimal} linear auto-encoders (in particular, sparse features with near-PCA reconstruction error) and demonstrate the performance of our algorithms on real data.
[ "Malik Magdon-Ismail, Christos Boutsidis", "['Malik Magdon-Ismail' 'Christos Boutsidis']" ]
stat.ML cs.LG math.ST stat.TH
null
1502.06644
null
null
null
null
null
On The Identifiability of Mixture Models from Grouped Samples
Finite mixture models are statistical models which appear in many problems in statistics and machine learning. In such models it is assumed that data are drawn from random probability measures, called mixture components, which are themselves drawn from a probability measure P over probability measures. When estimating mixture models, it is common to make assumptions on the mixture components, such as parametric assumptions. In this paper, we make no assumption on the mixture components, and instead assume that observations from the mixture model are grouped, such that observations in the same group are known to be drawn from the same component. We show that any mixture of m probability measures can be uniquely identified provided there are 2m-1 observations per group. Moreover we show that, for any m, there exists a mixture of m probability measures that cannot be uniquely identified when groups have 2m-2 observations. Our results hold for any sample space with more than one element.
[ "Robert A. Vandermeulen and Clayton D. Scott" ]
cs.LG
null
1502.06665
null
null
http://arxiv.org/pdf/1502.06665v1
2015-02-24T01:26:43Z
2015-02-24T01:26:43Z
Reified Context Models
A classic tension exists between exact inference in a simple model and approximate inference in a complex model. The latter offers expressivity and thus accuracy, but the former provides coverage of the space, an important property for confidence estimation and learning with indirect supervision. In this work, we introduce a new approach, reified context models, to reconcile this tension. Specifically, we let the amount of context (the arity of the factors in a graphical model) be chosen "at run-time" by reifying it---that is, letting this choice itself be a random variable inside the model. Empirically, we show that our approach obtains expressivity and coverage on three natural language tasks.
[ "['Jacob Steinhardt' 'Percy Liang']", "Jacob Steinhardt and Percy Liang" ]
cs.LG
null
1502.06668
null
null
http://arxiv.org/pdf/1502.06668v1
2015-02-24T01:42:09Z
2015-02-24T01:42:09Z
Learning Fast-Mixing Models for Structured Prediction
Markov Chain Monte Carlo (MCMC) algorithms are often used for approximate inference inside learning, but their slow mixing can be difficult to diagnose and the approximations can seriously degrade learning. To alleviate these issues, we define a new model family using strong Doeblin Markov chains, whose mixing times can be precisely controlled by a parameter. We also develop an algorithm to learn such models, which involves maximizing the data likelihood under the induced stationary distribution of these chains. We show empirical improvements on two challenging inference tasks.
[ "['Jacob Steinhardt' 'Percy Liang']", "Jacob Steinhardt and Percy Liang" ]
cs.LG math.NA stat.ML
null
1502.06800
null
null
http://arxiv.org/pdf/1502.06800v2
2015-11-09T14:29:04Z
2015-02-24T13:12:51Z
On the Equivalence between Kernel Quadrature Rules and Random Feature Expansions
We show that kernel-based quadrature rules for computing integrals can be seen as a special case of random feature expansions for positive definite kernels, for a particular decomposition that always exists for such kernels. We provide a theoretical analysis of the number of required samples for a given approximation error, leading to both upper and lower bounds that are based solely on the eigenvalues of the associated integral operator and match up to logarithmic terms. In particular, we show that the upper bound may be obtained from independent and identically distributed samples from a specific non-uniform distribution, while the lower bound is valid for any set of points. Applying our results to kernel-based quadrature, while our results are fairly general, we recover known upper and lower bounds for the special cases of Sobolev spaces. Moreover, our results extend to the more general problem of full function approximations (beyond simply computing an integral), with results in L2- and L$\infty$-norm that match known results for special cases. Applying our results to random features, we show an improvement of the number of random features needed to preserve the generalization guarantees for learning with Lipschitz-continuous losses.
[ "['Francis Bach']", "Francis Bach (LIENS, SIERRA)" ]
math.ST cs.LG stat.ML stat.TH
null
1502.06895
null
null
http://arxiv.org/pdf/1502.06895v3
2015-06-06T07:33:02Z
2015-02-24T17:52:20Z
On the consistency theory of high dimensional variable screening
Variable screening is a fast dimension reduction technique for assisting high dimensional feature selection. As a preselection method, it selects a moderate size subset of candidate variables for further refining via feature selection to produce the final model. The performance of variable screening depends on both computational efficiency and the ability to dramatically reduce the number of variables without discarding the important ones. When the data dimension $p$ is substantially larger than the sample size $n$, variable screening becomes crucial as 1) Faster feature selection algorithms are needed; 2) Conditions guaranteeing selection consistency might fail to hold. This article studies a class of linear screening methods and establishes consistency theory for this special class. In particular, we prove the restricted diagonally dominant (RDD) condition is a necessary and sufficient condition for strong screening consistency. As concrete examples, we show two screening methods $SIS$ and $HOLP$ are both strong screening consistent (subject to additional constraints) with large probability if $n > O((\rho s + \sigma/\tau)^2\log p)$ under random designs. In addition, we relate the RDD condition to the irrepresentable condition, and highlight limitations of $SIS$.
[ "Xiangyu Wang, Chenlei Leng, David B. Dunson", "['Xiangyu Wang' 'Chenlei Leng' 'David B. Dunson']" ]
cs.CL cs.IR cs.LG cs.NE
10.1109/TASLP.2016.2520371
1502.06922
null
null
http://arxiv.org/abs/1502.06922v3
2016-01-16T06:35:23Z
2015-02-24T19:39:27Z
Deep Sentence Embedding Using Long Short-Term Memory Networks: Analysis and Application to Information Retrieval
This paper develops a model that addresses sentence embedding, a hot topic in current natural language processing research, using recurrent neural networks with Long Short-Term Memory (LSTM) cells. Due to its ability to capture long-term memory, the LSTM-RNN accumulates increasingly richer information as it goes through the sentence, and when it reaches the last word, the hidden layer of the network provides a semantic representation of the whole sentence. In this paper, the LSTM-RNN is trained in a weakly supervised manner on user click-through data logged by a commercial web search engine. Visualization and analysis are performed to understand how the embedding process works. The model is found to automatically attenuate the unimportant words and detect the salient keywords in the sentence. Furthermore, these detected keywords are found to automatically activate different cells of the LSTM-RNN, where words belonging to a similar topic activate the same cell. As a semantic representation of the sentence, the embedding vector can be used in many different applications. These automatic keyword detection and topic allocation abilities enabled by the LSTM-RNN allow the network to perform document retrieval, a difficult language processing task, where the similarity between the query and documents can be measured by the distance between their corresponding sentence embedding vectors computed by the LSTM-RNN. On a web search task, the LSTM-RNN embedding is shown to significantly outperform several existing state-of-the-art methods. We emphasize that the proposed model generates sentence embedding vectors that are especially useful for web document retrieval tasks. A comparison with a well-known general sentence embedding method, the Paragraph Vector, is performed. The results show that the proposed method significantly outperforms it for the web document retrieval task.
[ "['Hamid Palangi' 'Li Deng' 'Yelong Shen' 'Jianfeng Gao' 'Xiaodong He'\n 'Jianshu Chen' 'Xinying Song' 'Rabab Ward']", "Hamid Palangi, Li Deng, Yelong Shen, Jianfeng Gao, Xiaodong He,\n Jianshu Chen, Xinying Song, Rabab Ward" ]
cs.CV cs.IR cs.LG cs.NE
null
1502.07058
null
null
http://arxiv.org/pdf/1502.07058v1
2015-02-25T05:58:43Z
2015-02-25T05:58:43Z
Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval
This paper presents a new state-of-the-art for document image classification and retrieval, using features learned by deep convolutional neural networks (CNNs). In object and scene analysis, deep neural nets are capable of learning a hierarchical chain of abstraction from pixel inputs to concise and descriptive representations. The current work explores this capacity in the realm of document analysis, and confirms that this representation strategy is superior to a variety of popular hand-crafted alternatives. Experiments also show that (i) features extracted from CNNs are robust to compression, (ii) CNNs trained on non-document images transfer well to document analysis tasks, and (iii) enforcing region-specific feature-learning is unnecessary given sufficient training data. This work also makes available a new labelled subset of the IIT-CDIP collection, containing 400,000 document images across 16 categories, useful for training new CNNs for document analysis.
[ "Adam W. Harley, Alex Ufkes, and Konstantinos G. Derpanis", "['Adam W. Harley' 'Alex Ufkes' 'Konstantinos G. Derpanis']" ]
cs.LG
null
1502.07073
null
null
http://arxiv.org/pdf/1502.07073v3
2015-06-19T07:31:45Z
2015-02-25T07:24:40Z
Strongly Adaptive Online Learning
Strongly adaptive algorithms are algorithms whose performance on every time interval is close to optimal. We present a reduction that can transform standard low-regret algorithms to strongly adaptive. As a consequence, we derive simple, yet efficient, strongly adaptive algorithms for a handful of problems.
[ "Amit Daniely, Alon Gonen, Shai Shalev-Shwartz", "['Amit Daniely' 'Alon Gonen' 'Shai Shalev-Shwartz']" ]
cs.LG
null
1502.07143
null
null
http://arxiv.org/pdf/1502.07143v1
2015-02-25T12:14:04Z
2015-02-25T12:14:04Z
The VC-Dimension of Similarity Hypotheses Spaces
Given a set $X$ and a function $h:X\longrightarrow\{0,1\}$ which labels each element of $X$ with either $0$ or $1$, we may define a function $h^{(s)}$ to measure the similarity of pairs of points in $X$ according to $h$. Specifically, for $h\in \{0,1\}^X$ we define $h^{(s)}\in \{0,1\}^{X\times X}$ by $h^{(s)}(w,x):= \mathbb{1}[h(w) = h(x)]$. This idea can be extended to a set of functions, or hypothesis space $\mathcal{H} \subseteq \{0,1\}^X$ by defining a similarity hypothesis space $\mathcal{H}^{(s)}:=\{h^{(s)}:h\in\mathcal{H}\}$. We show that ${{vc-dimension}}(\mathcal{H}^{(s)}) \in \Theta({{vc-dimension}}(\mathcal{H}))$.
[ "['Mark Herbster' 'Paul Rubenstein' 'James Townsend']", "Mark Herbster, Paul Rubenstein, James Townsend" ]
stat.ML cs.LG
10.1214/15-AOAS887
1502.07190
null
null
http://arxiv.org/abs/1502.07190v3
2015-10-16T13:50:03Z
2015-02-25T14:57:55Z
Topic-adjusted visibility metric for scientific articles
Measuring the impact of scientific articles is important for evaluating the research output of individual scientists, academic institutions and journals. While citations are raw data for constructing impact measures, there exist biases and potential issues if factors affecting citation patterns are not properly accounted for. In this work, we address the problem of field variation and introduce an article level metric useful for evaluating individual articles' visibility. This measure derives from joint probabilistic modeling of the content in the articles and the citations amongst them using latent Dirichlet allocation (LDA) and the mixed membership stochastic blockmodel (MMSB). Our proposed model provides a visibility metric for individual articles adjusted for field variation in citation rates, a structural understanding of citation behavior in different fields, and article recommendations which take into account article visibility and citation patterns. We develop an efficient algorithm for model fitting using variational methods. To scale up to large networks, we develop an online variant using stochastic gradient methods and case-control likelihood approximation. We apply our methods to the benchmark KDD Cup 2003 dataset with approximately 30,000 high energy physics papers.
[ "['Linda S. L. Tan' 'Aik Hui Chan' 'Tian Zheng']", "Linda S. L. Tan, Aik Hui Chan and Tian Zheng" ]
stat.ML cs.LG
null
1502.07229
null
null
http://arxiv.org/pdf/1502.07229v1
2015-02-25T16:26:33Z
2015-02-25T16:26:33Z
Online Pairwise Learning Algorithms with Kernels
Pairwise learning usually refers to a learning task which involves a loss function depending on pairs of examples, among which most notable ones include ranking, metric learning and AUC maximization. In this paper, we study an online algorithm for pairwise learning with a least-square loss function in an unconstrained setting of a reproducing kernel Hilbert space (RKHS), which we refer to as the Online Pairwise lEaRning Algorithm (OPERA). In contrast to existing works \cite{Kar,Wang} which require that the iterates are restricted to a bounded domain or the loss function is strongly-convex, OPERA is associated with a non-strongly convex objective function and learns the target function in an unconstrained RKHS. Specifically, we establish a general theorem which guarantees the almost surely convergence for the last iterate of OPERA without any assumptions on the underlying distribution. Explicit convergence rates are derived under the condition of polynomially decaying step sizes. We also establish an interesting property for a family of widely-used kernels in the setting of pairwise learning and illustrate the above convergence results using such kernels. Our methodology mainly depends on the characterization of RKHSs using its associated integral operators and probability inequalities for random variables with values in a Hilbert space.
[ "Yiming Ying and Ding-Xuan Zhou", "['Yiming Ying' 'Ding-Xuan Zhou']" ]
cs.LG
null
1502.07617
null
null
http://arxiv.org/pdf/1502.07617v1
2015-02-26T16:18:53Z
2015-02-26T16:18:53Z
Online Learning with Feedback Graphs: Beyond Bandits
We study a general class of online learning problems where the feedback is specified by a graph. This class includes online prediction with expert advice and the multi-armed bandit problem, but also several learning problems where the online player does not necessarily observe his own loss. We analyze how the structure of the feedback graph controls the inherent difficulty of the induced $T$-round learning problem. Specifically, we show that any feedback graph belongs to one of three classes: strongly observable graphs, weakly observable graphs, and unobservable graphs. We prove that the first class induces learning problems with $\widetilde\Theta(\alpha^{1/2} T^{1/2})$ minimax regret, where $\alpha$ is the independence number of the underlying graph; the second class induces problems with $\widetilde\Theta(\delta^{1/3}T^{2/3})$ minimax regret, where $\delta$ is the domination number of a certain portion of the graph; and the third class induces problems with linear minimax regret. Our results subsume much of the previous work on learning with feedback graphs and reveal new connections to partial monitoring games. We also show how the regret is affected if the graphs are allowed to vary with time.
[ "['Noga Alon' 'Nicolò Cesa-Bianchi' 'Ofer Dekel' 'Tomer Koren']", "Noga Alon, Nicol\\`o Cesa-Bianchi, Ofer Dekel, Tomer Koren" ]
math.ST cs.LG stat.TH
null
1502.07641
null
null
http://arxiv.org/pdf/1502.07641v3
2017-09-01T18:59:57Z
2015-02-26T17:25:03Z
ROCKET: Robust Confidence Intervals via Kendall's Tau for Transelliptical Graphical Models
Undirected graphical models are used extensively in the biological and social sciences to encode a pattern of conditional independences between variables, where the absence of an edge between two nodes $a$ and $b$ indicates that the corresponding two variables $X_a$ and $X_b$ are believed to be conditionally independent, after controlling for all other measured variables. In the Gaussian case, conditional independence corresponds to a zero entry in the precision matrix $\Omega$ (the inverse of the covariance matrix $\Sigma$). Real data often exhibits heavy tail dependence between variables, which cannot be captured by the commonly-used Gaussian or nonparanormal (Gaussian copula) graphical models. In this paper, we study the transelliptical model, an elliptical copula model that generalizes Gaussian and nonparanormal models to a broader family of distributions. We propose the ROCKET method, which constructs an estimator of $\Omega_{ab}$ that we prove to be asymptotically normal under mild assumptions. Empirically, ROCKET outperforms the nonparanormal and Gaussian models in terms of achieving accurate inference on simulated data. We also compare the three methods on real data (daily stock returns), and find that the ROCKET estimator is the only method whose behavior across subsamples agrees with the distribution predicted by the theory.
[ "Rina Foygel Barber and Mladen Kolar", "['Rina Foygel Barber' 'Mladen Kolar']" ]
stat.ML cs.LG
null
1502.07645
null
null
http://arxiv.org/pdf/1502.07645v2
2015-04-12T02:53:05Z
2015-02-26T17:38:47Z
Privacy for Free: Posterior Sampling and Stochastic Gradient Monte Carlo
We consider the problem of Bayesian learning on sensitive datasets and present two simple but somewhat surprising results that connect Bayesian learning to "differential privacy", a cryptographic approach to protect individual-level privacy while permitting database-level utility. Specifically, we show that under standard assumptions, getting one single sample from a posterior distribution is differentially private "for free". We will see that this estimator is statistically consistent, near optimal and computationally tractable whenever the Bayesian model of interest is consistent, optimal and tractable. Similarly but separately, we show that a recent line of work that uses stochastic gradients for Hybrid Monte Carlo (HMC) sampling also preserves differential privacy with minor or no modifications of the algorithmic procedure. These observations lead to an "anytime" algorithm for Bayesian learning under privacy constraints. We demonstrate that it performs much better than state-of-the-art differentially private methods on synthetic and real datasets.
[ "['Yu-Xiang Wang' 'Stephen E. Fienberg' 'Alex Smola']", "Yu-Xiang Wang, Stephen E. Fienberg, Alex Smola" ]
stat.ML cs.LG
null
1502.07697
null
null
http://arxiv.org/pdf/1502.07697v2
2015-07-01T18:37:24Z
2015-02-26T19:47:41Z
A Chaining Algorithm for Online Nonparametric Regression
We consider the problem of online nonparametric regression with arbitrary deterministic sequences. Using ideas from the chaining technique, we design an algorithm that achieves a Dudley-type regret bound similar to the one obtained in a non-constructive fashion by Rakhlin and Sridharan (2014). Our regret bound is expressed in terms of the metric entropy in the sup norm, which yields optimal guarantees when the metric and sequential entropies are of the same order of magnitude. In particular our algorithm is the first one that achieves optimal rates for online regression over H{\"o}lder balls. In addition we show for this example how to adapt our chaining algorithm to get a reasonable computational efficiency with similar regret guarantees (up to a log factor).
[ "['Pierre Gaillard' 'Sébastien Gerchinovitz']", "Pierre Gaillard (GREGHEC, EDF R\\&D), S\\'ebastien Gerchinovitz (IMT,\n UPS)" ]
cs.LG cs.CG
null
1502.07776
null
null
http://arxiv.org/pdf/1502.07776v1
2015-02-26T22:12:22Z
2015-02-26T22:12:22Z
Efficient Geometric-based Computation of the String Subsequence Kernel
Kernel methods are powerful tools in machine learning, but they have to be computationally efficient. In this paper, we present a novel geometric-based approach to compute the string subsequence kernel (SSK) efficiently. Our main idea is that the SSK computation reduces to a range query problem. We start by constructing a match list $L(s,t)=\{(i,j):s_{i}=t_{j}\}$, where $s$ and $t$ are the strings to be compared; this match list contains only the data that contribute to the result. To compute the SSK efficiently, we extend the layered range tree data structure to a layered range sum tree, a range-aggregation data structure. The whole process takes $O(p|L|\log|L|)$ time and $O(|L|\log|L|)$ space, where $|L|$ is the size of the match list and $p$ is the length of the SSK. We present empirical evaluations of our approach against the dynamic and sparse programming approaches, both on synthetically generated data and on newswire article data. These experiments show the efficiency of our approach for large alphabet sizes, except for very short strings. Moreover, compared to the sparse dynamic approach, the proposed approach consistently outperforms it on long strings.
[ "['Slimane Bellaouar' 'Hadda Cherroun' 'Djelloul Ziadi']", "Slimane Bellaouar, Hadda Cherroun, and Djelloul Ziadi" ]
cs.LG stat.ML
null
1502.07813
null
null
http://arxiv.org/pdf/1502.07813v1
2015-02-27T03:32:49Z
2015-02-27T03:32:49Z
Minimum message length estimation of mixtures of multivariate Gaussian and von Mises-Fisher distributions
Mixture modelling involves explaining some observed evidence using a combination of probability distributions. The crux of the problem is the inference of an optimal number of mixture components and their corresponding parameters. This paper discusses unsupervised learning of mixture models using the Bayesian Minimum Message Length (MML) criterion. To demonstrate the effectiveness of search and inference of mixture parameters using the proposed approach, we select two key probability distributions, each handling fundamentally different types of data: the multivariate Gaussian distribution to address mixture modelling of data distributed in Euclidean space, and the multivariate von Mises-Fisher (vMF) distribution to address mixture modelling of directional data distributed on a unit hypersphere. The key contributions of this paper, in addition to the general search and inference methodology, include the derivation of MML expressions for encoding the data using multivariate Gaussian and von Mises-Fisher distributions, and the analytical derivation of the MML estimates of the parameters of the two distributions. Our approach is tested on simulated and real world data sets. For instance, we infer vMF mixtures that concisely explain experimentally determined three-dimensional protein conformations, providing an effective null model description of protein structures that is central to many inference problems in structural bioinformatics. The experimental results demonstrate that the performance of our proposed search and inference method along with the encoding schemes improve on the state of the art mixture modelling techniques.
[ "['Parthan Kasarapu' 'Lloyd Allison']", "Parthan Kasarapu and Lloyd Allison" ]
cs.LG stat.ML
null
1502.07943
null
null
http://arxiv.org/pdf/1502.07943v1
2015-02-27T15:58:45Z
2015-02-27T15:58:45Z
Non-stochastic Best Arm Identification and Hyperparameter Optimization
Motivated by the task of hyperparameter optimization, we introduce the non-stochastic best-arm identification problem. Within the multi-armed bandit literature, the cumulative regret objective enjoys algorithms and analyses for both the non-stochastic and stochastic settings while to the best of our knowledge, the best-arm identification framework has only been considered in the stochastic setting. We introduce the non-stochastic setting under this framework, identify a known algorithm that is well-suited for this setting, and analyze its behavior. Next, by leveraging the iterative nature of standard machine learning algorithms, we cast hyperparameter optimization as an instance of non-stochastic best-arm identification, and empirically evaluate our proposed algorithm on this task. Our empirical results show that, by allocating more resources to promising hyperparameter settings, we typically achieve comparable test accuracies an order of magnitude faster than baseline methods.
[ "Kevin Jamieson, Ameet Talwalkar", "['Kevin Jamieson' 'Ameet Talwalkar']" ]
cs.CV cs.LG
null
1502.07976
null
null
http://arxiv.org/pdf/1502.07976v2
2015-03-05T17:49:16Z
2015-02-27T17:22:53Z
Error-Correcting Factorization
Error Correcting Output Codes (ECOC) is a successful technique in multi-class classification, which is a core problem in Pattern Recognition and Machine Learning. A major advantage of ECOC over other methods is that the multi-class problem is decoupled into a set of binary problems that are solved independently. However, the literature defines a general error-correcting capability for ECOCs without analyzing how it distributes among classes, hindering a deeper analysis of pair-wise error-correction. To address these limitations, this paper proposes an Error-Correcting Factorization (ECF) method; our contribution is four-fold: (I) We propose a novel representation of the error-correction capability, called the design matrix, that enables us to build an ECOC on the basis of allocating correction to pairs of classes. (II) We derive the optimal code length of an ECOC using rank properties of the design matrix. (III) ECF is formulated as a discrete optimization problem, and a relaxed solution is found using an efficient constrained block coordinate descent approach. (IV) Enabled by the flexibility introduced with the design matrix we propose to allocate the error-correction on classes that are prone to confusion. Experimental results in several databases show that when allocating the error-correction to confusable classes ECF outperforms state-of-the-art approaches.
[ "Miguel Angel Bautista, Oriol Pujol, Fernando de la Torre and Sergio\n Escalera", "['Miguel Angel Bautista' 'Oriol Pujol' 'Fernando de la Torre'\n 'Sergio Escalera']" ]
cs.LG stat.ML
null
1502.08009
null
null
http://arxiv.org/pdf/1502.08009v1
2015-02-27T18:56:45Z
2015-02-27T18:56:45Z
Second-order Quantile Methods for Experts and Combinatorial Games
We aim to design strategies for sequential decision making that adjust to the difficulty of the learning problem. We study this question both in the setting of prediction with expert advice, and for more general combinatorial decision tasks. We are not satisfied with just guaranteeing minimax regret rates, but we want our algorithms to perform significantly better on easy data. Two popular ways to formalize such adaptivity are second-order regret bounds and quantile bounds. The underlying notions of 'easy data', which may be paraphrased as "the learning problem has small variance" and "multiple decisions are useful", are synergetic. But even though there are sophisticated algorithms that exploit one of the two, no existing algorithm is able to adapt to both. In this paper we outline a new method for obtaining such adaptive algorithms, based on a potential function that aggregates a range of learning rates (which are essential tuning parameters). By choosing the right prior we construct efficient algorithms and show that they reap both benefits by proving the first bounds that are both second-order and incorporate quantiles.
[ "['Wouter M. Koolen' 'Tim van Erven']", "Wouter M. Koolen and Tim van Erven" ]
stat.ML cs.AI cs.CL cs.CV cs.LG
null
1502.08029
null
null
http://arxiv.org/pdf/1502.08029v5
2015-10-01T00:12:46Z
2015-02-27T19:30:40Z
Describing Videos by Exploiting Temporal Structure
Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application for video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatio-temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second, we propose a temporal attention mechanism that goes beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. Our approach exceeds the current state-of-the-art for both BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions.
[ "['Li Yao' 'Atousa Torabi' 'Kyunghyun Cho' 'Nicolas Ballas'\n 'Christopher Pal' 'Hugo Larochelle' 'Aaron Courville']", "Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal,\n Hugo Larochelle, Aaron Courville" ]
cs.DL cs.CL cs.LG
10.1007/978-3-319-05476-6_13
1502.08030
null
null
http://arxiv.org/abs/1502.08030v2
2017-07-29T01:32:31Z
2015-02-27T19:34:42Z
Author Name Disambiguation by Using Deep Neural Network
Author name ambiguity decreases the quality and reliability of information retrieved from digital libraries. Existing methods have tried to solve this problem by predefining a feature set based on an expert's knowledge of a specific dataset. In this paper, we propose a new approach which uses a deep neural network to learn features automatically from data. Additionally, we propose a general system architecture for author name disambiguation on any dataset. In this research, we evaluate the proposed method on a dataset containing Vietnamese author names. The results show that this method significantly outperforms other methods that use a predefined feature set. The proposed method achieves 99.31% in terms of accuracy. The prediction error rate decreases from 1.83% to 0.69%, i.e., it decreases by 1.14%, or 62.3% relative to other methods that use a predefined feature set (Table 3).
[ "Hung Nghiep Tran, Tin Huynh, Tien Do", "['Hung Nghiep Tran' 'Tin Huynh' 'Tien Do']" ]
cs.LG cs.AI cs.CV
null
1502.08039
null
null
http://arxiv.org/pdf/1502.08039v1
2015-02-27T20:00:53Z
2015-02-27T20:00:53Z
Probabilistic Zero-shot Classification with Semantic Rankings
In this paper we propose a non-metric ranking-based representation of semantic similarity that allows natural aggregation of semantic information from multiple heterogeneous sources. We apply the ranking-based representation to zero-shot learning problems, and present deterministic and probabilistic zero-shot classifiers which can be built from pre-trained classifiers without retraining. We demonstrate their advantages on two large real-world image datasets. In particular, we show that aggregating different sources of semantic information, including crowd-sourcing, leads to more accurate classification.
[ "['Jihun Hamm' 'Mikhail Belkin']", "Jihun Hamm, Mikhail Belkin" ]
math.OC cs.LG stat.ML
null
1502.08053
null
null
http://arxiv.org/pdf/1502.08053v1
2015-02-27T20:54:03Z
2015-02-27T20:54:03Z
Stochastic Dual Coordinate Ascent with Adaptive Probabilities
This paper introduces AdaSDCA: an adaptive variant of stochastic dual coordinate ascent (SDCA) for solving regularized empirical risk minimization problems. Our modification consists in allowing the method to adaptively change the probability distribution over the dual variables throughout the iterative process. AdaSDCA achieves a provably better complexity bound than SDCA with the best fixed probability distribution, known as importance sampling. However, it is of a theoretical character as it is expensive to implement. We also propose AdaSDCA+: a practical variant which in our experiments outperforms existing non-adaptive methods.
[ "Dominik Csiba, Zheng Qu, Peter Richt\\'arik", "['Dominik Csiba' 'Zheng Qu' 'Peter Richtárik']" ]
cs.SI cs.LG stat.ML
null
1503.00024
null
null
http://arxiv.org/pdf/1503.00024v4
2016-04-27T18:27:20Z
2015-02-27T21:59:08Z
Influence Maximization with Bandits
We consider the problem of \emph{influence maximization}, the problem of maximizing the number of people that become aware of a product by finding the `best' set of `seed' users to expose the product to. Most prior work on this topic assumes that we know the probability of each user influencing each other user, or we have data that lets us estimate these influences. However, this information is typically not initially available or is difficult to obtain. To avoid this assumption, we adopt a combinatorial multi-armed bandit paradigm that estimates the influence probabilities as we sequentially try different seed sets. We establish bounds on the performance of this procedure under the existing edge-level feedback as well as a novel and more realistic node-level feedback. Beyond our theoretical results, we describe a practical implementation and experimentally demonstrate its efficiency and effectiveness on four real datasets.
[ "['Sharan Vaswani' 'Laks. V. S. Lakshmanan' 'Mark Schmidt']", "Sharan Vaswani, Laks.V.S. Lakshmanan and Mark Schmidt" ]
cs.LG cs.AI cs.NE stat.ML
null
1503.00036
null
null
http://arxiv.org/pdf/1503.00036v2
2015-04-14T22:55:08Z
2015-02-27T23:50:22Z
Norm-Based Capacity Control in Neural Networks
We investigate the capacity, convexity and characterization of a general family of norm-constrained feed-forward networks.
[ "['Behnam Neyshabur' 'Ryota Tomioka' 'Nathan Srebro']", "Behnam Neyshabur, Ryota Tomioka, Nathan Srebro" ]
cs.AI cs.LG stat.ML
null
1503.00038
null
null
http://arxiv.org/pdf/1503.00038v1
2015-02-28T00:04:11Z
2015-02-28T00:04:11Z
Sequential Feature Explanations for Anomaly Detection
In many applications, an anomaly detection system presents the most anomalous data instance to a human analyst, who then must determine whether the instance is truly of interest (e.g. a threat in a security setting). Unfortunately, most anomaly detectors provide no explanation about why an instance was considered anomalous, leaving the analyst with no guidance about where to begin the investigation. To address this issue, we study the problems of computing and evaluating sequential feature explanations (SFEs) for anomaly detectors. An SFE of an anomaly is a sequence of features, which are presented to the analyst one at a time (in order) until the information contained in the highlighted features is enough for the analyst to make a confident judgement about the anomaly. Since analyst effort is related to the amount of information that they consider in an investigation, an explanation's quality is related to the number of features that must be revealed to attain confidence. One of our main contributions is to present a novel framework for large scale quantitative evaluations of SFEs, where the quality measure is based on analyst effort. To do this we construct anomaly detection benchmarks from real data sets along with artificial experts that can be simulated for evaluation. Our second contribution is to evaluate several novel explanation approaches within the framework and on traditional anomaly detection benchmarks, offering several insights into the approaches.
[ "Md Amran Siddiqui, Alan Fern, Thomas G. Dietterich and Weng-Keen Wong", "['Md Amran Siddiqui' 'Alan Fern' 'Thomas G. Dietterich' 'Weng-Keen Wong']" ]
cs.CL cs.AI cs.LG
null
1503.00075
null
null
http://arxiv.org/pdf/1503.00075v3
2015-05-30T06:51:20Z
2015-02-28T06:31:50Z
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words into phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
[ "Kai Sheng Tai, Richard Socher, Christopher D. Manning", "['Kai Sheng Tai' 'Richard Socher' 'Christopher D. Manning']" ]
stat.ML cs.LG
10.1016/j.acha.2016.03.007
1503.00164
null
null
http://arxiv.org/abs/1503.00164v2
2016-03-21T11:47:10Z
2015-02-28T18:32:45Z
Analysis of Crowdsourced Sampling Strategies for HodgeRank with Sparse Random Graphs
Crowdsourcing platforms are now extensively used for conducting subjective pairwise comparison studies. In this setting, a pairwise comparison dataset is typically gathered via random sampling, either \emph{with} or \emph{without} replacement. In this paper, we use tools from random graph theory to analyze these two random sampling methods for the HodgeRank estimator. Using the Fiedler value of the graph as a measurement for estimator stability (informativeness), we provide a new estimate of the Fiedler value for these two random graph models. In the asymptotic limit as the number of vertices tends to infinity, we prove the validity of the estimate. Based on our findings, for a small number of items to be compared, we recommend a two-stage sampling strategy where a greedy sampling method is used initially and random sampling \emph{without} replacement is used in the second stage. When a large number of items is to be compared, we recommend random sampling with replacement as this is computationally inexpensive and trivially parallelizable. Experiments on synthetic and real-world datasets support our analysis.
[ "Braxton Osting and Jiechao Xiong and Qianqian Xu and Yuan Yao", "['Braxton Osting' 'Jiechao Xiong' 'Qianqian Xu' 'Yuan Yao']" ]
cs.DB cs.AI cs.IR cs.LG
10.1109/DSAA.2014.7058121
1503.00244
null
null
http://arxiv.org/abs/1503.00244v1
2015-03-01T09:41:11Z
2015-03-01T09:41:11Z
23-bit Metaknowledge Template Towards Big Data Knowledge Discovery and Management
The global influence of Big Data is not only growing but seemingly endless. The trend is leaning towards knowledge that is attained easily and quickly from massive pools of Big Data. Today we are living in the technological world that Dr. Usama Fayyad and his distinguished research fellows predicted nearly two decades ago in their introductory explanations of Knowledge Discovery in Databases (KDD). Indeed, they were precise in their outlook on Big Data analytics. In fact, the continued improvement of the interoperability of machine learning, statistics, and database building and querying has fused to create this increasingly popular science: Data Mining and Knowledge Discovery. The next generation of computational theories is geared towards helping to extract insightful knowledge from even larger volumes of data at higher rates of speed. As the trend increases in popularity, a highly adaptive solution for knowledge discovery will become necessary. In this research paper, we introduce the investigation and development of 23 bit-questions for a Metaknowledge template for Big Data Processing and clustering purposes. This research aims to demonstrate the construction of this methodology and to establish its validity and the benefits it brings to Knowledge Discovery from Big Data.
[ "['Nima Bari' 'Roman Vichr' 'Kamran Kowsari' 'Simon Y. Berkovich']", "Nima Bari, Roman Vichr, Kamran Kowsari, Simon Y. Berkovich" ]
cs.GT cs.LG
null
1503.00255
null
null
http://arxiv.org/pdf/1503.00255v1
2015-03-01T11:46:35Z
2015-03-01T11:46:35Z
An Online Convex Optimization Approach to Blackwell's Approachability
The notion of approachability in repeated games with vector payoffs was introduced by Blackwell in the 1950s, along with geometric conditions for approachability and corresponding strategies that rely on computing {\em steering directions} as projections from the current average payoff vector to the (convex) target set. Recently, Abernethy, Bartlett and Hazan (2011) proposed a class of approachability algorithms that rely on the no-regret properties of Online Linear Programming for computing a suitable sequence of steering directions. This is first carried out for target sets that are convex cones, and then generalized to any convex set by embedding it in a higher-dimensional convex cone. In this paper we present a more direct formulation that relies on the support function of the set, along with suitable Online Convex Optimization algorithms, which leads to a general class of approachability algorithms. We further show that Blackwell's original algorithm and its convergence follow as a special case.
[ "Nahum Shimkin", "['Nahum Shimkin']" ]
stat.ML cs.LG stat.ME
null
1503.00269
null
null
http://arxiv.org/pdf/1503.00269v2
2015-05-10T21:36:53Z
2015-03-01T13:16:43Z
Contrastive Pessimistic Likelihood Estimation for Semi-Supervised Classification
Improvement guarantees for semi-supervised classifiers can currently only be given under restrictive conditions on the data. We propose a general way to perform semi-supervised parameter estimation for likelihood-based classifiers for which, on the full training set, the estimates are never worse than the supervised solution in terms of the log-likelihood. We argue, moreover, that we may expect these solutions to really improve upon the supervised classifier in particular cases. In a worked-out example for LDA, we take it one step further and essentially prove that its semi-supervised version is strictly better than its supervised counterpart. The two new concepts that form the core of our estimation principle are contrast and pessimism. The former refers to the fact that our objective function takes the supervised estimates into account, enabling the semi-supervised solution to explicitly control the potential improvements over this estimate. The latter refers to the fact that our estimates are conservative and therefore resilient to whatever form the true labeling of the unlabeled data takes on. Experiments demonstrate the improvements in terms of both the log-likelihood and the classification error rate on independent test sets.
[ "['Marco Loog']", "Marco Loog" ]
stat.ML cs.LG
null
1503.00323
null
null
http://arxiv.org/pdf/1503.00323v1
2015-03-01T18:30:07Z
2015-03-01T18:30:07Z
Sparse Approximation of a Kernel Mean
Kernel means are frequently used to represent probability distributions in machine learning problems. In particular, the well known kernel density estimator and the kernel mean embedding both have the form of a kernel mean. Unfortunately, kernel means are faced with scalability issues. A single point evaluation of the kernel density estimator, for example, requires a computation time linear in the training sample size. To address this challenge, we present a method to efficiently construct a sparse approximation of a kernel mean. We do so by first establishing an incoherence-based bound on the approximation error, and then noticing that, for the case of radial kernels, the bound can be minimized by solving the $k$-center problem. The outcome is a linear time construction of a sparse kernel mean, which also lends itself naturally to an automatic sparsity selection scheme. We show the computational gains of our method by looking at three problems involving kernel means: Euclidean embedding of distributions, class proportion estimation, and clustering using the mean-shift algorithm.
[ "E. Cruz Cort\\'es, C. Scott", "['E. Cruz Cortés' 'C. Scott']" ]
stat.ML cs.LG
null
1503.00332
null
null
http://arxiv.org/pdf/1503.00332v3
2015-06-05T16:11:10Z
2015-03-01T18:59:12Z
JUMP-Means: Small-Variance Asymptotics for Markov Jump Processes
Markov jump processes (MJPs) are used to model a wide range of phenomena from disease progression to RNA path folding. However, maximum likelihood estimation of parametric models leads to degenerate trajectories and inferential performance is poor in nonparametric models. We take a small-variance asymptotics (SVA) approach to overcome these limitations. We derive the small-variance asymptotics for parametric and nonparametric MJPs for both directly observed and hidden state models. In the parametric case we obtain a novel objective function which leads to non-degenerate trajectories. To derive the nonparametric version we introduce the gamma-gamma process, a novel extension to the gamma-exponential process. We propose algorithms for each of these formulations, which we call \emph{JUMP-means}. Our experiments demonstrate that JUMP-means is competitive with or outperforms widely used MJP inference approaches in terms of both speed and reconstruction accuracy.
[ "['Jonathan H. Huggins' 'Karthik Narasimhan' 'Ardavan Saeedi'\n 'Vikash K. Mansinghka']", "Jonathan H. Huggins, Karthik Narasimhan, Ardavan Saeedi, Vikash K.\n Mansinghka" ]
cs.LG
null
1503.00424
null
null
http://arxiv.org/pdf/1503.00424v2
2015-03-10T02:59:10Z
2015-03-02T06:59:06Z
Learning Mixtures of Gaussians in High Dimensions
Efficiently learning mixture of Gaussians is a fundamental problem in statistics and learning theory. Given samples coming from a random one out of k Gaussian distributions in Rn, the learning problem asks to estimate the means and the covariance matrices of these Gaussians. This learning problem arises in many areas ranging from the natural sciences to the social sciences, and has also found many machine learning applications. Unfortunately, learning mixture of Gaussians is an information theoretically hard problem: in order to learn the parameters up to a reasonable accuracy, the number of samples required is exponential in the number of Gaussian components in the worst case. In this work, we show that provided we are in high enough dimensions, the class of Gaussian mixtures is learnable in its most general form under a smoothed analysis framework, where the parameters are randomly perturbed from an adversarial starting point. In particular, given samples from a mixture of Gaussians with randomly perturbed parameters, when n > {\Omega}(k^2), we give an algorithm that learns the parameters with polynomial running time and using polynomial number of samples. The central algorithmic ideas consist of new ways to decompose the moment tensor of the Gaussian mixture by exploiting its structural properties. The symmetries of this tensor are derived from the combinatorial structure of higher order moments of Gaussian distributions (sometimes referred to as Isserlis' theorem or Wick's theorem). We also develop new tools for bounding smallest singular values of structured random matrices, which could be useful in other smoothed analysis settings.
[ "Rong Ge, Qingqing Huang, Sham M. Kakade", "['Rong Ge' 'Qingqing Huang' 'Sham M. Kakade']" ]
cs.LG
10.1145/2742548
1503.00491
null
null
http://arxiv.org/abs/1503.00491v1
2015-03-02T12:09:23Z
2015-03-02T12:09:23Z
Utility-Theoretic Ranking for Semi-Automated Text Classification
\emph{Semi-Automated Text Classification} (SATC) may be defined as the task of ranking a set $\mathcal{D}$ of automatically labelled textual documents in such a way that, if a human annotator validates (i.e., inspects and corrects where appropriate) the documents in a top-ranked portion of $\mathcal{D}$ with the goal of increasing the overall labelling accuracy of $\mathcal{D}$, the expected increase is maximized. An obvious SATC strategy is to rank $\mathcal{D}$ so that the documents that the classifier has labelled with the lowest confidence are top-ranked. In this work we show that this strategy is suboptimal. We develop new utility-theoretic ranking methods based on the notion of \emph{validation gain}, defined as the improvement in classification effectiveness that would derive by validating a given automatically labelled document. We also propose a new effectiveness measure for SATC-oriented ranking methods, based on the expected reduction in classification error brought about by partially validating a list generated by a given ranking method. We report the results of experiments showing that, with respect to the baseline method above, and according to the proposed measure, our utility-theoretic ranking methods can achieve substantially higher expected reductions in classification error.
[ "Giacomo Berardi, Andrea Esuli, Fabrizio Sebastiani", "['Giacomo Berardi' 'Andrea Esuli' 'Fabrizio Sebastiani']" ]
cs.CV cs.DS cs.LG
null
1503.00516
null
null
http://arxiv.org/pdf/1503.00516v4
2016-01-20T22:11:39Z
2015-03-02T13:20:25Z
Matrix Product State for Feature Extraction of Higher-Order Tensors
This paper introduces matrix product state (MPS) decomposition as a computational tool for extracting features of multidimensional data represented by higher-order tensors. Regardless of tensor order, MPS extracts its relevant features to the so-called core tensor of maximum order three which can be used for classification. Mainly based on a successive sequence of singular value decompositions (SVD), MPS is quite simple to implement without any recursive procedure needed for optimizing local tensors. Thus, it leads to substantial computational savings compared to other tensor feature extraction methods such as higher-order orthogonal iteration (HOOI) underlying the Tucker decomposition (TD). Benchmark results show that MPS can reduce significantly the feature space of data while achieving better classification performance compared to HOOI.
[ "Johann A. Bengua, Ho N. Phien, Hoang D. Tuan and Minh N. Do", "['Johann A. Bengua' 'Ho N. Phien' 'Hoang D. Tuan' 'Minh N. Do']" ]
cs.IT cs.LG math.IT stat.ML
null
1503.00547
null
null
http://arxiv.org/pdf/1503.00547v1
2015-03-02T14:34:48Z
2015-03-02T14:34:48Z
Recovering PCA from Hybrid-$(\ell_1,\ell_2)$ Sparse Sampling of Data Elements
This paper addresses how well we can recover a data matrix when only given a few of its elements. We present a randomized algorithm that element-wise sparsifies the data, retaining only a few of its elements. Our new algorithm independently samples the data using sampling probabilities that depend on both the squares ($\ell_2$ sampling) and absolute values ($\ell_1$ sampling) of the entries. We prove that the hybrid algorithm recovers a near-PCA reconstruction of the data from a sublinear sample-size: hybrid-($\ell_1,\ell_2$) inherits the $\ell_2$-ability to sample the important elements as well as the regularization properties of $\ell_1$ sampling, and gives strictly better performance than either $\ell_1$ or $\ell_2$ on their own. We also give a one-pass version of our algorithm and show experiments to corroborate the theory.
[ "['Abhisek Kundu' 'Petros Drineas' 'Malik Magdon-Ismail']", "Abhisek Kundu, Petros Drineas, Malik Magdon-Ismail" ]
cs.CY cs.LG
10.1109/ICDMW.2014.90
1503.00587
null
null
http://arxiv.org/abs/1503.00587v1
2015-02-24T17:48:34Z
2015-02-24T17:48:34Z
Personalising Mobile Advertising Based on Users Installed Apps
Mobile advertising is a billion pound industry that is rapidly expanding. The success of an advert is measured based on how users interact with it. In this paper we investigate whether the application of unsupervised learning and association rule mining could be used to enable personalised targeting of mobile adverts with the aim of increasing the interaction rate. Over May and June 2014 we recorded advert interactions such as tapping the advert or watching the whole advert video along with the set of apps a user has installed at the time of the interaction. Based on the apps that the users have installed we applied k-means clustering to profile the users into one of ten classes. Due to the large number of apps considered we implemented dimension reduction to reduce the app feature space by mapping the apps to their iTunes category and clustered users based on the percentage of their apps that correspond to each iTunes app category. The clustering was externally validated by investigating differences between the way the ten profiles interact with the various advert genres (lifestyle, finance and entertainment adverts). In addition association rule mining was performed to find whether the time of the day that the advert is served and the number of apps a user has installed makes certain profiles more likely to interact with the advert genres. The results showed there were clear differences in the way the profiles interact with the different advert genres, and they suggest that mobile advert targeting would increase the frequency with which users interact with an advert.
[ "['Jenna Reps' 'Uwe Aickelin' 'Jonathan Garibaldi' 'Chris Damski']", "Jenna Reps, Uwe Aickelin, Jonathan Garibaldi, Chris Damski" ]
cs.LG
null
1503.00600
null
null
http://arxiv.org/pdf/1503.00600v1
2015-03-02T16:35:02Z
2015-03-02T16:35:02Z
An $\mathcal{O}(n\log n)$ projection operator for weighted $\ell_1$-norm regularization with sum constraint
We provide a simple and efficient algorithm for the projection operator for weighted $\ell_1$-norm regularization subject to a sum constraint, together with an elementary proof. The implementation of the proposed algorithm can be downloaded from the author's homepage.
[ "Weiran Wang", "['Weiran Wang']" ]
cs.LG stat.ML
null
1503.00623
null
null
http://arxiv.org/pdf/1503.00623v2
2015-04-26T17:58:51Z
2015-03-02T17:21:23Z
Unregularized Online Learning Algorithms with General Loss Functions
In this paper, we consider unregularized online learning algorithms in a Reproducing Kernel Hilbert Spaces (RKHS). Firstly, we derive explicit convergence rates of the unregularized online learning algorithms for classification associated with a general gamma-activating loss (see Definition 1 in the paper). Our results extend and refine the results in Ying and Pontil (2008) for the least-square loss and the recent result in Bach and Moulines (2011) for the loss function with a Lipschitz-continuous gradient. Moreover, we establish a very general condition on the step sizes which guarantees the convergence of the last iterate of such algorithms. Secondly, we establish, for the first time, the convergence of the unregularized pairwise learning algorithm with a general loss function and derive explicit rates under the assumption of polynomially decaying step sizes. Concrete examples are used to illustrate our main results. The main techniques are tools from convex analysis, refined inequalities of Gaussian averages, and an induction approach.
[ "Yiming Ying and Ding-Xuan Zhou", "['Yiming Ying' 'Ding-Xuan Zhou']" ]
cs.LG cs.CV stat.ML
null
1503.00687
null
null
http://arxiv.org/pdf/1503.00687v1
2015-03-02T20:09:14Z
2015-03-02T20:09:14Z
A review of mean-shift algorithms for clustering
A natural way to characterize the cluster structure of a dataset is by finding regions containing a high density of data. This can be done in a nonparametric way with a kernel density estimate, whose modes and hence clusters can be found using mean-shift algorithms. We describe the theory and practice behind clustering based on kernel density estimates and mean-shift algorithms. We discuss the blurring and non-blurring versions of mean-shift; theoretical results about mean-shift algorithms and Gaussian mixtures; relations with scale-space theory, spectral clustering and other algorithms; extensions to tracking, to manifold and graph data, and to manifold denoising; K-modes and Laplacian K-modes algorithms; acceleration strategies for large datasets; and applications to image segmentation, manifold denoising and multivalued regression.
[ "['Miguel Á. Carreira-Perpiñán']", "Miguel \\'A. Carreira-Perpi\\~n\\'an" ]
cs.CL cs.LG stat.ML
null
1503.00693
null
null
http://arxiv.org/pdf/1503.00693v1
2015-03-02T20:23:18Z
2015-03-02T20:23:18Z
Bayesian Optimization of Text Representations
When applying machine learning to problems in NLP, there are many choices to make about how to represent input texts. These choices can have a big effect on performance, but they are often uninteresting to researchers or practitioners who simply need a module that performs well. We propose an approach to optimizing over this space of choices, formulating the problem as global optimization. We apply a sequential model-based optimization technique and show that our method makes standard linear models competitive with more sophisticated, expensive state-of-the-art methods based on latent variable models or neural networks on various topic classification and sentiment analysis problems. Our approach is a first step towards black-box NLP systems that work with raw text and do not require manual tuning.
[ "['Dani Yogatama' 'Noah A. Smith']", "Dani Yogatama and Noah A. Smith" ]
stat.ML cs.LG
10.1109/JPROC.2015.2483592
1503.00759
null
null
http://arxiv.org/abs/1503.00759v3
2015-09-28T17:40:35Z
2015-03-02T21:35:41Z
A Review of Relational Machine Learning for Knowledge Graphs
Relational machine learning studies methods for the statistical analysis of relational, or graph-structured, data. In this paper, we provide a review of how such statistical models can be "trained" on large knowledge graphs, and then used to predict new facts about the world (which is equivalent to predicting new edges in the graph). In particular, we discuss two fundamentally different kinds of statistical relational models, both of which can scale to massive datasets. The first is based on latent feature models such as tensor factorization and multiway neural networks. The second is based on mining observable patterns in the graph. We also show how to combine these latent and observable models to get improved modeling power at decreased computational cost. Finally, we discuss how such statistical models of graphs can be combined with text-based information extraction methods for automatically constructing knowledge graphs from the Web. To this end, we also discuss Google's Knowledge Vault project as an example of such combination.
[ "['Maximilian Nickel' 'Kevin Murphy' 'Volker Tresp' 'Evgeniy Gabrilovich']", "Maximilian Nickel, Kevin Murphy, Volker Tresp, Evgeniy Gabrilovich" ]
cs.LG cs.DS cs.NE stat.ML
null
1503.00778
null
null
http://arxiv.org/pdf/1503.00778v1
2015-03-02T23:02:56Z
2015-03-02T23:02:56Z
Simple, Efficient, and Neural Algorithms for Sparse Coding
Sparse coding is a basic task in many fields including signal processing, neuroscience and machine learning where the goal is to learn a basis that enables a sparse representation of a given set of data, if one exists. Its standard formulation is as a non-convex optimization problem which is solved in practice by heuristics based on alternating minimization. Recent work has resulted in several algorithms for sparse coding with provable guarantees, but somewhat surprisingly these are outperformed by the simple alternating minimization heuristics. Here we give a general framework for understanding alternating minimization which we leverage to analyze existing heuristics and to design new ones also with provable guarantees. Some of these algorithms seem implementable on simple neural architectures, which was the original motivation of Olshausen and Field (1997a) in introducing sparse coding. We also give the first efficient algorithm for sparse coding that works almost up to the information theoretic limit for sparse recovery on incoherent dictionaries. All previous algorithms that approached or surpassed this limit run in time exponential in some natural parameter. Finally, our algorithms improve upon the sample complexity of existing approaches. We believe that our analysis framework will have applications in other settings where simple iterative algorithms are used.
[ "Sanjeev Arora, Rong Ge, Tengyu Ma, Ankur Moitra", "['Sanjeev Arora' 'Rong Ge' 'Tengyu Ma' 'Ankur Moitra']" ]
cs.CL cs.AI cs.IR cs.LG
null
1503.00841
null
null
http://arxiv.org/pdf/1503.00841v1
2015-03-03T06:59:28Z
2015-03-03T06:59:28Z
Robustly Leveraging Prior Knowledge in Text Classification
Prior knowledge has been shown very useful to address many natural language processing tasks. Many approaches have been proposed to formalise a variety of knowledge, however, whether the proposed approach is robust or sensitive to the knowledge supplied to the model has rarely been discussed. In this paper, we propose three regularization terms on top of generalized expectation criteria, and conduct extensive experiments to justify the robustness of the proposed methods. Experimental results demonstrate that our proposed methods obtain remarkable improvements and are much more robust than baselines.
[ "['Biao Liu' 'Minlie Huang']", "Biao Liu, Minlie Huang" ]
cs.LG cs.DB
null
1503.00900
null
null
http://arxiv.org/pdf/1503.00900v1
2015-03-03T11:26:27Z
2015-03-03T11:26:27Z
Normalization based K means Clustering Algorithm
K-means is an effective clustering technique that separates similar data into groups based on initial cluster centroids. In this paper, a Normalization-based K-means clustering algorithm (N-K means) is proposed. The proposed N-K means algorithm normalizes the available data prior to clustering and computes the initial centroids based on weights. Experimental results demonstrate that the proposed N-K means algorithm improves on the existing K-means algorithm in terms of complexity and overall performance.
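A minimal sketch of the normalize-then-cluster pipeline described above, using scikit-learn. The paper's weight-based initial centroids are a specific contribution not reproduced here; the sketch falls back on standard k-means++ initialization, and all parameter choices are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler

def normalized_kmeans(X, k, random_state=0):
    """Illustrative pipeline: min-max normalize the features, then run K-means.

    The weight-based initial centroids of the paper are replaced here by
    plain k-means++ initialization.
    """
    X_norm = MinMaxScaler().fit_transform(np.asarray(X, dtype=float))
    km = KMeans(n_clusters=k, init="k-means++", n_init=10,
                random_state=random_state).fit(X_norm)
    return km.labels_, km.cluster_centers_
```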
[ "Deepali Virmani, Shweta Taneja, Geetika Malhotra", "['Deepali Virmani' 'Shweta Taneja' 'Geetika Malhotra']" ]
cs.LG
null
1503.01002
null
null
http://arxiv.org/pdf/1503.01002v1
2015-03-03T16:40:17Z
2015-03-03T16:40:17Z
Projection onto the capped simplex
We provide a simple and efficient algorithm for computing the Euclidean projection of a point onto the capped simplex---a simplex with an additional uniform bound on each coordinate---together with an elementary proof. Both the MATLAB and C++ implementations of the proposed algorithm can be downloaded at https://eng.ucmerced.edu/people/wwang5.
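For intuition, the projection onto the capped simplex {x : 0 <= x_i <= cap, sum_i x_i = s} has the form x_i = clip(y_i - tau, 0, cap) for a scalar multiplier tau, so a simple (if not exact) way to compute it is to bisect on tau. The sketch below does exactly that; it is a generic illustration, not the authors' algorithm or their released MATLAB/C++ code.

```python
import numpy as np

def project_capped_simplex(y, s=1.0, cap=1.0, tol=1e-10, max_iter=100):
    """Euclidean projection of y onto {x : 0 <= x_i <= cap, sum(x) = s}.

    Generic bisection on the KKT multiplier tau, where the projection has
    the form x_i = clip(y_i - tau, 0, cap).  A simple sketch, not the exact
    finite-step algorithm of the paper.
    """
    y = np.asarray(y, dtype=float)
    assert 0.0 <= s <= cap * y.size, "target sum must be feasible"
    lo, hi = y.min() - cap, y.max()      # sum is n*cap at lo and 0 at hi
    for _ in range(max_iter):
        tau = 0.5 * (lo + hi)
        total = np.clip(y - tau, 0.0, cap).sum()
        if abs(total - s) < tol:
            break
        if total > s:                    # sum too large -> increase tau
            lo = tau
        else:
            hi = tau
    return np.clip(y - tau, 0.0, cap)
```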
[ "['Weiran Wang' 'Canyi Lu']", "Weiran Wang, Canyi Lu" ]
cs.NE cs.LG
null
1503.01007
null
null
http://arxiv.org/pdf/1503.01007v4
2015-06-01T20:37:55Z
2015-03-03T16:50:28Z
Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets
Despite the recent achievements in machine learning, we are still very far from achieving real artificial intelligence. In this paper, we discuss the limitations of standard deep learning approaches and show that some of these limitations can be overcome by learning how to grow the complexity of a model in a structured way. Specifically, we study the simplest sequence prediction problems that are beyond the scope of what is learnable with standard recurrent networks: algorithmically generated sequences which can only be learned by models that have the capacity to count and to memorize sequences. We show that some basic algorithms can be learned from sequential data using a recurrent network associated with a trainable memory.
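A rough numpy sketch of the kind of differentiable stack update such models use: at each step the network softly chooses between PUSH, POP and NO-OP, and the continuous stack is shifted accordingly. The parameterization, weight names, and the treatment of the stack bottom are illustrative simplifications, not the paper's exact model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def stack_rnn_step(x, h_prev, stack_prev, params):
    """One step of a simplified stack-augmented recurrent cell.

    x          : input vector, shape (d,)
    h_prev     : previous hidden state, shape (m,)
    stack_prev : continuous stack, shape (depth,), element 0 is the top
    params     : dict of illustrative weights:
                 U (m, d), R (m, m), P (m,), A (3, m), D (m,)
    """
    U, R, P, A, D = (params[k] for k in "URPAD")
    # Hidden state also reads the top of the stack.
    h = sigmoid(U @ x + R @ h_prev + P * stack_prev[0])
    # Soft choice between the three stack actions.
    a = softmax(A @ h)                    # a = [p_push, p_pop, p_noop]
    push_val = sigmoid(D @ h)             # value written on a push
    s = np.empty_like(stack_prev)
    # Top element: new value (push), element below (pop), or unchanged (no-op).
    s[0] = a[0] * push_val + a[1] * stack_prev[1] + a[2] * stack_prev[0]
    # Interior: shift down on push, shift up on pop, keep on no-op.
    s[1:-1] = (a[0] * stack_prev[:-2] + a[1] * stack_prev[2:]
               + a[2] * stack_prev[1:-1])
    # Bottom element: pops read an implicit zero from below the stack.
    s[-1] = a[0] * stack_prev[-2] + a[2] * stack_prev[-1]
    return h, s
```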
[ "Armand Joulin, Tomas Mikolov", "['Armand Joulin' 'Tomas Mikolov']" ]
cs.LG stat.ML
null
1503.01057
null
null
http://arxiv.org/pdf/1503.01057v1
2015-03-03T19:06:17Z
2015-03-03T19:06:17Z
Kernel Interpolation for Scalable Structured Gaussian Processes (KISS-GP)
We introduce a new structured kernel interpolation (SKI) framework, which generalises and unifies inducing point methods for scalable Gaussian processes (GPs). SKI methods produce kernel approximations for fast computations through kernel interpolation. The SKI framework clarifies how the quality of an inducing point approach depends on the number of inducing (aka interpolation) points, interpolation strategy, and GP covariance kernel. SKI also provides a mechanism to create new scalable kernel methods, through choosing different kernel interpolation strategies. Using SKI, with local cubic kernel interpolation, we introduce KISS-GP, which is 1) more scalable than inducing point alternatives, 2) naturally enables Kronecker and Toeplitz algebra for substantial additional gains in scalability, without requiring any grid data, and 3) can be used for fast and expressive kernel learning. KISS-GP costs O(n) time and storage for GP inference. We evaluate KISS-GP for kernel matrix approximation, kernel learning, and natural sound modelling.
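The core SKI approximation can be illustrated in a few lines: place inducing points on a regular grid U, build an interpolation matrix W with only a few nonzeros per row (two for linear interpolation; the paper uses local cubic), and approximate K(X,X) by W K(U,U) W^T. The 1-D sketch below uses an assumed RBF kernel and stores W densely for simplicity.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def linear_interp_weights(x, grid):
    """Row-stochastic matrix W (two nonzeros per row, stored densely) such
    that f(x) ~= W f(grid) under linear interpolation."""
    n, m = len(x), len(grid)
    W = np.zeros((n, m))
    idx = np.clip(np.searchsorted(grid, x) - 1, 0, m - 2)
    left, right = grid[idx], grid[idx + 1]
    t = (x - left) / (right - left)
    W[np.arange(n), idx] = 1.0 - t
    W[np.arange(n), idx + 1] = t
    return W

# Structured kernel interpolation: K(X, X) ~= W K(U, U) W^T,
# where U is a regular grid of inducing points.
x = np.sort(np.random.rand(500))
grid = np.linspace(0.0, 1.0, 50)
W = linear_interp_weights(x, grid)
K_exact = rbf_kernel(x, x, lengthscale=0.2)
K_ski = W @ rbf_kernel(grid, grid, lengthscale=0.2) @ W.T
print("max abs approximation error:", np.abs(K_exact - K_ski).max())
```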
[ "Andrew Gordon Wilson, Hannes Nickisch", "['Andrew Gordon Wilson' 'Hannes Nickisch']" ]
cs.AI cs.LG stat.ML
null
1503.01158
null
null
http://arxiv.org/pdf/1503.01158v2
2016-08-26T06:26:36Z
2015-03-03T23:07:37Z
A Meta-Analysis of the Anomaly Detection Problem
This article provides a thorough meta-analysis of the anomaly detection problem. To accomplish this we first identify approaches to benchmarking anomaly detection algorithms across the literature and produce a large corpus of anomaly detection benchmarks that vary in their construction across several dimensions we deem important to real-world applications: (a) point difficulty, (b) relative frequency of anomalies, (c) clusteredness of anomalies, and (d) relevance of features. We apply a representative set of anomaly detection algorithms to this corpus, yielding a very large collection of experimental results. We analyze these results to understand many phenomena observed in previous work. First we observe the effects of experimental design on experimental results. Second, results are evaluated with two metrics, ROC Area Under the Curve and Average Precision. We employ statistical hypothesis testing to demonstrate the value (or lack thereof) of our benchmarks. We then offer several approaches to summarizing our experimental results, drawing several conclusions about the impact of our methodology as well as the strengths and weaknesses of some algorithms. Last, we compare results against a trivial solution as an alternate means of normalizing the reported performance of algorithms. The intended contributions of this article are many; in addition to providing a large publicly-available corpus of anomaly detection benchmarks, we provide an ontology for describing anomaly detection contexts, a methodology for controlling various aspects of benchmark creation, guidelines for future experimental design and a discussion of the many potential pitfalls of trying to measure success in this field.
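A small helper showing how a single benchmark run might be scored with the two metrics used in the article, plus a comparison against a trivial scorer (whose expected average precision is roughly the anomaly frequency). Function and field names are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def evaluate_detector(scores, labels):
    """Score one benchmark run.

    scores : anomaly scores, higher means 'more anomalous'
    labels : 1 for anomaly, 0 for nominal
    """
    auc = roc_auc_score(labels, scores)
    ap = average_precision_score(labels, scores)
    trivial_ap = float(np.mean(labels))   # expected AP of a random ranking
    return {"roc_auc": auc,
            "avg_precision": ap,
            "ap_lift_over_trivial": ap / trivial_ap}
```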
[ "['Andrew Emmott' 'Shubhomoy Das' 'Thomas Dietterich' 'Alan Fern'\n 'Weng-Keen Wong']", "Andrew Emmott, Shubhomoy Das, Thomas Dietterich, Alan Fern and\n Weng-Keen Wong" ]
stat.ML cs.LG
null
1503.01161
null
null
http://arxiv.org/pdf/1503.01161v1
2015-03-03T23:25:55Z
2015-03-03T23:25:55Z
The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification
We present the Bayesian Case Model (BCM), a general framework for Bayesian case-based reasoning (CBR) and prototype classification and clustering. BCM brings the intuitive power of CBR to a Bayesian generative framework. The BCM learns prototypes, the "quintessential" observations that best represent clusters in a dataset, by performing joint inference on cluster labels, prototypes and important features. Simultaneously, BCM pursues sparsity by learning subspaces, the sets of features that play important roles in the characterization of the prototypes. The prototype and subspace representation provides quantitative benefits in interpretability while preserving classification accuracy. Human subject experiments verify statistically significant improvements to participants' understanding when using explanations produced by BCM, compared to those given by prior art.
[ "Been Kim, Cynthia Rudin and Julie Shah", "['Been Kim' 'Cynthia Rudin' 'Julie Shah']" ]
stat.ML cs.LG
null
1503.01183
null
null
http://arxiv.org/pdf/1503.01183v2
2015-03-05T22:50:24Z
2015-03-04T01:08:17Z
A General Hybrid Clustering Technique
Here, we propose a clustering technique for general clustering problems including those that have non-convex clusters. For a given desired number of clusters $K$, we use three stages to find a clustering. The first stage uses a hybrid clustering technique to produce a series of clusterings of various sizes (randomly selected). The key steps are to find a $K$-means clustering using $K_\ell$ clusters where $K_\ell \gg K$ and then to join these small clusters using single linkage clustering. The second stage stabilizes the result of stage one by reclustering via the `membership matrix' under Hamming distance to generate a dendrogram. The third stage is to cut the dendrogram to get $K^*$ clusters where $K^* \geq K$ and then prune back to $K$ to give a final clustering. A variant of our technique also gives a reasonable estimate for $K_T$, the true number of clusters. We provide a series of arguments to justify the steps in the stages of our methods and we provide numerous examples involving real and simulated data to compare our technique with other related techniques.
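The first stage can be sketched with scikit-learn: over-cluster with K-means using K_ell >> K clusters, then join the small clusters with single-linkage clustering. For brevity the join below operates on the centroids rather than on all cluster members, and the stabilization and pruning stages are omitted; everything here is an illustrative simplification.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

def hybrid_stage_one(X, K, K_ell=200, random_state=0):
    """Over-cluster with K-means (K_ell small clusters), then join the small
    clusters into K groups with single-linkage clustering on their centroids.
    """
    km = KMeans(n_clusters=K_ell, n_init=10, random_state=random_state).fit(X)
    centroids = km.cluster_centers_
    join = AgglomerativeClustering(n_clusters=K, linkage="single").fit(centroids)
    # Map each point to the group of its small cluster's centroid.
    return join.labels_[km.labels_]
```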
[ "['Saeid Amiri' 'Bertrand Clarke' 'Jennifer Clarke' 'Hoyt A. Koepke']", "Saeid Amiri, Bertrand Clarke, Jennifer Clarke and Hoyt A. Koepke" ]
cs.CL cs.LG stat.ML
null
1503.01190
null
null
http://arxiv.org/pdf/1503.01190v1
2015-03-04T01:34:36Z
2015-03-04T01:34:36Z
Statistical modality tagging from rule-based annotations and crowdsourcing
We explore training an automatic modality tagger. Modality is the attitude that a speaker might have toward an event or state. One of the main hurdles for training a linguistic tagger is gathering training data. This is particularly problematic for training a tagger for modality because modality triggers are sparse for the overwhelming majority of sentences. We investigate an approach to automatically training a modality tagger where we first gathered sentences based on a high-recall simple rule-based modality tagger and then provided these sentences to Mechanical Turk annotators for further annotation. We used the resulting set of training data to train a precise modality tagger using a multi-class SVM that delivers good performance.
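An illustrative version of the final supervised step, a multi-class linear SVM over the annotated sentences, using plain TF-IDF features in place of the richer features one would expect in the actual system. The example sentences and modality labels below are toy data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data standing in for the crowd-annotated sentences.
sentences = ["He must finish the report today.",
             "She may attend the meeting.",
             "They want to leave early."]
modality_labels = ["obligation", "permission", "desire"]

# Multi-class linear SVM over TF-IDF word and bigram features.
tagger = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                       LinearSVC(C=1.0))
tagger.fit(sentences, modality_labels)
print(tagger.predict(["You must submit the form."]))
```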
[ "Vinodkumar Prabhakaran, Michael Bloodgood, Mona Diab, Bonnie Dorr,\n Lori Levin, Christine D. Piatko, Owen Rambow and Benjamin Van Durme", "['Vinodkumar Prabhakaran' 'Michael Bloodgood' 'Mona Diab' 'Bonnie Dorr'\n 'Lori Levin' 'Christine D. Piatko' 'Owen Rambow' 'Benjamin Van Durme']" ]
cs.LG cs.DS stat.ML
null
1503.01212
null
null
http://arxiv.org/pdf/1503.01212v2
2015-06-12T00:04:05Z
2015-03-04T04:05:35Z
Hierarchies of Relaxations for Online Prediction Problems with Evolving Constraints
We study online prediction where regret of the algorithm is measured against a benchmark defined via evolving constraints. This framework captures online prediction on graphs, as well as other prediction problems with combinatorial structure. A key aspect here is that finding the optimal benchmark predictor (even in hindsight, given all the data) might be computationally hard due to the combinatorial nature of the constraints. Despite this, we provide polynomial-time \emph{prediction} algorithms that achieve low regret against combinatorial benchmark sets. We do so by building improper learning algorithms based on two ideas that work together. The first is to alleviate part of the computational burden through random playout, and the second is to employ Lasserre semidefinite hierarchies to approximate the resulting integer program. Interestingly, for our prediction algorithms, we only need to compute the values of the semidefinite programs and not the rounded solutions. However, the integrality gap for Lasserre hierarchy \emph{does} enter the generic regret bound in terms of Rademacher complexity of the benchmark set. This establishes a trade-off between the computation time and the regret bound of the algorithm.
[ "['Alexander Rakhlin' 'Karthik Sridharan']", "Alexander Rakhlin, Karthik Sridharan" ]
cs.LG cs.CV stat.ML
null
1503.01228
null
null
http://arxiv.org/pdf/1503.01228v1
2015-03-04T05:41:29Z
2015-03-04T05:41:29Z
Bethe Learning of Conditional Random Fields via MAP Decoding
Many machine learning tasks can be formulated in terms of predicting structured outputs. In frameworks such as the structured support vector machine (SVM-Struct) and the structured perceptron, discriminative functions are learned by iteratively applying efficient maximum a posteriori (MAP) decoding. However, maximum likelihood estimation (MLE) of probabilistic models over these same structured spaces requires computing partition functions, which is generally intractable. This paper presents a method for learning discrete exponential family models using the Bethe approximation to the MLE. Remarkably, this problem also reduces to iterative (MAP) decoding. This connection emerges by combining the Bethe approximation with a Frank-Wolfe (FW) algorithm on a convex dual objective which circumvents the intractable partition function. The result is a new single loop algorithm MLE-Struct, which is substantially more efficient than previous double-loop methods for approximate maximum likelihood estimation. Our algorithm outperforms existing methods in experiments involving image segmentation, matching problems from vision, and a new dataset of university roommate assignments.
[ "['Kui Tang' 'Nicholas Ruozzi' 'David Belanger' 'Tony Jebara']", "Kui Tang, Nicholas Ruozzi, David Belanger, Tony Jebara" ]
cs.LG
null
1503.01239
null
null
http://arxiv.org/pdf/1503.01239v4
2018-09-09T14:13:12Z
2015-03-04T06:47:16Z
Joint Active Learning with Feature Selection via CUR Matrix Decomposition
This paper presents an unsupervised learning approach for simultaneous sample and feature selection, in contrast to existing works that mainly tackle these two problems separately. In fact, the two tasks are often interleaved with each other: noisy and high-dimensional features will have an adverse effect on sample selection, while informative or representative samples will be beneficial to feature selection. Specifically, we propose a framework to jointly conduct active learning and feature selection based on the CUR matrix decomposition. From the data reconstruction perspective, both the selected samples and features can best approximate the original dataset respectively, such that the selected samples characterized by the features are highly representative. In particular, our method runs in one shot, without the procedure of iterative sample selection for progressive labeling. Thus, our model is especially suitable when there are few labeled samples or even in the absence of supervision, which is a particular challenge for existing methods. As the joint learning problem is NP-hard, the proposed formulation involves a convex but non-smooth optimization problem. We solve it efficiently by an iterative algorithm, and prove its global convergence. Experimental results on publicly available datasets corroborate the efficacy of our method compared with the state-of-the-art.
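For a sense of what CUR-style selection does, the sketch below picks rows (samples) and columns (features) by leverage scores from a truncated SVD. This is a standard heuristic shown only to illustrate the kind of selection involved; the paper's method instead solves a joint convex (non-smooth) program and is not reproduced here.

```python
import numpy as np

def cur_select(X, n_rows, n_cols, rank=10):
    """Pick representative samples (rows) and features (columns) of X using
    leverage scores from a truncated SVD -- a generic CUR-style heuristic.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = min(rank, len(s))
    row_lev = (U[:, :k] ** 2).sum(axis=1)       # row leverage scores
    col_lev = (Vt[:k, :] ** 2).sum(axis=0)      # column leverage scores
    rows = np.argsort(row_lev)[::-1][:n_rows]   # most representative samples
    cols = np.argsort(col_lev)[::-1][:n_cols]   # most informative features
    return rows, cols
```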
[ "['Changsheng Li' 'Xiangfeng Wang' 'Weishan Dong' 'Junchi Yan'\n 'Qingshan Liu' 'Hongyuan Zha']", "Changsheng Li and Xiangfeng Wang and Weishan Dong and Junchi Yan and\n Qingshan Liu and Hongyuan Zha" ]
stat.ML cs.CL cs.LG
null
1503.01397
null
null
http://arxiv.org/pdf/1503.01397v3
2016-11-28T18:44:53Z
2015-03-04T17:36:49Z
Bethe Projections for Non-Local Inference
Many inference problems in structured prediction are naturally solved by augmenting a tractable dependency structure with complex, non-local auxiliary objectives. This includes the mean field family of variational inference algorithms, soft- or hard-constrained inference using Lagrangian relaxation or linear programming, collective graphical models, and forms of semi-supervised learning such as posterior regularization. We present a method to discriminatively learn broad families of inference objectives, capturing powerful non-local statistics of the latent variables, while maintaining tractable and provably fast inference using non-Euclidean projected gradient descent with a distance-generating function given by the Bethe entropy. We demonstrate the performance and flexibility of our method by (1) extracting structured citations from research papers by learning soft global constraints, (2) achieving state-of-the-art results on a widely-used handwriting recognition task using a novel learned non-convex inference procedure, and (3) providing a fast and highly scalable algorithm for the challenging problem of inference in a collective graphical model applied to bird migration.
[ "['Luke Vilnis' 'David Belanger' 'Daniel Sheldon' 'Andrew McCallum']", "Luke Vilnis and David Belanger and Daniel Sheldon and Andrew McCallum" ]
cs.LG
null
1503.01428
null
null
http://arxiv.org/pdf/1503.01428v3
2015-12-22T17:18:32Z
2015-03-04T19:23:55Z
Probabilistic Label Relation Graphs with Ising Models
We consider classification problems in which the label space has structure. A common example is hierarchical label spaces, corresponding to the case where one label subsumes another (e.g., animal subsumes dog). But labels can also be mutually exclusive (e.g., dog vs cat) or unrelated (e.g., furry, carnivore). To jointly model hierarchy and exclusion relations, the notion of a HEX (hierarchy and exclusion) graph was introduced in [7]. This combined a conditional random field (CRF) with a deep neural network (DNN), resulting in state of the art results when applied to visual object classification problems where the training labels were drawn from different levels of the ImageNet hierarchy (e.g., an image might be labeled with the basic level category "dog", rather than the more specific label "husky"). In this paper, we extend the HEX model to allow for soft or probabilistic relations between labels, which is useful when there is uncertainty about the relationship between two labels (e.g., an antelope is "sort of" furry, but not to the same degree as a grizzly bear). We call our new model pHEX, for probabilistic HEX. We show that the pHEX graph can be converted to an Ising model, which allows us to use existing off-the-shelf inference methods (in contrast to the HEX method, which needed specialized inference algorithms). Experimental results show significant improvements in a number of large-scale visual object classification tasks, outperforming the previous HEX model.
[ "Nan Ding and Jia Deng and Kevin Murphy and Hartmut Neven", "['Nan Ding' 'Jia Deng' 'Kevin Murphy' 'Hartmut Neven']" ]
cs.LG cs.CG stat.ML
null
1503.01436
null
null
http://arxiv.org/pdf/1503.01436v7
2016-02-11T05:35:31Z
2015-03-04T19:51:19Z
Class Probability Estimation via Differential Geometric Regularization
We study the problem of supervised learning for both binary and multiclass classification from a unified geometric perspective. In particular, we propose a geometric regularization technique to find the submanifold corresponding to a robust estimator of the class probability $P(y|\pmb{x})$. The regularization term measures the volume of this submanifold, based on the intuition that overfitting produces rapid local oscillations and hence large volume of the estimator. This technique can be applied to regularize any classification function that satisfies two requirements: firstly, an estimator of the class probability can be obtained; secondly, first and second derivatives of the class probability estimator can be calculated. In experiments, we apply our regularization technique to standard loss functions for classification; our RBF-based implementation compares favorably to widely used regularization methods for both binary and multiclass classification.
[ "['Qinxun Bai' 'Steven Rosenberg' 'Zheng Wu' 'Stan Sclaroff']", "Qinxun Bai, Steven Rosenberg, Zheng Wu, Stan Sclaroff" ]