categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | sequence
---|---|---|---|---|---|---|---|---|---|---|
cs.LG | null | 1009.3346 | null | null | http://arxiv.org/pdf/1009.3346v1 | 2010-09-17T06:47:25Z | 2010-09-17T06:47:25Z | Conditional Random Fields and Support Vector Machines: A Hybrid Approach | We propose a novel hybrid loss for multiclass and structured prediction
problems that is a convex combination of log loss for Conditional Random Fields
(CRFs) and a multiclass hinge loss for Support Vector Machines (SVMs). We
provide a sufficient condition for when the hybrid loss is Fisher consistent
for classification. This condition depends on a measure of dominance between
labels - specifically, the gap in per-observation probabilities between the
most likely labels. We also prove Fisher consistency is necessary for
parametric consistency when learning models such as CRFs.
We demonstrate empirically that the hybrid loss typically performs at least
as well as - and often better than - both of its constituent losses on a
variety of tasks. In doing so we also provide an empirical comparison of the
efficacy of probabilistic and margin-based approaches to multiclass and
structured prediction, and the effects of label dominance on these results.
| [
"Qinfeng Shi, Mark D. Reid, Tiberio Caetano",
"['Qinfeng Shi' 'Mark D. Reid' 'Tiberio Caetano']"
] |
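The hybrid loss above is simply a convex combination of the two constituent losses. A minimal Python sketch for the flat multiclass case (the function name and the Crammer-Singer form of the hinge are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def hybrid_loss(scores, y, alpha=0.5):
    """Convex combination of CRF log loss and multiclass hinge loss.

    scores : (K,) array of per-class scores f_k(x); y : true label index.
    alpha  : mixing weight in [0, 1] (alpha=1 -> pure log loss, 0 -> pure hinge).
    """
    # CRF-style log loss: -log p(y|x) under a softmax over the scores
    log_loss = np.logaddexp.reduce(scores) - scores[y]
    # Crammer-Singer multiclass hinge: largest margin violation by a wrong class
    margins = 1.0 + scores - scores[y]
    margins[y] = 0.0
    hinge = max(0.0, margins.max())
    return alpha * log_loss + (1.0 - alpha) * hinge

print(hybrid_loss(np.array([1.2, 0.3, -0.5]), y=0, alpha=0.5))
```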
cs.LG math.OC | null | 1009.3515 | null | null | http://arxiv.org/pdf/1009.3515v2 | 2010-10-26T21:04:38Z | 2010-09-17T21:29:09Z | Safe Feature Elimination in Sparse Supervised Learning | We investigate fast methods that allow one to quickly eliminate variables
(features) in supervised learning problems involving a convex loss function and
an $l_1$-norm penalty, leading to a potentially substantial reduction in the
number of variables prior to running the supervised learning algorithm. The
methods are not heuristic: they only eliminate features that are {\em
guaranteed} to be absent after solving the learning problem. Our framework
applies to a large class of problems, including support vector machine
classification, logistic regression and least-squares.
The complexity of the feature elimination step is negligible compared to the
typical computational effort involved in the sparse supervised learning
problem: it grows linearly with the number of features times the number of
examples, with a much lower count when the data is sparse. We apply our method
to data sets arising in text classification and observe a dramatic reduction in
dimensionality, and hence in the computational effort required to solve the
learning problem, especially when very sparse classifiers are sought. Our
method allows us to immediately extend the scope of existing algorithms,
enabling them to run on data sets of sizes that were out of their reach before.
| [
"['Laurent El Ghaoui' 'Vivian Viallon' 'Tarek Rabbani']",
"Laurent El Ghaoui and Vivian Viallon and Tarek Rabbani"
] |
cs.LG cs.CV cs.NE | null | 1009.3589 | null | null | http://arxiv.org/pdf/1009.3589v1 | 2010-09-18T22:11:05Z | 2010-09-18T22:11:05Z | Deep Self-Taught Learning for Handwritten Character Recognition | Recent theoretical and empirical work in statistical machine learning has
demonstrated the importance of learning algorithms for deep architectures,
i.e., function classes obtained by composing multiple non-linear
transformations. Self-taught learning (exploiting unlabeled examples or
examples from other distributions) has already been applied to deep learners,
but mostly to show the advantage of unlabeled examples. Here we explore the
advantage brought by {\em out-of-distribution examples}. For this purpose we
developed a powerful generator of stochastic variations and noise processes for
character images, including not only affine transformations but also slant,
local elastic deformations, changes in thickness, background images, grey level
changes, contrast, occlusion, and various types of noise. The
out-of-distribution examples are obtained from these highly distorted images or
by including examples of object classes different from those in the target test
set. We show that {\em deep learners benefit more from out-of-distribution
examples than a corresponding shallow learner}, at least in the area of
handwritten character recognition. In fact, we show that they beat previously
published results and reach human-level performance on both handwritten digit
classification and 62-class handwritten character recognition.
| [
"['Frédéric Bastien' 'Yoshua Bengio' 'Arnaud Bergeron'\n 'Nicolas Boulanger-Lewandowski' 'Thomas Breuel' 'Youssouf Chherawala'\n 'Moustapha Cisse' 'Myriam Côté' 'Dumitru Erhan' 'Jeremy Eustache'\n 'Xavier Glorot' 'Xavier Muller' 'Sylvain Pannetier Lebeuf'\n 'Razvan Pascanu' 'Salah Rifai' 'Francois Savard' 'Guillaume Sicard']",
"Fr\\'ed\\'eric Bastien and Yoshua Bengio and Arnaud Bergeron and Nicolas\n Boulanger-Lewandowski and Thomas Breuel and Youssouf Chherawala and Moustapha\n Cisse and Myriam C\\^ot\\'e and Dumitru Erhan and Jeremy Eustache and Xavier\n Glorot and Xavier Muller and Sylvain Pannetier Lebeuf and Razvan Pascanu and\n Salah Rifai and Francois Savard and Guillaume Sicard"
] |
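A minimal sketch of a few of the distortion types listed in the abstract (slant/shear, local elastic deformation, and additive noise), using standard scipy.ndimage tools; the parameters and their composition are illustrative assumptions, not the paper's generator:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def distort(img, rng, slant=0.3, alpha=8.0, sigma=3.0, noise=0.1):
    """Apply a slant (shear), an elastic deformation, and pixel noise."""
    h, w = img.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # shear: shift sampling columns proportionally to the row index
    xs_shear = xs + slant * (ys - h / 2.0)
    # elastic deformation: smooth random displacement fields (Simard et al. style)
    dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    out = map_coordinates(img, [ys + dy, xs_shear + dx], order=1, mode="constant")
    return np.clip(out + rng.normal(0, noise, (h, w)), 0.0, 1.0)

rng = np.random.default_rng(0)
sample = distort(np.eye(28), rng)  # toy 28x28 "character"
```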
cs.LG | 10.1109/TSMCB.2011.2163392 | 1009.3604 | null | null | http://arxiv.org/abs/1009.3604v5 | 2012-10-13T11:09:48Z | 2010-09-19T03:54:12Z | Geometric Decision Tree | In this paper we present a new algorithm for learning oblique decision trees.
Most of the current decision tree algorithms rely on impurity measures to
assess the goodness of hyperplanes at each node while learning a decision tree
in a top-down fashion. These impurity measures do not properly capture the
geometric structures in the data. Motivated by this, our algorithm uses a
strategy to assess the hyperplanes in such a way that the geometric structure
in the data is taken into account. At each node of the decision tree, we find
the clustering hyperplanes for both the classes and use their angle bisectors
as the split rule at that node. We show through empirical studies that this
idea leads to small decision trees and better performance. We also present
some analysis to show that the angle bisectors of the clustering hyperplanes
that we use as the split rules at each node are solutions of an interesting
optimization problem, and hence argue that this is a principled method of
learning a decision tree.
| [
"['Naresh Manwani' 'P. S. Sastry']",
"Naresh Manwani and P. S. Sastry"
] |
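The split rule described above can be computed in closed form: the angle bisectors of two hyperplanes are the loci of points equidistant from both. A small sketch relying on that standard geometric fact (the function name is ours):

```python
import numpy as np

def angle_bisectors(w1, b1, w2, b2):
    """Return the two angle-bisector hyperplanes of w1.x+b1=0 and w2.x+b2=0.

    Points on a bisector are equidistant from both hyperplanes, so the
    bisectors are (w1/||w1|| +/- w2/||w2||) . x + (b1/||w1|| +/- b2/||w2||) = 0.
    """
    n1, n2 = np.linalg.norm(w1), np.linalg.norm(w2)
    u1, u2 = w1 / n1, w2 / n2
    c1, c2 = b1 / n1, b2 / n2
    return (u1 + u2, c1 + c2), (u1 - u2, c1 - c2)

(wp, bp), (wm, bm) = angle_bisectors(np.array([1.0, 0.0]), 0.0,
                                     np.array([0.0, 1.0]), 0.0)
print(wp, bp)  # one bisector of the x- and y-axes: direction x + y = 0
```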
cs.LG | null | 1009.3613 | null | null | http://arxiv.org/pdf/1009.3613v5 | 2013-08-28T03:03:18Z | 2010-09-19T07:26:37Z | On the Doubt about Margin Explanation of Boosting | Margin theory provides one of the most popular explanations to the success of
\texttt{AdaBoost}, where the central point lies in the recognition that
\textit{margin} is the key for characterizing the performance of
\texttt{AdaBoost}. This theory has been very influential, e.g., it has been
used to argue that \texttt{AdaBoost} usually does not overfit since it tends to
enlarge the margin even after the training error reaches zero. Previously the
\textit{minimum margin bound} was established for \texttt{AdaBoost}; however,
\cite{Breiman1999} pointed out that maximizing the minimum margin does not
necessarily lead to better generalization. Later, \cite{Reyzin:Schapire2006}
emphasized that the margin distribution, rather than the minimum margin, is
crucial to the performance of \texttt{AdaBoost}. In this paper, we first
present the \textit{$k$th margin bound} and further study its relationship to
previous work such as the minimum margin bound and the Emargin bound. Then, we
improve the
previous empirical Bernstein bounds
\citep{Maurer:Pontil2009,Audibert:Munos:Szepesvari2009}, and based on such
findings, we defend the margin-based explanation against Breiman's doubts by
proving a new generalization error bound that considers exactly the same
factors as \cite{Schapire:Freund:Bartlett:Lee1998} but is sharper than
\cite{Breiman1999}'s minimum margin bound. By incorporating factors such as
average margin and variance, we present a generalization error bound that is
closely related to the whole margin distribution. We also provide margin
distribution bounds for the generalization error of voting classifiers in
finite VC-dimension space.
| [
"['Wei Gao' 'Zhi-Hua Zhou']",
"Wei Gao, Zhi-Hua Zhou"
] |
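For reference, all of these bounds are stated in terms of the normalized voting margin, y * sum_t alpha_t h_t(x) / sum_t |alpha_t|. A small sketch computing the margin distribution of a given ensemble (the synthetic data is illustrative only):

```python
import numpy as np

def voting_margins(H, alphas, y):
    """Normalized voting margins of a weighted ensemble.

    H      : (T, n) matrix of base-learner predictions in {-1, +1}
    alphas : (T,) nonnegative voting weights
    y      : (n,) true labels in {-1, +1}
    Returns margins in [-1, 1]; the minimum margin and statistics of the
    whole distribution (mean, variance) are the quantities the bounds
    discussed above are built from.
    """
    f = alphas @ H / np.abs(alphas).sum()
    return y * f

rng = np.random.default_rng(0)
H = rng.choice([-1, 1], size=(50, 200))
margins = voting_margins(H, rng.random(50), rng.choice([-1, 1], size=200))
print(margins.min(), margins.mean(), margins.var())
```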
cs.LG | null | 1009.3702 | null | null | http://arxiv.org/pdf/1009.3702v1 | 2010-09-20T06:35:11Z | 2010-09-20T06:35:11Z | Totally Corrective Multiclass Boosting with Binary Weak Learners | In this work, we propose a new optimization framework for multiclass boosting
learning. In the literature, AdaBoost.MO and AdaBoost.ECC are two successful
multiclass boosting algorithms that can use binary weak learners.
We explicitly derive these two algorithms' Lagrange dual problems based on
their regularized loss functions. We show that the Lagrange dual formulations
enable us to design totally-corrective multiclass algorithms by using the
primal-dual optimization technique. Experiments on benchmark data sets suggest
that our multiclass boosting can achieve generalization capability comparable
to the state of the art, while converging much faster than stage-wise
gradient descent boosting. In other words, the new totally corrective
algorithms can maximize the margin more aggressively.
| [
"Zhihui Hao, Chunhua Shen, Nick Barnes, and Bo Wang",
"['Zhihui Hao' 'Chunhua Shen' 'Nick Barnes' 'Bo Wang']"
] |
cs.SE cs.CR cs.LG | 10.4204/EPTCS.35.2 | 1009.3711 | null | null | http://arxiv.org/abs/1009.3711v1 | 2010-09-20T07:19:27Z | 2010-09-20T07:19:27Z | Structural Learning of Attack Vectors for Generating Mutated XSS Attacks | Web applications suffer from cross-site scripting (XSS) attacks that
result from incomplete or incorrect input sanitization. Learning the
structure of attack vectors could enrich the variety of manifestations in
generated XSS attacks. In this study, we focus on generating more threatening
XSS attacks for the state-of-the-art detection approaches that can find
potential XSS vulnerabilities in Web applications, and propose a mechanism for
structural learning of attack vectors with the aim of generating mutated XSS
attacks in a fully automatic way. Mutated XSS attack generation depends on the
analysis of attack vectors and the structural learning mechanism. For the
kernel of the learning mechanism, we use a Hidden Markov model (HMM) as the
structure of the attack vector model to capture the implicit manner of the
attack vector, and this manner benefits from the syntactic meanings labeled
by the proposed tokenizing mechanism. Bayes' theorem is used to determine the
number of hidden states in the model for generalizing the structure model. The
contributions of this paper are as follows: (1) we automatically learn the
structure of attack vectors from practical data analysis to build a structural
model of attack vectors, (2) we mimic the manners and the elements of attack
vectors to extend the ability of testing tools for identifying XSS
vulnerabilities, and (3) we help verify the flaws of blacklist sanitization
procedures of Web applications. We evaluated the
proposed mechanism by Burp Intruder with a dataset collected from public XSS
archives. The results show that mutated XSS attack generation can identify
potential vulnerabilities.
| [
"Yi-Hsun Wang, Ching-Hao Mao, Hahn-Ming Lee",
"['Yi-Hsun Wang' 'Ching-Hao Mao' 'Hahn-Ming Lee']"
] |
cs.CV cs.IT cs.LG math.IT | null | 1009.3802 | null | null | http://arxiv.org/pdf/1009.3802v3 | 2010-10-08T16:53:06Z | 2010-09-20T12:54:12Z | Robust Low-Rank Subspace Segmentation with Semidefinite Guarantees | Recently there has been a line of research proposing to employ Spectral
Clustering (SC) to segment (group) high-dimensional structural data such as
those (approximately) lying on subspaces or low-dimensional manifolds.
(Throughout the paper, we use segmentation, clustering, and grouping, and
their verb forms, interchangeably. We follow \cite{liu2010robust} and use the
term "subspace" to denote both linear and affine subspaces; there is a trivial
conversion between the two, as mentioned therein.) By learning the affinity
matrix in the form of sparse
reconstruction, techniques proposed in this vein often considerably boost the
performance in subspace settings where traditional SC can fail. Despite the
success, there are fundamental problems that have been left unsolved: the
spectrum property of the learned affinity matrix cannot be gauged in advance,
and there is often one ugly symmetrization step that post-processes the
affinity for SC input. Hence we advocate enforcing the symmetric positive
semidefinite constraint explicitly during learning (Low-Rank Representation
with Positive SemiDefinite constraint, or LRR-PSD), and show that it can in
fact be solved efficiently by a tailored scheme, rather than by
general-purpose SDP solvers that usually scale poorly. We provide rigorous mathematical
derivations to show that, in its canonical form, LRR-PSD is equivalent to the
recently proposed Low-Rank Representation (LRR) scheme \cite{liu2010robust}, and
hence offer theoretic and practical insights to both LRR-PSD and LRR, inviting
future research. As per the computational cost, our proposal is at most
comparable to that of LRR, if not less. We validate our theoretic analysis and
optimization scheme by experiments on both synthetic and real data sets.
| [
"['Yuzhao Ni' 'Ju Sun' 'Xiaotong Yuan' 'Shuicheng Yan' 'Loong-Fah Cheong']",
"Yuzhao Ni, Ju Sun, Xiaotong Yuan, Shuicheng Yan, Loong-Fah Cheong"
] |
cs.LG | null | 1009.3896 | null | null | http://arxiv.org/pdf/1009.3896v2 | 2012-11-26T06:42:25Z | 2010-09-20T17:35:35Z | Optimistic Rates for Learning with a Smooth Loss | We establish an excess risk bound of $O(H R_n^2 + R_n \sqrt{H L^*})$ for
empirical risk minimization with an $H$-smooth loss function and a hypothesis
class with Rademacher complexity $R_n$, where $L^*$ is the best risk achievable
by the hypothesis class. For typical hypothesis classes where $R_n = \sqrt{R/n}$,
this translates to a learning rate of $O(RH/n)$ in the separable ($L^* = 0$) case
and $O(RH/n + \sqrt{L^* RH/n})$ more generally. We also provide similar guarantees
for online and stochastic convex optimization with a smooth non-negative
objective.
| [
"Nathan Srebro, Karthik Sridharan, Ambuj Tewari",
"['Nathan Srebro' 'Karthik Sridharan' 'Ambuj Tewari']"
] |
cs.LG stat.ML | null | 1009.3958 | null | null | http://arxiv.org/pdf/1009.3958v1 | 2010-09-20T21:44:30Z | 2010-09-20T21:44:30Z | Approximate Inference and Stochastic Optimal Control | We propose a novel reformulation of the stochastic optimal control problem as
an approximate inference problem, demonstrating that such an interpretation
leads to new practical methods for the original problem. In particular we
characterise a novel class of iterative solutions to the stochastic optimal
control problem based on a natural relaxation of the exact dual formulation.
These theoretical insights are applied to the Reinforcement Learning problem,
where they lead to new model-free, off-policy methods for discrete and
continuous problems.
| [
"Konrad Rawlik, Marc Toussaint and Sethu Vijayakumar",
"['Konrad Rawlik' 'Marc Toussaint' 'Sethu Vijayakumar']"
] |
cs.LG cs.SY math.OC | null | 1009.4219 | null | null | http://arxiv.org/pdf/1009.4219v2 | 2011-05-18T16:38:10Z | 2010-09-21T21:13:15Z | Safe Feature Elimination for the LASSO and Sparse Supervised Learning
Problems | We describe a fast method to eliminate features (variables) in $l_1$-penalized
least-squares regression (or LASSO) problems. The elimination of features
leads to a potentially substantial reduction in running time, especially for
large values of the penalty parameter. Our method is not heuristic: it only
eliminates features that are guaranteed to be absent after solving the LASSO
problem. The feature elimination step is easy to parallelize and can test each
feature for elimination independently. Moreover, the computational effort of
our method is negligible compared to that of solving the LASSO problem -
roughly, it is the same as a single gradient step. Our method extends the
scope of existing LASSO algorithms to treat larger data sets, previously out
of their reach. We show how our method can be extended to general
$l_1$-penalized convex problems, and present preliminary results for the Sparse
Support Vector Machine and Logistic Regression problems.
| [
"['Laurent El Ghaoui' 'Vivian Viallon' 'Tarek Rabbani']",
"Laurent El Ghaoui, Vivian Viallon, Tarek Rabbani"
] |
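A minimal sketch of the screening idea for the LASSO, using one commonly quoted form of the basic SAFE test; the bounds in the paper are more refined, so treat the threshold below as illustrative of the idea rather than a reproduction of the paper's exact rule:

```python
import numpy as np

def safe_screen(X, y, lam):
    """Boolean mask of features surviving a basic SAFE test for
        min_w 0.5 * ||y - X w||_2^2 + lam * ||w||_1.
    Features with mask[j] == False have |x_j^T y| below the screening
    threshold and are rejected before running any LASSO solver."""
    corr = np.abs(X.T @ y)                # |x_j^T y| for every feature
    lam_max = corr.max()                  # smallest lam giving the all-zero solution
    radius = np.linalg.norm(y) * (lam_max - lam) / lam_max
    return corr >= lam - np.linalg.norm(X, axis=0) * radius

# Usage sketch: screen, then run any LASSO solver on the surviving columns.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5000))
y = rng.standard_normal(100)
mask = safe_screen(X, y, lam=0.8 * np.abs(X.T @ y).max())
print(mask.sum(), "of", X.shape[1], "features survive screening")
```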
cs.NE cs.IR cs.LG | null | 1009.4574 | null | null | http://arxiv.org/pdf/1009.4574v1 | 2010-09-23T10:50:06Z | 2010-09-23T10:50:06Z | A hybrid learning algorithm for text classification | Text classification is the process of classifying documents into predefined
categories based on their content. Existing supervised learning algorithms to
automatically classify text need sufficient documents to learn accurately. This
paper presents a new algorithm for text classification that requires fewer
documents for training. Instead of using individual words, word relations,
i.e., association rules over these words, are used to derive the feature set
from pre-classified text documents. A Naive Bayes classifier is then applied
to the derived features, and finally a single Genetic Algorithm step is added
for the final classification. Experimental results show that the classifier
built this way is more accurate than existing text classification systems.
| [
"S. M. Kamruzzaman and Farhana Haider",
"['S. M. Kamruzzaman' 'Farhana Haider']"
] |
cs.LG cs.DB cs.IR | null | 1009.4582 | null | null | http://arxiv.org/pdf/1009.4582v1 | 2010-09-23T11:32:16Z | 2010-09-23T11:32:16Z | Text Classification using the Concept of Association Rule of Data Mining | As the amount of online text increases, the demand for text classification to
aid the analysis and management of text is increasing. Text is cheap, but
information, in the form of knowing what classes a text belongs to, is
expensive. Automatic classification of text can provide this information at low
cost, but the classifiers themselves must be built with expensive human effort,
or trained from texts which have themselves been manually classified. In this
paper we discuss a procedure for classifying text using the concept of
association rules from data mining. An association rule mining technique has
been used to derive the feature set from pre-classified text documents. A
Naive Bayes classifier is then used on the derived features for the final
classification.
| [
"Chowdhury Mofizur Rahman, Ferdous Ahmed Sohel, Parvez Naushad, and S.\n M. Kamruzzaman",
"['Chowdhury Mofizur Rahman' 'Ferdous Ahmed Sohel' 'Parvez Naushad'\n 'S. M. Kamruzzaman']"
] |
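A toy sketch of the pipeline shared by this and the related abstracts above: mine frequent word co-occurrences (a crude stand-in for full association rules) as boolean features, then apply a Naive Bayes classifier. The documents, labels, and min-support choice are illustrative only:

```python
from itertools import combinations
from collections import Counter
import numpy as np
from sklearn.naive_bayes import BernoulliNB

docs = ["cheap loan offer", "loan rates offer", "meeting agenda notes",
        "project meeting notes"]
labels = np.array([1, 1, 0, 0])  # toy task: spam vs. not spam

# Mine frequent word pairs (a simplistic stand-in for association rules).
pair_counts = Counter(p for d in docs
                      for p in combinations(sorted(set(d.split())), 2))
features = [p for p, c in pair_counts.items() if c >= 2]  # min support = 2

def encode(doc):
    words = set(doc.split())
    return [int(a in words and b in words) for a, b in features]

X = np.array([encode(d) for d in docs])
clf = BernoulliNB().fit(X, labels)
print(features, clf.predict(np.array([encode("cheap loan rates")])))
```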
cs.SD cs.LG | null | 1009.4719 | null | null | http://arxiv.org/pdf/1009.4719v1 | 2010-09-23T20:38:06Z | 2010-09-23T20:38:06Z | A Fast Audio Clustering Using Vector Quantization and Second Order
Statistics | This paper describes an effective unsupervised speaker indexing approach. We
suggest a two-stage algorithm to speed up the state-of-the-art algorithm based
on the Bayesian Information Criterion (BIC). In the first stage of the merging
process a computationally cheap method based on vector quantization (VQ) is
used. Then in the second stage a more computationally expensive technique
based on the BIC is applied. In the speaker indexing task a tuning parameter
or threshold is used. We suggest an on-line procedure to define the value of
this tuning parameter without using development data. The results are
evaluated using 10 hours of audio data.
| [
"['Konstantin Biatov']",
"Konstantin Biatov"
] |
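The BIC-based merging test at the heart of such systems compares modelling two segments with one Gaussian versus two. A sketch under the standard Chen-Gopalakrishnan-style formulation; the sign convention and penalty weight lambda are stated in the comments and are illustrative, not necessarily this paper's exact variant:

```python
import numpy as np

def delta_bic(seg1, seg2, lam=1.0):
    """BIC test for merging two audio segments (rows = feature frames).

    Returns penalty - likelihood_gain; a positive value favours one shared
    Gaussian, i.e. treating both segments as the same speaker."""
    both = np.vstack([seg1, seg2])
    n1, n2, n = len(seg1), len(seg2), len(both)
    d = both.shape[1]
    logdet = lambda S: np.linalg.slogdet(S + 1e-6 * np.eye(d))[1]
    gain = 0.5 * (n * logdet(np.cov(both, rowvar=False))
                  - n1 * logdet(np.cov(seg1, rowvar=False))
                  - n2 * logdet(np.cov(seg2, rowvar=False)))
    penalty = 0.5 * lam * (d + d * (d + 1) / 2) * np.log(n)
    return penalty - gain

rng = np.random.default_rng(0)
a, b = rng.normal(size=(200, 12)), rng.normal(size=(200, 12))
print(delta_bic(a, b) > 0)  # True: same distribution, so merge
```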
cs.LG | null | 1009.4766 | null | null | http://arxiv.org/pdf/1009.4766v1 | 2010-09-24T05:53:28Z | 2010-09-24T05:53:28Z | Efficient L1/Lq Norm Regularization | Sparse learning has recently received increasing attention in many areas
including machine learning, statistics, and applied mathematics. The mixed-norm
regularization based on the L1/Lq norm with q > 1 is attractive in many
applications of regression and classification in that it facilitates group
sparsity in the model. The resulting optimization problem is, however,
challenging to solve due to the structure of the L1/Lq-regularization.
Existing work deals with special cases including q = 2 and q = infinity, and
it cannot be easily extended to the general case. In this paper, we propose an
efficient algorithm based on the accelerated gradient method for solving the
L1/Lq-regularized problem, which is applicable for all values of q larger than
1, thus significantly extending existing work. One key building block of the
proposed algorithm is the L1/Lq-regularized Euclidean projection (EP1q). Our
theoretical analysis reveals the key properties of EP1q and illustrates why
EP1q for the general q is significantly more challenging to solve than the
special cases. Based on our theoretical analysis, we develop an efficient
algorithm for EP1q by solving two zero finding problems. Experimental results
demonstrate the efficiency of the proposed algorithm.
| [
"['Jun Liu' 'Jieping Ye']",
"Jun Liu, Jieping Ye"
] |
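For the special case q = 2 the projection EP1q has a closed-form group soft-thresholding solution, a useful reference point for the general-q algorithm the abstract describes. A sketch (the grouping and names below are ours):

```python
import numpy as np

def ep1q_q2(v, groups, lam):
    """L1/Lq-regularized Euclidean projection for the special case q = 2:
        argmin_w 0.5 * ||w - v||^2 + lam * sum_g ||w_g||_2
    For q = 2 this has a closed-form group soft-thresholding solution; the
    paper's contribution is handling general q > 1, which has no such form.
    """
    w = np.zeros_like(v)
    for g in groups:                      # g: index array of one group
        norm = np.linalg.norm(v[g])
        if norm > lam:
            w[g] = (1.0 - lam / norm) * v[g]
    return w

v = np.array([3.0, 4.0, 0.1, -0.2])
print(ep1q_q2(v, [np.arange(0, 2), np.arange(2, 4)], lam=1.0))
# first group is shrunk, second group is zeroed out entirely
```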
cs.LG | null | 1009.4791 | null | null | http://arxiv.org/pdf/1009.4791v2 | 2010-11-01T13:23:49Z | 2010-09-24T09:53:32Z | Multi-parametric Solution-path Algorithm for Instance-weighted Support
Vector Machines | An instance-weighted variant of the support vector machine (SVM) has
attracted considerable attention recently since it is useful in various
machine learning tasks such as non-stationary data analysis, heteroscedastic
data modeling, transfer learning, learning to rank, and transduction. An
important challenge in these scenarios is to overcome the computational
bottleneck---instance weights often change dynamically or adaptively, and thus
the weighted SVM solutions must be repeatedly computed. In this paper, we
develop an algorithm that can efficiently and exactly update the weighted SVM
solutions for arbitrary change of instance weights. Technically, this
contribution can be regarded as an extension of the conventional solution-path
algorithm for a single regularization parameter to multiple instance-weight
parameters. However, this extension gives rise to a significant problem that
breakpoints (at which the solution path turns) have to be identified in
high-dimensional space. To facilitate this, we introduce a parametric
representation of instance weights. We also provide a geometric interpretation
in weight space using the notion of a critical region: a polyhedron in which
the current affine solution remains optimal. Then we find breakpoints at
intersections of the solution path and boundaries of polyhedrons. Through
extensive experiments on various practical applications, we demonstrate the
usefulness of the proposed algorithm.
| [
"Masayuki Karasuyama, Naoyuki Harada, Masashi Sugiyama, Ichiro Takeuchi",
"['Masayuki Karasuyama' 'Naoyuki Harada' 'Masashi Sugiyama'\n 'Ichiro Takeuchi']"
] |
cs.LG cs.SD | 10.3923/ijepe.2007.274.278 | 1009.4972 | null | null | http://arxiv.org/abs/1009.4972v1 | 2010-09-25T05:32:44Z | 2010-09-25T05:32:44Z | Speaker Identification using MFCC-Domain Support Vector Machine | Speech recognition and speaker identification are important for
authentication and verification in security applications, but they are
difficult to achieve. Speaker identification methods can be divided into text-independent
and text-dependent. This paper presents a technique of text-dependent speaker
identification using MFCC-domain support vector machine (SVM). In this work,
mel-frequency cepstrum coefficients (MFCCs) and their statistical distribution
properties are used as features, which serve as inputs to the neural network.
This work first uses the sequential minimal optimization (SMO) learning
technique for the SVM, which improves performance over traditional techniques
such as chunking and Osuna's method.
The cepstrum coefficients representing the speaker characteristics of a speech
segment are computed by nonlinear filter bank analysis and discrete cosine
transform. The speaker identification ability and convergence speed of the SVMs
are investigated for different combinations of features. Extensive experimental
results on several samples show the effectiveness of the proposed approach.
| [
"S. M. Kamruzzaman, A. N. M. Rezaul Karim, Md. Saiful Islam, and Md.\n Emdadul Haque",
"['S. M. Kamruzzaman' 'A. N. M. Rezaul Karim' 'Md. Saiful Islam'\n 'Md. Emdadul Haque']"
] |
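A minimal sketch of a comparable pipeline using common open-source tools (librosa for MFCCs, scikit-learn's libsvm-backed SVC, whose solver is SMO-style). The file paths and labels are placeholders, and this is not the paper's exact feature set:

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(path, n_mfcc=13):
    """Mean and standard deviation of MFCCs over a whole utterance."""
    y, sr = librosa.load(path, sr=None)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([m.mean(axis=1), m.std(axis=1)])

# Placeholder files/labels: substitute your own enrolled recordings.
train_paths, train_speakers = ["spk1_a.wav", "spk2_a.wav"], [0, 1]
X = np.array([mfcc_features(p) for p in train_paths])
clf = SVC(kernel="rbf", C=10.0).fit(X, train_speakers)
print(clf.predict(np.array([mfcc_features("spk1_b.wav")])))
```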
cs.IR cs.DB cs.LG | null | 1009.4976 | null | null | http://arxiv.org/pdf/1009.4976v1 | 2010-09-25T06:10:33Z | 2010-09-25T06:10:33Z | Text Classification using Association Rule with a Hybrid Concept of
Naive Bayes Classifier and Genetic Algorithm | Text classification is the automated assignment of natural language texts to
predefined categories based on their content. Text classification is the
primary requirement of text retrieval systems, which retrieve texts in response
to a user query, and text understanding systems, which transform text in some
way, such as producing summaries, answering questions or extracting data.
Nowadays the demand for text classification is increasing tremendously.
Keeping this demand in consideration, new and updated techniques are being
developed for the purpose of automated text classification. This paper
presents a new algorithm for text classification. Instead of using individual
words, word relations, i.e., association rules, are used to derive the feature
set from pre-classified text documents. A Naive Bayes Classifier is then used
on the derived features, and finally a Genetic Algorithm step is added for the
final classification. A system based on the proposed algorithm has been
implemented
and tested. The experimental results show that the proposed system works as a
successful text classifier.
| [
"S. M. Kamruzzaman, Farhana Haider, and Ahmed Ryadh Hasan",
"['S. M. Kamruzzaman' 'Farhana Haider' 'Ahmed Ryadh Hasan']"
] |
cs.LG | null | 1009.5419 | null | null | http://arxiv.org/pdf/1009.5419v2 | 2011-03-07T13:45:22Z | 2010-09-28T00:41:45Z | Portfolio Allocation for Bayesian Optimization | Bayesian optimization with Gaussian processes has become an increasingly
popular tool in the machine learning community. It is efficient and can be used
when very little is known about the objective function, making it popular in
expensive black-box optimization scenarios. It uses Bayesian methods to sample
the objective efficiently using an acquisition function which incorporates the
model's estimate of the objective and the uncertainty at any given point.
However, there are several different parameterized acquisition functions in the
literature, and it is often unclear which one to use. Instead of using a single
acquisition function, we adopt a portfolio of acquisition functions governed by
an online multi-armed bandit strategy. We propose several portfolio strategies,
the best of which we call GP-Hedge, and show that this method outperforms the
best individual acquisition function. We also provide a theoretical bound on
the algorithm's performance.
| [
"['Eric Brochu' 'Matthew W. Hoffman' 'Nando de Freitas']",
"Eric Brochu, Matthew W. Hoffman, Nando de Freitas"
] |
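A compact sketch of the portfolio idea on a 1-D toy problem: each acquisition function nominates a point, one nomination is chosen by a Hedge (softmax-over-gains) rule, and gains are updated with the GP posterior mean at each nominee. The kernel choice, grid search, and updating gains with the pre-refit posterior are our simplifications of the paper's GP-Hedge:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def gp_hedge(f, bounds, n_iter=25, eta=1.0, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.linspace(*bounds, 200).reshape(-1, 1)
    X = rng.uniform(*bounds, size=(2, 1)); y = f(X).ravel()
    acqs = [
        lambda m, s, best: m + 2.0 * s,                        # UCB
        lambda m, s, best: norm.cdf((m - best) / (s + 1e-9)),  # PI
        lambda m, s, best: (m - best) * norm.cdf((m - best) / (s + 1e-9))
                           + s * norm.pdf((m - best) / (s + 1e-9)),  # EI
    ]
    gains = np.zeros(len(acqs))
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        m, s = gp.predict(grid, return_std=True)
        nominees = [grid[np.argmax(a(m, s, y.max()))] for a in acqs]
        p = np.exp(eta * (gains - gains.max())); p /= p.sum()
        x_next = nominees[rng.choice(len(acqs), p=p)]  # Hedge selection
        X = np.vstack([X, [x_next]]); y = np.append(y, f(x_next.reshape(1, -1)))
        gains += gp.predict(np.vstack(nominees))       # reward each nominee
    return X[np.argmax(y)], y.max()

best_x, best_y = gp_hedge(lambda X: -np.sin(3 * X) - X**2 + 0.7 * X,
                          bounds=(-1.0, 2.0))
print(best_x, best_y)
```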
cs.SD cs.LG | null | 1009.5761 | null | null | http://arxiv.org/pdf/1009.5761v1 | 2010-09-29T03:20:40Z | 2010-09-29T03:20:40Z | Approximate Maximum A Posteriori Inference with Entropic Priors | In certain applications it is useful to fit multinomial distributions to
observed data with a penalty term that encourages sparsity. For example, in
probabilistic latent audio source decomposition one may wish to encode the
assumption that only a few latent sources are active at any given time. The
standard heuristic of applying an L1 penalty is not an option when fitting
the parameters of a multinomial distribution, which are constrained to sum to
1. An alternative is to use a penalty term that encourages low-entropy
solutions,
which corresponds to maximum a posteriori (MAP) parameter estimation with an
entropic prior. The lack of conjugacy between the entropic prior and the
multinomial distribution complicates this approach. In this report I propose a
simple iterative algorithm for MAP estimation of multinomial distributions with
sparsity-inducing entropic priors.
| [
"Matthew D. Hoffman",
"['Matthew D. Hoffman']"
] |
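A sketch of the MAP problem being described, solved here by generic exponentiated-gradient ascent on the probability simplex rather than the report's specific iteration; the prior strength beta and step size are illustrative:

```python
import numpy as np

def map_entropic(counts, beta=2.0, eta=0.01, n_iter=2000):
    """MAP estimate of a multinomial with entropic prior p(theta) ~ exp(-beta*H(theta)),
    i.e. maximize  L(theta) = sum_i counts_i * log(theta_i) + beta * sum_i theta_i * log(theta_i).
    Generic exponentiated-gradient ascent, not the report's algorithm."""
    theta = counts / counts.sum()          # start at the MLE
    for _ in range(n_iter):
        grad = counts / theta + beta * (np.log(theta) + 1.0)
        # shift by max(grad): EG is invariant to constants after normalization
        theta = theta * np.exp(eta * (grad - grad.max()))
        theta /= theta.sum()               # stay on the probability simplex
    return theta

counts = np.array([40.0, 30.0, 20.0, 10.0, 0.5])
print(map_entropic(counts))  # sparser than the MLE: small entries shrink toward 0
```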
cs.LG | 10.1109/TSP.2011.2165211 | 1009.5773 | null | null | http://arxiv.org/abs/1009.5773v4 | 2013-06-05T01:57:55Z | 2010-09-29T05:23:20Z | Fast Reinforcement Learning for Energy-Efficient Wireless Communications | We consider the problem of energy-efficient point-to-point transmission of
delay-sensitive data (e.g. multimedia data) over a fading channel. Existing
research on this topic utilizes either physical-layer centric solutions, namely
power-control and adaptive modulation and coding (AMC), or system-level
solutions based on dynamic power management (DPM); however, there is currently
no rigorous and unified framework for simultaneously utilizing both
physical-layer centric and system-level techniques to achieve the minimum
possible energy consumption, under delay constraints, in the presence of
stochastic and a priori unknown traffic and channel conditions. In this report,
we propose such a framework. We formulate the stochastic optimization problem
as a Markov decision process (MDP) and solve it online using reinforcement
learning. The advantages of the proposed online method are that (i) it does not
require a priori knowledge of the traffic arrival and channel statistics to
determine the jointly optimal power-control, AMC, and DPM policies; (ii) it
exploits partial information about the system so that less information needs to
be learned than when using conventional reinforcement learning algorithms; and
(iii) it obviates the need for action exploration, which severely limits the
adaptation speed and run-time performance of conventional reinforcement
learning algorithms. Our results show that the proposed learning algorithms can
converge up to two orders of magnitude faster than a state-of-the-art learning
algorithm for physical layer power-control and up to three orders of magnitude
faster than conventional reinforcement learning algorithms.
| [
"['Nicholas Mastronarde' 'Mihaela van der Schaar']",
"Nicholas Mastronarde and Mihaela van der Schaar"
] |
cs.LG | null | 1009.5972 | null | null | http://arxiv.org/pdf/1009.5972v1 | 2010-09-29T18:55:02Z | 2010-09-29T18:55:02Z | The Attentive Perceptron | We propose a focus of attention mechanism to speed up the Perceptron
algorithm. Focus of attention speeds up the Perceptron algorithm by lowering
the number of features evaluated throughout training and prediction. Whereas
the traditional Perceptron evaluates all the features of each example, the
Attentive Perceptron evaluates fewer features for easy-to-classify examples,
thereby achieving significant speedups with small losses in prediction accuracy.
Focus of attention allows the Attentive Perceptron to stop the evaluation of
features at any interim point and filter the example. This creates an attentive
filter which concentrates computation at examples that are hard to classify,
and quickly filters examples that are easy to classify.
| [
"['Raphael Pelossof' 'Zhiliang Ying']",
"Raphael Pelossof and Zhiliang Ying"
] |
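An illustrative sketch of the early-exit idea: accumulate the linear score feature by feature and stop once its magnitude clears a confidence threshold. The thresholding and feature ordering below are our simplifications, not the paper's exact filter:

```python
import numpy as np

def attentive_score(w, x, threshold=2.0, order=None):
    """Early-exit evaluation of a linear score.

    Features are accumulated one at a time; once the running score's
    magnitude exceeds `threshold`, the example is deemed easy and the
    remaining features are skipped. `order` can put the most
    discriminative features first."""
    idx = order if order is not None else np.arange(len(w))
    s, used = 0.0, 0
    for j in idx:
        s += w[j] * x[j]
        used += 1
        if abs(s) > threshold:
            break   # confident enough: filter the example early
    return np.sign(s), used

rng = np.random.default_rng(0)
w, x = rng.normal(size=500), rng.normal(size=500)
label, n_used = attentive_score(w, x, threshold=3.0)
print(label, "using", n_used, "of", len(w), "features")
```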
math.OC cs.LG | null | 1010.0056 | null | null | http://arxiv.org/pdf/1010.0056v1 | 2010-10-01T03:23:17Z | 2010-10-01T03:23:17Z | Online Learning in Opportunistic Spectrum Access: A Restless Bandit
Approach | We consider an opportunistic spectrum access (OSA) problem where the
time-varying condition of each channel (e.g., as a result of random fading or
certain primary users' activities) is modeled as an arbitrary finite-state
Markov chain. At each point in time, a (secondary) user probes a channel and
collects a certain reward as a function of the state of the channel (e.g., good
channel condition results in higher data rate for the user). Each channel has
potentially different state space and statistics, both unknown to the user, who
tries to learn which one is the best as it goes and maximizes its usage of the
best channel. The objective is to construct a good online learning algorithm so
as to minimize the difference between the user's performance in total rewards
and that of using the best channel (on average) had it known which one is the
best from a priori knowledge of the channel statistics (also known as the
regret). This is a classic exploration and exploitation problem and results
abound when the reward processes are assumed to be iid. Compared to prior work,
the biggest difference is that in our case the reward process is assumed to be
Markovian, of which iid is a special case. In addition, the reward processes
are restless in that the channel conditions will continue to evolve independent
of the user's actions. This leads to a restless bandit problem, for which, to
the best of our knowledge, there exist few results on either algorithms or
performance bounds in this learning context. In this paper we introduce an
algorithm that utilizes regenerative cycles of a Markov chain and computes a
sample-mean based index policy, and show that under mild conditions on the
state transition probabilities of the Markov chains this algorithm achieves
logarithmic regret uniformly over time, and that this regret bound is also
optimal.
| [
"['Cem Tekin' 'Mingyan Liu']",
"Cem Tekin, Mingyan Liu"
] |
cs.LG | 10.1109/TSP.2010.2086449 | 1010.0287 | null | null | http://arxiv.org/abs/1010.0287v1 | 2010-10-02T03:57:46Z | 2010-10-02T03:57:46Z | Queue-Aware Distributive Resource Control for Delay-Sensitive Two-Hop
MIMO Cooperative Systems | In this paper, we consider a queue-aware distributive resource control
algorithm for two-hop MIMO cooperative systems. We shall illustrate that relay
buffering is an effective way to reduce the intrinsic half-duplex penalty in
cooperative systems. The complex interactions of the queues at the source node
and the relays are modeled as an average-cost infinite horizon Markov Decision
Process (MDP). The traditional approach solving this MDP problem involves
centralized control with huge complexity. To obtain a distributive and low
complexity solution, we introduce a linear structure which approximates the
value function of the associated Bellman equation by the sum of per-node value
functions. We derive a distributive two-stage two-winner auction-based control
policy which is a function of the local CSI and local QSI only. Furthermore, to
estimate the best fit approximation parameter, we propose a distributive online
stochastic learning algorithm using stochastic approximation theory. Finally,
we establish technical conditions for almost-sure convergence and show that,
under heavy traffic, the proposed low-complexity distributive control is
globally optimal.
| [
"Rui Wang, Vincent K. N. Lau and Ying Cui",
"['Rui Wang' 'Vincent K. N. Lau' 'Ying Cui']"
] |
math.PR cs.IT cs.LG math.IT | null | 1010.1042 | null | null | http://arxiv.org/pdf/1010.1042v3 | 2011-05-05T08:34:07Z | 2010-10-06T00:36:04Z | Hidden Markov Models with Multiple Observation Processes | We consider a hidden Markov model with multiple observation processes, one of
which is chosen at each point in time by a policy---a deterministic function of
the information state---and attempt to determine which policy minimises the
limiting expected entropy of the information state. Focusing on a special case,
we prove analytically that the information state always converges in
distribution, and derive a formula for the limiting entropy which can be used
for calculations with high precision. Using this formula, we find
computationally that the optimal policy is always a threshold policy, allowing
it to be easily found. We also find that the greedy policy is almost optimal.
| [
"['James Y. Zhao']",
"James Y. Zhao"
] |
cs.LG | 10.1007/s11634-012-0110-6 | 1010.1526 | null | null | http://arxiv.org/abs/1010.1526v6 | 2012-07-02T20:57:01Z | 2010-10-07T19:48:23Z | Time Series Classification by Class-Specific Mahalanobis Distance
Measures | To classify time series by nearest neighbors, we need to specify or learn one
or several distance measures. We consider variations of the Mahalanobis
distance measures which rely on the inverse covariance matrix of the data.
Unfortunately --- for time series data --- the covariance matrix often has low
rank. To alleviate this problem we can either use a pseudoinverse, apply
covariance shrinking, or limit the matrix to its diagonal. We review these
alternatives and
benchmark them against competitive methods such as the related Large Margin
Nearest Neighbor Classification (LMNN) and the Dynamic Time Warping (DTW)
distance. As we expected, we find that the DTW is superior, but the Mahalanobis
distance measures are one to two orders of magnitude faster. To get best
results with Mahalanobis distance measures, we recommend learning one distance
measure per class using either covariance shrinking or the diagonal approach.
| [
"Zolt\\'an Prekopcs\\'ak and Daniel Lemire",
"['Zoltán Prekopcsák' 'Daniel Lemire']"
] |
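A sketch of the recommendation in the last sentence: learn one shrunken precision matrix per class (Ledoit-Wolf shrinkage via scikit-learn) and classify with a nearest-neighbour rule under each class's own metric. The 1-NN wiring and toy data below are our simplifications:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

def fit_class_metrics(X, y):
    """One shrunken precision (inverse covariance) matrix per class."""
    return {c: LedoitWolf().fit(X[y == c]).precision_ for c in np.unique(y)}

def predict_1nn(X_train, y_train, metrics, X_test):
    """1-NN where the distance to each training point uses the Mahalanobis
    metric of that point's own class."""
    preds = []
    for x in X_test:
        dists = [(x - t) @ metrics[c] @ (x - t)
                 for t, c in zip(X_train, y_train)]
        preds.append(y_train[int(np.argmin(dists))])
    return np.array(preds)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(60, 4)),
               rng.normal(2.0, 1.0, size=(60, 4))])
y = np.array([0] * 60 + [1] * 60)
metrics = fit_class_metrics(X, y)
print(predict_1nn(X, y, metrics, np.array([[0.1, 0, 0, 0], [2.1, 2, 2, 2]])))
```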
cs.LG | null | 1010.1763 | null | null | http://arxiv.org/pdf/1010.1763v3 | 2011-03-08T12:56:39Z | 2010-10-08T18:53:27Z | Algorithms for nonnegative matrix factorization with the beta-divergence | This paper describes algorithms for nonnegative matrix factorization (NMF)
with the beta-divergence (beta-NMF). The beta-divergence is a family of cost
functions parametrized by a single shape parameter beta that takes the
Euclidean distance, the Kullback-Leibler divergence and the Itakura-Saito
divergence as special cases (beta = 2,1,0, respectively). The proposed
algorithms are based on a surrogate auxiliary function (a local majorization of
the criterion function). We first describe a majorization-minimization (MM)
algorithm that leads to multiplicative updates, which differ from standard
heuristic multiplicative updates by a beta-dependent power exponent. The
monotonicity of the heuristic algorithm can however be proven for beta in (0,1)
using the proposed auxiliary function. Then we introduce the concept of
majorization-equalization (ME) algorithm which produces updates that move along
constant level sets of the auxiliary function and lead to larger steps than MM.
Simulations on synthetic and real data illustrate the faster convergence of the
ME approach. The paper also describes how the proposed algorithms can be
adapted to two common variants of NMF: penalized NMF (i.e., when a penalty
function of the factors is added to the criterion function) and convex-NMF
(when the dictionary is assumed to belong to a known subspace).
| [
"['Cédric Févotte' 'Jérôme Idier']",
"C\\'edric F\\'evotte (LTCI), J\\'er\\^ome Idier (IRCCyN)"
] |
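A compact sketch of the multiplicative updates described above, including the beta-dependent MM exponent (1/(2-beta) for beta < 1, 1 for beta in [1,2], 1/(beta-1) for beta > 2); the initialization and stopping rule are illustrative:

```python
import numpy as np

def beta_nmf(V, rank, beta=1.0, n_iter=200, eps=1e-9, seed=0):
    """MM multiplicative updates for beta-NMF, V ~ W @ H (all nonnegative)."""
    rng = np.random.default_rng(seed)
    F, N = V.shape
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, N)) + eps
    if beta < 1:
        g = 1.0 / (2.0 - beta)   # MM exponent below beta = 1
    elif beta <= 2:
        g = 1.0                  # heuristic and MM updates coincide
    else:
        g = 1.0 / (beta - 1.0)
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= ((W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1))) ** g
        WH = W @ H + eps
        W *= (((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T)) ** g
    return W, H

rng = np.random.default_rng(1)
V = rng.random((30, 40))
W, H = beta_nmf(V, rank=5, beta=1.0)   # beta = 1: Kullback-Leibler NMF
print(np.abs(V - W @ H).mean())
```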
cs.LG cs.NE | null | 1010.1888 | null | null | http://arxiv.org/pdf/1010.1888v1 | 2010-10-10T02:34:22Z | 2010-10-10T02:34:22Z | Multi-Objective Genetic Programming Projection Pursuit for Exploratory
Data Modeling | For classification problems, feature extraction is a crucial process which
aims to find a suitable data representation that increases the performance of
the machine learning algorithm. Due to the curse of dimensionality, the
number of samples needed for a classification task increases exponentially as
the number of dimensions (variables, features) increases. On
the other hand, it is costly to collect, store and process data. Moreover,
irrelevant and redundant features might hinder classifier performance. In
exploratory analysis settings, high dimensionality prevents the users from
exploring the data visually. Feature extraction is a two-step process: feature
construction and feature selection. Feature construction creates new features
based on the original features and feature selection is the process of
selecting the best features as in filter, wrapper and embedded methods.
In this work, we focus on feature construction methods that aim to decrease
data dimensionality for visualization tasks. Various linear (such as principal
components analysis (PCA), multiple discriminants analysis (MDA), exploratory
projection pursuit) and non-linear (such as multidimensional scaling (MDS),
manifold learning, kernel PCA/LDA, evolutionary constructive induction)
techniques have been proposed for dimensionality reduction. Our algorithm is an
adaptive feature extraction method which consists of evolutionary constructive
induction for feature construction and a hybrid filter/wrapper method for
feature selection.
| [
"Ilknur Icke and Andrew Rosenberg",
"['Ilknur Icke' 'Andrew Rosenberg']"
] |
cs.IT cs.CV cs.LG math.IT | 10.1109/TPAMI.2012.88 | 1010.2955 | null | null | http://arxiv.org/abs/1010.2955v6 | 2012-05-06T08:23:16Z | 2010-10-14T15:38:48Z | Robust Recovery of Subspace Structures by Low-Rank Representation | In this work we address the subspace recovery problem. Given a set of data
samples (vectors) approximately drawn from a union of multiple subspaces, our
goal is to segment the samples into their respective subspaces and correct the
possible errors as well. To this end, we propose a novel method termed Low-Rank
Representation (LRR), which seeks the lowest-rank representation among all the
candidates that can represent the data samples as linear combinations of the
bases in a given dictionary. It is shown that LRR solves the subspace
recovery problem well: when the data is clean, we prove that LRR exactly
captures
the true subspace structures; for the data contaminated by outliers, we prove
that under certain conditions LRR can exactly recover the row space of the
original data and detect the outlier as well; for the data corrupted by
arbitrary errors, LRR can also approximately recover the row space with
theoretical guarantees. Since the subspace membership is provably determined by
the row space, these further imply that LRR can perform robust subspace
segmentation and error correction, in an efficient way.
| [
"['Guangcan Liu' 'Zhouchen Lin' 'Shuicheng Yan' 'Ju Sun' 'Yong Yu' 'Yi Ma']",
"Guangcan Liu, Zhouchen Lin, Shuicheng Yan, Ju Sun, Yong Yu, Yi Ma"
] |
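In the clean-data case the abstract mentions, LRR has a closed-form solution: the minimizer of ||Z||_* subject to X = XZ is V V^T from the skinny SVD of X (the "shape interaction matrix"). A sketch of that special case; the general noisy solver in the paper is an iterative convex program:

```python
import numpy as np

def lrr_noiseless(X, tol=1e-8):
    """Closed-form LRR for clean data drawn from a union of subspaces.

    Returns Z* = V V^T (skinny SVD X = U S V^T) and the symmetric affinity
    |Z| + |Z^T| that can be fed to spectral clustering for segmentation."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[s > tol].T          # right singular vectors of nonzero singular values
    Z = V @ V.T
    return Z, np.abs(Z) + np.abs(Z.T)

# Toy check: samples from two independent 1-D subspaces in R^3
rng = np.random.default_rng(0)
b1, b2 = rng.normal(size=(3, 1)), rng.normal(size=(3, 1))
X = np.hstack([b1 @ rng.normal(size=(1, 5)), b2 @ rng.normal(size=(1, 5))])
Z, A = lrr_noiseless(X)
print(np.round(A, 2))  # near block-diagonal: within-subspace entries dominate
```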
cs.LG cs.AI cs.DS | null | 1010.3091 | null | null | http://arxiv.org/pdf/1010.3091v2 | 2013-12-16T06:42:05Z | 2010-10-15T08:20:46Z | Near-Optimal Bayesian Active Learning with Noisy Observations | We tackle the fundamental problem of Bayesian active learning with noise,
where we need to adaptively select from a number of expensive tests in order to
identify an unknown hypothesis sampled from a known prior distribution. In the
case of noise-free observations, a greedy algorithm called generalized binary
search (GBS) is known to perform near-optimally. We show that if the
observations are noisy, perhaps surprisingly, GBS can perform very poorly. We
develop EC2, a novel, greedy active learning algorithm and prove that it is
competitive with the optimal policy, thus obtaining the first competitiveness
guarantees for Bayesian active learning with noisy observations. Our bounds
rely on a recently discovered diminishing returns property called adaptive
submodularity, generalizing the classical notion of submodular set functions to
adaptive policies. Our results hold even if the tests have non-uniform cost and
their noise is correlated. We also propose EffECXtive, a particularly fast
approximation of EC2, and evaluate it on a Bayesian experimental design problem
involving human subjects, intended to tease apart competing economic theories
of how people make decisions under uncertainty.
| [
"Daniel Golovin and Andreas Krause and Debajyoti Ray",
"['Daniel Golovin' 'Andreas Krause' 'Debajyoti Ray']"
] |
cs.CV cs.LG | null | 1010.3467 | null | null | http://arxiv.org/pdf/1010.3467v1 | 2010-10-18T02:31:21Z | 2010-10-18T02:31:21Z | Fast Inference in Sparse Coding Algorithms with Applications to Object
Recognition | Adaptive sparse coding methods learn a possibly overcomplete set of basis
functions, such that natural image patches can be reconstructed by linearly
combining a small subset of these bases. The applicability of these methods to
visual object recognition tasks has been limited because of the prohibitive
cost of the optimization algorithms required to compute the sparse
representation. In this work we propose a simple and efficient algorithm to
learn basis functions. After training, this model also provides a fast and
smooth approximator to the optimal representation, achieving even better
accuracy than exact sparse coding algorithms on visual object recognition
tasks.
| [
"Koray Kavukcuoglu, Marc'Aurelio Ranzato and Yann LeCun",
"['Koray Kavukcuoglu' \"Marc'Aurelio Ranzato\" 'Yann LeCun']"
] |
cs.LG | null | 1010.3484 | null | null | http://arxiv.org/pdf/1010.3484v1 | 2010-10-18T05:46:46Z | 2010-10-18T05:46:46Z | Hardness Results for Agnostically Learning Low-Degree Polynomial
Threshold Functions | Hardness results for maximum agreement problems have close connections to
hardness results for proper learning in computational learning theory. In this
paper we prove two hardness results for the problem of finding a low degree
polynomial threshold function (PTF) which has the maximum possible agreement
with a given set of labeled examples in $\mathbb{R}^n \times \{-1,1\}$. We
prove that for any constants $d \geq 1$, $\epsilon > 0$:
- Assuming the Unique Games Conjecture, no polynomial-time algorithm can find
a degree-$d$ PTF that is consistent with a $(1/2 + \epsilon)$ fraction of a
given set of labeled examples in $\mathbb{R}^n \times \{-1,1\}$, even if there
exists a degree-$d$ PTF that is consistent with a $1 - \epsilon$ fraction of
the examples.
- It is NP-hard to find a degree-2 PTF that is consistent with a $(1/2 +
\epsilon)$ fraction of a given set of labeled examples in $\mathbb{R}^n \times
\{-1,1\}$, even if there exists a halfspace (degree-1 PTF) that is consistent
with a $1 - \epsilon$ fraction of the examples.
These results immediately imply the following hardness of learning results:
(i) Assuming the Unique Games Conjecture, there is no better-than-trivial
proper learning algorithm that agnostically learns degree-$d$ PTFs under
arbitrary distributions; (ii) There is no better-than-trivial learning
algorithm that outputs degree-2 PTFs and agnostically learns halfspaces (i.e.
degree-1 PTFs) under arbitrary distributions.
| [
"['Ilias Diakonikolas' \"Ryan O'Donnell\" 'Rocco A. Servedio' 'Yi Wu']",
"Ilias Diakonikolas and Ryan O'Donnell and Rocco A. Servedio and Yi Wu"
] |
cs.LG | null | 1010.4050 | null | null | http://arxiv.org/pdf/1010.4050v1 | 2010-10-19T21:01:45Z | 2010-10-19T21:01:45Z | Efficient Matrix Completion with Gaussian Models | A general framework based on Gaussian models and a MAP-EM algorithm is
introduced in this paper for solving matrix/table completion problems. The
numerical experiments with the standard and challenging movie ratings data show
that the proposed approach, based on probably one of the simplest probabilistic
models, leads to results in the same ballpark as the state of the art, at a
lower computational cost.
| [
"['Flavien Léger' 'Guoshen Yu' 'Guillermo Sapiro']",
"Flavien L\\'eger, Guoshen Yu, Guillermo Sapiro"
] |
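A minimal sketch of EM-style imputation with a single multivariate Gaussian, the simplest instance of the Gaussian-model idea above; the paper's framework is richer, and this sketch also omits the conditional-covariance correction in the M-step:

```python
import numpy as np

def gaussian_em_impute(X, n_iter=50):
    """Iterative Gaussian imputation. X: (n, d) with np.nan for missing.

    E-step: fill each row's missing entries with their conditional mean
    given the observed entries. M-step: re-estimate mean and covariance.
    (Simplified: full EM would also add the conditional covariances.)"""
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])   # crude initial fill
    d = X.shape[1]
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(d)
        for i in range(X.shape[0]):
            m = miss[i]
            o = ~m
            if not m.any() or not o.any():
                continue   # fully observed or fully missing row
            X[i, m] = mu[m] + cov[np.ix_(m, o)] @ np.linalg.solve(
                cov[np.ix_(o, o)], X[i, o] - mu[o])
    return X

rng = np.random.default_rng(0)
full = rng.multivariate_normal([0, 0, 0],
                               [[1, .8, .5], [.8, 1, .3], [.5, .3, 1]], 200)
obs = full.copy()
obs[rng.random(obs.shape) < 0.15] = np.nan
print(np.abs(gaussian_em_impute(obs) - full)[np.isnan(obs)].mean())
```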
cs.LG math.OC stat.ML | null | 1010.4207 | null | null | http://arxiv.org/pdf/1010.4207v2 | 2010-11-14T17:19:42Z | 2010-10-20T14:02:21Z | Convex Analysis and Optimization with Submodular Functions: a Tutorial | Set-functions appear in many areas of computer science and applied
mathematics, such as machine learning, computer vision, operations research or
electrical networks. Among these set-functions, submodular functions play an
important role, similar to convex functions on vector spaces. In this tutorial,
the theory of submodular functions is presented, in a self-contained way, with
all results shown from first principles. A good knowledge of convex analysis is
assumed.
| [
"['Francis Bach']",
"Francis Bach (INRIA Rocquencourt, LIENS)"
] |
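The central object connecting submodular set functions to convex analysis is the Lovász extension, computable by the greedy algorithm: sort the coordinates in decreasing order and accumulate marginal gains. A small sketch (function names are ours):

```python
import numpy as np

def lovasz_extension(f, x):
    """Evaluate the Lovász extension of a set function f at x.

    Greedy algorithm: visit coordinates in decreasing order of x and
    accumulate x_i times the marginal gain of adding i. For submodular f
    this extension is convex, which is the bridge between set functions
    and convex analysis that the tutorial develops.
    f takes a Python set of indices; x is a 1-D numpy array."""
    order = np.argsort(-x)
    value, S, prev = 0.0, set(), f(set())
    for i in order:
        S = S | {int(i)}
        cur = f(S)
        value += x[i] * (cur - prev)
        prev = cur
    return value

# Example: f(S) = sqrt(|S|) is submodular, so its extension is convex.
f = lambda S: np.sqrt(len(S))
print(lovasz_extension(f, np.array([0.5, -0.2, 0.9])))
```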
cs.LG cs.IT math.IT stat.ML | null | 1010.4237 | null | null | http://arxiv.org/pdf/1010.4237v2 | 2010-12-31T18:36:49Z | 2010-10-20T16:05:28Z | Robust PCA via Outlier Pursuit | Singular Value Decomposition (and Principal Component Analysis) is one of the
most widely used techniques for dimensionality reduction: successful and
efficiently computable, it is nevertheless plagued by a well-known,
well-documented sensitivity to outliers. Recent work has considered the setting
where each point has a few arbitrarily corrupted components. Yet, in
applications of SVD or PCA such as robust collaborative filtering or
bioinformatics, malicious agents, defective genes, or simply corrupted or
contaminated experiments may effectively yield entire points that are
completely corrupted.
We present an efficient convex optimization-based algorithm we call Outlier
Pursuit, that under some mild assumptions on the uncorrupted points (satisfied,
e.g., by the standard generative assumption in PCA problems) recovers the exact
optimal low-dimensional subspace, and identifies the corrupted points. Such
identification of corrupted points that do not conform to the low-dimensional
approximation, is of paramount interest in bioinformatics and financial
applications, and beyond. Our techniques involve matrix decomposition using
nuclear norm minimization; however, our results, setup, and approach
necessarily differ considerably from the existing line of work in matrix
completion and matrix decomposition, since we develop an approach to recover
the correct column space of the uncorrupted matrix, rather than the exact
matrix itself. In any problem where one seeks to recover a structure rather
than the exact initial matrices, techniques developed thus far that rely on
certificates of optimality will fail. We present an important extension of
these methods, that allows the treatment of such problems.
| [
"Huan Xu, Constantine Caramanis and Sujay Sanghavi",
"['Huan Xu' 'Constantine Caramanis' 'Sujay Sanghavi']"
] |
cs.LG | null | 1010.4253 | null | null | http://arxiv.org/pdf/1010.4253v1 | 2010-10-20T17:21:38Z | 2010-10-20T17:21:38Z | Large-Scale Clustering Based on Data Compression | This paper considers the clustering problem for large data sets. We propose
an approach based on distributed optimization. The clustering problem is
formulated as an optimization problem of maximizing the classification gain. We
show that the optimization problem can be reformulated and decomposed into
small-scale sub optimization problems by using the Dantzig-Wolfe decomposition
method. Generally speaking, the Dantzig-Wolfe method can only be used for
convex optimization problems, where the duality gaps are zero. Even though
the optimization problem considered in this paper is non-convex, we prove that
the duality gap goes to zero as the problem size goes to infinity. Therefore, the
Dantzig-Wolfe method can be applied here. In the proposed approach, the
clustering problem is iteratively solved by a group of computers coordinated by
one center processor, where each computer solves one independent small-scale
sub optimization problem during each iteration, and only a small amount of data
communication is needed between the computers and center processor. Numerical
results show that the proposed approach is effective and efficient.
| [
"['Xudong Ma']",
"Xudong Ma"
] |
cs.LG | null | 1010.4408 | null | null | http://arxiv.org/pdf/1010.4408v1 | 2010-10-21T09:57:12Z | 2010-10-21T09:57:12Z | Sublinear Optimization for Machine Learning | We give sublinear-time approximation algorithms for some optimization
problems arising in machine learning, such as training linear classifiers and
finding minimum enclosing balls. Our algorithms can be extended to some
kernelized versions of these problems, such as SVDD, hard margin SVM, and
L2-SVM, for which sublinear-time algorithms were not known before. These new
algorithms use a combination of novel sampling techniques and a new
multiplicative update algorithm. We give lower bounds which show the running
times of many of our algorithms to be nearly best possible in the unit-cost RAM
model. We also give implementations of our algorithms in the semi-streaming
setting, obtaining the first low pass polylogarithmic space and sublinear time
algorithms achieving arbitrary approximation factor.
| [
"Kenneth L. Clarkson and Elad Hazan and David P. Woodruff",
"['Kenneth L. Clarkson' 'Elad Hazan' 'David P. Woodruff']"
] |
cs.LG cs.AI | null | 1010.4466 | null | null | http://arxiv.org/pdf/1010.4466v1 | 2010-10-21T13:28:09Z | 2010-10-21T13:28:09Z | On the Foundations of Adversarial Single-Class Classification | Motivated by authentication, intrusion and spam detection applications we
consider single-class classification (SCC) as a two-person game between the
learner and an adversary. In this game the learner has a sample from a target
distribution and the goal is to construct a classifier capable of
distinguishing observations from the target distribution from observations
emitted from an unknown other distribution. The ideal SCC classifier must
guarantee a given tolerance for the false-positive error (false alarm rate)
while minimizing the false negative error (intruder pass rate). Viewing SCC as
a two-person zero-sum game we identify both deterministic and randomized
optimal classification strategies for different game variants. We demonstrate
that randomized classification can provide a significant advantage. In the
deterministic setting we show how to reduce SCC to two-class classification
where in the two-class problem the other class is a synthetically generated
distribution. We provide an efficient and practical algorithm for constructing
and solving the two class problem. The algorithm distinguishes low density
regions of the target distribution and is shown to be consistent.
| [
"Ran El-Yaniv and Mordechai Nisenson",
"['Ran El-Yaniv' 'Mordechai Nisenson']"
] |
cs.CV cs.LG | null | 1010.4951 | null | null | http://arxiv.org/pdf/1010.4951v2 | 2012-07-20T01:17:25Z | 2010-10-24T11:28:11Z | Local Component Analysis for Nonparametric Bayes Classifier | The decision boundaries of the Bayes classifier are optimal because they lead
to the maximum probability of correct decision. This means that if we knew the
prior probabilities and the class-conditional densities, we could design a
classifier
which gives the lowest probability of error. However, in classification based
on nonparametric density estimation methods such as Parzen windows, the
decision regions depend on the choice of parameters such as window width.
Moreover, these methods suffer from curse of dimensionality of the feature
space and small sample size problem which severely restricts their practical
applications. In this paper, we address these problems by introducing a novel
dimension reduction and classification method based on local component
analysis. In this method, by adopting an iterative cross-validation algorithm,
we simultaneously estimate the optimal transformation matrices (for dimension
reduction) and classifier parameters based on local information. The proposed
method can classify data with complicated boundaries and also alleviate the
curse of dimensionality dilemma. Experiments on real data show the superiority
of the proposed algorithm in terms of classification accuracy for pattern
classification applications like age, facial expression and character
recognition. Keywords: Bayes classifier, curse of dimensionality dilemma,
Parzen window, pattern classification, subspace learning.
| [
"['Mahmoud Khademi' 'Mohammad T. Manzuri-Shalmani' 'Meharn safayani']",
"Mahmoud Khademi, Mohammad T. Manzuri-Shalmani, and Meharn safayani"
] |
cs.LG cs.NA | null | 1010.5290 | null | null | http://arxiv.org/pdf/1010.5290v2 | 2011-03-16T05:53:38Z | 2010-10-26T00:28:36Z | Converged Algorithms for Orthogonal Nonnegative Matrix Factorizations | This paper proposes uni-orthogonal and bi-orthogonal nonnegative matrix
factorization algorithms with robust convergence proofs. We design the
algorithms based on the work of Lee and Seung [1], and derive the converged
versions by utilizing ideas from the work of Lin [2]. The experimental results
confirm the theoretical convergence guarantees.
| [
"Andri Mirzal",
"['Andri Mirzal']"
] |
cs.CC cs.LG | null | 1010.5470 | null | null | http://arxiv.org/pdf/1010.5470v2 | 2011-01-14T11:21:46Z | 2010-10-26T17:48:25Z | Resource-bounded Dimension in Computational Learning Theory | This paper focuses on the relation between computational learning theory and
resource-bounded dimension. We intend to establish close connections between
the learnability/nonlearnability of a concept class and its corresponding size
in terms of effective dimension, which will allow the use of powerful dimension
techniques in computational learning and, vice versa, the import of learning
results into complexity via dimension. Firstly, we obtain a tight result on the
dimension of online mistake-bound learnable classes. Secondly, in relation with
PAC learning, we show that the polynomial-space dimension of PAC learnable
classes of concepts is zero. This provides a hypothesis on effective dimension
that implies the inherent unpredictability of concept classes (the classes that
verify this property are classes not efficiently PAC learnable using any
hypothesis). Thirdly, the main result proves that the polynomial-space
dimension of concept classes learnable by a membership-query algorithm is zero.
| [
"['Ricard Gavalda' 'Maria Lopez-Valdes' 'Elvira Mayordomo'\n 'N. V. Vinodchandran']",
"Ricard Gavalda, Maria Lopez-Valdes, Elvira Mayordomo, N. V.\n Vinodchandran"
] |
cs.LG math.OC | null | 1010.5511 | null | null | http://arxiv.org/pdf/1010.5511v1 | 2010-10-26T20:23:39Z | 2010-10-26T20:23:39Z | Efficient Minimization of Decomposable Submodular Functions | Many combinatorial problems arising in machine learning can be reduced to the
problem of minimizing a submodular function. Submodular functions are a natural
discrete analog of convex functions, and can be minimized in strongly
polynomial time. Unfortunately, state-of-the-art algorithms for general
submodular minimization are intractable for larger problems. In this paper, we
introduce a novel subclass of submodular minimization problems that we call
decomposable. Decomposable submodular functions are those that can be
represented as sums of concave functions applied to modular functions. We
develop an algorithm, SLG, that can efficiently minimize decomposable
submodular functions with tens of thousands of variables. Our algorithm
exploits recent results in smoothed convex minimization. We apply SLG to
synthetic benchmarks and a joint classification-and-segmentation task, and show
that it outperforms the state-of-the-art general purpose submodular
minimization algorithms by several orders of magnitude.
| [
"['Peter Stobbe' 'Andreas Krause']",
"Peter Stobbe, Andreas Krause"
] |
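To make the notion of decomposability concrete: the abstract above defines decomposable submodular functions as sums of concave functions applied to modular (additive) set functions. The sketch below builds such a function with sqrt as the concave component and numerically spot-checks the diminishing-returns property; it illustrates the function class only, not the SLG minimization algorithm, and the weights are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
W = rng.random((3, n))            # three nonnegative modular functions

def f(S):
    """Decomposable submodular function: sum_j g(w_j(S)), with g = sqrt
    concave and each w_j modular (additive over elements)."""
    m = W[:, sorted(S)].sum(axis=1) if S else np.zeros(3)
    return np.sqrt(m).sum()

# Spot-check diminishing returns: for S a subset of T and e outside T,
# f(S + e) - f(S) >= f(T + e) - f(T).
for _ in range(1000):
    T = set(rng.choice(n, size=4, replace=False).tolist())
    S = set(sorted(T)[:2])
    e = int(rng.integers(n))
    if e in T:
        continue
    assert f(S | {e}) - f(S) >= f(T | {e}) - f(T) - 1e-12
print("diminishing-returns spot-check passed")
```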
cs.AI cs.LG cs.MA | null | 1010.6234 | null | null | http://arxiv.org/pdf/1010.6234v1 | 2010-10-29T14:50:49Z | 2010-10-29T14:50:49Z | Analysing the behaviour of robot teams through relational sequential
pattern mining | This report outlines the use of a relational representation in a Multi-Agent
domain to model the behaviour of the whole system. A desired property in such
systems is the ability of the team members to work together to achieve a common
goal in a cooperative manner. The aim is to define a systematic method to
verify the effective collaboration among the members of a team and to compare
different multi-agent behaviours. Using external observations of a Multi-Agent
System to analyse, model, and recognize agent behaviour can be very useful for
directing team actions. In particular, this report focuses on the challenge of
autonomous, unsupervised sequential learning of the team's behaviour from
observations. Our approach allows us to learn a symbolic sequence (a relational
representation) that translates raw multi-agent, multivariate observations of
a dynamic, complex environment into a set of sequential behaviours that are
characteristic of the team in question, represented as sequences of
first-order logic atoms. We propose to use a relational learning algorithm to
mine meaningful frequent patterns among the relational sequences to
characterise team behaviours. We compared the performance of two teams in the
RoboCup four-legged league environment that take very different approaches to
the game: one uses Case-Based Reasoning, the other a purely reactive behaviour.
| [
"Grazia Bombini, Raquel Ros, Stefano Ferilli, Ramon Lopez de Mantaras",
"['Grazia Bombini' 'Raquel Ros' 'Stefano Ferilli' 'Ramon Lopez de Mantaras']"
] |
cs.LG cs.AI | null | 1011.0041 | null | null | http://arxiv.org/pdf/1011.0041v2 | 2011-01-18T02:04:12Z | 2010-10-30T03:09:11Z | Predictive State Temporal Difference Learning | We propose a new approach to value function approximation which combines
linear temporal difference reinforcement learning with subspace identification.
In practical applications, reinforcement learning (RL) is complicated by the
fact that state is either high-dimensional or partially observable. Therefore,
RL methods are designed to work with features of state rather than state
itself, and the success or failure of learning is often determined by the
suitability of the selected features. By comparison, subspace identification
(SSID) methods are designed to select a feature set which preserves as much
information as possible about state. In this paper we connect the two
approaches, looking at the problem of reinforcement learning with a large set
of features, each of which may only be marginally useful for value function
approximation. We introduce a new algorithm for this situation, called
Predictive State Temporal Difference (PSTD) learning. As in SSID for predictive
state representations, PSTD finds a linear compression operator that projects a
large set of features down to a small set that preserves the maximum amount of
predictive information. As in RL, PSTD then uses a Bellman recursion to
estimate a value function. We discuss the connection between PSTD and prior
approaches in RL and SSID. We prove that PSTD is statistically consistent,
perform several experiments that illustrate its properties, and demonstrate its
potential on a difficult optimal stopping problem.
| [
"['Byron Boots' 'Geoffrey J. Gordon']",
"Byron Boots and Geoffrey J. Gordon"
] |
cs.LG math.OC stat.ML | null | 1011.0097 | null | null | http://arxiv.org/pdf/1011.0097v1 | 2010-10-30T18:30:43Z | 2010-10-30T18:30:43Z | Sparse Inverse Covariance Selection via Alternating Linearization
Methods | Gaussian graphical models are of great interest in statistical learning.
Because the conditional independencies between different nodes correspond to
zero entries in the inverse covariance matrix of the Gaussian distribution, one
can learn the structure of the graph by estimating a sparse inverse covariance
matrix from sample data, by solving a convex maximum likelihood problem with an
$\ell_1$-regularization term. In this paper, we propose a first-order method
based on an alternating linearization technique that exploits the problem's
special structure; in particular, the subproblems solved in each iteration have
closed-form solutions. Moreover, our algorithm obtains an $\epsilon$-optimal
solution in $O(1/\epsilon)$ iterations. Numerical experiments on both synthetic
and real data from gene association networks show that a practical version of
this algorithm outperforms other competitive algorithms.
| [
"Katya Scheinberg, Shiqian Ma, Donald Goldfarb",
"['Katya Scheinberg' 'Shiqian Ma' 'Donald Goldfarb']"
] |
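The abstract above emphasizes that each subproblem of the alternating linearization scheme has a closed-form solution. Below is a hedged sketch of the two closed forms that typically arise when splitting the $\ell_1$-penalized log-determinant objective: soft-thresholding for the $\ell_1$ term and an eigenvalue update for the log-det term. The exact splitting, step sizes, and stopping rules of the authors' method may differ.

```python
import numpy as np

def prox_logdet(A, mu):
    """Closed-form minimizer of  -log det(X) + ||X - A||_F^2 / (2 mu)
    over symmetric positive definite X, via an eigendecomposition:
    each eigenvalue a of A maps to (a + sqrt(a^2 + 4 mu)) / 2."""
    a, V = np.linalg.eigh((A + A.T) / 2)
    gamma = (a + np.sqrt(a ** 2 + 4 * mu)) / 2    # always strictly positive
    return (V * gamma) @ V.T

def prox_l1(A, tau):
    """Closed-form minimizer of  tau ||X||_1 + ||X - A||_F^2 / 2."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0)

A = np.array([[2.0, 0.3], [0.3, 1.0]])
print(np.linalg.eigvalsh(prox_logdet(A, 0.1)))   # strictly positive spectrum
print(prox_l1(A, 0.5))                           # sparsified entries
```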
cs.LG cs.HC cs.SE | null | 1011.0350 | null | null | http://arxiv.org/pdf/1011.0350v1 | 2010-11-01T15:40:31Z | 2010-11-01T15:40:31Z | Developing courses with HoloRena, a framework for scenario- and game
based e-learning environments | Although utilizing rich, interactive solutions can make learning more
effective and attractive, scenario- and game-based educational resources on the
web are not widely used. Creating these applications is a complex, expensive
and challenging process. Development frameworks and authoring tools hardly
support reusable components, teamwork, and learning management
system-independent courseware architecture. In this article we introduce the
concept of a low-level, thick-client solution addressing these problems. With
some example applications we demonstrate how a framework based on this concept
can be useful for developing scenario- and game-based e-learning
environments.
| [
"Laszlo Juracz",
"['Laszlo Juracz']"
] |
math.ST cond-mat.stat-mech cs.IT cs.LG math.IT stat.TH | null | 1011.0415 | null | null | http://arxiv.org/pdf/1011.0415v1 | 2010-11-01T19:09:57Z | 2010-11-01T19:09:57Z | Learning Networks of Stochastic Differential Equations | We consider linear models for stochastic dynamics. To any such model can be
associated a network (namely a directed graph) describing which degrees of
freedom interact under the dynamics. We tackle the problem of learning such a
network from observation of the system trajectory over a time interval $T$.
We analyze the $\ell_1$-regularized least squares algorithm and, in the
setting in which the underlying network is sparse, we prove performance
guarantees that are \emph{uniform in the sampling rate} as long as this is
sufficiently high. This result substantiates the notion of a well defined `time
complexity' for the network inference problem.
| [
"['José Bento' 'Morteza Ibrahimi' 'Andrea Montanari']",
"Jos\\'e Bento, Morteza Ibrahimi, and Andrea Montanari"
] |
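A toy version of the $\ell_1$-regularized least squares step analyzed in the abstract above: simulate an Euler-discretized linear SDE with a sparse interaction matrix, then regress each coordinate's increments on the state with an ISTA-style lasso solver. All constants (dimensions, sampling rate, penalty) are illustrative choices, not the paper's, and this sketch does not reproduce the paper's uniform-in-sampling-rate analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
p, T, dt, lam = 10, 5000, 0.01, 0.05

# Sparse ground-truth dynamics: dx = A x dt + dB (Euler discretization).
A = -np.eye(p)
A[0, 3], A[2, 7] = 0.8, -0.6
x, X = np.zeros(p), np.empty((T, p))
for t in range(T):
    X[t] = x
    x = x + A @ x * dt + np.sqrt(dt) * rng.normal(size=p)

def ista(F, y, lam, iters=300):
    """l1-regularized least squares: min_a ||F a - y||^2 / (2m) + lam ||a||_1."""
    m = len(y)
    L = np.linalg.norm(F, 2) ** 2 / m      # Lipschitz constant of the gradient
    a = np.zeros(F.shape[1])
    for _ in range(iters):
        a = a - F.T @ (F @ a - y) / (m * L)                   # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0)   # soft threshold
    return a

dX = np.diff(X, axis=0) / dt               # finite-difference increments
A_hat = np.vstack([ista(X[:-1], dX[:, i], lam) for i in range(p)])
print(np.round(A_hat[0], 2))   # sparse row, with mass near entries (0,0) and (0,3)
```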
stat.ML cs.IT cs.LG math.IT | 10.1109/TSP.2011.2141661 | 1011.0450 | null | null | http://arxiv.org/abs/1011.0450v2 | 2011-03-27T20:05:18Z | 2010-11-01T20:59:12Z | From Sparse Signals to Sparse Residuals for Robust Sensing | One of the key challenges in sensor networks is the extraction of information
by fusing data from a multitude of distinct, but possibly unreliable sensors.
Recovering information from the maximum number of dependable sensors while
identifying the unreliable ones is critical for robust sensing. This sensing
task is formulated here as that of finding the maximum number of feasible
subsystems of linear equations, and proved to be NP-hard. Useful links are
established with compressive sampling, which aims at recovering vectors that
are sparse. In contrast, the signals here are not sparse, but give rise to
sparse residuals. Capitalizing on this form of sparsity, four sensing schemes
with complementary strengths are developed. The first scheme is a convex
relaxation of the original problem expressed as a second-order cone program
(SOCP). It is shown that when the involved sensing matrices are Gaussian and
the reliable measurements are sufficiently many, the SOCP can recover the
optimal solution with overwhelming probability. The second scheme is obtained
by replacing the initial objective function with a concave one. The third and
fourth schemes are tailored for noisy sensor data. The noisy case is cast as a
combinatorial problem that is subsequently surrogated by a (weighted) SOCP.
Interestingly, the derived cost functions fall into the framework of robust
multivariate linear regression, while an efficient block-coordinate descent
algorithm is developed for their minimization. The robust sensing capabilities
of all schemes are verified by simulated tests.
| [
"['Vassilis Kekatos' 'Georgios B. Giannakis']",
"Vassilis Kekatos and Georgios B. Giannakis"
] |
cs.LG | null | 1011.0472 | null | null | http://arxiv.org/pdf/1011.0472v1 | 2010-11-01T23:41:35Z | 2010-11-01T23:41:35Z | Regularized Risk Minimization by Nesterov's Accelerated Gradient
Methods: Algorithmic Extensions and Empirical Studies | Nesterov's accelerated gradient methods (AGM) have been successfully applied
in many machine learning areas. However, their empirical performance on
training max-margin models has been inferior to existing specialized solvers.
In this paper, we first extend AGM to strongly convex and composite objective
functions with Bregman style prox-functions. Our unifying framework covers both
the $\infty$-memory and 1-memory styles of AGM, tunes the Lipschitz constant
adaptively, and bounds the duality gap. Then we demonstrate various ways to
apply this framework of methods to a wide range of machine learning problems.
Emphasis will be given to their rates of convergence and how to efficiently
compute the gradient and optimize the models. The experimental results show
that with our extensions AGM outperforms state-of-the-art solvers on max-margin
models.
| [
"['Xinhua Zhang' 'Ankan Saha' 'S. V. N. Vishwanathan']",
"Xinhua Zhang and Ankan Saha and S.V.N. Vishwanathan"
] |
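For reference, here is the classical Nesterov accelerated gradient method that the abstract above extends, in its basic smooth, unconstrained form (without the Bregman prox-functions, adaptive Lipschitz tuning, or duality-gap bounds the paper adds). The quadratic test problem is made up for illustration.

```python
import numpy as np

def agm(grad, x0, L, steps=500):
    """Classical Nesterov accelerated gradient method for an L-smooth
    convex objective; returns the final iterate."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(steps):
        x_next = y - grad(y) / L                       # gradient step at y
        t_next = (1 + np.sqrt(1 + 4 * t ** 2)) / 2     # momentum schedule
        y = x_next + (t - 1) / t_next * (x_next - x)   # extrapolation
        x, t = x_next, t_next
    return x

# Quadratic test: f(x) = 0.5 x'Qx - b'x, whose minimizer is Q^{-1} b.
rng = np.random.default_rng(0)
M = rng.normal(size=(20, 20))
Q = M.T @ M + np.eye(20)
b = rng.normal(size=20)
L = np.linalg.eigvalsh(Q).max()
x = agm(lambda v: Q @ v - b, np.zeros(20), L)
print(np.linalg.norm(x - np.linalg.solve(Q, b)))   # near zero
```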
cs.LG cs.AI stat.ML | null | 1011.0686 | null | null | http://arxiv.org/pdf/1011.0686v3 | 2011-03-16T18:51:21Z | 2010-11-02T17:55:55Z | A Reduction of Imitation Learning and Structured Prediction to No-Regret
Online Learning | Sequential prediction problems such as imitation learning, where future
observations depend on previous predictions (actions), violate the common
i.i.d. assumptions made in statistical learning. This leads to poor performance
in theory and often in practice. Some recent approaches provide stronger
guarantees in this setting, but remain somewhat unsatisfactory as they train
either non-stationary or stochastic policies and require a large number of
iterations. In this paper, we propose a new iterative algorithm, which trains a
stationary deterministic policy, that can be seen as a no-regret algorithm in
an online learning setting. We show that any such no-regret algorithm, combined
with additional reduction assumptions, must find a policy with good performance
under the distribution of observations it induces in such sequential settings.
We demonstrate that this new approach outperforms previous approaches on two
challenging imitation learning problems and a benchmark sequence labeling
problem.
| [
"['Stephane Ross' 'Geoffrey J. Gordon' 'J. Andrew Bagnell']",
"Stephane Ross, Geoffrey J. Gordon, J. Andrew Bagnell"
] |
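A toy schematic of the dataset-aggregation idea behind the abstract above: roll out the current policy, have the expert label the states the learner itself visits, aggregate all labeled states, and retrain. The environment, expert, and linear learner below are hypothetical stand-ins; the paper's reduction and regret guarantees are not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)

def expert_action(s):
    """Hypothetical expert: always push the state toward zero."""
    return -np.sign(s)

def rollout(w, horizon=30):
    """Run the learner's current linear policy; return visited states."""
    s, states = 3.0, []
    for _ in range(horizon):
        states.append(s)
        a = np.sign(w[0] * s + w[1]) or 1.0      # break ties arbitrarily
        s = s + 0.5 * a + rng.normal(scale=0.1)
    return states

# Dataset aggregation: states come from the learner's own distribution,
# labels come from the expert; retrain on everything gathered so far.
D_s, D_a, w = [], [], np.zeros(2)
for _ in range(10):
    states = rollout(w)
    D_s += states
    D_a += [expert_action(s) for s in states]
    X = np.column_stack([D_s, np.ones(len(D_s))])
    w, *_ = np.linalg.lstsq(X, np.array(D_a), rcond=None)
print(w)   # negative slope: the learner imitates "push toward zero"
```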
cs.DS cs.LG | null | 1011.1161 | null | null | http://arxiv.org/pdf/1011.1161v3 | 2013-06-18T15:10:04Z | 2010-11-04T14:00:41Z | Multiarmed Bandit Problems with Delayed Feedback | In this paper we initiate the study of optimization of bandit type problems
in scenarios where the feedback of a play is not immediately known. This arises
naturally in allocation problems which have been studied extensively in the
literature, albeit in the absence of delays in the feedback. We study this
problem in the Bayesian setting. In the presence of delays, no solution with
provable guarantees is known to exist with sub-exponential running time.
We show that bandit problems with delayed feedback that arise in allocation
settings can be forced to have significant structure, with a slight loss in
optimality. This structure gives us the ability to reason about the
relationship of single arm policies to the entangled optimum policy, and
eventually leads to an O(1) approximation for a significantly general class of
priors. The structural insights we develop are of key interest and carry over
to the setting where the feedback of an action is available instantaneously,
and we improve all previous results in this setting as well.
| [
"['Sudipto Guha' 'Kamesh Munagala' 'Martin Pal']",
"Sudipto Guha and Kamesh Munagala and Martin Pal"
] |
cs.DS cs.CR cs.LG | null | 1011.1296 | null | null | http://arxiv.org/pdf/1011.1296v4 | 2011-10-27T16:50:37Z | 2010-11-04T23:59:08Z | Privately Releasing Conjunctions and the Statistical Query Barrier | Suppose we would like to know all answers to a set of statistical queries C
on a data set up to small error, but we can only access the data itself using
statistical queries. A trivial solution is to exhaustively ask all queries in
C. Can we do any better?
+ We show that the number of statistical queries necessary and sufficient for
this task is---up to polynomial factors---equal to the agnostic learning
complexity of C in Kearns' statistical query (SQ) model. This gives a complete
answer to the question when running time is not a concern.
+ We then show that the problem can be solved efficiently (allowing arbitrary
error on a small fraction of queries) whenever the answers to C can be
described by a submodular function. This includes many natural concept classes,
such as graph cuts and Boolean disjunctions and conjunctions.
While interesting from a learning theoretic point of view, our main
applications are in privacy-preserving data analysis:
Here, our second result leads to the first algorithm that efficiently
releases differentially private answers to all Boolean conjunctions with 1%
average error. This presents significant progress on a key open problem in
privacy-preserving data analysis.
Our first result on the other hand gives unconditional lower bounds on any
differentially private algorithm that admits a (potentially
non-privacy-preserving) implementation using only statistical queries. Not only
our algorithms, but also most known private algorithms can be implemented using
only statistical queries, and hence are constrained by these lower bounds. Our
result therefore isolates the complexity of agnostic learning in the SQ-model
as a new barrier in the design of differentially private algorithms.
| [
"Anupam Gupta, Moritz Hardt, Aaron Roth, Jonathan Ullman",
"['Anupam Gupta' 'Moritz Hardt' 'Aaron Roth' 'Jonathan Ullman']"
] |
stat.ML cs.LG math.NA | null | 1011.1518 | null | null | http://arxiv.org/pdf/1011.1518v3 | 2010-12-04T01:44:01Z | 2010-11-05T21:43:02Z | Robust Matrix Decomposition with Outliers | Suppose a given observation matrix can be decomposed as the sum of a low-rank
matrix and a sparse matrix (outliers), and the goal is to recover these
individual components from the observed sum. Such additive decompositions have
applications in a variety of numerical problems including system
identification, latent variable graphical modeling, and principal components
analysis. We study conditions under which recovering such a decomposition is
possible via a combination of $\ell_1$ norm and trace norm minimization. We are
specifically interested in the question of how many outliers are allowed so
that convex programming can still achieve accurate recovery, and we obtain
stronger recovery guarantees than previous studies. Moreover, we do not assume
that the spatial pattern of outliers is random, which stands in contrast to
related analyses under such assumptions via matrix completion.
| [
"['Daniel Hsu' 'Sham M. Kakade' 'Tong Zhang']",
"Daniel Hsu, Sham M. Kakade, Tong Zhang"
] |
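Below is a generic sketch of the convex program the abstract above studies: recover a low-rank matrix plus sparse outliers by combining trace (nuclear) norm and $\ell_1$ minimization, here via a standard ADMM, principal-component-pursuit-style iteration. The paper's contribution is the recovery analysis, not this particular solver, and the parameter defaults below are common heuristics rather than the paper's choices.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the prox of tau * trace norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0)) @ Vt

def shrink(X, tau):
    """Entrywise soft threshold: the prox of tau * l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

def decompose(M, lam=None, mu=None, iters=300):
    """Alternate the two proxes on the augmented Lagrangian of
       min ||L||_* + lam ||S||_1  subject to  L + S = M."""
    m, n = M.shape
    lam = lam or 1 / np.sqrt(max(m, n))           # common default weight
    mu = mu or m * n / (4 * np.abs(M).sum())      # common heuristic step
    L = S = Y = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)                  # dual (multiplier) update
    return L, S

rng = np.random.default_rng(0)
L0 = rng.normal(size=(50, 4)) @ rng.normal(size=(4, 50))             # rank 4
S0 = (rng.random((50, 50)) < 0.05) * rng.normal(scale=5, size=(50, 50))
L, S = decompose(L0 + S0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))   # small relative error
```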
cs.LG | null | 1011.1576 | null | null | http://arxiv.org/pdf/1011.1576v4 | 2011-06-18T20:15:10Z | 2010-11-06T18:40:15Z | Online Importance Weight Aware Updates | An importance weight quantifies the relative importance of one example over
another, arising in applications such as boosting, asymmetric classification
costs, reductions, and active learning. The standard approach for dealing with
importance weights in gradient descent is via multiplication of the gradient.
We first demonstrate the problems of this approach when importance weights are
large, and argue in favor of more sophisticated ways for dealing with them. We
then develop an approach which enjoys an invariance property: that updating
twice with importance weight $h$ is equivalent to updating once with importance
weight $2h$. For many important losses this has a closed form update which
satisfies standard regret guarantees when all examples have $h=1$. We also
briefly discuss two other reasonable approaches for handling large importance
weights. Empirically, these approaches yield substantially superior prediction
with similar computational performance while reducing the sensitivity of the
algorithm to the exact setting of the learning rate. We apply these to online
active learning yielding an extraordinarily fast active learning algorithm that
works even in the presence of adversarial noise.
| [
"Nikos Karampatziakis and John Langford",
"['Nikos Karampatziakis' 'John Langford']"
] |
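The invariance property in the abstract above (updating twice with importance weight h should equal updating once with weight 2h) admits a simple closed form for squared loss, obtained by integrating the gradient-descent dynamics over the weight. The sketch below implements that squared-loss case and verifies the invariance numerically; it is one instance consistent with the property, and other losses have different closed forms.

```python
import numpy as np

def iw_aware_update(w, x, y, h, eta):
    """Importance-weight-aware update for squared loss: integrates the
    gradient-flow ODE over weight h, so update(h) twice == update(2h) once."""
    xx = x @ x
    resid = y - w @ x
    return w + x * resid * (1 - np.exp(-h * eta * xx)) / xx

rng = np.random.default_rng(0)
w = rng.normal(size=5)
x = rng.normal(size=5)
y = 1.0
once = iw_aware_update(w, x, y, 2.0, 0.1)
twice = iw_aware_update(iw_aware_update(w, x, y, 1.0, 0.1), x, y, 1.0, 0.1)
print(np.allclose(once, twice))   # True: the invariance property holds
```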
cs.NA cs.LG math.NA | null | 1011.1716 | null | null | http://arxiv.org/pdf/1011.1716v4 | 2011-09-06T14:08:48Z | 2010-11-08T06:41:43Z | Least Squares Ranking on Graphs | Given a set of alternatives to be ranked, and some pairwise comparison data,
ranking is a least squares computation on a graph. The vertices are the
alternatives, and the edge values comprise the comparison data. The basic idea
is very simple and old: come up with values on vertices such that their
differences match the given edge data. Since an exact match will usually be
impossible, one settles for matching in a least squares sense. This formulation
was first described by Leake in 1976 for ranking football teams and appears as
an example in Professor Gilbert Strang's classic linear algebra textbook. If
one is willing to look into the residual a little further, then the problem
really comes alive, as shown effectively by the remarkable recent paper of
Jiang et al. With or without this twist, the humble least squares problem on
graphs has far-reaching connections with many current areas of research. These
connections are to theoretical computer science (spectral graph theory, and
multilevel methods for graph Laplacian systems); numerical analysis (algebraic
multigrid, and finite element exterior calculus); other mathematics (Hodge
decomposition, and random clique complexes); and applications (arbitrage, and
ranking of sports teams). Not all of these connections are explored in this
paper, but many are. The underlying ideas are easy to explain, requiring only
the four fundamental subspaces from elementary linear algebra. One of our aims
is to explain these basic ideas and connections, to get researchers in many
fields interested in this topic. Another aim is to use our numerical
experiments for guidance on selecting methods and exposing the need for further
development.
| [
"Anil N. Hirani, Kaushik Kalyanaraman, Seth Watts",
"['Anil N. Hirani' 'Kaushik Kalyanaraman' 'Seth Watts']"
] |
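The basic computation the abstract above describes fits in a few lines: build the edge-vertex incidence matrix of the comparison graph and solve for vertex values whose differences best match the edge data in the least squares sense. The comparison data below is made up for illustration.

```python
import numpy as np

# Pairwise comparison data: (i, j, y) means "j beat i by margin y".
n = 4
comparisons = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 3.5), (2, 3, 0.5), (1, 3, 1.2)]

# Edge-vertex incidence matrix: one row per comparison (edge).
B = np.zeros((len(comparisons), n))
y = np.zeros(len(comparisons))
for k, (i, j, v) in enumerate(comparisons):
    B[k, i], B[k, j], y[k] = -1.0, 1.0, v

# Values on vertices whose differences best match the given edge data.
r, *_ = np.linalg.lstsq(B, y, rcond=None)
r -= r.min()                       # ratings are defined up to a constant
print(np.round(r, 2))              # the least squares ranking
print(np.round(y - B @ r, 2))      # the residual the abstract discusses
```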
cs.LG cs.GT | null | 1011.1936 | null | null | http://arxiv.org/pdf/1011.1936v1 | 2010-11-08T22:41:14Z | 2010-11-08T22:41:14Z | Blackwell Approachability and Low-Regret Learning are Equivalent | We consider the celebrated Blackwell Approachability Theorem for two-player
games with vector payoffs. We show that Blackwell's result is equivalent, via
efficient reductions, to the existence of "no-regret" algorithms for Online
Linear Optimization. Indeed, we show that any algorithm for one such problem
can be efficiently converted into an algorithm for the other. We provide a
useful application of this reduction: the first efficient algorithm for
calibrated forecasting.
| [
"Jacob Abernethy, Peter L. Bartlett, Elad Hazan",
"['Jacob Abernethy' 'Peter L. Bartlett' 'Elad Hazan']"
] |
cs.AI cs.LG | null | 1011.2512 | null | null | http://arxiv.org/pdf/1011.2512v2 | 2011-01-17T19:09:02Z | 2010-11-10T21:44:26Z | Extended Active Learning Method | Active Learning Method (ALM) is a soft computing method which is used for
modeling and control, based on fuzzy logic. Although ALM has been shown to act
well in dynamic environments, its operators cannot support it very well in
complex situations because they lose data. ALM could therefore find better
membership functions if more appropriate operators were chosen for it. This
paper substitutes two new operators for ALM's original ones, which consequently
improves the discovery of membership functions over conventional ALM.
This new method is called Extended Active Learning Method (EALM).
| [
"Ali Akbar Kiaei, Saeed Bagheri Shouraki, Seyed Hossein Khasteh,\n Mahmoud Khademi, and Alireza Ghatreh Samani",
"['Ali Akbar Kiaei' 'Saeed Bagheri Shouraki' 'Seyed Hossein Khasteh'\n 'Mahmoud Khademi' 'Alireza Ghatreh Samani']"
] |
stat.ME cs.LG stat.CO | null | 1011.2624 | null | null | http://arxiv.org/pdf/1011.2624v2 | 2011-10-27T14:00:46Z | 2010-11-11T12:12:56Z | Clustering using Unsupervised Binary Trees: CUBT | We herein introduce a new method of interpretable clustering that uses
unsupervised binary trees. It is a three-stage procedure, the first stage of
which entails a series of recursive binary splits to reduce the heterogeneity
of the data within the new subsamples. During the second stage (pruning),
consideration is given to whether adjacent nodes can be aggregated. Finally,
during the third stage (joining), similar clusters are joined together, even if
they do not share the same parent originally. Consistency results are obtained,
and the procedure is used on simulated and real data sets.
| [
"['Ricardo Fraiman' 'Badih Ghattas' 'Marcela Svarc']",
"Ricardo Fraiman, Badih Ghattas and Marcela Svarc"
] |
stat.ML cs.LG | null | 1011.3090 | null | null | http://arxiv.org/pdf/1011.3090v2 | 2011-03-02T08:19:07Z | 2010-11-13T02:40:14Z | Regularization Strategies and Empirical Bayesian Learning for MKL | Multiple kernel learning (MKL), structured sparsity, and multi-task learning
have recently received considerable attention. In this paper, we show how
different MKL algorithms can be understood as applications of either
regularization on the kernel weights or block-norm-based regularization, which
is more common in structured sparsity and multi-task learning. We show that
these two regularization strategies can be systematically mapped to each other
through a concave conjugate operation. When the kernel-weight-based regularizer
is separable into components, we can naturally consider a generative
probabilistic model behind MKL. Based on this model, we propose learning
algorithms for the kernel weights through the maximization of marginal
likelihood. We show through numerical experiments that $\ell_2$-norm MKL and
Elastic-net MKL achieve comparable accuracy to uniform kernel combination.
Although uniform kernel combination might be preferable for its simplicity,
$\ell_2$-norm MKL and Elastic-net MKL can learn the usefulness of the
information sources represented as kernels. In particular, Elastic-net MKL
achieves sparsity in the kernel weights.
| [
"Ryota Tomioka, Taiji Suzuki",
"['Ryota Tomioka' 'Taiji Suzuki']"
] |
stat.ML cs.GT cs.LG | null | 1011.3168 | null | null | http://arxiv.org/pdf/1011.3168v2 | 2011-03-24T15:45:21Z | 2010-11-14T00:17:02Z | Online Learning: Beyond Regret | We study online learnability of a wide class of problems, extending the
results of (Rakhlin, Sridharan, Tewari, 2010) to general notions of performance
measure well beyond external regret. Our framework simultaneously captures such
well-known notions as internal and general Phi-regret, learning with
non-additive global cost functions, Blackwell's approachability, calibration of
forecasters, adaptive regret, and more. We show that learnability in all these
situations is due to control of the same three quantities: a martingale
convergence term, a term describing the ability to perform well if the future is
known, and a generalization of sequential Rademacher complexity, studied in
(Rakhlin, Sridharan, Tewari, 2010). Since we directly study complexity of the
problem instead of focusing on efficient algorithms, we are able to improve and
extend many known results which have been previously derived via an algorithmic
construction.
| [
"Alexander Rakhlin, Karthik Sridharan, Ambuj Tewari",
"['Alexander Rakhlin' 'Karthik Sridharan' 'Ambuj Tewari']"
] |
cs.AI cs.CY cs.LG | null | 1011.3557 | null | null | http://arxiv.org/pdf/1011.3557v1 | 2010-11-16T00:46:31Z | 2010-11-16T00:46:31Z | A Probabilistic Approach for Learning Folksonomies from Structured Data | Learning structured representations has emerged as an important problem in
many domains, including document and Web data mining, bioinformatics, and image
analysis. One approach to learning complex structures is to integrate many
smaller, incomplete and noisy structure fragments. In this work, we present an
unsupervised probabilistic approach that extends affinity propagation to
combine the small ontological fragments into a collection of integrated,
consistent, and larger folksonomies. This is a challenging task because the
method must aggregate similar structures while avoiding structural
inconsistencies and handling noise. We validate the approach on a real-world
social media dataset, comprised of shallow personal hierarchies specified by
many individual users, collected from the photosharing website Flickr. Our
empirical results show that our proposed approach is able to construct deeper
and denser structures, compared to an approach using only the standard affinity
propagation algorithm. Additionally, the approach yields better overall
integration quality than a state-of-the-art approach based on incremental
relational clustering.
| [
"Anon Plangprasopchok, Kristina Lerman, Lise Getoor",
"['Anon Plangprasopchok' 'Kristina Lerman' 'Lise Getoor']"
] |
cs.LG cs.IT math.IT stat.ML | null | 1011.3728 | null | null | http://arxiv.org/pdf/1011.3728v1 | 2010-11-16T15:31:25Z | 2010-11-16T15:31:25Z | PADDLE: Proximal Algorithm for Dual Dictionaries LEarning | Recently, considerable research efforts have been devoted to the design of
methods to learn overcomplete dictionaries for sparse coding from data.
However, learned dictionaries require the solution of an optimization problem
for coding new data. In order to overcome this drawback, we propose an
algorithm aimed at learning both a dictionary and its dual: a linear mapping
directly performing the coding. By leveraging proximal methods, our
algorithm jointly minimizes the reconstruction error of the dictionary and the
coding error of its dual; the sparsity of the representation is induced by an
$\ell_1$-based penalty on its coefficients. The results obtained on synthetic
data and real images show that the algorithm is capable of recovering the
expected dictionaries. Furthermore, on a benchmark dataset, we show that the
image features obtained from the dual matrix yield state-of-the-art
classification performance while being much less computationally intensive.
| [
"['Curzio Basso' 'Matteo Santoro' 'Alessandro Verri' 'Silvia Villa']",
"Curzio Basso and Matteo Santoro and Alessandro Verri and Silvia Villa"
] |
cs.LG cs.NA math.SP | null | 1011.4104 | null | null | http://arxiv.org/pdf/1011.4104v4 | 2012-11-16T04:26:29Z | 2010-11-17T23:39:12Z | Clustering and Latent Semantic Indexing Aspects of the Singular Value
Decomposition | This paper discusses clustering and latent semantic indexing (LSI) aspects of
the singular value decomposition (SVD). The purpose of this paper is twofold.
The first is to give an explanation of how and why the singular vectors can be
used in clustering. And the second is to show that the two seemingly unrelated
SVD aspects actually originate from the same source: related vertices tend to
be more clustered in the graph representation of the lower-rank approximate matrix
using the SVD than in the original semantic graph. Accordingly, the SVD can
improve retrieval performance of an information retrieval system since queries
made to the approximate matrix can retrieve more relevant documents and filter
out more irrelevant documents than the same queries made to the original
matrix. By utilizing this fact, we will devise an LSI algorithm that mimics
SVD capability in clustering related vertices. Convergence analysis shows that
the algorithm is convergent and produces a unique solution for each input.
Experimental results using some standard datasets in LSI research show that
retrieval performances of the algorithm are comparable to the SVD's. In
addition, the algorithm is more practical and easier to use because there is no
need to determine the decomposition rank, which is crucial in driving retrieval
performance of the SVD.
| [
"Andri Mirzal",
"['Andri Mirzal']"
] |
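For context, here is the SVD-based LSI baseline the abstract above compares against, on a toy term-document matrix: queries scored against the rank-k approximation can match documents that share no terms with the query. Note the explicit choice of decomposition rank k, the parameter the abstract's alternative algorithm avoids; the matrix and query are invented for illustration.

```python
import numpy as np

# Tiny term-document matrix (rows: terms, columns: documents).
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=float)

k = 2                                              # decomposition rank
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]           # rank-k approximation

q = np.array([1.0, 1.0, 0.0, 0.0, 0.0])            # query with terms 0 and 1

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

print([round(cos(q, A[:, j]), 2) for j in range(4)])    # raw matrix scores
print([round(cos(q, A_k[:, j]), 2) for j in range(4)])  # LSI-smoothed scores
```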
math.OC cs.LG cs.NI math.PR | null | 1011.4748 | null | null | http://arxiv.org/pdf/1011.4748v1 | 2010-11-22T08:40:35Z | 2010-11-22T08:40:35Z | Combinatorial Network Optimization with Unknown Variables: Multi-Armed
Bandits with Linear Rewards | In the classic multi-armed bandits problem, the goal is to have a policy for
dynamically operating arms that each yield stochastic rewards with unknown
means. The key metric of interest is regret, defined as the gap between the
expected total reward accumulated by an omniscient player that knows the reward
means for each arm, and the expected total reward accumulated by the given
policy. The policies presented in prior work have storage, computation and
regret all growing linearly with the number of arms, which is not scalable when
the number of arms is large. We consider in this work a broad class of
multi-armed bandits with dependent arms that yield rewards as a linear
combination of a set of unknown parameters. For this general framework, we
present efficient policies that are shown to achieve regret that grows
logarithmically with time, and polynomially in the number of unknown parameters
(even though the number of dependent arms may grow exponentially). Furthermore,
these policies only require storage that grows linearly in the number of
unknown parameters. We show that this generalization is broadly applicable and
useful for many interesting tasks in networks that can be formulated as
tractable combinatorial optimization problems with linear objective functions,
such as maximum weight matching, shortest path, and minimum spanning tree
computations.
| [
"Yi Gai, Bhaskar Krishnamachari and Rahul Jain",
"['Yi Gai' 'Bhaskar Krishnamachari' 'Rahul Jain']"
] |
math.OC cs.LG cs.NI math.PR | null | 1011.4752 | null | null | http://arxiv.org/pdf/1011.4752v1 | 2010-11-22T09:07:55Z | 2010-11-22T09:07:55Z | The Non-Bayesian Restless Multi-Armed Bandit: a Case of Near-Logarithmic
Regret | In the classic Bayesian restless multi-armed bandit (RMAB) problem, there are
$N$ arms, with rewards on all arms evolving at each time as Markov chains with
known parameters. A player seeks to activate $K \geq 1$ arms at each time in
order to maximize the expected total reward obtained over multiple plays. RMAB
is a challenging problem that is known to be PSPACE-hard in general. We
consider in this work the even harder non-Bayesian RMAB, in which the
parameters of the Markov chain are assumed to be unknown \emph{a priori}. We
develop an original approach to this problem that is applicable when the
corresponding Bayesian problem has the structure that, depending on the known
parameter values, the optimal solution is one of a prescribed finite set of
policies. In such settings, we propose to learn the optimal policy for the
non-Bayesian RMAB by employing a suitable meta-policy which treats each policy
from this finite set as an arm in a different non-Bayesian multi-armed bandit
problem for which a single-arm selection policy is optimal. We demonstrate this
approach by developing a novel sensing policy for opportunistic spectrum access
over unknown dynamic channels. We prove that our policy achieves
near-logarithmic regret (the difference in expected reward compared to a
model-aware genie), which leads to the same average reward that can be achieved
by the optimal policy under a known model. This is the first such result in the
literature for a non-Bayesian RMAB.
| [
"Wenhan Dai, Yi Gai, Bhaskar Krishnamachari, Qing Zhao",
"['Wenhan Dai' 'Yi Gai' 'Bhaskar Krishnamachari' 'Qing Zhao']"
] |
math.OC cs.LG math.PR | null | 1011.4969 | null | null | http://arxiv.org/pdf/1011.4969v2 | 2011-12-26T03:42:59Z | 2010-11-22T22:39:47Z | Learning in A Changing World: Restless Multi-Armed Bandit with Unknown
Dynamics | We consider the restless multi-armed bandit (RMAB) problem with unknown
dynamics in which a player chooses M out of N arms to play at each time. The
reward state of each arm transits according to an unknown Markovian rule when
it is played and evolves according to an arbitrary unknown random process when
it is passive. The performance of an arm selection policy is measured by
regret, defined as the reward loss with respect to the case where the player
knows which M arms are the most rewarding and always plays the M best arms. We
construct a policy with an interleaving exploration and exploitation epoch
structure that achieves a regret with logarithmic order when arbitrary (but
nontrivial) bounds on certain system parameters are known. When no knowledge
about the system is available, we show that the proposed policy achieves a
regret arbitrarily close to the logarithmic order. We further extend the
problem to a decentralized setting where multiple distributed players share the
arms without information exchange. Under both an exogenous restless model and
an endogenous restless model, we show that a decentralized extension of the
proposed policy preserves the logarithmic regret order as in the centralized
setting. The results apply to adaptive learning in various dynamic systems and
communication networks, as well as financial investment.
| [
"['Haoyang Liu' 'Keqin Liu' 'Qing Zhao']",
"Haoyang Liu, Keqin Liu, Qing Zhao"
] |
cs.LG math.PR math.ST stat.ML stat.TH | null | 1011.5053 | null | null | http://arxiv.org/pdf/1011.5053v2 | 2012-04-05T16:40:03Z | 2010-11-23T10:44:21Z | Tight Sample Complexity of Large-Margin Learning | We obtain a tight distribution-specific characterization of the sample
complexity of large-margin classification with L_2 regularization: We introduce
the \gamma-adapted-dimension, which is a simple function of the spectrum of a
distribution's covariance matrix, and show distribution-specific upper and
lower bounds on the sample complexity, both governed by the
\gamma-adapted-dimension of the source distribution. We conclude that this new
quantity tightly characterizes the true sample complexity of large-margin
classification. The bounds hold for a rich family of sub-Gaussian
distributions.
| [
"Sivan Sabato, Nathan Srebro, Naftali Tishby",
"['Sivan Sabato' 'Nathan Srebro' 'Naftali Tishby']"
] |
stat.ML cs.LG | null | 1011.5270 | null | null | http://arxiv.org/pdf/1011.5270v2 | 2010-11-29T22:11:55Z | 2010-11-24T01:51:00Z | Classifying Clustering Schemes | Many clustering schemes are defined by optimizing an objective function
defined on the partitions of the underlying set of a finite metric space. In
this paper, we construct a framework for studying what happens when we instead
impose various structural conditions on the clustering schemes, under the
general heading of functoriality. Functoriality refers to the idea that one
should be able to compare the results of clustering algorithms as one varies
the data set, for example by adding points or by applying functions to it. We
show that within this framework, one can prove theorems analogous to one of
J. Kleinberg's, in which, for example, one obtains an existence and uniqueness
theorem instead of a non-existence result.
We obtain a full classification of all clustering schemes satisfying a
condition we refer to as excisiveness. The classification can be changed by
varying the notion of maps of finite metric spaces. The conditions occur
naturally when one considers clustering as the statistical version of the
geometric notion of connected components. By varying the degree of
functoriality that one requires from the schemes it is possible to construct
richer families of clustering schemes that exhibit sensitivity to density.
| [
"Gunnar Carlsson and Facundo Memoli",
"['Gunnar Carlsson' 'Facundo Memoli']"
] |
stat.ML cs.LG | 10.1016/j.specom.2013.01.005 | 1011.5395 | null | null | http://arxiv.org/abs/1011.5395v1 | 2010-11-24T15:18:42Z | 2010-11-24T15:18:42Z | The Sample Complexity of Dictionary Learning | A large set of signals can sometimes be described sparsely using a
dictionary, that is, every element can be represented as a linear combination
of few elements from the dictionary. Algorithms for various signal processing
applications, including classification, denoising and signal separation, learn
a dictionary from a set of signals to be represented. Can we expect that the
representation found by such a dictionary for a previously unseen example from
the same source will have L_2 error of the same magnitude as those for the
given examples? We assume signals are generated from a fixed distribution, and
study this question from a statistical learning theory perspective.
We develop generalization bounds on the quality of the learned dictionary for
two types of constraints on the coefficient selection, as measured by the
expected L_2 error in representation when the dictionary is used. For the case
of l_1 regularized coefficient selection we provide a generalization bound of
the order of O(sqrt(np log(m lambda)/m)), where n is the dimension, p is the
number of elements in the dictionary, lambda is a bound on the l_1 norm of the
coefficient vector and m is the number of samples, which complements existing
results. For the case of representing a new signal as a combination of at most
k dictionary elements, we provide a bound of the order O(sqrt(np log(m k)/m))
under an assumption on the level of orthogonality of the dictionary (low Babel
function). We further show that this assumption holds for most dictionaries in
high dimensions in a strong probabilistic sense. Our results further yield fast
rates of order 1/m as opposed to 1/sqrt(m) using localized Rademacher
complexity. We provide similar results in a general setting using kernels with
weak smoothness requirements.
| [
"Daniel Vainsencher, Shie Mannor, Alfred M. Bruckstein",
"['Daniel Vainsencher' 'Shie Mannor' 'Alfred M. Bruckstein']"
] |
cs.LG | null | 1011.5668 | null | null | http://arxiv.org/pdf/1011.5668v1 | 2010-11-25T18:52:30Z | 2010-11-25T18:52:30Z | On Theorem 2.3 in "Prediction, Learning, and Games" by Cesa-Bianchi and
Lugosi | The note presents a modified proof of a loss bound for the exponentially
weighted average forecaster with time-varying potential. The regret term of the
algorithm is upper-bounded by $\sqrt{n \ln N}$ (uniformly in n), where N is the
number of experts and n is the number of steps.
| [
"['Alexey Chernov']",
"Alexey Chernov"
] |
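Below is a sketch of the forecaster the note above concerns, using the common time-varying learning rate $\eta_t = \sqrt{\ln(N)/t}$ as a stand-in for the time-varying potential (the note's exact potential and constants may differ). The empirical regret can be compared against the $\sqrt{n \ln N}$ scale of the bound; the loss data is random and purely illustrative.

```python
import numpy as np

def ewa_regret(expert_losses):
    """Exponentially weighted average forecaster with time-varying
    learning rate eta_t = sqrt(ln(N)/t); returns regret vs. best expert."""
    n_steps, N = expert_losses.shape
    cum = np.zeros(N)          # cumulative losses of the experts
    total = 0.0                # cumulative loss of the forecaster
    for t in range(n_steps):
        eta = np.sqrt(np.log(N) / (t + 1))
        w = np.exp(-eta * (cum - cum.min()))   # shifted for stability
        w /= w.sum()
        total += w @ expert_losses[t]
        cum += expert_losses[t]
    return total - cum.min()

rng = np.random.default_rng(0)
losses = rng.random((4000, 10))                      # losses in [0, 1]
bound_scale = np.sqrt(losses.shape[0] * np.log(losses.shape[1]))
print(ewa_regret(losses), bound_scale)               # regret vs. sqrt(n ln N)
```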
stat.ML cs.LG | null | 1011.6086 | null | null | http://arxiv.org/pdf/1011.6086v1 | 2010-11-28T20:54:58Z | 2010-11-28T20:54:58Z | In All Likelihood, Deep Belief Is Not Enough | Statistical models of natural stimuli provide an important tool for
researchers in the fields of machine learning and computational neuroscience. A
canonical way to quantitatively assess and compare the performance of
statistical models is given by the likelihood. One class of statistical models
which has recently gained increasing popularity and has been applied to a
variety of complex data are deep belief networks. Analyses of these models,
however, have been typically limited to qualitative analyses based on samples
due to the computationally intractable nature of the model likelihood.
Motivated by these circumstances, the present article provides a consistent
estimator for the likelihood that is both computationally tractable and simple
to apply in practice. Using this estimator, a deep belief network which has
been suggested for the modeling of natural image patches is quantitatively
investigated and compared to other models of natural image patches. Contrary to
earlier claims based on qualitative results, the results presented in this
article provide evidence that the model under investigation is not a
particularly good model for natural images.
| [
"Lucas Theis, Sebastian Gerwinn, Fabian Sinz and Matthias Bethge",
"['Lucas Theis' 'Sebastian Gerwinn' 'Fabian Sinz' 'Matthias Bethge']"
] |
physics.data-an cs.LG hep-ex stat.ML | null | 1011.6224 | null | null | http://arxiv.org/pdf/1011.6224v1 | 2010-11-29T13:34:02Z | 2010-11-29T13:34:02Z | Classifying extremely imbalanced data sets | Imbalanced data sets containing much more background than signal instances
are very common in particle physics, and will also be characteristic of the
upcoming analyses of LHC data. Following up the work presented at ACAT 2008, we
use the multivariate technique presented there (a rule growing algorithm with
the meta-methods bagging and instance weighting) on much more imbalanced data
sets, especially a selection of D0 decays without the use of particle
identification. It turns out that the quality of the result strongly depends on
the number of background instances used for training. We discuss methods to
exploit this in order to improve the results significantly, and how to handle
and reduce the size of large training sets without loss of result quality in
general. We will also comment on how to take into account statistical
fluctuation in receiver operation characteristic curves (ROC) for comparing
classifier methods.
| [
"['Markward Britsch' 'Nikolai Gagunashvili' 'Michael Schmelling']",
"Markward Britsch (1), Nikolai Gagunashvili (2), Michael Schmelling (1)\n ((1) Max-Planck-Institut f\\\"ur Kernphysik, (2) University of Akureyri)"
] |
cs.LG | null | 1012.0498 | null | null | http://arxiv.org/pdf/1012.0498v1 | 2010-12-02T17:04:19Z | 2010-12-02T17:04:19Z | Estimating Probabilities in Recommendation Systems | Recommendation systems are emerging as an important business application with
significant economic impact. Currently popular systems include Amazon's book
recommendations, Netflix's movie recommendations, and Pandora's music
recommendations. In this paper we address the problem of estimating
probabilities associated with recommendation system data using non-parametric
kernel smoothing. In our estimation we interpret missing items as randomly
censored observations and obtain efficient computation schemes using
combinatorial properties of generating functions. We demonstrate our approach
with several case studies involving real world movie recommendation data. The
results are comparable with state-of-the-art techniques while also providing
probabilistic preference estimates outside the scope of traditional recommender
systems.
| [
"Mingxuan Sun, Guy Lebanon, Paul Kidwell",
"['Mingxuan Sun' 'Guy Lebanon' 'Paul Kidwell']"
] |
cs.CC cs.AI cs.LG | null | 1012.0729 | null | null | http://arxiv.org/pdf/1012.0729v1 | 2010-12-03T13:11:22Z | 2010-12-03T13:11:22Z | Agnostic Learning of Monomials by Halfspaces is Hard | We prove the following strong hardness result for learning: Given a
distribution of labeled examples from the hypercube such that there exists a
monomial consistent with $(1-\epsilon)$ of the examples, it is NP-hard to find a
halfspace that is correct on $(1/2+\epsilon)$ of the examples, for arbitrary
constants $\epsilon > 0$. In learning theory terms, weak agnostic learning of
monomials is hard, even if one is allowed to output a hypothesis from the much
bigger concept class of halfspaces. This hardness result subsumes a long line
of previous results, including two recent hardness results for the proper
learning of monomials and halfspaces. As an immediate corollary of our result
we show that weak agnostic learning of decision lists is NP-hard.
Our techniques are quite different from previous hardness proofs for
learning. We define distributions on positive and negative examples for
monomials whose first few moments match. We use the invariance principle to
argue that regular halfspaces (all of whose coefficients have small absolute
value relative to the total $\ell_2$ norm) cannot distinguish between
distributions whose first few moments match. For highly non-regular halfspaces,
we use a structural lemma from recent work on fooling halfspaces to argue that
they are ``junta-like'' and one can zero out all but the top few coefficients
without affecting the performance of the halfspace. The top few coefficients
form the natural list decoding of a halfspace in the context of dictatorship
tests/Label Cover reductions.
We note that unlike previous invariance principle based proofs which are only
known to give Unique-Games hardness, we are able to reduce from a version of
Label Cover problem that is known to be NP-hard. This has inspired follow-up
work on bypassing the Unique Games conjecture in some optimal geometric
inapproximability results.
| [
"Vitaly Feldman, Venkatesan Guruswami, Prasad Raghavendra, Yi Wu",
"['Vitaly Feldman' 'Venkatesan Guruswami' 'Prasad Raghavendra' 'Yi Wu']"
] |
cs.LG cs.AI cs.LO math.LO | null | 1012.0735 | null | null | http://arxiv.org/pdf/1012.0735v2 | 2011-03-24T16:38:44Z | 2010-12-03T13:29:01Z | Closed-set-based Discovery of Bases of Association Rules | The output of an association rule miner is often huge in practice. This is
why several concise lossless representations have been proposed, such as the
"essential" or "representative" rules. We revisit the algorithm given by
Kryszkiewicz (Int. Symp. Intelligent Data Analysis 2001, Springer-Verlag LNCS
2189, 350-359) for mining representative rules. We show that its output is
sometimes incomplete, due to an oversight in its mathematical validation. We
propose alternative complete generators and we extend the approach to an
existing closure-aware basis similar to, and often smaller than, the
representative rules, namely the basis B*.
| [
"['José L. Balcázar' 'Diego García-Saiz' 'Domingo Gómez-Pérez'\n 'Cristina Tîrnăucă']",
"Jos\\'e L. Balc\\'azar, Diego Garc\\'ia-Saiz, Domingo G\\'omez-P\\'erez,\n Cristina T\\^irn\\u{a}uc\\u{a}"
] |
cs.AI cs.LG math.LO | null | 1012.0742 | null | null | http://arxiv.org/pdf/1012.0742v1 | 2010-12-03T13:57:32Z | 2010-12-03T13:57:32Z | Border Algorithms for Computing Hasse Diagrams of Arbitrary Lattices | The Border algorithm and the iPred algorithm find the Hasse diagrams of FCA
lattices. We show that they can be generalized to arbitrary lattices. In the
case of iPred, this requires the identification of a join-semilattice
homomorphism into a distributive lattice.
| [
"Jos\\'e L. Balc\\'azar, Cristina T\\^irn\\u{a}uc\\u{a}",
"['José L. Balcázar' 'Cristina Tîrnăucă']"
] |
cs.LG math.OC stat.ML | null | 1012.0774 | null | null | http://arxiv.org/pdf/1012.0774v1 | 2010-12-03T15:58:47Z | 2010-12-03T15:58:47Z | An Inverse Power Method for Nonlinear Eigenproblems with Applications in
1-Spectral Clustering and Sparse PCA | Many problems in machine learning and statistics can be formulated as
(generalized) eigenproblems. In terms of the associated optimization problem,
computing linear eigenvectors amounts to finding critical points of a quadratic
function subject to quadratic constraints. In this paper we show that a certain
class of constrained optimization problems with nonquadratic objective and
constraints can be understood as nonlinear eigenproblems. We derive a
generalization of the inverse power method which is guaranteed to converge to a
nonlinear eigenvector. We apply the inverse power method to 1-spectral
clustering and sparse PCA which can naturally be formulated as nonlinear
eigenproblems. In both applications we achieve state-of-the-art results in
terms of solution quality and runtime. Moving beyond the standard eigenproblem
should also be useful in many other applications, and our inverse power method
can be easily adapted to new problems.
| [
"['Matthias Hein' 'Thomas Bühler']",
"Matthias Hein and Thomas B\\\"uhler"
] |
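For orientation, here is the classical linear inverse power method that the abstract above generalizes to nonlinear eigenproblems: repeatedly solve a linear system and normalize, converging to the eigenvector of the smallest-magnitude eigenvalue. The paper replaces this inner step for nonlinear objectives; the sketch below is only the linear baseline on an invented matrix.

```python
import numpy as np

def inverse_power_method(A, iters=50):
    """Classical (linear) inverse power method: repeatedly solve A x = y
    and normalize; converges to the eigenvector of smallest |eigenvalue|."""
    rng = np.random.default_rng(0)
    x = rng.normal(size=A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x = np.linalg.solve(A, x)
        x /= np.linalg.norm(x)
    return x, x @ A @ x                # eigenvector and Rayleigh quotient

M = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 1.0]])
v, lam = inverse_power_method(M)
print(lam, np.linalg.eigvalsh(M)[0])   # matches the smallest eigenvalue
```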
cs.AI cs.IR cs.LG cs.NE | null | 1012.0841 | null | null | http://arxiv.org/pdf/1012.0841v1 | 2010-12-03T20:53:36Z | 2010-12-03T20:53:36Z | Automated Query Learning with Wikipedia and Genetic Programming | Most existing information retrieval systems are based on the bag-of-words
model and are not equipped with common world knowledge. Work has been done
towards improving the efficiency of such systems by using intelligent
algorithms to generate search queries, however, not much research has been done
in the direction of incorporating human-and-society level knowledge in the
queries. This paper is one of the first attempts where such information is
incorporated into the search queries using Wikipedia semantics. The paper
presents an essential shift from conventional token based queries to concept
based queries, leading to an enhanced efficiency of information retrieval
systems. To efficiently handle the automated query learning problem, we propose
Wikipedia-based Evolutionary Semantics (Wiki-ES) framework where concept based
queries are learnt using a co-evolving evolutionary procedure. Learning concept
based queries using an intelligent evolutionary procedure yields significant
improvement in performance which is shown through an extensive study using
Reuters newswire documents. Comparison of the proposed framework is performed
with other information retrieval systems. Concept based approach has also been
implemented on other information retrieval systems to justify the effectiveness
of a transition from token based queries to concept based queries.
| [
"['Pekka Malo' 'Pyry Siitari' 'Ankur Sinha']",
"Pekka Malo and Pyry Siitari and Ankur Sinha"
] |
math.ST cs.LG stat.ME stat.TH | null | 1012.0866 | null | null | http://arxiv.org/pdf/1012.0866v4 | 2014-08-01T20:20:34Z | 2010-12-04T00:03:16Z | Generalized Species Sampling Priors with Latent Beta reinforcements | Many popular Bayesian nonparametric priors can be characterized in terms of
exchangeable species sampling sequences. However, in some applications,
exchangeability may not be appropriate. We introduce a novel and
probabilistically coherent family of non-exchangeable species sampling
sequences characterized by a tractable predictive probability function with
weights driven by a sequence of independent Beta random variables. We compare
their theoretical clustering properties with those of the Dirichlet Process and
the two-parameter Poisson-Dirichlet process. The proposed construction
provides a complete characterization of the joint process, in contrast to
existing work. We then propose the use of such process as prior distribution in
a hierarchical Bayes modeling framework, and we describe a Markov Chain Monte
Carlo sampler for posterior inference. We evaluate the performance of the prior
and the robustness of the resulting inference in a simulation study, providing
a comparison with popular Dirichlet Processes mixtures and Hidden Markov
Models. Finally, we develop an application to the detection of chromosomal
aberrations in breast cancer by leveraging array CGH data.
| [
"Edoardo M. Airoldi, Thiago Costa, Federico Bassetti, Fabrizio Leisen\n and Michele Guindani",
"['Edoardo M. Airoldi' 'Thiago Costa' 'Federico Bassetti' 'Fabrizio Leisen'\n 'Michele Guindani']"
] |
cs.LG cs.AI | 10.1109/TPAMI.2012.172 | 1012.0930 | null | null | http://arxiv.org/abs/1012.0930v3 | 2012-08-02T11:27:42Z | 2010-12-04T16:08:08Z | Efficient Optimization of Performance Measures by Classifier Adaptation | In practical applications, machine learning algorithms are often needed to
learn classifiers that optimize domain specific performance measures.
Previously, the research has focused on learning the needed classifier in
isolation, yet learning a nonlinear classifier for nonlinear and nonsmooth
performance measures is still hard. In this paper, rather than learning the
needed classifier by optimizing specific performance measure directly, we
circumvent this problem by proposing a novel two-step approach called CAPO,
namely to first train nonlinear auxiliary classifiers with existing learning
methods, and then to adapt auxiliary classifiers for specific performance
measures. In the first step, auxiliary classifiers can be obtained efficiently
by taking off-the-shelf learning algorithms. For the second step, we show that
the classifier adaptation problem can be reduced to a quadratic program
problem, which is similar to linear SVMperf and can be efficiently solved. By
exploiting nonlinear auxiliary classifiers, CAPO can generate nonlinear
classifier which optimizes a large variety of performance measures including
all the performance measure based on the contingency table and AUC, whilst
keeping high computational efficiency. Empirical studies show that CAPO is
effective and computationally efficient, and is even more efficient
than linear SVMperf.
| [
"['Nan Li' 'Ivor W. Tsang' 'Zhi-Hua Zhou']",
"Nan Li and Ivor W. Tsang and Zhi-Hua Zhou"
] |
stat.ML cs.LG | null | 1012.0975 | null | null | http://arxiv.org/pdf/1012.0975v2 | 2010-12-23T23:02:28Z | 2010-12-05T07:27:42Z | Split Bregman Method for Sparse Inverse Covariance Estimation with
Matrix Iteration Acceleration | We consider the problem of estimating the inverse covariance matrix by
maximizing the likelihood function with a penalty added to encourage the
sparsity of the resulting matrix. We propose a new approach based on the split
Bregman method to solve the regularized maximum likelihood estimation problem.
We show that our method is significantly faster than the widely used graphical
lasso method, which is based on blockwise coordinate descent, on both
artificial and real-world data. More importantly, different from the graphical
lasso, the split Bregman based method is much more general, and can be applied
to a class of regularization terms other than the $\ell_1$ norm.
| [
"Gui-Bo Ye, Jian-Feng Cai, Xiaohui Xie",
"['Gui-Bo Ye' 'Jian-Feng Cai' 'Xiaohui Xie']"
] |
cs.LG cs.DC math.OC | null | 1012.1367 | null | null | http://arxiv.org/pdf/1012.1367v2 | 2012-01-31T18:12:21Z | 2010-12-07T00:00:22Z | Optimal Distributed Online Prediction using Mini-Batches | Online prediction methods are typically presented as serial algorithms
running on a single processor. However, in the age of web-scale prediction
problems, it is increasingly common to encounter situations where a single
processor cannot keep up with the high rate at which inputs arrive. In this
work, we present the \emph{distributed mini-batch} algorithm, a method of
converting many serial gradient-based online prediction algorithms into
distributed algorithms. We prove a regret bound for this method that is
asymptotically optimal for smooth convex loss functions and stochastic inputs.
Moreover, our analysis explicitly takes into account communication latencies
between nodes in the distributed environment. We show how our method can be
used to solve the closely-related distributed stochastic optimization problem,
achieving an asymptotically linear speed-up over multiple processors. Finally,
we demonstrate the merits of our approach on a web-scale online prediction
problem.
| [
"['Ofer Dekel' 'Ran Gilad-Bachrach' 'Ohad Shamir' 'Lin Xiao']",
"Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir and Lin Xiao"
] |
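A minimal simulation of the distributed mini-batch pattern: each round, simulated workers compute gradients on their slice of a mini-batch of inputs, and the averaged gradient drives one serial-style update. The least-squares loss and all constants are assumptions for illustration.

```python
# Sketch of the distributed mini-batch idea: k simulated workers each compute
# a gradient on their share of a batch of b inputs; the averaged gradient
# (one round of communication) drives a single gradient step.
import numpy as np

rng = np.random.default_rng(0)
d, k, b, T = 5, 4, 32, 200
w_true = rng.standard_normal(d)
w = np.zeros(d)
eta = 0.1

for t in range(T):
    X = rng.standard_normal((b, d))
    y = X @ w_true + 0.1 * rng.standard_normal(b)
    # Each "worker" handles b/k examples; gradients are then averaged.
    grads = [((Xi @ w - yi)[:, None] * Xi).mean(axis=0)
             for Xi, yi in zip(np.array_split(X, k), np.array_split(y, k))]
    w -= eta * np.mean(grads, axis=0)

print("parameter error:", np.linalg.norm(w - w_true))
```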
cs.LG math.OC | null | 1012.1370 | null | null | http://arxiv.org/pdf/1012.1370v1 | 2010-12-07T00:12:25Z | 2010-12-07T00:12:25Z | Robust Distributed Online Prediction | The standard model of online prediction deals with serial processing of
inputs by a single processor. However, in large-scale online prediction
problems, where inputs arrive at a high rate, an increasingly common necessity
is to distribute the computation across several processors. A non-trivial
challenge is to design distributed algorithms for online prediction that
maintain good regret guarantees. In \cite{DMB}, we presented the DMB algorithm,
which is a generic framework to convert any serial gradient-based online
prediction algorithm into a distributed algorithm. Moreover, its regret
guarantee is asymptotically optimal for smooth convex loss functions and
stochastic inputs. On the flip side, it is vulnerable to many types of failures
that are common in distributed environments. In this companion paper, we
present variants of the DMB algorithm, which are resilient to many types of
network failures, and tolerant to varying performance of the computing nodes.
| [
"['Ofer Dekel' 'Ran Gilad-Bachrach' 'Ohad Shamir' 'Lin Xiao']",
"Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir and Lin Xiao"
] |
cs.LG stat.ML | null | 1012.1501 | null | null | http://arxiv.org/pdf/1012.1501v2 | 2011-06-10T14:12:14Z | 2010-12-07T13:34:44Z | Shaping Level Sets with Submodular Functions | We consider a class of sparsity-inducing regularization terms based on
submodular functions. While previous work has focused on non-decreasing
functions, we explore symmetric submodular functions and their Lovász
extensions. We show that the Lovász extension may be seen as the convex
envelope of a function that depends on level sets (i.e., the set of indices
whose corresponding components of the underlying predictor are greater than a
given constant): this leads to a class of convex structured regularization
terms that impose prior knowledge on the level sets, and not only on the
supports of the underlying predictors. We provide a unified set of optimization
algorithms, such as proximal operators, and theoretical guarantees (allowed
level sets and recovery conditions). By selecting specific submodular
functions, we give a new interpretation to known norms, such as the total
variation; we also define new norms, in particular ones that are based on order
statistics with application to clustering and outlier detection, and on noisy
cuts in graphs with application to change point detection in the presence of
outliers.
| [
"Francis Bach (LIENS, INRIA Paris - Rocquencourt)",
"['Francis Bach']"
] |
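A short sketch of the Lovász extension itself, computed by sorting coordinates and summing marginal gains; taking the cut function of a chain graph as the submodular function recovers the total variation mentioned above. The graph and weights are illustrative.

```python
# Sketch of the Lovász extension: for a set function F and weights w, sort
# coordinates in decreasing order and take a weighted sum of marginal gains.
# Using the cut function of a chain graph recovers the total variation.
import numpy as np

def lovasz_extension(F, w):
    order = np.argsort(-w)
    val, prev = 0.0, set()
    for i in order:
        cur = prev | {i}
        val += w[i] * (F(cur) - F(prev))  # marginal gain of adding i
        prev = cur
    return val

def chain_cut(S):  # number of edges (i, i+1) cut by S on a 5-node chain
    return sum(1 for i in range(4) if (i in S) != (i + 1 in S))

w = np.array([0.3, 1.2, -0.5, 0.8, 0.0])
tv = np.abs(np.diff(w)).sum()
print(lovasz_extension(chain_cut, w), tv)  # the two values agree
```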
cs.AI cs.LG cs.LO | null | 1012.1552 | null | null | http://arxiv.org/pdf/1012.1552v1 | 2010-12-07T16:57:54Z | 2010-12-07T16:57:54Z | Bridging the Gap between Reinforcement Learning and Knowledge
Representation: A Logical Off- and On-Policy Framework | Knowledge Representation is important issue in reinforcement learning. In
this paper, we bridge the gap between reinforcement learning and knowledge
representation, by providing a rich knowledge representation framework, based
on normal logic programs with answer set semantics, that is capable of solving
model-free reinforcement learning problems for more complex do-mains and
exploits the domain-specific knowledge. We prove the correctness of our
approach. We show that the complexity of finding an offline and online policy
for a model-free reinforcement learning problem in our approach is NP-complete.
Moreover, we show that any model-free reinforcement learning problem in MDP
environment can be encoded as a SAT problem. The importance of that is
model-free reinforcement
| [
"Emad Saad",
"['Emad Saad']"
] |
cs.NA cs.IT cs.LG math.IT | 10.1109/TNNLS.2012.2235082 | 1012.1919 | null | null | http://arxiv.org/abs/1012.1919v3 | 2012-03-24T15:37:12Z | 2010-12-09T03:54:44Z | Low-Rank Structure Learning via Log-Sum Heuristic Recovery | Recovering intrinsic data structure from corrupted observations plays an
important role in various tasks in the communities of machine learning and
signal processing. In this paper, we propose a novel model, named log-sum
heuristic recovery (LHR), to learn the essential low-rank structure from
corrupted data. Different from traditional approaches, which directly utilize
$\ell_1$ norm to measure the sparseness, LHR introduces a more reasonable
log-sum measurement to enhance the sparsity in both the intrinsic low-rank
structure and in the sparse corruptions. Although the proposed LHR optimization
is no longer convex, it still can be effectively solved by a
majorization-minimization (MM) type algorithm, with which the non-convex
objective function is iteratively replaced by its convex surrogate and LHR
finally falls into the general framework of reweighed approaches. We prove that
the MM-type algorithm can converge to a stationary point after successive
iterations. We test the performance of our proposed model by applying it to
solve two typical problems: robust principal component analysis (RPCA) and
low-rank representation (LRR).
For RPCA, we compare LHR with the benchmark Principal Component Pursuit (PCP)
method from both the perspectives of simulations and practical applications.
For LRR, we apply LHR to compute the low-rank representation matrix for motion
segmentation and stock clustering. Experimental results on low rank structure
learning demonstrate that the proposed log-sum based model performs much better
than the $\ell_1$-based method on data with higher rank and with denser
corruptions.
| [
"Yue Deng, Qionghai Dai, Risheng Liu, Zengke Zhang and Sanqing Hu",
"['Yue Deng' 'Qionghai Dai' 'Risheng Liu' 'Zengke Zhang' 'Sanqing Hu']"
] |
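The MM/reweighting mechanism can be seen in a much simpler sparse-denoising setting: each majorization step solves a weighted $\ell_1$ problem with log-sum weights $1/(|x_i| + \epsilon)$. This one-dimensional sketch is illustrative only and far simpler than the paper's low-rank setting.

```python
# Sketch of the log-sum / MM idea in a simple sparse-recovery setting:
# each majorization step solves a reweighted soft-thresholding problem with
# weights 1/(|x_i| + eps), the derivative of the log-sum surrogate.
import numpy as np

rng = np.random.default_rng(0)
n = 50
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 3 * rng.standard_normal(5)
y = x_true + 0.05 * rng.standard_normal(n)  # denoising observation model

lam, eps = 0.1, 0.1
x = y.copy()
for _ in range(10):  # MM iterations: reweight, then solve the convex surrogate
    w = 1.0 / (np.abs(x) + eps)                             # log-sum weights
    x = np.sign(y) * np.maximum(np.abs(y) - lam * w, 0.0)   # weighted soft-threshold

print("nonzeros recovered:", np.flatnonzero(x), "true:", np.flatnonzero(x_true))
```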
cs.LG cs.NI | 10.13140/RG.2.1.3436.8247 | 1012.2514 | null | null | http://arxiv.org/abs/1012.2514v1 | 2010-12-12T07:22:05Z | 2010-12-12T07:22:05Z | Context Aware End-to-End Connectivity Management | In a dynamic heterogeneous environment, such as pervasive and ubiquitous
computing, context-aware adaptation is a key concept to meet the varying
requirements of different users. Connectivity is an important context source
that can be utilized for optimal management of diverse networking resources.
Application QoS (Quality of service) is another important issue that should be
taken into consideration in the design of a context-aware system. This paper
presents connectivity from the view point of context awareness, identifies
various relevant raw connectivity contexts, and discusses how high-level
context information can be abstracted from the raw context information.
Further, rich context information is utilized in various policy representations
with respect to user profiles and preferences, application characteristics,
device capability, and network QoS conditions. Finally, a context-aware
end-to-end evaluation algorithm is presented for adaptive connectivity
management in a multi-access wireless network. Unlike existing
algorithms, the proposed algorithm takes into account user QoS parameters, and
therefore, it is more practical.
| [
"['Jaydip Sen' 'P. Balamuralidhar' 'M. Girish Chandra' 'Harihara S. G.'\n 'Harish Reddy']",
"Jaydip Sen, P. Balamuralidhar, M. Girish Chandra, Harihara S.G., and\n Harish Reddy"
] |
cs.LG | null | 1012.2599 | null | null | http://arxiv.org/pdf/1012.2599v1 | 2010-12-12T22:53:04Z | 2010-12-12T22:53:04Z | A Tutorial on Bayesian Optimization of Expensive Cost Functions, with
Application to Active User Modeling and Hierarchical Reinforcement Learning | We present a tutorial on Bayesian optimization, a method of finding the
maximum of expensive cost functions. Bayesian optimization employs the Bayesian
technique of setting a prior over the objective function and combining it with
evidence to get a posterior function. This permits a utility-based selection of
the next observation to make on the objective function, which must take into
account both exploration (sampling from areas of high uncertainty) and
exploitation (sampling areas likely to offer improvement over the current best
observation). We also present two detailed extensions of Bayesian optimization,
with experiments---active user modelling with preferences, and hierarchical
reinforcement learning---and a discussion of the pros and cons of Bayesian
optimization based on our experiences.
| [
"['Eric Brochu' 'Vlad M. Cora' 'Nando de Freitas']",
"Eric Brochu and Vlad M. Cora and Nando de Freitas"
] |
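A minimal sketch of one Bayesian-optimization step: fit a Gaussian-process posterior to past observations, then select the next query by expected improvement, which trades off exploitation against exploration. The 1-D objective, kernel, and grid are illustrative assumptions.

```python
# Sketch of one Bayesian-optimization step: fit a GP posterior on the
# observations, then pick the next point by expected improvement (EI).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: -(x - 0.6) ** 2 + 0.1 * np.sin(20 * x)  # expensive black box
X = np.array([[0.1], [0.5], [0.9]])                   # points queried so far
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6).fit(X, y)

grid = np.linspace(0, 1, 500)[:, None]
mu, sigma = gp.predict(grid, return_std=True)
best = y.max()
z = (mu - best) / np.maximum(sigma, 1e-12)
ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
print("next query point:", grid[np.argmax(ei), 0])
```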
cs.LG cs.AI | null | 1012.2609 | null | null | http://arxiv.org/pdf/1012.2609v4 | 2012-06-06T03:29:13Z | 2010-12-13T01:22:36Z | Inverse-Category-Frequency based supervised term weighting scheme for
text categorization | Term weighting schemes often dominate the performance of many classifiers,
such as kNN, centroid-based classifiers and SVMs. The term weighting scheme
most widely used in text categorization, tf.idf, originated in the information
retrieval (IR) field, and the intuition behind idf is less well founded for
text categorization than for IR. In this paper, we introduce inverse category
frequency (icf) into term weighting and propose two novel approaches: tf.icf
and an icf-based supervised term weighting scheme. The tf.icf scheme
substitutes icf for the idf factor and favors terms occurring in fewer
categories, rather than in fewer documents; the icf-based approach combines icf
and relevance frequency (rf) to weight terms in a supervised way. Our
cross-classifier and cross-corpus experiments show that the proposed approaches
are superior or comparable to six supervised term weighting schemes and three
traditional schemes in terms of macro-F1 and micro-F1.
| [
"['Deqing Wang' 'Hui Zhang']",
"Deqing Wang, Hui Zhang"
] |
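A small sketch of the tf.icf computation on a toy labelled corpus, assuming icf(t) = log(|C| / c(t)) with c(t) the number of categories in which term t occurs; the corpus is invented for illustration.

```python
# Sketch of tf.icf on a toy labelled corpus: icf(t) = log(|C| / c(t)), where
# c(t) counts the categories containing term t; tf.icf replaces the idf
# factor of tf.idf with icf.
import math
from collections import defaultdict

docs = [("win match team", "sport"), ("team score goal", "sport"),
        ("stock market fund", "finance"), ("market crash team", "finance")]

term_cats = defaultdict(set)
for text, cat in docs:
    for t in text.split():
        term_cats[t].add(cat)

n_cats = len({cat for _, cat in docs})
icf = {t: math.log(n_cats / len(cs)) for t, cs in term_cats.items()}

text = docs[0][0].split()
tf_icf = {t: text.count(t) / len(text) * icf[t] for t in set(text)}
print(tf_icf)  # terms confined to one category get higher weight
```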
math.OC cs.LG cs.NI cs.SY math.PR | null | 1012.3005 | null | null | http://arxiv.org/pdf/1012.3005v2 | 2011-03-20T01:50:09Z | 2010-12-14T12:29:43Z | On the Combinatorial Multi-Armed Bandit Problem with Markovian Rewards | We consider a combinatorial generalization of the classical multi-armed
bandit problem that is defined as follows. There is a given bipartite graph of
$M$ users and $N \geq M$ resources. For each user-resource pair $(i,j)$, there
is an associated state that evolves as an aperiodic irreducible finite-state
Markov chain with unknown parameters, with transitions occurring each time the
particular user $i$ is allocated resource $j$. The user $i$ receives a reward
that depends on the corresponding state each time it is allocated the resource
$j$. The system objective is to learn the best matching of users to resources
so that the long-term sum of the rewards received by all users is maximized.
This corresponds to minimizing regret, defined here as the gap between the
expected total reward that can be obtained by the best-possible static matching
and the expected total reward that can be achieved by a given algorithm. We
present a polynomial-storage and polynomial-complexity-per-step
matching-learning algorithm for this problem. We show that this algorithm can
achieve a regret that is uniformly arbitrarily close to logarithmic in time and
polynomial in the number of users and resources. This formulation is broadly
applicable to scheduling and switching problems in networks and significantly
extends prior results in the area.
| [
"['Yi Gai' 'Bhaskar Krishnamachari' 'Mingyan Liu']",
"Yi Gai, Bhaskar Krishnamachari and Mingyan Liu"
] |
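A hedged sketch of the learning-to-match loop: per-pair UCB indices plus a maximum-weight matching each round via the Hungarian algorithm. Bernoulli rewards stand in for the paper's Markovian reward states, so this illustrates the interface rather than the exact algorithm.

```python
# Sketch of learning a user-resource matching: maintain per-(user, resource)
# sample means plus UCB exploration bonuses, and pick the matching that
# maximizes the total index each round.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
M, N, T = 3, 4, 5000
p = rng.uniform(0.1, 0.9, size=(M, N))     # unknown mean rewards
means = np.zeros((M, N)); counts = np.ones((M, N))

for t in range(1, T + 1):
    ucb = means + np.sqrt(2 * np.log(t) / counts)
    rows, cols = linear_sum_assignment(-ucb)   # max-weight matching
    rewards = rng.random(M) < p[rows, cols]
    means[rows, cols] += (rewards - means[rows, cols]) / counts[rows, cols]
    counts[rows, cols] += 1

print("learned matching:", list(zip(rows, cols)))
print("best static matching:", list(zip(*linear_sum_assignment(-p))))
```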
cs.DS cs.CG cs.LG | 10.1007/s00453-012-9717-4 | 1012.3697 | null | null | http://arxiv.org/abs/1012.3697v4 | 2014-03-07T19:37:47Z | 2010-12-16T17:46:07Z | Analysis of Agglomerative Clustering | The diameter $k$-clustering problem is the problem of partitioning a finite
subset of $\mathbb{R}^d$ into $k$ subsets called clusters such that the maximum
diameter of the clusters is minimized. One early clustering algorithm that
computes a hierarchy of approximate solutions to this problem (for all values
of $k$) is the agglomerative clustering algorithm with the complete linkage
strategy. For decades, this algorithm has been widely used by practitioners.
However, it is not well studied theoretically. In this paper, we analyze the
agglomerative complete linkage clustering algorithm. Assuming that the
dimension $d$ is a constant, we show that for any $k$ the solution computed by
this algorithm is an $O(\log k)$-approximation to the diameter $k$-clustering
problem. Our analysis holds not only for the Euclidean distance but for any
metric based on a norm. Furthermore, we analyze the closely related
$k$-center and discrete $k$-center problem. For the corresponding agglomerative
algorithms, we deduce an approximation factor of $O(\log k)$ as well.
| [
"['Marcel R. Ackermann' 'Johannes Blömer' 'Daniel Kuntze'\n 'Christian Sohler']",
"Marcel R. Ackermann, Johannes Bl\\\"omer, Daniel Kuntze and Christian\n Sohler"
] |
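For reference, the analyzed algorithm is readily available in SciPy; here is a sketch running complete-linkage clustering on illustrative Gaussian blobs, cutting the hierarchy at k = 3 and reporting the diameter objective.

```python
# Sketch of agglomerative clustering with the complete-linkage strategy,
# the algorithm analyzed above, using SciPy's hierarchy tools.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(30, 2)) for c in (0, 2, 5)])

Z = linkage(X, method="complete")                # full merge hierarchy
labels = fcluster(Z, t=3, criterion="maxclust")  # cut to k = 3 clusters

# Report the maximum intra-cluster diameter, the objective being approximated.
diam = max(pdist(X[labels == k]).max() for k in np.unique(labels))
print("max cluster diameter:", round(diam, 3))
```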
cs.LG | 10.1109/TSP.2010.2097253 | 1012.3877 | null | null | http://arxiv.org/abs/1012.3877v1 | 2010-12-17T13:16:07Z | 2010-12-17T13:16:07Z | Queue-Aware Dynamic Clustering and Power Allocation for Network MIMO
Systems via Distributive Stochastic Learning | In this paper, we propose a two-timescale delay-optimal dynamic clustering
and power allocation design for downlink network MIMO systems. The dynamic
clustering control is adaptive to the global queue state information (GQSI)
only and computed at the base station controller (BSC) over a longer time
scale. On the other hand, the power allocations of all the BSs in one cluster
are adaptive to both intra-cluster channel state information (CCSI) and
intra-cluster queue state information (CQSI), and computed at the cluster
manager (CM) over a shorter time scale. We show that the two-timescale
delay-optimal control can be formulated as an infinite-horizon average cost
Constrained Partially Observed Markov Decision Process (CPOMDP). By exploiting
the special problem structure, we shall derive an equivalent Bellman equation
in terms of Pattern Selection Q-factor to solve the CPOMDP. To address the
distributive requirement and the issue of exponential memory requirement and
computational complexity, we approximate the Pattern Selection Q-factor by the
sum of Per-cluster Potential functions and propose a novel distributive online
learning algorithm to estimate the Per-cluster Potential functions (at each CM)
as well as the Lagrange multipliers (LM) (at each BS). We show that the
proposed distributive online learning algorithm converges almost surely (with
probability 1). By exploiting the birth-death structure of the queue dynamics,
we further decompose the Per-cluster Potential function into sum of Per-cluster
Per-user Potential functions and formulate the instantaneous power allocation
as a Per-stage QSI-aware Interference Game played among all the CMs. We also
propose a QSI-aware Simultaneous Iterative Water-filling Algorithm (QSIWFA) and
show that it can achieve the Nash Equilibrium (NE).
| [
"['Ying Cui' 'Qingqing Huang' 'Vincent K. N. Lau']",
"Ying Cui, Qingqing Huang, Vincent K.N.Lau"
] |
cs.LG | null | 1012.4051 | null | null | http://arxiv.org/pdf/1012.4051v1 | 2010-12-18T03:25:44Z | 2010-12-18T03:25:44Z | Survey & Experiment: Towards the Learning Accuracy | The pursuit of the best learning accuracy is beset with difficulties and
frustrations. Though one can optimize the empirical objective using a given
set of samples, its generalization ability to the entire sample distribution
remains questionable. Even if a fair generalization guarantee is offered, one
still wants to know what happens if the regularizer is removed, and how well
the surrogate loss (such as the hinge loss) relates to the accuracy.
For these reasons, this report surveys four different trials towards learning
accuracy, embracing the major advances in supervised learning theory of the
past four years. Starting from the generic setting of learning, the first two
trials introduce the best optimization and generalization bounds for convex
learning, and the third trial gets rid of the regularizer. As an innovative
attempt, the fourth trial studies the optimization when the objective is
exactly the accuracy, in the special case of binary classification. This
report also analyzes the last trial through experiments.
| [
"Zeyuan Allen Zhu",
"['Zeyuan Allen Zhu']"
] |
cs.LG | null | 1012.4249 | null | null | http://arxiv.org/pdf/1012.4249v1 | 2010-12-20T07:36:42Z | 2010-12-20T07:36:42Z | Travel Time Estimation Using Floating Car Data | This report explores the use of machine learning techniques to accurately
predict travel times on city streets and highways using floating car data
(location information of user vehicles on a road network). The aim of this
report is twofold: first, we present a general architecture for solving this
problem; we then present and evaluate a few techniques on real floating car
data gathered over a month on a 5 km highway in New Delhi.
| [
"Raffi Sevlian, Ram Rajagopal",
"['Raffi Sevlian' 'Ram Rajagopal']"
] |
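One of the simplest techniques in this spirit is a historical average per (segment, time-of-day) bin; a sketch on synthetic floating car data follows (the data model is an assumption, not the report's dataset).

```python
# Sketch of a baseline for travel time estimation: aggregate floating car
# observations into (segment, time-of-day) bins and predict from historical
# bin averages.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "segment": rng.integers(0, 10, n),   # road segment id
    "hour": rng.integers(0, 24, n),      # time-of-day bin
})
# Travel time grows in rush hours; add per-probe noise.
rush = df["hour"].isin([8, 9, 17, 18]).astype(float)
df["travel_time"] = 60 + 40 * rush + rng.normal(0, 5, n)

model = df.groupby(["segment", "hour"])["travel_time"].mean()
print("predicted 9am time on segment 3:", round(model.loc[(3, 9)], 1), "s")
```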
cs.LG | null | 1012.4571 | null | null | http://arxiv.org/pdf/1012.4571v1 | 2010-12-21T09:11:53Z | 2010-12-21T09:11:53Z | How I won the "Chess Ratings - Elo vs the Rest of the World" Competition | This article discusses in detail the rating system that won the kaggle
competition "Chess Ratings: Elo vs the rest of the world". The competition
provided a historical dataset of outcomes for chess games, and aimed to
discover whether novel approaches can predict the outcomes of future games,
more accurately than the well-known Elo rating system. The winning rating
system, called Elo++ in the rest of the article, builds upon the Elo rating
system. Like Elo, Elo++ uses a single rating per player and predicts the
outcome of a game, by using a logistic curve over the difference in ratings of
the players. The major component of Elo++ is a regularization technique that
avoids overfitting these ratings. The dataset of chess games and outcomes is
relatively small and one has to be careful not to draw "too many conclusions"
out of the limited data. Many approaches tested in the competition showed signs
of such an overfitting. The leader-board was dominated by attempts that did a
very good job on a small test dataset, but couldn't generalize well on the
private hold-out dataset. The Elo++ regularization takes into account the
number of games per player, the recency of these games and the ratings of the
opponents. Finally, Elo++ employs a stochastic gradient descent scheme for
training the ratings, and uses only two global parameters (white's advantage
and regularization constant) that are optimized using cross-validation.
| [
"['Yannis Sismanis']",
"Yannis Sismanis"
] |
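A sketch of the main ingredients described above: a logistic model over rating differences trained by stochastic gradient descent with an l2 penalty. For simplicity the penalty here shrinks ratings toward zero rather than toward the weighted opponent average used by Elo++, and the game data is simulated.

```python
# Sketch of an Elo++-style rating model: logistic win probability over the
# rating difference, trained by SGD with l2 regularization of the ratings.
import numpy as np

rng = np.random.default_rng(0)
n_players, n_games = 50, 5000
true_r = rng.normal(0, 1, n_players)
games = rng.integers(0, n_players, size=(n_games, 2))
games = games[games[:, 0] != games[:, 1]]          # drop self-play rows
p_white = 1 / (1 + np.exp(-(true_r[games[:, 0]] - true_r[games[:, 1]])))
outcome = (rng.random(len(games)) < p_white).astype(float)

r = np.zeros(n_players)
eta, lam = 0.05, 0.01
for epoch in range(20):
    for (w, b), y in zip(games, outcome):
        pred = 1 / (1 + np.exp(-(r[w] - r[b])))
        g = pred - y                               # gradient of the log loss
        r[w] -= eta * (g + lam * r[w])
        r[b] -= eta * (-g + lam * r[b])

print("rating correlation:", round(np.corrcoef(r, true_r)[0, 1], 3))
```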
cs.LG cs.IT math.IT | 10.1109/TSP.2013.2272925 | 1012.4928 | null | null | http://arxiv.org/abs/1012.4928v2 | 2011-10-11T23:42:37Z | 2010-12-22T10:30:26Z | Calibration Using Matrix Completion with Application to Ultrasound
Tomography | We study the calibration process in circular ultrasound tomography devices
where the sensor positions deviate from the circumference of a perfect circle.
This problem arises in a variety of applications in signal processing ranging
from breast imaging to sensor network localization. We introduce a novel method
of calibration/localization based on the time-of-flight (ToF) measurements
between sensors when the enclosed medium is homogeneous. In the presence of all
the pairwise ToFs, one can easily estimate the sensor positions using the
multi-dimensional scaling (MDS) method. In practice, however, due to the
transitional behaviour of the sensors and the beam form of the transducers, the
ToF measurements for close-by sensors are unavailable. Further, random
malfunctioning of the sensors leads to random missing ToF measurements. On top
of the missing entries, in practice an unknown time delay is also added to the
measurements. In this work, we incorporate the fact that a matrix defined from
all the ToF measurements is of rank at most four. In order to estimate the
missing ToFs, we apply a state-of-the-art low-rank matrix completion algorithm,
OptSpace. To find the correct positions of the sensors (our ultimate goal), we
then apply MDS. We show analytic bounds on the overall error of the whole
process in the presence of noise and hence deduce its robustness. Finally, we
confirm the functionality of our method in practice by simulations mimicking
the measurements of a circular ultrasound tomography device.
| [
"['Reza Parhizkar' 'Amin Karbasi' 'Sewoong Oh' 'Martin Vetterli']",
"Reza Parhizkar, Amin Karbasi, Sewoong Oh, Martin Vetterli"
] |
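The final MDS step can be sketched directly: classical MDS recovers positions (up to a rigid transform) from a complete distance matrix via double centering and an eigendecomposition. Noise-free distances and a circular sensor layout are illustrative assumptions; the matrix-completion stage is omitted.

```python
# Sketch of the MDS step: recover 2-D sensor positions (up to rigid motion)
# from a complete matrix of pairwise distances via classical MDS.
import numpy as np

rng = np.random.default_rng(0)
n = 16
angles = np.sort(rng.uniform(0, 2 * np.pi, n))
P = np.c_[np.cos(angles), np.sin(angles)]          # true sensor positions
D2 = ((P[:, None] - P[None, :]) ** 2).sum(-1)      # squared distances

J = np.eye(n) - np.ones((n, n)) / n                # centering matrix
B = -0.5 * J @ D2 @ J                              # Gram matrix of positions
w, V = np.linalg.eigh(B)
X = V[:, -2:] * np.sqrt(w[-2:])                    # top-2 eigenpairs -> coords

# Compare distances rather than raw coordinates (MDS is rigid-motion invariant).
D2_hat = ((X[:, None] - X[None, :]) ** 2).sum(-1)
print("max distance error:", np.abs(np.sqrt(D2_hat) - np.sqrt(D2)).max())
```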
null | null | 1012.5754 | null | null | http://arxiv.org/pdf/1012.5754v1 | 2010-12-28T13:11:51Z | 2010-12-28T13:11:51Z | Software Effort Estimation with Ridge Regression and Evolutionary
Attribute Selection | Software cost estimation is one of the prerequisite managerial activities carried out at the software development initiation stages and repeated throughout the whole software life-cycle so that amendments to the total cost can be made. Typically, in software cost estimation a selection of project attributes is employed to produce estimates of the human effort required to deliver a software product. However, choosing the appropriate project cost drivers in each case requires a lot of experience and knowledge on the part of the project manager, which can only be obtained through years of software engineering practice. A number of studies indicate that popular methods applied in the literature for software cost estimation, such as linear regression, are not robust enough and do not yield accurate predictions. Recently, the dual-variable Ridge Regression (RR) technique has been used for effort estimation, yielding promising results. In this work we show that results may be further improved if an AI method is used to automatically select appropriate project cost drivers (inputs) for the technique. We propose a hybrid approach combining RR with a Genetic Algorithm, the latter evolving the subset of attributes for approximating effort more accurately. The proposed hybrid cost model has been applied to a widely known high-dimensional dataset of software project samples and the results show that accuracy may be increased if redundant attributes are eliminated. | [
"['Efi Papatheocharous' 'Harris Papadopoulos' 'Andreas S. Andreou']"
] |
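A hedged sketch of the hybrid scheme: kernel (dual) ridge regression scored by cross-validation, with a tiny mutation-only evolutionary search standing in for the paper's genetic algorithm. The synthetic regression data and all search settings are assumptions, not the paper's setup.

```python
# Sketch of ridge regression with evolutionary attribute selection: a binary
# mask over features is evolved to maximize cross-validated accuracy.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=120, n_features=15, n_informative=5,
                       noise=10.0, random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    if not mask.any():
        return -np.inf
    return cross_val_score(KernelRidge(alpha=1.0), X[:, mask], y, cv=3,
                           scoring="neg_mean_absolute_error").mean()

pop = rng.random((20, X.shape[1])) < 0.5            # initial random masks
for gen in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]         # truncation selection
    kids = parents[rng.integers(0, 10, 20)].copy()  # clone, then mutate
    kids ^= rng.random(kids.shape) < 0.05           # bit-flip mutation
    pop = kids
    pop[0] = parents[-1]                            # keep the best (elitism)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected attributes:", np.flatnonzero(best))
```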
math.ST cs.LG stat.TH | null | 1101.0255 | null | null | http://arxiv.org/pdf/1101.0255v1 | 2010-12-31T13:33:14Z | 2010-12-31T13:33:14Z | Conditional information and definition of neighbor in categorical random
fields | We show that the definition of neighbor in Markov random fields given by
Besag (1974) is not well-defined when the joint distribution of the sites is
not positive. In a random field with a finite number of sites, we study the
conditions under which giving the value at extra sites will change the belief
of an agent about one site. We also study the conditions under which the
information from some sites is equivalent to giving the values at all other
sites. These concepts provide an alternative to the concept of neighbor for the
general case where the positivity condition of the joint distribution does not
hold.
| [
"Reza Hosseini",
"['Reza Hosseini']"
] |
cs.LG cs.AI | null | 1101.0428 | null | null | http://arxiv.org/pdf/1101.0428v1 | 2011-01-02T20:20:27Z | 2011-01-02T20:20:27Z | The Local Optimality of Reinforcement Learning by Value Gradients, and
its Relationship to Policy Gradient Learning | In this theoretical paper we are concerned with the problem of learning a
value function by a smooth general function approximator, to solve a
deterministic episodic control problem in a large continuous state space. It is
shown that learning the gradient of the value-function at every point along a
trajectory generated by a greedy policy is a sufficient condition for the
trajectory to be locally extremal, and often locally optimal, and we argue that
this brings greater efficiency to value-function learning. This contrasts with
traditional value-function learning in which the value-function must be learnt
over the whole of state space.
It is also proven that policy-gradient learning applied to a greedy policy on
a value-function produces a weight update equivalent to a value-gradient weight
update, which provides a surprising connection between these two alternative
paradigms of reinforcement learning, and a convergence proof for control
problems with a value function represented by a general smooth function
approximator.
| [
"['Michael Fairbank' 'Eduardo Alonso']",
"Michael Fairbank and Eduardo Alonso"
] |
stat.ML cs.LG math.ST stat.TH | null | 1101.1057 | null | null | http://arxiv.org/pdf/1101.1057v3 | 2013-04-12T10:33:39Z | 2011-01-05T19:43:37Z | Sparsity regret bounds for individual sequences in online linear
regression | We consider the problem of online linear regression on arbitrary
deterministic sequences when the ambient dimension d can be much larger than
the number of time rounds T. We introduce the notion of sparsity regret bound,
which is a deterministic online counterpart of recent risk bounds derived in
the stochastic setting under a sparsity scenario. We prove such regret bounds
for an online-learning algorithm called SeqSEW, which is based on exponential
weighting and data-driven truncation. In the second part, we apply a
parameter-free version of this algorithm to the stochastic setting (regression
model with random design). This yields risk bounds of the same flavor as in
Dalalyan and Tsybakov (2011) but which solve two questions left open therein.
In particular our risk bounds are adaptive (up to a logarithmic factor) to the
unknown variance of the noise if the latter is Gaussian. We also address the
regression model with fixed design.
| [
"S\\'ebastien Gerchinovitz (DMA, INRIA Paris - Rocquencourt)",
"['Sébastien Gerchinovitz']"
] |
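The exponential-weighting ingredient can be sketched on a finite grid of linear experts: weights decay exponentially with cumulative squared loss and the forecast is the weighted average. The grid, data, and learning rate are assumptions, and the paper's data-driven truncation is omitted.

```python
# Sketch of exponential weighting for online linear regression: aggregate a
# finite grid of linear experts with weights exponential in past squared loss.
import itertools
import numpy as np

rng = np.random.default_rng(0)
d, T = 4, 300
theta = np.array([1.5, 0.0, 0.0, -0.7])            # sparse target
experts = np.array(list(itertools.product([-1.5, -0.7, 0.0, 0.7, 1.5],
                                          repeat=d)))
logw = np.zeros(len(experts))
eta = 0.1
total = 0.0

for t in range(T):
    x = rng.standard_normal(d)
    y = theta @ x + 0.1 * rng.standard_normal()
    w = np.exp(logw - logw.max()); w /= w.sum()
    pred = w @ (experts @ x)                       # weighted-average forecast
    total += (pred - y) ** 2
    logw -= eta * (experts @ x - y) ** 2           # exponential weight update

print("average squared loss:", total / T)
```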