categories | doi | id | year | venue | link | updated | published | title | abstract | authors
---|---|---|---|---|---|---|---|---|---|---
string | string | string | float64 | string | string | string | string | string | string | sequence
cs.CC cs.DS cs.LG | null | 1211.1722 | null | null | http://arxiv.org/pdf/1211.1722v1 | 2012-11-07T23:12:00Z | 2012-11-07T23:12:00Z | Inverse problems in approximate uniform generation | We initiate the study of \emph{inverse} problems in approximate uniform
generation, focusing on uniform generation of satisfying assignments of various
types of Boolean functions. In such an inverse problem, the algorithm is given
uniform random satisfying assignments of an unknown function $f$ belonging to a
class $\mathcal{C}$ of Boolean functions, and the goal is to output a probability
distribution $D$ which is $\epsilon$-close, in total variation distance, to the
uniform distribution over $f^{-1}(1)$.
Positive results: We prove a general positive result establishing sufficient
conditions for efficient inverse approximate uniform generation for a class
$\mathcal{C}$. We define a new type of algorithm called a \emph{densifier} for $\mathcal{C}$, and
show (roughly speaking) how to combine (i) a densifier, (ii) an approximate
counting / uniform generation algorithm, and (iii) a Statistical Query learning
algorithm, to obtain an inverse approximate uniform generation algorithm. We
apply this general result to obtain a $\mathrm{poly}(n,1/\epsilon)$-time algorithm for the
class of halfspaces; and a $\mathrm{quasipoly}(n,1/\epsilon)$-time algorithm for the class
of $\mathrm{poly}(n)$-size DNF formulas.
Negative results: We prove a general negative result establishing that the
existence of certain types of signature schemes in cryptography implies the
hardness of certain inverse approximate uniform generation problems. This
implies that there are no subexponential-time inverse approximate uniform
generation algorithms for 3-CNF formulas; for intersections of two halfspaces;
for degree-2 polynomial threshold functions; and for monotone 2-CNF formulas.
Finally, we show that there is no general relationship between the complexity
of the "forward" approximate uniform generation problem and the complexity of
the inverse problem for a class $\mathcal{C}$ -- it is possible for either one to be
easy while the other is hard.
| [
"Anindya De, Ilias Diakonikolas, Rocco A. Servedio",
"['Anindya De' 'Ilias Diakonikolas' 'Rocco A. Servedio']"
] |
cs.LG | null | 1211.1799 | null | null | http://arxiv.org/pdf/1211.1799v1 | 2012-11-08T09:22:11Z | 2012-11-08T09:22:11Z | Algorithm for Missing Values Imputation in Categorical Data with Use of
Association Rules | This paper presents an algorithm for missing value
imputation in categorical data. The algorithm is based on association rules and
is presented in three variants. Experiments show better accuracy of missing
value imputation using the algorithm than using the most common attribute value.
| [
"['Jiří Kaiser']",
"Ji\\v{r}\\'i Kaiser"
] |
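To make the idea above concrete, here is a minimal sketch of rule-based imputation, assuming single-antecedent rules scored by confidence with a most-common-value fallback. The function name `impute_with_rules` and the `min_support` threshold are illustrative choices, not the paper's code; the paper's three variants differ in how rules are mined and selected.

```python
from collections import Counter, defaultdict

def impute_with_rules(rows, target, min_support=2):
    """Impute missing values (None) of `target` using single-antecedent
    association rules mined from the complete rows; fall back to the mode.
    `rows` is a list of dicts mapping attribute name -> categorical value."""
    complete = [r for r in rows if r[target] is not None]
    mode = Counter(r[target] for r in complete).most_common(1)[0][0]

    # Mine rules (attr, value) -> target value, scored by confidence.
    stats = defaultdict(Counter)   # (attr, value) -> Counter of target values
    for r in complete:
        for attr, value in r.items():
            if attr != target and value is not None:
                stats[(attr, value)][r[target]] += 1

    for r in rows:
        if r[target] is not None:
            continue
        best_conf, best_val = 0.0, mode
        for attr, value in r.items():
            if attr == target or value is None:
                continue
            counts = stats.get((attr, value))
            if counts and sum(counts.values()) >= min_support:
                val, cnt = counts.most_common(1)[0]
                conf = cnt / sum(counts.values())
                if conf > best_conf:
                    best_conf, best_val = conf, val
        r[target] = best_val   # rule consequent, or mode if no rule fires
    return rows
```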
cs.LG cs.CV | 10.1016/j.sigpro.2014.03.047 | 1211.1893 | null | null | http://arxiv.org/abs/1211.1893v1 | 2012-11-06T19:13:21Z | 2012-11-06T19:13:21Z | Tangent-based manifold approximation with locally linear models | In this paper, we consider the problem of manifold approximation with affine
subspaces. Our objective is to discover a set of low dimensional affine
subspaces that represents manifold data accurately while preserving the
manifold's structure. For this purpose, we employ a greedy technique that
partitions manifold samples into groups that can be each approximated by a low
dimensional subspace. We start by considering each manifold sample as a
different group and we use the difference of tangents to determine appropriate
group mergings. We repeat this procedure until we reach the desired number of
sample groups. The best low dimensional affine subspaces corresponding to the
final groups constitute our approximate manifold representation. Our
experiments verify the effectiveness of the proposed scheme and show its
superior performance compared to state-of-the-art methods for manifold
approximation.
| [
"['Sofia Karygianni' 'Pascal Frossard']",
"Sofia Karygianni and Pascal Frossard"
] |
cs.LG cs.CE q-bio.QM stat.ML | null | 1211.2073 | null | null | http://arxiv.org/pdf/1211.2073v1 | 2012-11-09T08:34:25Z | 2012-11-09T08:34:25Z | LAGE: A Java Framework to reconstruct Gene Regulatory Networks from
Large-Scale Continuous Expression Data | LAGE is a systematic framework developed in Java. The motivation of LAGE is
to provide a scalable and parallel solution to reconstruct Gene Regulatory
Networks (GRNs) from continuous gene expression data for a very large number
of genes. The basic idea of our framework is motivated by the philosophy of
divide-and-conquer. Specifically, LAGE recursively partitions genes into
multiple overlapping communities of much smaller size, learns intra-community
GRNs separately, and then merges them. Besides, the complete information about
the overlapping communities is produced as a byproduct, which can be used to
mine meaningful functional modules in biological networks.
| [
"['Yang Lu' 'Mengying Wang' 'Kenny Q. Zhu' 'Bo Yuan']",
"Yang Lu and Mengying Wang and Kenny Q. Zhu and Bo Yuan"
] |
cs.LG stat.CO stat.ML | 10.1016/j.patcog.2013.10.006 | 1211.2190 | null | null | http://arxiv.org/abs/1211.2190v4 | 2013-09-07T13:10:06Z | 2012-11-09T17:21:48Z | Efficient Monte Carlo Methods for Multi-Dimensional Learning with
Classifier Chains | Multi-dimensional classification (MDC) is the supervised learning problem
where an instance is associated with multiple classes, rather than with a
single class, as in traditional classification problems. Since these classes
are often strongly correlated, modeling the dependencies between them allows
MDC methods to improve their performance - at the expense of an increased
computational cost. In this paper we focus on the classifier chains (CC)
approach for modeling dependencies, one of the most popular and
highest-performing methods for multi-label classification (MLC), a particular case of
MDC which involves only binary classes (i.e., labels). The original CC
algorithm makes a greedy approximation, and is fast but tends to propagate
errors along the chain. Here we present novel Monte Carlo schemes, both for
finding a good chain sequence and performing efficient inference. Our
algorithms remain tractable for high-dimensional data sets and obtain the best
predictive performance across several real data sets.
| [
"['Jesse Read' 'Luca Martino' 'David Luengo']",
"Jesse Read, Luca Martino, David Luengo"
] |
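As a rough illustration of Monte Carlo search over chain orders (the paper's schemes also cover inference and are more sophisticated), the sketch below samples random label orderings for scikit-learn's `ClassifierChain` and keeps the best one on held-out data; all dataset and parameter choices are illustrative.

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multioutput import ClassifierChain

X, Y = make_multilabel_classification(n_samples=600, n_labels=3,
                                      n_classes=6, random_state=0)
X_tr, X_va, Y_tr, Y_va = train_test_split(X, Y, random_state=0)

# Monte Carlo search over chain orders: sample random label orders,
# keep the chain that scores best on held-out data.
best_score, best_chain = -np.inf, None
for seed in range(20):                      # number of sampled orders
    chain = ClassifierChain(LogisticRegression(max_iter=1000),
                            order='random', random_state=seed)
    chain.fit(X_tr, Y_tr)
    score = chain.score(X_va, Y_va)         # subset (exact-match) accuracy
    if score > best_score:
        best_score, best_chain = score, chain

print('best exact-match accuracy:', best_score)
```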
cs.LG cs.DS stat.ML | null | 1211.2227 | null | null | http://arxiv.org/pdf/1211.2227v3 | 2013-06-06T02:52:50Z | 2012-11-09T20:47:23Z | Efficient learning of simplices | We show an efficient algorithm for the following problem: Given uniformly
random points from an arbitrary n-dimensional simplex, estimate the simplex.
The size of the sample and the number of arithmetic operations of our algorithm
are polynomial in n. This answers a question of Frieze, Jerrum and Kannan
[FJK]. Our result can also be interpreted as efficiently learning the
intersection of n+1 half-spaces in R^n in the model where the intersection is
bounded and we are given polynomially many uniform samples from it. Our proof
uses the local search technique from Independent Component Analysis (ICA), also
used by [FJK]. Unlike these previous algorithms, which were based on analyzing
the fourth moment, ours is based on the third moment.
We also show a direct connection between the problem of learning a simplex
and ICA: a simple randomized reduction to ICA from the problem of learning a
simplex. The connection is based on a known representation of the uniform
measure on a simplex. Similar representations lead to a reduction from the
problem of learning an affine transformation of an n-dimensional l_p ball to
ICA.
| [
"['Joseph Anderson' 'Navin Goyal' 'Luis Rademacher']",
"Joseph Anderson, Navin Goyal, Luis Rademacher"
] |
cs.LG | null | 1211.2260 | null | null | http://arxiv.org/pdf/1211.2260v1 | 2012-11-09T22:13:10Z | 2012-11-09T22:13:10Z | No-Regret Algorithms for Unconstrained Online Convex Optimization | Some of the most compelling applications of online convex optimization,
including online prediction and classification, are unconstrained: the natural
feasible set is R^n. Existing algorithms fail to achieve sub-linear regret in
this setting unless constraints on the comparator point x^* are known in
advance. We present algorithms that, without such prior knowledge, offer
near-optimal regret bounds with respect to any choice of x^*. In particular,
regret with respect to x^* = 0 is constant. We then prove lower bounds showing
that our guarantees are near-optimal in this setting.
| [
"['Matthew Streeter' 'H. Brendan McMahan']",
"Matthew Streeter and H. Brendan McMahan"
] |
cs.LG stat.ML | null | 1211.2304 | null | null | http://arxiv.org/pdf/1211.2304v1 | 2012-11-10T07:37:44Z | 2012-11-10T07:37:44Z | Probabilistic Combination of Classifier and Cluster Ensembles for
Non-transductive Learning | Unsupervised models can provide supplementary soft constraints to help
classify new target data under the assumption that similar objects in the
target set are more likely to share the same class label. Such models can also
help detect possible differences between training and target distributions,
which is useful in applications where concept drift may take place. This paper
describes a Bayesian framework that takes as input class labels from existing
classifiers (designed based on labeled data from the source domain), as well as
cluster labels from a cluster ensemble operating solely on the target data to
be classified, and yields a consensus labeling of the target data. This
framework is particularly useful when the statistics of the target data drift
or change from those of the training data. We also show that the proposed
framework is privacy-aware and allows performing distributed learning when
data/models have sharing restrictions. Experiments show that our framework can
yield superior results to those provided by applying classifier ensembles only.
| [
"Ayan Acharya, Eduardo R. Hruschka, Joydeep Ghosh, Badrul Sarwar,\n Jean-David Ruvini",
"['Ayan Acharya' 'Eduardo R. Hruschka' 'Joydeep Ghosh' 'Badrul Sarwar'\n 'Jean-David Ruvini']"
] |
cs.NE cs.LG physics.ao-ph stat.AP | 10.1016/j.renene.2012.10.049 | 1211.2378 | null | null | http://arxiv.org/abs/1211.2378v1 | 2012-11-11T07:16:56Z | 2012-11-11T07:16:56Z | Hybrid methodology for hourly global radiation forecasting in
Mediterranean area | The renewable energies prediction and particularly global radiation
forecasting is a challenge studied by a growing number of research teams. This
paper proposes an original technique to model the insolation time series based
on combining Artificial Neural Network (ANN) and Auto-Regressive and Moving
Average (ARMA) model. While ANN by its non-linear nature is effective to
predict cloudy days, ARMA techniques are more dedicated to sunny days without
cloud occurrences. Thus, three hybrid models are suggested: the first proposes
simply to use ARMA for 6 months in spring and summer and to use an optimized
ANN for the other part of the year; the second model is equivalent to the first
but with seasonal learning; the last model depends on the error that occurred
in the previous hour. These models were used to forecast the hourly global
radiation for five places in the Mediterranean area. The forecasting
performance was compared among several models: the three above-mentioned
models, and the best ANN and ARMA for
each location. In the best configuration, the coupling of ANN and ARMA allows
an improvement of more than 1%, with a maximum in autumn (3.4%) and a minimum
in winter (0.9%) where ANN alone is the best.
| [
"['Cyril Voyant' 'Marc Muselli' 'Christophe Paoli' 'Marie Laure Nivet']",
"Cyril Voyant (SPE, CHD Castellucio), Marc Muselli (SPE), Christophe\n Paoli (SPE), Marie Laure Nivet (SPE)"
] |
cs.LG cs.IT math.IT stat.ML | null | 1211.2459 | null | null | http://arxiv.org/pdf/1211.2459v3 | 2014-09-01T21:52:55Z | 2012-11-11T20:49:28Z | Measures of Entropy from Data Using Infinitely Divisible Kernels | Information theory provides principled ways to analyze different inference
and learning problems such as hypothesis testing, clustering, dimensionality
reduction, and classification. However, the use of information-theoretic
quantities as test statistics, that is, as quantities obtained from
empirical data, poses a challenging estimation problem that often leads to
strong simplifications such as Gaussian models, or the use of plug-in density
estimators that are restricted to certain representations of the data. In this
paper, a framework to non-parametrically obtain measures of entropy directly
from data using operators in reproducing kernel Hilbert spaces defined by
infinitely divisible kernels is presented. The entropy functionals, which bear
a resemblance to quantum entropies, are defined on positive definite matrices
and satisfy similar axioms to those of Renyi's definition of entropy.
Convergence of the proposed estimators follows from concentration results on
the difference between the ordered spectrum of the Gram matrices and the
integral operators associated to the population quantities. In this way,
capitalizing on both the axiomatic definition of entropy and on the
representation power of positive definite kernels, the proposed measure of
entropy avoids the estimation of the probability distribution underlying the
data. Moreover, estimators of kernel-based conditional entropy and mutual
information are also defined. Numerical experiments on independence tests
compare favourably with the state of the art.
| [
"Luis G. Sanchez Giraldo and Murali Rao and Jose C. Principe",
"['Luis G. Sanchez Giraldo' 'Murali Rao' 'Jose C. Principe']"
] |
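A small sketch of the kernel-matrix entropy estimator described above, as I understand the construction: form an RBF Gram matrix, normalize it to unit trace, and evaluate a Renyi-type functional of its eigenvalues. The kernel width `sigma` and the order `alpha` are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def matrix_renyi_entropy(X, sigma=1.0, alpha=2.0):
    """Kernel-matrix-based estimate of Renyi's alpha-entropy (in bits),
    computed from the eigenspectrum of a normalized Gram matrix."""
    n = X.shape[0]
    K = np.exp(-cdist(X, X, 'sqeuclidean') / (2 * sigma**2))  # RBF Gram matrix
    A = K / (n * np.sqrt(np.outer(np.diag(K), np.diag(K))))   # unit-trace normalization
    lam = np.clip(np.linalg.eigvalsh(A), 0, None)             # PSD spectrum
    return np.log2(np.sum(lam**alpha)) / (1 - alpha)

rng = np.random.default_rng(0)
print(matrix_renyi_entropy(rng.normal(size=(200, 1))))        # unit Gaussian sample
print(matrix_renyi_entropy(0.1 * rng.normal(size=(200, 1))))  # more concentrated -> lower
```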
cs.MA cs.LG stat.ML | null | 1211.2476 | null | null | http://arxiv.org/pdf/1211.2476v1 | 2012-11-11T23:09:02Z | 2012-11-11T23:09:02Z | Random Utility Theory for Social Choice | Random utility theory models an agent's preferences on alternatives by
drawing a real-valued score on each alternative (typically independently) from
a parameterized distribution, and then ranking the alternatives according to
scores. A special case that has received significant attention is the
Plackett-Luce model, for which fast inference methods for maximum likelihood
estimators are available. This paper develops conditions on general random
utility models that enable fast inference within a Bayesian framework through
MC-EM, providing concave log-likelihood functions and bounded sets of global
maxima solutions. Results on both real-world and simulated data provide support
for the scalability of the approach and capability for model selection among
general random utility models including Plackett-Luce.
| [
"['Hossein Azari Soufiani' 'David C. Parkes' 'Lirong Xia']",
"Hossein Azari Soufiani, David C. Parkes, Lirong Xia"
] |
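For intuition, the sketch below samples rankings from a random utility model by perturbing per-item utilities with i.i.d. noise and sorting; with Gumbel noise this reproduces the Plackett-Luce special case mentioned above. The scores are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
utilities = np.log(np.array([5.0, 3.0, 1.5, 0.5]))   # log PL scores of 4 alternatives

def sample_ranking(utilities, rng):
    """Draw one ranking from a random utility model: perturb each
    alternative's utility with i.i.d. Gumbel noise and sort. With Gumbel
    noise this is exactly the Plackett-Luce model."""
    noisy = utilities + rng.gumbel(size=utilities.shape)
    return np.argsort(-noisy)                         # best alternative first

# Empirical check: the top choice should appear with probability
# proportional to its score (5 / (5+3+1.5+0.5) = 0.5 for item 0).
tops = [sample_ranking(utilities, rng)[0] for _ in range(20000)]
print(np.bincount(tops) / len(tops))
```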
cs.AI cs.LG | null | 1211.2512 | null | null | http://arxiv.org/pdf/1211.2512v2 | 2013-06-03T02:43:45Z | 2012-11-12T05:26:20Z | Minimal cost feature selection of data with normal distribution
measurement errors | Minimal cost feature selection is devoted to obtain a trade-off between test
costs and misclassification costs. This issue has been addressed recently on
nominal data. In this paper, we consider numerical data with measurement errors
and study minimal cost feature selection in this model. First, we build a data
model with normal distribution measurement errors. Second, the neighborhood of
each data item is constructed through the confidence interval. Compared with
discretized intervals, neighborhoods better preserve the information in the
data. Third, we define a new minimal total cost feature selection problem
that considers the trade-off between test costs and
misclassification costs. Fourth, we propose a backtracking algorithm with
three effective pruning techniques to deal with this problem. The algorithm is
tested on four UCI data sets. Experimental results indicate that the pruning
techniques are effective, and the algorithm is efficient for data sets with
nearly one thousand objects.
| [
"['Hong Zhao' 'Fan Min' 'William Zhu']",
"Hong Zhao, Fan Min and William Zhu"
] |
stat.CO cs.LG stat.ML | null | 1211.2532 | null | null | http://arxiv.org/pdf/1211.2532v3 | 2012-11-27T04:48:51Z | 2012-11-12T08:35:26Z | Iterative Thresholding Algorithm for Sparse Inverse Covariance
Estimation | The L1-regularized maximum likelihood estimation problem has recently become
a topic of great interest within the machine learning, statistics, and
optimization communities as a method for producing sparse inverse covariance
estimators. In this paper, a proximal gradient method (G-ISTA) for performing
L1-regularized covariance matrix estimation is presented. Although numerous
algorithms have been proposed for solving this problem, this simple proximal
gradient method is found to have attractive theoretical and numerical
properties. G-ISTA has a linear rate of convergence, resulting in an O(log(1/e))
iteration complexity to reach a tolerance of e. This paper gives eigenvalue
bounds for the G-ISTA iterates, providing a closed-form linear convergence
rate. The rate is shown to be closely related to the condition number of the
optimal point. Numerical convergence results and timing comparisons for the
proposed method are presented. G-ISTA is shown to perform very well, especially
when the optimal point is well-conditioned.
| [
"Dominique Guillot and Bala Rajaratnam and Benjamin T. Rolfs and Arian\n Maleki and Ian Wong",
"['Dominique Guillot' 'Bala Rajaratnam' 'Benjamin T. Rolfs' 'Arian Maleki'\n 'Ian Wong']"
] |
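A toy sketch of the proximal-gradient idea behind G-ISTA, under simplifying assumptions: a fixed initial step size with naive backtracking to keep the iterate positive definite, and soft-thresholding applied to all entries. The paper's actual step-size choices and convergence analysis are more refined.

```python
import numpy as np

def soft(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def graphical_ista(S, rho=0.1, t=0.1, iters=500):
    """Proximal-gradient sketch for min_X -logdet(X) + tr(SX) + rho*||X||_1.
    The gradient of the smooth part is S - inv(X); the prox of the l1 term
    is soft-thresholding (applied to all entries here, for simplicity)."""
    p = S.shape[0]
    X = np.eye(p)
    for _ in range(iters):
        G = S - np.linalg.inv(X)               # gradient of the smooth part
        step = t
        while True:                            # backtrack to stay positive definite
            X_new = soft(X - step * G, step * rho)
            if np.all(np.linalg.eigvalsh(X_new) > 0):
                break
            step /= 2.0
        X = X_new
    return X

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 5))
S = np.cov(A, rowvar=False)
print(np.round(graphical_ista(S), 2))          # sparse inverse covariance estimate
```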
cs.LG cs.CV stat.ML | null | 1211.2556 | null | null | http://arxiv.org/pdf/1211.2556v1 | 2012-11-12T10:42:58Z | 2012-11-12T10:42:58Z | A Comparative Study of Gaussian Mixture Model and Radial Basis Function
for Voice Recognition | A comparative study of the application of Gaussian Mixture Model (GMM) and
Radial Basis Function (RBF) in biometric recognition of voice has been carried
out and presented. The application of machine learning techniques to biometric
authentication and recognition problems has gained a widespread acceptance. In
this research, a GMM model was trained, using Expectation Maximization (EM)
algorithm, on a dataset containing 10 classes of vowels and the model was used
to predict the appropriate classes using a validation dataset. For experimental
validity, the model was compared to the performance of two different versions
of RBF model using the same learning and validation datasets. The results
showed very close recognition accuracy between the GMM and the standard RBF
model, with the GMM performing better than the standard RBF by less than 1%, and
both models outperformed similar models reported in the literature. The DTREG
version of RBF outperformed the other two models by producing 94.8% recognition
accuracy. In terms of recognition time, the standard RBF was found to be the
fastest among the three models.
| [
"['Fatai Adesina Anifowose']",
"Fatai Adesina Anifowose"
] |
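A compact sketch of the GMM side of such a comparison: fit one mixture per class with EM and classify by the largest class-conditional log-likelihood. The digits data stands in for the 10-class vowel set used in the study; component counts are illustrative.

```python
import numpy as np
from sklearn.datasets import load_digits          # stand-in for a vowel dataset
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

# One GMM per class, fit by EM; classify by maximum class-conditional likelihood.
models = {c: GaussianMixture(n_components=2, covariance_type='diag',
                             random_state=0).fit(X_tr[y_tr == c])
          for c in np.unique(y_tr)}
scores = np.column_stack([models[c].score_samples(X_va)
                          for c in sorted(models)])
y_hat = np.array(sorted(models))[np.argmax(scores, axis=1)]
print('accuracy:', np.mean(y_hat == y_va))
```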
stat.ML cs.LG math.OC | null | 1211.2717 | null | null | http://arxiv.org/pdf/1211.2717v1 | 2012-11-12T18:08:34Z | 2012-11-12T18:08:34Z | Proximal Stochastic Dual Coordinate Ascent | We introduce a proximal version of dual coordinate ascent method. We
demonstrate how the derived algorithmic framework can be used for numerous
regularized loss minimization problems, including $\ell_1$ regularization and
structured output SVM. The convergence rates we obtain match, and sometimes
improve, state-of-the-art results.
| [
"Shai Shalev-Shwartz and Tong Zhang",
"['Shai Shalev-Shwartz' 'Tong Zhang']"
] |
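For context, here is the plain (non-proximal) SDCA special case for squared loss with L2 regularization, where the dual coordinate update has a closed form; the paper's proximal extension is what handles $\ell_1$ and structured-output terms. Variable names and the toy check are illustrative.

```python
import numpy as np

def sdca_ridge(X, y, lam=0.1, epochs=20, seed=0):
    """SDCA sketch for ridge regression:
    min_w (1/n) * sum_i 0.5*(x_i^T w - y_i)^2 + (lam/2)*||w||^2.
    Each step maximizes the dual over one coordinate alpha_i in closed form
    and keeps the primal-dual link w = X^T alpha / (lam * n)."""
    n, d = X.shape
    alpha, w = np.zeros(n), np.zeros(d)
    rng = np.random.default_rng(seed)
    sqnorms = np.einsum('ij,ij->i', X, X)
    for _ in range(epochs):
        for i in rng.permutation(n):
            delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + sqnorms[i] / (lam * n))
            alpha[i] += delta
            w += delta * X[i] / (lam * n)
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=500)
w_exact = np.linalg.solve(X.T @ X / 500 + 0.1 * np.eye(10), X.T @ y / 500)
print(np.max(np.abs(sdca_ridge(X, y) - w_exact)))   # close to 0
```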
cs.CV cs.LG stat.ML | null | 1211.2881 | null | null | http://arxiv.org/pdf/1211.2881v3 | 2012-11-28T08:39:03Z | 2012-11-13T03:41:31Z | Deep Attribute Networks | Obtaining compact and discriminative features is one of the major challenges
in many of the real-world image classification tasks such as face verification
and object recognition. One possible approach is to represent an input image on
the basis of high-level features that carry semantic meaning which humans can
understand. In this paper, a model coined deep attribute network (DAN) is
proposed to address this issue. For an input image, the model outputs the
attributes of the input image without performing any classification. The
efficacy of the proposed model is evaluated on unconstrained face verification
and real-world object recognition tasks using the LFW and the a-PASCAL
datasets. We demonstrate the potential of deep learning for attribute-based
classification by showing comparable results with existing state-of-the-art
results. Once properly trained, the DAN is fast and does away with calculating
low-level features, which may be unreliable and computationally expensive.
| [
"['Junyoung Chung' 'Donghoon Lee' 'Youngjoo Seo' 'Chang D. Yoo']",
"Junyoung Chung, Donghoon Lee, Youngjoo Seo, and Chang D. Yoo"
] |
cs.IR cs.LG stat.ML | null | 1211.2891 | null | null | http://arxiv.org/pdf/1211.2891v1 | 2012-11-13T05:30:36Z | 2012-11-13T05:30:36Z | Boosting Simple Collaborative Filtering Models Using Ensemble Methods | In this paper we examine the effect of applying ensemble learning to the
performance of collaborative filtering methods. We present several systematic
approaches for generating an ensemble of collaborative filtering models based
on a single collaborative filtering algorithm (single-model or homogeneous
ensemble). We present an adaptation of several popular ensemble techniques in
machine learning for the collaborative filtering domain, including bagging,
boosting, fusion and randomness injection. We evaluate the proposed approach on
several types of collaborative filtering base models: k-NN, matrix
factorization and a neighborhood matrix factorization model. Empirical
evaluation shows a prediction improvement compared to all base CF algorithms.
In particular, we show that the performance of an ensemble of simple (weak) CF
models such as k-NN is competitive compared with a single strong CF model (such
as matrix factorization) while requiring an order of magnitude less
computational cost.
| [
"['Ariel Bar' 'Lior Rokach' 'Guy Shani' 'Bracha Shapira' 'Alon Schclar']",
"Ariel Bar, Lior Rokach, Guy Shani, Bracha Shapira, Alon Schclar"
] |
math.CO cs.CG cs.DM cs.LG | null | 1211.2980 | null | null | http://arxiv.org/pdf/1211.2980v1 | 2012-11-13T13:16:48Z | 2012-11-13T13:16:48Z | Shattering-Extremal Systems | The Shatters relation and the VC dimension have been investigated since the
early seventies. These concepts have found numerous applications in statistics,
combinatorics, learning theory and computational geometry. Shattering extremal
systems are set-systems with a very rich structure and many different
characterizations. The goal of this thesis is to elaborate on the structure of
these systems.
| [
"['Shay Moran']",
"Shay Moran"
] |
stat.ML cs.LG stat.AP | null | 1211.3010 | null | null | http://arxiv.org/pdf/1211.3010v1 | 2012-11-13T14:54:47Z | 2012-11-13T14:54:47Z | Time-series Scenario Forecasting | Many applications require the ability to judge uncertainty of time-series
forecasts. Uncertainty is often specified as point-wise error bars around a
mean or median forecast. Due to temporal dependencies, such a method obscures
some information. We would ideally have a way to query the posterior
probability of the entire time-series given the predictive variables, or at a
minimum, be able to draw samples from this distribution. We use a Bayesian
dictionary learning algorithm to statistically generate an ensemble of
forecasts. We show that the algorithm performs as well as a physics-based
ensemble method for temperature forecasts for Houston. We conclude that the
method shows promise for scenario forecasting where physics-based methods are
absent.
| [
"Sriharsha Veeramachaneni",
"['Sriharsha Veeramachaneni']"
] |
cs.LG | null | 1211.3046 | null | null | http://arxiv.org/pdf/1211.3046v4 | 2014-02-21T20:57:42Z | 2012-11-13T16:39:45Z | Recovering the Optimal Solution by Dual Random Projection | Random projection has been widely used in data classification. It maps
high-dimensional data into a low-dimensional subspace in order to reduce the
computational cost in solving the related optimization problem. While previous
studies are focused on analyzing the classification performance of using random
projection, in this work, we consider the recovery problem, i.e., how to
accurately recover the optimal solution to the original optimization problem in
the high-dimensional space based on the solution learned from the subspace
spanned by random projections. We present a simple algorithm, termed Dual
Random Projection, that uses the dual solution of the low-dimensional
optimization problem to recover the optimal solution to the original problem.
Our theoretical analysis shows that with a high probability, the proposed
algorithm is able to accurately recover the optimal solution to the original
problem, provided that the data matrix is of low rank or can be well
approximated by a low rank matrix.
| [
"['Lijun Zhang' 'Mehrdad Mahdavi' 'Rong Jin' 'Tianbao Yang' 'Shenghuo Zhu']",
"Lijun Zhang, Mehrdad Mahdavi, Rong Jin, Tianbao Yang, Shenghuo Zhu"
] |
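A minimal sketch of the recovery idea for the ridge-regression special case, assuming a low-rank data matrix: solve the projected problem, take its dual solution, and map it back through the original data. The dimensions and Gaussian projection are illustrative; the paper treats more general losses.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, lam = 300, 5000, 200, 1.0
X = rng.normal(size=(n, 10)) @ rng.normal(size=(10, d))  # low-rank data matrix
y = rng.normal(size=n)

R = rng.normal(size=(d, m)) / np.sqrt(m)   # random projection to m dims
Z = X @ R                                  # projected data

# Dual Random Projection idea: solve the (cheap) projected problem, take its
# DUAL solution, and map it back through the ORIGINAL data matrix.
alpha = np.linalg.solve(Z @ Z.T + lam * np.eye(n), y)    # dual of projected ridge
w_rec = X.T @ alpha                                      # recovered d-dim solution

w_opt = X.T @ np.linalg.solve(X @ X.T + lam * np.eye(n), y)  # exact ridge solution
print(np.linalg.norm(w_rec - w_opt) / np.linalg.norm(w_opt)) # small relative error
```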
cs.LG cs.AI | null | 1211.3212 | null | null | http://arxiv.org/pdf/1211.3212v1 | 2012-11-14T06:45:38Z | 2012-11-14T06:45:38Z | Distributed Non-Stochastic Experts | We consider the online distributed non-stochastic experts problem, where the
distributed system consists of one coordinator node that is connected to $k$
sites, and the sites are required to communicate with each other via the
coordinator. At each time-step $t$, one of the $k$ site nodes has to pick an
expert from the set $\{1, \ldots, n\}$, and the same site receives information about
payoffs of all experts for that round. The goal of the distributed system is to
minimize regret at time horizon $T$, while simultaneously keeping communication
to a minimum.
The two extreme solutions to this problem are: (i) Full communication: This
essentially simulates the non-distributed setting to obtain the optimal
$O(\sqrt{\log(n)T})$ regret bound at the cost of $T$ communication. (ii) No
communication: Each site runs an independent copy: the regret is
$O(\sqrt{\log(n)kT})$ and the communication is 0. This paper shows the
difficulty of simultaneously achieving regret asymptotically better than
$\sqrt{kT}$ and communication better than $T$. We give a novel algorithm that
for an oblivious adversary achieves a non-trivial trade-off: regret
$O(\sqrt{k^{5(1+\epsilon)/6} T})$ and communication $O(T/k^{\epsilon})$, for
any value of $\epsilon \in (0, 1/5)$. We also consider a variant of the model,
where the coordinator picks the expert. In this model, we show that the
label-efficient forecaster of Cesa-Bianchi et al. (2005) already gives a
strategy that is near-optimal in the regret vs. communication trade-off.
| [
"Varun Kanade, Zhenming Liu, Bozidar Radunovic",
"['Varun Kanade' 'Zhenming Liu' 'Bozidar Radunovic']"
] |
stat.ML cs.LG | null | 1211.3295 | null | null | http://arxiv.org/pdf/1211.3295v2 | 2013-09-27T15:56:21Z | 2012-11-14T12:56:06Z | Order-independent constraint-based causal structure learning | We consider constraint-based methods for causal structure learning, such as
the PC-, FCI-, RFCI- and CCD- algorithms (Spirtes et al. (2000, 1993),
Richardson (1996), Colombo et al. (2012), Claassen et al. (2013)). The first
step of all these algorithms consists of the PC-algorithm. This algorithm is
known to be order-dependent, in the sense that the output can depend on the
order in which the variables are given. This order-dependence is a minor issue
in low-dimensional settings. We show, however, that it can be very pronounced
in high-dimensional settings, where it can lead to highly variable results. We
propose several modifications of the PC-algorithm (and hence also of the other
algorithms) that remove part or all of this order-dependence. All proposed
modifications are consistent in high-dimensional settings under the same
conditions as their original counterparts. We compare the PC-, FCI-, and
RFCI-algorithms and their modifications in simulation studies and on a yeast
gene expression data set. We show that our modifications yield similar
performance in low-dimensional settings and improved performance in
high-dimensional settings. All software is implemented in the R-package pcalg.
| [
"Diego Colombo and Marloes H. Maathuis",
"['Diego Colombo' 'Marloes H. Maathuis']"
] |
cs.SI cs.DS cs.LG physics.soc-ph stat.ML | null | 1211.3412 | null | null | http://arxiv.org/pdf/1211.3412v1 | 2012-11-14T01:48:37Z | 2012-11-14T01:48:37Z | Network Sampling: From Static to Streaming Graphs | Network sampling is integral to the analysis of social, information, and
biological networks. Since many real-world networks are massive in size,
continuously evolving, and/or distributed in nature, the network structure is
often sampled in order to facilitate study. For these reasons, a more thorough
and complete understanding of network sampling is critical to support the field
of network science. In this paper, we outline a framework for the general
problem of network sampling, by highlighting the different objectives,
population and units of interest, and classes of network sampling methods. In
addition, we propose a spectrum of computational models for network sampling
methods, ranging from the traditionally studied model based on the assumption
of a static domain to a more challenging model that is appropriate for
streaming domains. We design a family of sampling methods based on the concept
of graph induction that generalize across the full spectrum of computational
models (from static to streaming) while efficiently preserving many of the
topological properties of the input graphs. Furthermore, we demonstrate how
traditional static sampling algorithms can be modified for graph streams for
each of the three main classes of sampling methods: node, edge, and
topology-based sampling. Our experimental results indicate that our proposed
family of sampling methods more accurately preserves the underlying properties
of the graph for both static and streaming graphs. Finally, we study the impact
of network sampling algorithms on the parameter estimation and performance
evaluation of relational classification algorithms.
| [
"Nesreen K. Ahmed and Jennifer Neville and Ramana Kompella",
"['Nesreen K. Ahmed' 'Jennifer Neville' 'Ramana Kompella']"
] |
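The sketch below gives the flavor of edge-stream node sampling with graph induction; it is a simplified, PIES-like illustration rather than an implementation of the paper's methods (the reservoir bookkeeping here is deliberately naive).

```python
import random

def pies_like_sample(edge_stream, max_nodes, seed=0):
    """Streaming node sampling with partial graph induction: nodes enter a
    reservoir via the edges that arrive; every later edge whose endpoints
    are both in the reservoir is added to the sampled subgraph."""
    random.seed(seed)
    nodes, edges, seen = set(), [], 0
    for u, v in edge_stream:
        for w in (u, v):
            if w in nodes:
                continue
            seen += 1
            if len(nodes) < max_nodes:
                nodes.add(w)
            elif random.random() < max_nodes / seen:   # reservoir replacement
                nodes.discard(random.choice(tuple(nodes)))
                nodes.add(w)
        if u in nodes and v in nodes:                  # graph induction step
            edges.append((u, v))
    # Keep only edges whose endpoints survived to the end of the stream.
    return nodes, [(u, v) for u, v in edges if u in nodes and v in nodes]

stream = [(random.randint(0, 999), random.randint(0, 999)) for _ in range(20000)]
nodes, edges = pies_like_sample(stream, max_nodes=100)
print(len(nodes), 'nodes,', len(edges), 'induced edges')
```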
cs.LG math.NA stat.ML | null | 1211.3444 | null | null | http://arxiv.org/pdf/1211.3444v1 | 2012-11-14T22:05:09Z | 2012-11-14T22:05:09Z | Spectral Clustering: An empirical study of Approximation Algorithms and
its Application to the Attrition Problem | Clustering is the problem of separating a set of objects into groups (called
clusters) so that objects within the same cluster are more similar to each
other than to those in different clusters. Spectral clustering is a now
well-known method for clustering which utilizes the spectrum of the data
similarity matrix to perform this separation. Since the method relies on
solving an eigenvector problem, it is computationally expensive for large
datasets. To overcome this constraint, approximation methods have been
developed which aim to reduce running time while maintaining accurate
classification. In this article, we summarize and experimentally evaluate
several approximation methods for spectral clustering. From an applications
standpoint, we employ spectral clustering to solve the so-called attrition
problem, where one aims to identify from a set of employees those who are
likely to voluntarily leave the company from those who are not. Our study sheds
light on the empirical performance of existing approximate spectral clustering
methods and shows the applicability of these methods in an important
business optimization problem.
| [
"['B. Cung' 'T. Jin' 'J. Ramirez' 'A. Thompson' 'C. Boutsidis' 'D. Needell']",
"B. Cung, T. Jin, J. Ramirez, A. Thompson, C. Boutsidis and D. Needell"
] |
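For reference, this is the exact spectral clustering pipeline whose eigenvector computation the surveyed approximation methods try to avoid: build an affinity, form the normalized Laplacian, embed with the bottom k eigenvectors, and run k-means. The kernel bandwidth is illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=400, noise=0.05, random_state=0)

W = np.exp(-cdist(X, X, 'sqeuclidean') / 0.05)        # affinity matrix
d = W.sum(axis=1)
L_sym = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))  # normalized Laplacian
vals, vecs = np.linalg.eigh(L_sym)                    # the O(n^3) bottleneck
U = vecs[:, :2]                                       # k=2 smallest eigenvectors
U /= np.linalg.norm(U, axis=1, keepdims=True)         # row normalization
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(U)
print(np.bincount(labels))                            # two balanced clusters
```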
cs.NA cs.LG math.NA | 10.1109/TNNLS.2013.2271507 | 1211.3500 | null | null | http://arxiv.org/abs/1211.3500v2 | 2013-06-25T03:06:52Z | 2012-11-15T05:50:30Z | Accelerated Canonical Polyadic Decomposition by Using Mode Reduction | Canonical Polyadic (or CANDECOMP/PARAFAC, CP) decompositions (CPD) are widely
applied to analyze high order tensors. Existing CPD methods use alternating
least square (ALS) iterations and hence need to unfold tensors to each of the
$N$ modes frequently, which is one major bottleneck of efficiency for
large-scale data and especially when $N$ is large. To overcome this problem, in
this paper we propose a new CPD method which converts the original $N$th
($N>3$) order tensor to a 3rd-order tensor first. Then the full CPD is realized
by decomposing this mode reduced tensor followed by a Khatri-Rao product
projection procedure. This approach is quite efficient, as unfolding to each of
the $N$ modes is avoided, and dimensionality reduction can also be easily
incorporated to further improve the efficiency. We show that, under mild
conditions, any $N$th-order CPD can be converted into a 3rd-order case but
without destroying the essential uniqueness, and theoretically gives the same
results as direct $N$-way CPD methods. Simulations show that, compared with
state-of-the-art CPD methods, the proposed method is more efficient and escapes
from local solutions more easily.
| [
"['Guoxu Zhou' 'Andrzej Cichocki' 'Shengli Xie']",
"Guoxu Zhou, Andrzej Cichocki, and Shengli Xie"
] |
cs.NE cs.LG stat.ML | null | 1211.3711 | null | null | http://arxiv.org/pdf/1211.3711v1 | 2012-11-14T19:25:21Z | 2012-11-14T19:25:21Z | Sequence Transduction with Recurrent Neural Networks | Many machine learning tasks can be expressed as the transformation---or
\emph{transduction}---of input sequences into output sequences: speech
recognition, machine translation, protein secondary structure prediction and
text-to-speech to name but a few. One of the key challenges in sequence
transduction is learning to represent both the input and output sequences in a
way that is invariant to sequential distortions such as shrinking, stretching
and translating. Recurrent neural networks (RNNs) are a powerful sequence
learning architecture that has proven capable of learning such representations.
However, RNNs traditionally require a pre-defined alignment between the input
and output sequences to perform transduction. This is a severe limitation since
\emph{finding} the alignment is the most difficult aspect of many sequence
transduction problems. Indeed, even determining the length of the output
sequence is often challenging. This paper introduces an end-to-end,
probabilistic sequence transduction system, based entirely on RNNs, that is in
principle able to transform any input sequence into any finite, discrete output
sequence. Experimental results for phoneme recognition are provided on the
TIMIT speech corpus.
| [
"Alex Graves",
"['Alex Graves']"
] |
cs.LG cs.AI math.OC stat.ML | null | 1211.3831 | null | null | http://arxiv.org/pdf/1211.3831v3 | 2013-03-07T13:36:08Z | 2012-11-16T08:54:08Z | Objective Improvement in Information-Geometric Optimization | Information-Geometric Optimization (IGO) is a unified framework of stochastic
algorithms for optimization problems. Given a family of probability
distributions, IGO turns the original optimization problem into a new
maximization problem on the parameter space of the probability distributions.
IGO updates the parameter of the probability distribution along the natural
gradient, taken with respect to the Fisher metric on the parameter manifold,
aiming at maximizing an adaptive transform of the objective function. IGO
recovers several known algorithms as particular instances: for the family of
Bernoulli distributions IGO recovers PBIL, for the family of Gaussian
distributions the pure rank-mu CMA-ES update is recovered, and for exponential
families in expectation parametrization the cross-entropy/ML method is
recovered. This article provides a theoretical justification for the IGO
framework, by proving that any step size not greater than 1 guarantees monotone
improvement over the course of optimization, in terms of q-quantile values of
the objective function f. The range of admissible step sizes is independent of
f and its domain. We extend the result to cover the case of different step
sizes for blocks of the parameters in the IGO algorithm. Moreover, we prove
that expected fitness improves over time when fitness-proportional selection is
applied, in which case the RPP algorithm is recovered.
| [
"['Youhei Akimoto' 'Yann Ollivier']",
"Youhei Akimoto (INRIA Saclay - Ile de France), Yann Ollivier (LRI)"
] |
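As a concrete instance of the framework, here is a minimal PBIL sketch, which the article identifies as IGO on independent Bernoulli distributions: quantile-based selection followed by a small step of the distribution parameter toward the elite mean. Population sizes and the OneMax objective are illustrative.

```python
import numpy as np

def pbil_onemax(n_bits=30, pop=50, elite=10, lr=0.1, iters=100, seed=0):
    """PBIL sketch: IGO on the family of independent Bernoulli distributions.
    The parameter p is moved toward the empirical mean of the best samples,
    a finite-sample version of the natural-gradient update."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)
    for _ in range(iters):
        samples = (rng.random((pop, n_bits)) < p).astype(float)
        fitness = samples.sum(axis=1)                  # OneMax objective
        best = samples[np.argsort(-fitness)[:elite]]   # quantile-based selection
        p = (1 - lr) * p + lr * best.mean(axis=0)
    return p

print(np.round(pbil_onemax(), 2))   # probabilities drift toward all-ones
```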
cs.GT cs.LG | null | 1211.3955 | null | null | http://arxiv.org/pdf/1211.3955v1 | 2012-11-16T17:07:33Z | 2012-11-16T17:07:33Z | On Calibrated Predictions for Auction Selection Mechanisms | Calibration is a basic property for prediction systems, and algorithms for
achieving it are well-studied in both statistics and machine learning. In many
applications, however, the predictions are used to make decisions that select
which observations are made. This makes calibration difficult, as adjusting
predictions to achieve calibration changes future data. We focus on
click-through-rate (CTR) prediction for search ad auctions. Here, CTR
predictions are used by an auction that determines which ads are shown, and we
want to maximize the value generated by the auction.
We show that certain natural notions of calibration can be impossible to
achieve, depending on the details of the auction. We also show that it can be
impossible to maximize auction efficiency while using calibrated predictions.
Finally, we give conditions under which calibration is achievable and
simultaneously maximizes auction efficiency: roughly speaking, bids and queries
must not contain information about CTRs that is not already captured by the
predictions.
| [
"H. Brendan McMahan and Omkar Muralidharan",
"['H. Brendan McMahan' 'Omkar Muralidharan']"
] |
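To ground the terminology, a small sketch of how calibration of CTR predictions can be checked by binning (the paper's point is that auction-induced selection can make this property unattainable); the synthetic miscalibrated predictor here is purely illustrative.

```python
import numpy as np

def calibration_table(p_pred, clicks, n_bins=10):
    """Binned calibration check: within each predicted-CTR bin, compare the
    mean prediction to the observed click rate. A calibrated predictor has
    the two columns roughly equal."""
    bins = np.minimum((p_pred * n_bins).astype(int), n_bins - 1)
    rows = []
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            rows.append((b, p_pred[mask].mean(), clicks[mask].mean()))
    return rows

rng = np.random.default_rng(0)
ctr_true = rng.beta(2, 20, size=50000)
p_pred = np.clip(ctr_true * 1.3, 0, 1)            # systematically over-predicts
clicks = (rng.random(50000) < ctr_true).astype(float)
for b, mean_pred, obs_rate in calibration_table(p_pred, clicks):
    print(f'bin {b}: predicted {mean_pred:.3f}, observed {obs_rate:.3f}')
```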
cs.LG stat.ML | null | 1211.3966 | null | null | http://arxiv.org/pdf/1211.3966v3 | 2014-10-15T20:18:33Z | 2012-11-16T17:48:42Z | Lasso Screening Rules via Dual Polytope Projection | Lasso is a widely used regression technique to find sparse representations.
When the dimension of the feature space and the number of samples are extremely
large, solving the Lasso problem remains challenging. To improve the efficiency
of solving large-scale Lasso problems, El Ghaoui and his colleagues have
proposed the SAFE rules which are able to quickly identify the inactive
predictors, i.e., predictors that have $0$ components in the solution vector.
Then, the inactive predictors or features can be removed from the optimization
problem to reduce its scale. By transforming the standard Lasso to its dual
form, it can be shown that the inactive predictors include the set of inactive
constraints on the optimal dual solution. In this paper, we propose an
efficient and effective screening rule via Dual Polytope Projections (DPP),
which is mainly based on the uniqueness and nonexpansiveness of the optimal
dual solution due to the fact that the feasible set in the dual space is a
convex and closed polytope. Moreover, we show that our screening rule can be
extended to identify inactive groups in group Lasso. To the best of our
knowledge, there is currently no "exact" screening rule for group Lasso. We
have evaluated our screening rule using synthetic and real data sets. Results
show that our rule is more effective in identifying inactive predictors than
existing state-of-the-art screening rules for Lasso.
| [
"['Jie Wang' 'Peter Wonka' 'Jieping Ye']",
"Jie Wang, Peter Wonka, Jieping Ye"
] |
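A rough sketch of screening in action, using what I understand to be the basic (non-sequential) DPP-style test anchored at the known dual optimum for lambda_max; when the bound is valid, discarded features are zero in the Lasso solution. The regularization level and data are illustrative, and this basic rule is most effective for lambda near lambda_max (the sequential variants extend the range).

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d = 100, 2000
X = rng.normal(size=(n, d))
y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=n)

lam_max = np.max(np.abs(X.T @ y))   # smallest lambda giving the all-zero solution
lam = 0.9 * lam_max
theta0 = y / lam_max                # known dual optimum at lam_max

# Screening test: discard feature j if even the worst-case movement of the
# dual optimum (bounded via nonexpansiveness) cannot activate it.
radius = np.linalg.norm(y) * (1.0 / lam - 1.0 / lam_max)
keep = np.abs(X.T @ theta0) + np.linalg.norm(X, axis=0) * radius >= 1.0
print('features kept:', int(keep.sum()), 'of', d)

# Solve the reduced problem; screened-out features are zero in the solution.
# (sklearn's Lasso minimizes (1/(2n))*||y - Xw||^2 + alpha*||w||_1.)
model = Lasso(alpha=lam / n, fit_intercept=False).fit(X[:, keep], y)
beta = np.zeros(d)
beta[keep] = model.coef_
```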
cs.LG cs.NA math.AG math.CO stat.ML | null | 1211.4116 | null | null | http://arxiv.org/pdf/1211.4116v4 | 2014-08-19T15:00:30Z | 2012-11-17T12:23:36Z | The Algebraic Combinatorial Approach for Low-Rank Matrix Completion | We present a novel algebraic combinatorial view on low-rank matrix completion
based on studying relations between a few entries with tools from algebraic
geometry and matroid theory. The intrinsic locality of the approach allows for
the treatment of single entries in a closed theoretical and practical
framework. More specifically, apart from introducing an algebraic combinatorial
theory of low-rank matrix completion, we present probability-one algorithms to
decide whether a particular entry of the matrix can be completed. We also
describe methods to complete that entry from a few others, and to estimate the
error which is incurred by any method completing that entry. Furthermore, we
show how known results on matrix completion and their sampling assumptions can
be related to our new perspective and interpreted in terms of a completability
phase transition.
| [
"Franz J. Kir\\'aly, Louis Theran, Ryota Tomioka",
"['Franz J. Király' 'Louis Theran' 'Ryota Tomioka']"
] |
stat.ML cs.LG | null | 1211.4142 | null | null | http://arxiv.org/pdf/1211.4142v1 | 2012-11-17T18:28:30Z | 2012-11-17T18:28:30Z | Data Clustering via Principal Direction Gap Partitioning | We explore the geometrical interpretation of the PCA based clustering
algorithm Principal Direction Divisive Partitioning (PDDP). We give several
examples where this algorithm breaks down, and suggest a new method, gap
partitioning, which takes into account natural gaps in the data between
clusters. Geometric features of the PCA space are derived and illustrated and
experimental results are given which show our method is comparable on the
datasets used in the original paper on PDDP.
| [
"['Ralph Abbey' 'Jeremy Diepenbrock' 'Amy Langville' 'Carl Meyer'\n 'Shaina Race' 'Dexin Zhou']",
"Ralph Abbey, Jeremy Diepenbrock, Amy Langville, Carl Meyer, Shaina\n Race, Dexin Zhou"
] |
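The core of the proposed split is easy to state in code: project onto the first principal direction and cut at the widest gap between consecutive projections instead of at the mean. This sketch is illustrative, not the authors' implementation.

```python
import numpy as np

def principal_direction_gap_split(X):
    """Split a dataset in two along its first principal direction, cutting
    at the largest gap between consecutive projections rather than at the
    mean (the choice made by plain PDDP)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ Vt[0]                      # scores on the first principal axis
    order = np.argsort(proj)
    gaps = np.diff(proj[order])
    cut = np.argmax(gaps)                  # widest empty interval
    threshold = (proj[order[cut]] + proj[order[cut + 1]]) / 2.0
    return proj <= threshold               # boolean cluster assignment

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2))])
left = principal_direction_gap_split(X)
print(left.sum(), (~left).sum())           # two groups of ~50
```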
cs.GT cs.DS cs.LG | null | 1211.4150 | null | null | http://arxiv.org/pdf/1211.4150v1 | 2012-11-17T19:30:52Z | 2012-11-17T19:30:52Z | Efficiently Learning from Revealed Preference | In this paper, we consider the revealed preferences problem from a learning
perspective. Every day, a price vector and a budget is drawn from an unknown
distribution, and a rational agent buys his most preferred bundle according to
some unknown utility function, subject to the given prices and budget
constraint. We wish not only to find a utility function which rationalizes a
finite set of observations, but to produce a hypothesis valuation function
which accurately predicts the behavior of the agent in the future. We give
efficient algorithms with polynomial sample-complexity for agents with linear
valuation functions, as well as for agents with linearly separable, concave
valuation functions with bounded second derivative.
| [
"['Morteza Zadimoghaddam' 'Aaron Roth']",
"Morteza Zadimoghaddam and Aaron Roth"
] |
cs.LG stat.ML | null | 1211.4246 | null | null | http://arxiv.org/pdf/1211.4246v5 | 2014-08-19T15:12:19Z | 2012-11-18T19:06:37Z | What Regularized Auto-Encoders Learn from the Data Generating
Distribution | What do auto-encoders learn about the underlying data generating
distribution? Recent work suggests that some auto-encoder variants do a good
job of capturing the local manifold structure of data. This paper clarifies
some of these previous observations by showing that minimizing a particular
form of regularized reconstruction error yields a reconstruction function that
locally characterizes the shape of the data generating density. We show that
the auto-encoder captures the score (derivative of the log-density with respect
to the input). It contradicts previous interpretations of reconstruction error
as an energy function. Unlike previous results, the theorems provided here are
completely generic and do not depend on the parametrization of the
auto-encoder: they show what the auto-encoder would tend to if given enough
capacity and examples. These results are for a contractive training criterion
we show to be similar to the denoising auto-encoder training criterion with
small corruption noise, but with contraction applied to the whole
reconstruction function rather than just the encoder. Similarly to score matching,
one can consider the proposed training criterion as a convenient alternative to
maximum likelihood because it does not involve a partition function. Finally,
we show how an approximate Metropolis-Hastings MCMC can be set up to recover
samples from the estimated distribution, and this is confirmed in sampling
experiments.
| [
"['Guillaume Alain' 'Yoshua Bengio']",
"Guillaume Alain and Yoshua Bengio"
] |
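A tiny numerical illustration of the score result, assuming unit-Gaussian data where the optimal denoiser is linear and can be fit by regression: (r(x) - x) / sigma^2 approaches d/dx log p(x) = -x as the corruption noise shrinks (with an O(sigma^2) bias at finite noise).

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.1
x = rng.normal(size=100000)                 # data from p(x) = N(0, 1)
x_noisy = x + sigma * rng.normal(size=x.shape)

# "Train" the optimal denoising auto-encoder for this toy case: a linear
# regression of clean x on corrupted x approximates r(x~) = E[x | x~].
slope, intercept = np.polyfit(x_noisy, x, 1)
r = lambda t: slope * t + intercept

# The theorem's prediction: (r(x) - x) / sigma^2 estimates the score
# d/dx log p(x), which for a unit Gaussian is exactly -x.
test = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print('estimated score:', np.round((r(test) - test) / sigma**2, 2))
print('true score     :', -test)
```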
cs.LG cs.CE q-bio.QM stat.ML | 10.5121/ijbb.2013.3202 | 1211.4289 | null | null | http://arxiv.org/abs/1211.4289v3 | 2013-07-11T10:29:29Z | 2012-11-19T02:59:14Z | Application of three graph Laplacian based semi-supervised learning
methods to protein function prediction problem | Protein function prediction is an important problem in modern biology. In
this paper, the un-normalized, symmetric normalized, and random walk graph
Laplacian based semi-supervised learning methods will be applied to the
integrated network combined from multiple networks to predict the functions of
all yeast proteins in these multiple networks. These multiple networks are
networks created from Pfam domain structure, co-participation in a protein
complex, protein-protein interaction network, genetic interaction network, and
network created from cell cycle gene expression measurements. Multiple networks
are combined with fixed weights instead of using convex optimization to
determine the combination weights, due to the high time complexity of the
convex optimization method. This simple combination method will not affect the
accuracy performance measures of the three semi-supervised learning methods.
Experiment results show that the un-normalized and symmetric normalized graph
Laplacian based methods perform slightly better than random walk graph
Laplacian based method for integrated network. Moreover, the accuracy
performance measures of these three semi-supervised learning methods for
integrated network are much better than the best accuracy performance measures
of these three methods for the individual network.
| [
"['Loc Tran']",
"Loc Tran"
] |
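A minimal sketch of the three graph Laplacian variants being compared, applied to a single toy network (not the integrated multi-network setting): labels are spread by solving (I + reg*L) F = Y. All names and the toy graph are illustrative.

```python
import numpy as np

def laplacian_ssl(W, y, labeled, variant='sym', reg=1.0):
    """Semi-supervised scoring f = (I + reg*L)^-1 Y for three Laplacian
    variants. W is a symmetric affinity matrix; y holds class indices;
    `labeled` flags the nodes with known function."""
    n = W.shape[0]
    d = W.sum(axis=1)
    if variant == 'un':                        # un-normalized: L = D - W
        L = np.diag(d) - W
    elif variant == 'sym':                     # symmetric: I - D^-1/2 W D^-1/2
        L = np.eye(n) - W / np.sqrt(np.outer(d, d))
    else:                                      # random walk: I - D^-1 W
        L = np.eye(n) - W / d[:, None]
    Y = np.zeros((n, y.max() + 1))
    Y[labeled, y[labeled]] = 1.0               # one-hot targets on labeled nodes
    F = np.linalg.solve(np.eye(n) + reg * L, Y)
    return F.argmax(axis=1)

# Toy network: two 10-node cliques joined by one edge.
W = np.zeros((20, 20))
W[:10, :10] = 1; W[10:, 10:] = 1; np.fill_diagonal(W, 0); W[9, 10] = W[10, 9] = 1
y = np.array([0] * 10 + [1] * 10)
labeled = np.zeros(20, dtype=bool); labeled[[0, 19]] = True
print(laplacian_ssl(W, y, labeled))            # recovers both cliques
```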
stat.ML cs.LG stat.ME | null | 1211.4321 | null | null | http://arxiv.org/pdf/1211.4321v1 | 2012-11-19T07:40:51Z | 2012-11-19T07:40:51Z | Bayesian nonparametric models for ranked data | We develop a Bayesian nonparametric extension of the popular Plackett-Luce
choice model that can handle an infinite number of choice items. Our framework
is based on the theory of random atomic measures, with the prior specified by a
gamma process. We derive a posterior characterization and a simple and
effective Gibbs sampler for posterior simulation. We develop a time-varying
extension of our model, and apply it to the New York Times lists of weekly
bestselling books.
| [
"Francois Caron (INRIA Bordeaux - Sud-Ouest, IMB), Yee Whye Teh",
"['Francois Caron' 'Yee Whye Teh']"
] |
cs.IT cs.LG math.IT | null | 1211.4384 | null | null | http://arxiv.org/pdf/1211.4384v1 | 2012-11-19T12:19:45Z | 2012-11-19T12:19:45Z | A Sensing Policy Based on Confidence Bounds and a Restless Multi-Armed
Bandit Model | A sensing policy for the restless multi-armed bandit problem with stationary
but unknown reward distributions is proposed. The work is presented in the
context of cognitive radios in which the bandit problem arises when deciding
which parts of the spectrum to sense and exploit. It is shown that the proposed
policy attains an asymptotically logarithmic weak regret rate when the rewards
are bounded and either independent and identically distributed or finite-state
Markovian.
Simulation results verifying uniformly logarithmic weak regret are also
presented. The proposed policy is a centrally coordinated index policy, in
which the index of a frequency band is comprised of a sample mean term and a
confidence term. The sample mean term promotes spectrum exploitation whereas
the confidence term encourages exploration. The confidence term is designed
such that the time interval between consecutive sensing instances of any
suboptimal band grows exponentially. This exponential growth between suboptimal
sensing time instances leads to logarithmically growing weak regret. Simulation
results demonstrate that the proposed policy performs better than other similar
methods in the literature.
| [
"['Jan Oksanen' 'Visa Koivunen' 'H. Vincent Poor']",
"Jan Oksanen, Visa Koivunen, H. Vincent Poor"
] |
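For intuition, here is a UCB1-style sketch of such an index policy in the i.i.d. rewards case: sense the band with the largest sample mean plus confidence term. The paper's policy and its restless/Markovian analysis differ in the exact confidence construction; the numbers here are illustrative.

```python
import numpy as np

def ucb_sensing(mean_rewards, horizon=20000, seed=0):
    """UCB1-style index policy sketch: sense the band whose index (sample
    mean plus confidence term) is largest. The confidence term shrinks on
    frequently sensed bands, so suboptimal bands are revisited rarely."""
    rng = np.random.default_rng(seed)
    k = len(mean_rewards)
    counts = np.ones(k)                                   # sense each band once
    sums = np.array([float(rng.random() < m) for m in mean_rewards])
    regret = 0.0
    for t in range(k, horizon):
        index = sums / counts + np.sqrt(2 * np.log(t) / counts)
        j = int(np.argmax(index))
        sums[j] += rng.random() < mean_rewards[j]         # Bernoulli availability
        counts[j] += 1
        regret += max(mean_rewards) - mean_rewards[j]
    return regret, counts

regret, counts = ucb_sensing([0.2, 0.5, 0.6])
print('weak regret:', round(regret, 1), 'sensing counts:', counts)
```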
cs.LG stat.ML | null | 1211.4410 | null | null | http://arxiv.org/pdf/1211.4410v4 | 2013-01-25T22:03:25Z | 2012-11-19T13:33:55Z | Mixture Gaussian Process Conditional Heteroscedasticity | Generalized autoregressive conditional heteroscedasticity (GARCH) models have
long been considered as one of the most successful families of approaches for
volatility modeling in financial return series. In this paper, we propose an
alternative approach based on methodologies widely used in the field of
statistical machine learning. Specifically, we propose a novel nonparametric
Bayesian mixture of Gaussian process regression models, each component of which
models the noise variance process that contaminates the observed data as a
separate latent Gaussian process driven by the observed data. This way, we
essentially obtain a mixture Gaussian process conditional heteroscedasticity
(MGPCH) model for volatility modeling in financial return series. We impose a
nonparametric prior with power-law nature over the distribution of the model
mixture components, namely the Pitman-Yor process prior, to allow for better
capturing modeled data distributions with heavy tails and skewness. Finally, we
provide a copula-based approach for obtaining a predictive posterior for the
covariances over the asset returns modeled by means of a postulated MGPCH
model. We evaluate the efficacy of our approach in a number of benchmark
scenarios, and compare its performance to state-of-the-art methodologies.
| [
"['Emmanouil A. Platanios' 'Sotirios P. Chatzis']",
"Emmanouil A. Platanios and Sotirios P. Chatzis"
] |
cs.IT cs.LG math.IT | 10.1109/JSTSP.2013.2258657 | 1211.4518 | null | null | http://arxiv.org/abs/1211.4518v3 | 2013-03-25T21:29:44Z | 2012-11-19T17:40:54Z | Hypothesis Testing in Feedforward Networks with Broadcast Failures | Consider a countably infinite set of nodes, which sequentially make decisions
between two given hypotheses. Each node takes a measurement of the underlying
truth, observes the decisions from some immediate predecessors, and makes a
decision between the given hypotheses. We consider two classes of broadcast
failures: 1) each node broadcasts a decision to the other nodes, subject to
random erasure in the form of a binary erasure channel; 2) each node broadcasts
a randomly flipped decision to the other nodes in the form of a binary
symmetric channel. We are interested in whether there exists a decision
strategy consisting of a sequence of likelihood ratio tests such that the node
decisions converge in probability to the underlying truth. In both cases, we
show that if each node only learns from a bounded number of immediate
predecessors, then there does not exist a decision strategy such that the
decisions converge in probability to the underlying truth. However, in case 1,
we show that if each node learns from an unboundedly growing number of
predecessors, then the decisions converge in probability to the underlying
truth, even when the erasure probabilities converge to 1. We also derive the
convergence rate of the error probability. In case 2, we show that if each node
learns from all of its previous predecessors, then the decisions converge in
probability to the underlying truth when the flipping probabilities of the
binary symmetric channels are bounded away from 1/2. In the case where the
flipping probabilities converge to 1/2, we derive a necessary condition on the
convergence rate of the flipping probabilities such that the decisions still
converge to the underlying truth. We also explicitly characterize the
relationship between the convergence rate of the error probability and the
convergence rate of the flipping probabilities.
| [
"['Zhenliang Zhang' 'Edwin K. P. Chong' 'Ali Pezeshki' 'William Moran']",
"Zhenliang Zhang, Edwin K. P. Chong, Ali Pezeshki, and William Moran"
] |
cs.LG cs.CV cs.IT math.IT stat.ML | 10.1109/TSP.2014.2318138 | 1211.4657 | null | null | http://arxiv.org/abs/1211.4657v2 | 2014-05-01T15:56:00Z | 2012-11-20T03:22:45Z | Forest Sparsity for Multi-channel Compressive Sensing | In this paper, we investigate a new compressive sensing model for
multi-channel sparse data where each channel can be represented as a
hierarchical tree and different channels are highly correlated. Therefore, the
full data could follow the forest structure, and we call this property
\emph{forest sparsity}. It exploits both intra- and inter-channel correlations
and enriches the family of existing model-based compressive sensing theories.
The proposed theory indicates that only $\mathcal{O}(Tk+\log(N/k))$
measurements are required for multi-channel data with forest sparsity, where
$T$ is the number of channels, $N$ and $k$ are the length and sparsity number
of each channel respectively. This result is much better than
$\mathcal{O}(Tk+T\log(N/k))$ of tree sparsity, $\mathcal{O}(Tk+k\log(N/k))$ of
joint sparsity, and far better than $\mathcal{O}(Tk+Tk\log(N/k))$ of standard
sparsity. In addition, we extend the forest sparsity theory to the multiple
measurement vectors problem, where the measurement matrix is a block-diagonal
matrix. The result shows that the required measurement bound can be the same as
that for dense random measurement matrix, when the data shares equal energy in
each channel. A new algorithm is developed and applied on four example
applications to validate the benefit of the proposed model. Extensive
experiments demonstrate the effectiveness and efficiency of the proposed theory
and algorithm.
| [
"['Chen Chen' 'Yeqing Li' 'Junzhou Huang']",
"Chen Chen and Yeqing Li and Junzhou Huang"
] |
stat.ML cs.LG | null | 1211.4753 | null | null | http://arxiv.org/pdf/1211.4753v1 | 2012-11-20T14:22:07Z | 2012-11-20T14:22:07Z | A unifying representation for a class of dependent random measures | We present a general construction for dependent random measures based on
thinning Poisson processes on an augmented space. The framework is not
restricted to dependent versions of a specific nonparametric model, but can be
applied to all models that can be represented using completely random measures.
Several existing dependent random measures can be seen as specific cases of
this framework. Interesting properties of the resulting measures are derived
and the efficacy of the framework is demonstrated by constructing a
covariate-dependent latent feature model and topic model that obtain superior
predictive performance.
| [
"Nicholas J. Foti, Joseph D. Futoma, Daniel N. Rockmore, Sinead\n Williamson",
"['Nicholas J. Foti' 'Joseph D. Futoma' 'Daniel N. Rockmore'\n 'Sinead Williamson']"
] |
stat.ML cs.LG | null | 1211.4798 | null | null | http://arxiv.org/pdf/1211.4798v1 | 2012-11-20T16:29:13Z | 2012-11-20T16:29:13Z | A survey of non-exchangeable priors for Bayesian nonparametric models | Dependent nonparametric processes extend distributions over measures, such as
the Dirichlet process and the beta process, to give distributions over
collections of measures, typically indexed by values in some covariate space.
Such models are appropriate priors when exchangeability assumptions do not
hold, and instead we want our model to vary fluidly with some set of
covariates. Since the concept of dependent nonparametric processes was
formalized by MacEachern [1], there have been a number of models proposed and
used in the statistics and machine learning literatures. Many of these models
exhibit underlying similarities, an understanding of which, we hope, will help
in selecting an appropriate prior, developing new models, and leveraging
inference techniques.
| [
"Nicholas J. Foti, Sinead Williamson",
"['Nicholas J. Foti' 'Sinead Williamson']"
] |
cs.CV cs.LG stat.ML | null | 1211.4860 | null | null | http://arxiv.org/pdf/1211.4860v1 | 2012-11-20T20:54:30Z | 2012-11-20T20:54:30Z | Domain Adaptations for Computer Vision Applications | A basic assumption of statistical learning theory is that train and test data
are drawn from the same underlying distribution. Unfortunately, this assumption
does not hold in many applications. Instead, ample labeled data might exist in a
particular `source' domain while inference is needed in another, `target'
domain. Domain adaptation methods leverage labeled data from both domains to
improve classification on unseen data in the target domain. In this work we
survey domain transfer learning methods for various application domains with
a focus on recent work in Computer Vision.
| [
"Oscar Beijbom",
"['Oscar Beijbom']"
] |
cs.LG stat.ML | null | 1211.4888 | null | null | http://arxiv.org/pdf/1211.4888v1 | 2012-11-20T21:50:22Z | 2012-11-20T21:50:22Z | A Traveling Salesman Learns Bayesian Networks | Structure learning of Bayesian networks is an important problem that arises
in numerous machine learning applications. In this work, we present a novel
approach for learning the structure of Bayesian networks using the solution of
an appropriately constructed traveling salesman problem. In our approach, one
computes an optimal ordering (partially ordered set) of random variables using
methods for the traveling salesman problem. This ordering significantly reduces
the search space for the subsequent greedy optimization that computes the final
structure of the Bayesian network. We demonstrate our approach of learning
Bayesian networks on real world census and weather datasets. In both cases, we
demonstrate that the approach very accurately captures dependencies between
random variables. We check the accuracy of the predictions based on independent
studies in both application domains.
| [
"Tuhin Sahai, Stefan Klus and Michael Dellnitz",
"['Tuhin Sahai' 'Stefan Klus' 'Michael Dellnitz']"
] |
cs.IT cs.LG math.IT stat.ML | null | 1211.4909 | null | null | http://arxiv.org/pdf/1211.4909v7 | 2013-09-29T15:56:47Z | 2012-11-21T01:06:49Z | Fast Marginalized Block Sparse Bayesian Learning Algorithm | The performance of sparse signal recovery from noise corrupted,
underdetermined measurements can be improved if both sparsity and correlation
structure of signals are exploited. One typical correlation structure is the
intra-block correlation in block sparse signals. To exploit this structure, a
framework, called block sparse Bayesian learning (BSBL), has been proposed
recently. Algorithms derived from this framework showed superior performance
but they are not very fast, which limits their applications. This work derives
an efficient algorithm from this framework, using a marginalized likelihood
maximization method. Compared to existing BSBL algorithms, it has close
recovery performance but is much faster. Therefore, it is more suitable for
large scale datasets and applications requiring real-time implementation.
| [
"['Benyuan Liu' 'Zhilin Zhang' 'Hongqi Fan' 'Qiang Fu']",
"Benyuan Liu, Zhilin Zhang, Hongqi Fan, Qiang Fu"
] |
stat.ML cs.LG stat.ME | 10.1214/14-AOAS717 | 1211.5037 | null | null | http://arxiv.org/abs/1211.5037v3 | 2014-08-01T06:34:00Z | 2012-11-21T14:09:56Z | Bayesian nonparametric Plackett-Luce models for the analysis of
preferences for college degree programmes | In this paper we propose a Bayesian nonparametric model for clustering
partial ranking data. We start by developing a Bayesian nonparametric extension
of the popular Plackett-Luce choice model that can handle an infinite number of
choice items. Our framework is based on the theory of random atomic measures,
with the prior specified by a completely random measure. We characterise the
posterior distribution given data, and derive a simple and effective Gibbs
sampler for posterior simulation. We then develop a Dirichlet process mixture
extension of our model and apply it to investigate the clustering of
preferences for college degree programmes amongst Irish secondary school
graduates. The existence of clusters of applicants who have similar preferences
for degree programmes is established and we determine that subject matter and
geographical location of the third level institution characterise these
clusters.
| [
"Fran\\c{c}ois Caron, Yee Whye Teh, Thomas Brendan Murphy",
"['François Caron' 'Yee Whye Teh' 'Thomas Brendan Murphy']"
] |
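A minimal sketch of the finite-item Plackett-Luce sampling step underlying the abstract above (the paper's nonparametric extension to infinitely many choice items and its Gibbs sampler are not shown; the weights are illustrative):

```python
import numpy as np

def sample_plackett_luce(weights, rng):
    """Draw one ranking from a finite Plackett-Luce model.

    Items are picked one at a time, each with probability proportional
    to its weight among the items not yet ranked.
    """
    remaining = list(range(len(weights)))
    ranking = []
    w = np.asarray(weights, dtype=float)
    while remaining:
        p = w[remaining] / w[remaining].sum()
        pick = rng.choice(len(remaining), p=p)
        ranking.append(remaining.pop(pick))
    return ranking

rng = np.random.default_rng(0)
print(sample_plackett_luce([5.0, 2.0, 1.0], rng))  # e.g. [0, 1, 2]
```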
cs.LG | null | 1211.5063 | null | null | http://arxiv.org/pdf/1211.5063v2 | 2013-02-16T00:35:48Z | 2012-11-21T15:40:11Z | On the difficulty of training Recurrent Neural Networks | There are two widely known issues with properly training Recurrent Neural
Networks, the vanishing and the exploding gradient problems detailed in Bengio
et al. (1994). In this paper we attempt to improve the understanding of the
underlying issues by exploring these problems from an analytical, a geometric
and a dynamical systems perspective. Our analysis is used to justify a simple
yet effective solution. We propose a gradient norm clipping strategy to deal
with exploding gradients and a soft constraint for the vanishing gradients
problem. We validate empirically our hypothesis and proposed solutions in the
experimental section.
| [
"['Razvan Pascanu' 'Tomas Mikolov' 'Yoshua Bengio']",
"Razvan Pascanu and Tomas Mikolov and Yoshua Bengio"
] |
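The gradient norm clipping strategy named in the abstract above admits a very short sketch; the threshold value below is an arbitrary assumption, not the paper's recommendation:

```python
import numpy as np

def clip_gradient(grad, threshold=1.0):
    """Norm clipping for exploding gradients: if the L2 norm of the
    gradient exceeds the threshold, rescale it so the norm equals the
    threshold. The direction is preserved; only the magnitude is capped."""
    norm = np.linalg.norm(grad)
    if norm > threshold:
        grad = grad * (threshold / norm)
    return grad

g = np.array([3.0, 4.0])                 # norm 5
print(clip_gradient(g, threshold=1.0))   # [0.6 0.8], norm 1
```

The design point is that clipping rescales rather than truncates coordinates, so the descent direction is unchanged.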
cs.AI cs.LG | null | 1211.5189 | null | null | http://arxiv.org/pdf/1211.5189v2 | 2013-10-22T21:51:42Z | 2012-11-22T02:38:16Z | Optimally fuzzy temporal memory | Any learner with the ability to predict the future of a structured
time-varying signal must maintain a memory of the recent past. If the signal
has a characteristic timescale relevant to future prediction, the memory can be
a simple shift register---a moving window extending into the past, requiring
storage resources that grow linearly with the timescale to be represented.
However, an independent general purpose learner cannot a priori know the
characteristic prediction-relevant timescale of the signal. Moreover, many
naturally occurring signals show scale-free long-range correlations, implying
that the natural prediction-relevant timescale is essentially unbounded. Hence
the learner should maintain information from the longest possible timescale
allowed by resource availability. Here we construct a fuzzy memory system that
optimally sacrifices the temporal accuracy of information in a scale-free
fashion in order to represent prediction-relevant information from
exponentially long timescales. Using several illustrative examples, we
demonstrate the advantage of the fuzzy memory system over a shift register in
time series forecasting of natural signals. When the available storage
resources are limited, we suggest that a general purpose learner would be
better off committing to such a fuzzy memory system.
| [
"['Karthik H. Shankar' 'Marc W. Howard']",
"Karthik H. Shankar and Marc W. Howard"
] |
cs.SE cs.DC cs.LG | null | 1211.5227 | null | null | http://arxiv.org/pdf/1211.5227v1 | 2012-11-22T08:33:09Z | 2012-11-22T08:33:09Z | Service Composition Design Pattern for Autonomic Computing Systems using
Association Rule based Learning and Service-Oriented Architecture | In this paper we present a Service Injection and composition Design Pattern
for Unstructured Peer-to-Peer networks, which is designed with Aspect-oriented
design patterns, and amalgamation of the Strategy, Worker Object, and
Check-List Design Patterns used to design Self-Adaptive Systems. It applies
self-reconfiguration planes dynamically, without interruption or intervention
by the administrator, to handle service failures at the servers. When a client
requests a complex service, Service Composition should be performed to fulfil
the request. If a service is not available in memory, it
will be injected as Aspectual Feature Module code. We used Service Oriented
Architecture (SOA) with Web Services in Java to implement the composite Design
Pattern. As far as we know, there are no studies on the composition of design
patterns for the peer-to-peer computing domain. The pattern is described using a
Java-like notation for the classes and interfaces. Simple UML class and
sequence diagrams are depicted.
| [
"Vishnuvardhan Mannava and T. Ramesh",
"['Vishnuvardhan Mannava' 'T. Ramesh']"
] |
cs.DS cs.LG cs.NA stat.ML | null | 1211.5414 | null | null | http://arxiv.org/pdf/1211.5414v1 | 2012-11-23T06:11:54Z | 2012-11-23T06:11:54Z | Analysis of a randomized approximation scheme for matrix multiplication | This note gives a simple analysis of a randomized approximation scheme for
matrix multiplication proposed by Sarlos (2006) based on a random rotation
followed by uniform column sampling. The result follows from a matrix version
of Bernstein's inequality and a tail inequality for quadratic forms in
subgaussian random vectors.
| [
"Daniel Hsu and Sham M. Kakade and Tong Zhang",
"['Daniel Hsu' 'Sham M. Kakade' 'Tong Zhang']"
] |
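A rough sketch of the analysed scheme (random rotation followed by uniform column sampling); a dense QR-based rotation stands in for the structured rotations used in practice, and the estimate is rescaled to be unbiased:

```python
import numpy as np

def approx_matmul(A, B, k, rng):
    """Approximate A @ B by rotating the shared dimension with a random
    orthogonal matrix, then uniformly sampling k column/row pairs and
    rescaling so the estimate is unbiased."""
    n = A.shape[1]
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random rotation
    Ar, Br = A @ Q, Q.T @ B                           # (A Q)(Q^T B) = A B
    idx = rng.choice(n, size=k, replace=False)        # uniform sampling
    return (n / k) * Ar[:, idx] @ Br[idx, :]

rng = np.random.default_rng(0)
A, B = rng.standard_normal((50, 200)), rng.standard_normal((200, 40))
err = np.linalg.norm(approx_matmul(A, B, 100, rng) - A @ B)
print(f"relative error: {err / np.linalg.norm(A @ B):.3f}")
```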
cs.SC cs.LG | null | 1211.5590 | null | null | http://arxiv.org/pdf/1211.5590v1 | 2012-11-23T20:42:41Z | 2012-11-23T20:42:41Z | Theano: new features and speed improvements | Theano is a linear algebra compiler that optimizes a user's
symbolically-specified mathematical computations to produce efficient low-level
implementations. In this paper, we present new features and efficiency
improvements to Theano, and benchmarks demonstrating Theano's performance
relative to Torch7, a recently introduced machine learning library, and to
RNNLM, a C++ library targeted at recurrent neural networks.
| [
"['Frédéric Bastien' 'Pascal Lamblin' 'Razvan Pascanu' 'James Bergstra'\n 'Ian Goodfellow' 'Arnaud Bergeron' 'Nicolas Bouchard'\n 'David Warde-Farley' 'Yoshua Bengio']",
"Fr\\'ed\\'eric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra,\n Ian Goodfellow, Arnaud Bergeron, Nicolas Bouchard, David Warde-Farley, Yoshua\n Bengio"
] |
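Not from the paper, but a tiny example of the workflow Theano targets: declare a symbolic computation, then let the compiler turn it into an optimized callable (assumes Theano is installed):

```python
import theano
import theano.tensor as T

# Symbolically specify the computation ...
x = T.dmatrix('x')
W = T.dmatrix('W')
y = T.tanh(T.dot(x, W))

# ... then compile it into an efficient low-level implementation.
f = theano.function([x, W], y)
print(f([[1.0, 2.0]], [[0.1], [0.2]]))
```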
cs.LG stat.ML | null | 1211.5687 | null | null | http://arxiv.org/pdf/1211.5687v1 | 2012-11-24T17:51:57Z | 2012-11-24T17:51:57Z | Texture Modeling with Convolutional Spike-and-Slab RBMs and Deep
Extensions | We apply the spike-and-slab Restricted Boltzmann Machine (ssRBM) to texture
modeling. The ssRBM with tiled-convolution weight sharing (TssRBM) achieves or
surpasses the state-of-the-art on texture synthesis and inpainting by
parametric models. We also develop a novel RBM model with a spike-and-slab
visible layer and binary variables in the hidden layer. This model is designed
to be stacked on top of the TssRBM. We show the resulting deep belief network
(DBN) is a powerful generative model that improves on single-layer models and
is capable of modeling not only single high-resolution and challenging textures
but also multiple textures.
| [
"Heng Luo, Pierre Luc Carrier, Aaron Courville, Yoshua Bengio",
"['Heng Luo' 'Pierre Luc Carrier' 'Aaron Courville' 'Yoshua Bengio']"
] |
stat.ML cs.LG stat.CO | null | 1211.5901 | null | null | http://arxiv.org/pdf/1211.5901v1 | 2012-11-26T09:55:27Z | 2012-11-26T09:55:27Z | Bayesian learning of noisy Markov decision processes | We consider the inverse reinforcement learning problem, that is, the problem
of learning from, and then predicting or mimicking, a controller based on
state/action data. We propose a statistical model for such data, derived from
the structure of a Markov decision process. Adopting a Bayesian approach to
inference, we show how latent variables of the model can be estimated, and how
predictions about actions can be made, in a unified framework. A new Markov
chain Monte Carlo (MCMC) sampler is devised for simulation from the posterior
distribution. The sampler includes a parameter-expansion step, which is shown to
be essential for good convergence properties of the MCMC sampler. As an
illustration, the method is applied to learning a human controller.
| [
"['Sumeetpal S. Singh' 'Nicolas Chopin' 'Nick Whiteley']",
"Sumeetpal S. Singh and Nicolas Chopin and Nick Whiteley"
] |
cs.LG math.OC | null | 1211.6013 | null | null | http://arxiv.org/pdf/1211.6013v2 | 2013-07-14T00:09:14Z | 2012-11-26T16:27:18Z | Online Stochastic Optimization with Multiple Objectives | In this paper we propose a general framework to characterize and solve the
stochastic optimization problems with multiple objectives underlying many real
world learning applications. We first propose a projection based algorithm
which attains an $O(T^{-1/3})$ convergence rate. Then, by leveraging the
theory of Lagrangian duality from constrained optimization, we devise a novel primal-dual
stochastic approximation algorithm which attains the optimal convergence rate
of $O(T^{-1/2})$ for general Lipschitz continuous objectives.
| [
"['Mehrdad Mahdavi' 'Tianbao Yang' 'Rong Jin']",
"Mehrdad Mahdavi, Tianbao Yang, Rong Jin"
] |
cs.LG stat.ML | null | 1211.6085 | null | null | http://arxiv.org/pdf/1211.6085v5 | 2014-04-17T19:07:11Z | 2012-11-26T20:35:12Z | Random Projections for Linear Support Vector Machines | Let X be a data matrix of rank \rho, whose rows represent n points in
d-dimensional space. The linear support vector machine constructs a hyperplane
separator that maximizes the 1-norm soft margin. We develop a new oblivious
dimension reduction technique which is precomputed and can be applied to any
input matrix X. We prove that, with high probability, the margin and minimum
enclosing ball in the feature space are preserved to within \epsilon-relative
error, ensuring comparable generalization as in the original space in the case
of classification. For regression, we show that the margin is preserved to
\epsilon-relative error with high probability. We present extensive experiments
with real and synthetic data to support our theory.
| [
"['Saurabh Paul' 'Christos Boutsidis' 'Malik Magdon-Ismail'\n 'Petros Drineas']",
"Saurabh Paul, Christos Boutsidis, Malik Magdon-Ismail, Petros Drineas"
] |
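An illustrative sketch of the idea on synthetic data, using a plain Gaussian random projection as a stand-in for the precomputed oblivious sketches analysed in the paper; the dataset sizes and target dimension are arbitrary assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, n_features=2000,
                           n_informative=20, random_state=0)

# Data-oblivious projection to r dimensions, precomputable for any input.
r = 200
rng = np.random.default_rng(0)
R = rng.standard_normal((X.shape[1], r)) / np.sqrt(r)

for name, data in [("original", X), ("projected", X @ R)]:
    acc = cross_val_score(LinearSVC(dual=True, max_iter=5000),
                          data, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```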
cs.LG stat.ML | null | 1211.6158 | null | null | http://arxiv.org/pdf/1211.6158v1 | 2012-11-26T23:13:23Z | 2012-11-26T23:13:23Z | The Interplay Between Stability and Regret in Online Learning | This paper considers the stability of online learning algorithms and its
implications for learnability (bounded regret). We introduce a novel quantity
called {\em forward regret} that intuitively measures how good an online
learning algorithm is if it is allowed a one-step look-ahead into the future.
We show that given stability, bounded forward regret is equivalent to bounded
regret. We also show that the existence of an algorithm with bounded regret
implies the existence of a stable algorithm with bounded regret and bounded
forward regret. The equivalence results apply to general, possibly non-convex
problems. To the best of our knowledge, our analysis provides the first general
connection between stability and regret in the online setting that is not
restricted to a particular class of algorithms. Our stability-regret connection
provides a simple recipe for analyzing regret incurred by any online learning
algorithm. Using our framework, we analyze several existing online learning
algorithms as well as the "approximate" versions of algorithms like RDA that
solve an optimization problem at each iteration. Our proofs are simpler than
existing analysis for the respective algorithms, show a clear trade-off between
stability and forward regret, and provide tighter regret bounds in some cases.
Furthermore, using our recipe, we analyze "approximate" versions of several
algorithms such as follow-the-regularized-leader (FTRL) that require solving
an optimization problem at each step.
| [
"['Ankan Saha' 'Prateek Jain' 'Ambuj Tewari']",
"Ankan Saha and Prateek Jain and Ambuj Tewari"
] |
cs.LG stat.ML | null | 1211.6248 | null | null | http://arxiv.org/pdf/1211.6248v2 | 2012-12-04T13:50:19Z | 2012-11-27T09:36:22Z | A simple non-parametric Topic Mixture for Authors and Documents | This article reviews the Author-Topic Model and presents a new non-parametric
extension based on the Hierarchical Dirichlet Process. The extension is
especially suitable when no prior information about the number of components
necessary is available. A blocked Gibbs sampler is described, with the focus on
staying as close as possible to the original model while incurring only the
minimum theoretical and implementation overhead necessary.
| [
"['Arnim Bleier']",
"Arnim Bleier"
] |
cs.LG math.OC stat.ML | null | 1211.6302 | null | null | http://arxiv.org/pdf/1211.6302v3 | 2013-10-18T17:02:13Z | 2012-11-27T13:46:59Z | Duality between subgradient and conditional gradient methods | Given a convex optimization problem and its dual, there are many possible
first-order algorithms. In this paper, we show the equivalence between mirror
descent algorithms and algorithms generalizing the conditional gradient method.
This is done through convex duality, and implies notably that for certain
problems, such as for supervised machine learning problems with non-smooth
losses or problems regularized by non-smooth regularizers, the primal
subgradient method and the dual conditional gradient method are formally
equivalent. The dual interpretation leads to a form of line search for mirror
descent, as well as guarantees of convergence for primal-dual certificates.
| [
"Francis Bach (INRIA Paris - Rocquencourt, LIENS)",
"['Francis Bach']"
] |
cs.LG | null | 1211.6340 | null | null | http://arxiv.org/pdf/1211.6340v1 | 2012-11-09T09:54:29Z | 2012-11-09T09:54:29Z | An Approach of Improving Students Academic Performance by using k means
clustering algorithm and Decision tree | Improving students' academic performance is not an easy task for the academic
community of higher learning. The academic performance of engineering and
science students during their first year at university is a turning point in
their educational path and usually affects their Grade Point Average (GPA) in a
decisive manner. Student evaluation factors such as class quizzes, mid-term and
final exams, assignments and lab work are studied. It is recommended that all
of this correlated information be conveyed to the class teacher before the
final exam is conducted. This study will help teachers to reduce the drop-out
ratio to a significant level and improve the performance of students. In this
paper, we present a hybrid procedure based on the decision tree method of data
mining and data clustering that enables academicians to predict students' GPA,
based on which the instructor can take the necessary steps to improve student
academic performance.
| [
"Md. Hedayetul Islam Shovon, Mahfuza Haque",
"['Md. Hedayetul Islam Shovon' 'Mahfuza Haque']"
] |
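A hypothetical sketch of the hybrid procedure described above, with made-up student scores; the paper's actual features, cluster count and tree settings are not specified here:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Hypothetical per-student scores: quizzes, mid-term, assignments, lab work.
scores = rng.uniform(0, 100, size=(300, 4))
passed = (scores.mean(axis=1) > 60).astype(int)   # toy pass/fail label

# Step 1: group students into performance clusters with k-means.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)

# Step 2: a decision tree predicts the outcome from scores plus cluster id.
features = np.column_stack([scores, clusters])
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(features, passed)
print("training accuracy:", tree.score(features, passed))
```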
cs.LG | 10.1007/s10994-016-5546-z | 1211.6581 | null | null | http://arxiv.org/abs/1211.6581v5 | 2016-01-27T20:24:53Z | 2012-11-28T11:42:36Z | Multi-Target Regression via Input Space Expansion: Treating Targets as
Inputs | In many practical applications of supervised learning the task involves the
prediction of multiple target variables from a common set of input variables.
When the prediction targets are binary the task is called multi-label
classification, while when the targets are continuous the task is called
multi-target regression. In both tasks, target variables often exhibit
statistical dependencies and exploiting them in order to improve predictive
accuracy is a core challenge. A family of multi-label classification methods
address this challenge by building a separate model for each target on an
expanded input space where other targets are treated as additional input
variables. Despite the success of these methods in the multi-label
classification domain, their applicability and effectiveness in multi-target
regression have not been studied until now. In this paper, we introduce two new
methods for multi-target regression, called Stacked Single-Target and Ensemble
of Regressor Chains, by adapting two popular multi-label classification methods
of this family. Furthermore, we highlight an inherent problem of these methods
- a discrepancy of the values of the additional input variables between
training and prediction - and develop extensions that use out-of-sample
estimates of the target variables during training in order to tackle this
problem. The results of an extensive experimental evaluation carried out on a
large and diverse collection of datasets show that, when the discrepancy is
appropriately mitigated, the proposed methods attain consistent improvements
over the independent regressions baseline. Moreover, two versions of Ensemble
of Regression Chains perform significantly better than four state-of-the-art
methods including regularization-based multi-task learning methods and a
multi-objective random forest approach.
| [
"['Eleftherios Spyromitros-Xioufis' 'Grigorios Tsoumakas' 'William Groves'\n 'Ioannis Vlahavas']",
"Eleftherios Spyromitros-Xioufis, Grigorios Tsoumakas, William Groves,\n Ioannis Vlahavas"
] |
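A minimal sketch of the Stacked Single-Target idea with the out-of-sample fix described above, using ridge regressors on synthetic data (all modelling choices are illustrative, not the paper's exact setup):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
Y = X[:, :3] @ rng.standard_normal((3, 2)) + 0.1 * rng.standard_normal((200, 2))

# First stage: one independent model per target.
first = [Ridge().fit(X, Y[:, j]) for j in range(Y.shape[1])]

# Out-of-sample first-stage estimates are used as extra inputs during
# training, mitigating the train/prediction discrepancy noted above.
Z = np.column_stack([cross_val_predict(Ridge(), X, Y[:, j], cv=5)
                     for j in range(Y.shape[1])])

# Second stage: each target sees X plus estimates of all targets.
second = [Ridge().fit(np.hstack([X, Z]), Y[:, j]) for j in range(Y.shape[1])]

# At prediction time the first-stage outputs fill in the extra inputs.
Z_new = np.column_stack([m.predict(X) for m in first])
Y_hat = np.column_stack([m.predict(np.hstack([X, Z_new])) for m in second])
print("fit MSE:", float(((Y_hat - Y) ** 2).mean()))
```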
cs.NI cs.AI cs.IT cs.LG math.IT | 10.1109/TWC.2014.022014.130840 | 1211.6616 | null | null | http://arxiv.org/abs/1211.6616v3 | 2014-04-04T07:37:14Z | 2012-11-28T14:48:36Z | TACT: A Transfer Actor-Critic Learning Framework for Energy Saving in
Cellular Radio Access Networks | Recent works have validated the possibility of improving energy efficiency in
radio access networks (RANs), achieved by dynamically turning on/off some base
stations (BSs). In this paper, we extend the research over BS switching
operations, which should match up with traffic load variations. Instead of
depending on the dynamic traffic loads, which are still quite challenging to
forecast precisely, we first formulate the traffic variations as a Markov
decision process. Afterwards, in order to foresightedly minimize the energy
consumption of RANs, we design a reinforcement learning framework based BS
switching operation scheme. Furthermore, to avoid the underlying curse of
dimensionality in reinforcement learning, a transfer actor-critic algorithm
(TACT), which utilizes the transferred learning expertise in historical periods
or neighboring regions, is proposed and provably converges. Finally, we
evaluate our proposed scheme by extensive simulations under various practical
configurations and show that the proposed TACT algorithm contributes to a
performance jumpstart and demonstrates the feasibility of significant energy
efficiency improvement at the expense of tolerable delay performance.
| [
"Rongpeng Li, Zhifeng Zhao, Xianfu Chen, Jacques Palicot, Honggang\n Zhang",
"['Rongpeng Li' 'Zhifeng Zhao' 'Xianfu Chen' 'Jacques Palicot'\n 'Honggang Zhang']"
] |
cs.LG stat.ML | 10.1007/978-3-642-33460-3_51 | 1211.6653 | null | null | http://arxiv.org/abs/1211.6653v1 | 2012-11-28T16:50:23Z | 2012-11-28T16:50:23Z | Nonparametric Bayesian Mixed-effect Model: a Sparse Gaussian Process
Approach | Multi-task learning models using Gaussian processes (GP) have been developed
and successfully applied in various applications. The main difficulty with this
approach is the computational cost of inference using the union of examples
from all tasks. Therefore sparse solutions, that avoid using the entire data
directly and instead use a set of informative "representatives" are desirable.
The paper investigates this problem for the grouped mixed-effect GP model where
each individual response is given by a fixed-effect, taken from one of a set of
unknown groups, plus a random individual effect function that captures
variations among individuals. Such models have been widely used in previous
work but no sparse solutions have been developed. The paper presents the first
sparse solution for such problems, showing how the sparse approximation can be
obtained by maximizing a variational lower bound on the marginal likelihood,
generalizing ideas from single-task Gaussian processes to handle the
mixed-effect model as well as grouping. Experiments using artificial and real
data validate the approach showing that it can recover the performance of
inference with the full sample, that it outperforms baseline methods, and that
it outperforms state-of-the-art sparse solutions for other multi-task GP
formulations.
| [
"['Yuyang Wang' 'Roni Khardon']",
"Yuyang Wang, Roni Khardon"
] |
stat.ML cs.LG cs.NA math.OC | 10.1137/120900629 | 1211.6687 | null | null | http://arxiv.org/abs/1211.6687v4 | 2013-05-31T15:06:57Z | 2012-11-28T18:05:56Z | Robustness Analysis of Hottopixx, a Linear Programming Model for
Factoring Nonnegative Matrices | Although nonnegative matrix factorization (NMF) is NP-hard in general, it has
been shown very recently that it is tractable under the assumption that the
input nonnegative data matrix is close to being separable (separability
requires that all columns of the input matrix belong to the cone spanned by a
small subset of these columns). Since then, several algorithms have been
designed to handle this subclass of NMF problems. In particular, Bittorf,
Recht, R\'e and Tropp (`Factoring nonnegative matrices with linear programs',
NIPS 2012) proposed a linear programming model, referred to as Hottopixx. In
this paper, we provide a new and more general robustness analysis of their
method. In particular, we design a provably more robust variant using a
post-processing strategy which allows us to deal with duplicates and near
duplicates in the dataset.
| [
"['Nicolas Gillis']",
"Nicolas Gillis"
] |
cs.AI cs.CG cs.LG | null | 1211.6727 | null | null | http://arxiv.org/pdf/1211.6727v1 | 2012-11-28T20:10:42Z | 2012-11-28T20:10:42Z | Graph Laplacians on Singular Manifolds: Toward understanding complex
spaces: graph Laplacians on manifolds with singularities and boundaries | Recently, much of the existing work in manifold learning has been done under
the assumption that the data is sampled from a manifold without boundaries and
singularities or that the functions of interest are evaluated away from such
points. At the same time, it can be argued that singularities and boundaries
are an important aspect of the geometry of realistic data.
In this paper we consider the behavior of graph Laplacians at points at or
near boundaries and two main types of other singularities: intersections, where
different manifolds come together and sharp "edges", where a manifold sharply
changes direction. We show that the behavior of the graph Laplacian near these
singularities is quite different from that in the interior of the manifolds. In
fact, a phenomenon somewhat reminiscent of the Gibbs effect in the analysis of
Fourier series can be observed in the behavior of the graph Laplacian near such
points. Unlike in the interior of the domain, where the graph Laplacian converges
to the Laplace-Beltrami operator, near singularities the graph Laplacian tends to a
first-order differential operator, which exhibits different scaling behavior as
a function of the kernel width. One important implication is that while points
near the singularities occupy only a small part of the total volume, the
difference in scaling results in a disproportionately large contribution to the
total behavior. Another significant finding is that while the scaling behavior
of the operator is the same near different types of singularities, they are
very distinct at a more refined level of analysis.
We believe that a comprehensive understanding of these structures in addition
to the standard case of a smooth manifold can take us a long way toward better
methods for analysis of complex non-linear data and can lead to significant
progress in algorithm design.
| [
"Mikhail Belkin and Qichao Que and Yusu Wang and Xueyuan Zhou",
"['Mikhail Belkin' 'Qichao Que' 'Yusu Wang' 'Xueyuan Zhou']"
] |
stat.AP cs.LG q-bio.QM | null | 1211.6834 | null | null | http://arxiv.org/pdf/1211.6834v1 | 2012-11-29T07:54:45Z | 2012-11-29T07:54:45Z | On unbiased performance evaluation for protein inference | This letter is a response to the comments of Serang (2012) on Huang and He
(2012) in Bioinformatics. Serang (2012) claimed that the parameters for the
Fido algorithm should be specified using the grid search method in Serang et
al. (2010) so as to generate a deserved accuracy in performance comparison. It
seems that it is an argument on parameter tuning. However, it is indeed the
issue of how to conduct an unbiased performance evaluation for comparing
different protein inference algorithms. In this letter, we explain why we did
not use grid search for parameter selection in Huang and He (2012) and
show that this procedure may result in an over-estimated performance that is
unfair to competing algorithms. In fact, this issue has also been pointed out
by Li and Radivojac (2012).
| [
"Zengyou He, Ting Huang, Peijun Zhu",
"['Zengyou He' 'Ting Huang' 'Peijun Zhu']"
] |
cs.LG stat.CO stat.ME stat.ML | null | 1211.6851 | null | null | http://arxiv.org/pdf/1211.6851v1 | 2012-11-29T09:22:19Z | 2012-11-29T09:22:19Z | Classification Recouvrante Bas\'ee sur les M\'ethodes \`a Noyau | The overlapping clustering problem is an important learning issue in which
clusters are not mutually exclusive and each object may belong simultaneously
to several clusters. This paper presents a kernel based method that produces
overlapping clusters on a high feature space using mercer kernel techniques to
improve separability of input patterns. The proposed method, called
OKM-K(Overlapping $k$-means based kernel method), extends OKM (Overlapping
$k$-means) method to produce overlapping schemes. Experiments are performed on
an overlapping dataset, and empirical results obtained with OKM-K outperform
results obtained with OKM.
| [
"[\"Chiheb-Eddine Ben N'Cir\" 'Nadia Essoussi']",
"Chiheb-Eddine Ben N'Cir and Nadia Essoussi"
] |
stat.ML cs.LG stat.ME | null | 1211.6859 | null | null | http://arxiv.org/pdf/1211.6859v1 | 2012-11-29T09:35:30Z | 2012-11-29T09:35:30Z | Overlapping clustering based on kernel similarity metric | Producing overlapping schemes is a major issue in clustering. Recently proposed
overlapping methods rely on the search for an optimal covering and are based
on different metrics, such as Euclidean distance and I-Divergence, used to
measure closeness between observations. In this paper, we propose the use of
another measure for overlapping clustering, based on a kernel similarity metric.
We also estimate the number of overlapped clusters using the Gram matrix.
Experiments on both the Iris and EachMovie datasets show the correctness of the
estimated number of clusters and show that the measure based on the kernel
similarity metric improves precision, recall and F-measure in overlapping
clustering.
| [
"Chiheb-Eddine Ben N'Cir and Nadia Essoussi and Patrice Bertrand",
"[\"Chiheb-Eddine Ben N'Cir\" 'Nadia Essoussi' 'Patrice Bertrand']"
] |
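The abstract above does not spell out how the number of overlapped clusters is read off the Gram matrix; a common heuristic, shown here purely as an assumption, is to look for a large gap in its leading eigenvalues:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics.pairwise import rbf_kernel

X = load_iris().data
K = rbf_kernel(X, gamma=0.5)            # Gram matrix of an RBF kernel

eigvals = np.sort(np.linalg.eigvalsh(K))[::-1]
gaps = eigvals[:9] - eigvals[1:10]      # gaps among the leading eigenvalues
k_est = int(np.argmax(gaps)) + 1
print("leading eigenvalues:", np.round(eigvals[:5], 2))
print("estimated number of clusters:", k_est)
```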
cs.CL cs.LG | null | 1211.6887 | null | null | http://arxiv.org/pdf/1211.6887v1 | 2012-11-29T11:35:25Z | 2012-11-29T11:35:25Z | Automating rule generation for grammar checkers | In this paper, I describe several approaches to automatic or semi-automatic
development of symbolic rules for grammar checkers from the information
contained in corpora. The rules obtained this way are an important addition to
manually-created rules that seem to dominate in rule-based checkers. However,
the manual process of creation of rules is costly, time-consuming and
error-prone. It seems therefore advisable to use machine-learning algorithms to
create the rules automatically or semi-automatically. The results obtained seem
to corroborate my initial hypothesis that symbolic machine learning algorithms
can be useful for acquiring new rules for grammar checking. It turns out,
however, that for practical uses, error corpora cannot be the sole source of
information used in grammar checking. I suggest therefore that only by using
different approaches, grammar-checkers, or more generally, computer-aided
proofreading tools, will be able to cover most frequent and severe mistakes and
avoid false alarms that seem to distract users.
| [
"Marcin Mi{\\l}kowski",
"['Marcin Miłkowski']"
] |
cs.LG cs.AI | null | 1211.6898 | null | null | http://arxiv.org/pdf/1211.6898v1 | 2012-11-29T12:54:58Z | 2012-11-29T12:54:58Z | On the Use of Non-Stationary Policies for Stationary Infinite-Horizon
Markov Decision Processes | We consider infinite-horizon stationary $\gamma$-discounted Markov Decision
Processes, for which it is known that there exists a stationary optimal policy.
Using Value and Policy Iteration with some error $\epsilon$ at each iteration,
it is well-known that one can compute stationary policies that are
$\frac{2\gamma}{(1-\gamma)^2}\epsilon$-optimal. After arguing that this
guarantee is tight, we develop variations of Value and Policy Iteration for
computing non-stationary policies that can be up to
$\frac{2\gamma}{1-\gamma}\epsilon$-optimal, which constitutes a significant
improvement in the usual situation when $\gamma$ is close to 1. Surprisingly,
this shows that the problem of "computing near-optimal non-stationary policies"
is much simpler than that of "computing near-optimal stationary policies".
| [
"Bruno Scherrer (INRIA Nancy - Grand Est / LORIA), Boris Lesner (INRIA\n Nancy - Grand Est / LORIA)",
"['Bruno Scherrer' 'Boris Lesner']"
] |
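A toy sketch of the mechanism behind the result above: run value iteration, record the greedy policy of each iterate, and act by cycling through the last few recorded policies instead of committing to a single stationary one. The MDP below is hypothetical:

```python
import numpy as np

def value_iteration_policies(P, R, gamma, iters):
    """Run value iteration, recording the greedy policy at each step.

    P[a] is the S x S transition matrix of action a; R is S x A.
    Executing the last m recorded policies cyclically gives a
    non-stationary policy of the kind discussed above.
    """
    S, A = R.shape
    V = np.zeros(S)
    policies = []
    for _ in range(iters):
        Q = np.stack([R[:, a] + gamma * P[a] @ V for a in range(A)], axis=1)
        policies.append(np.argmax(Q, axis=1))
        V = Q.max(axis=1)
    return V, policies

# Toy 2-state, 2-action MDP (hypothetical numbers).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
V, pis = value_iteration_policies(P, R, gamma=0.95, iters=50)
print(V, pis[-3:])   # cycle through the last m greedy policies when acting
```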
cs.AI cs.DL cs.LG cs.LO | 10.1007/s10817-014-9303-3 | 1211.7012 | null | null | http://arxiv.org/abs/1211.7012v3 | 2014-10-26T15:02:54Z | 2012-11-29T18:15:10Z | Learning-Assisted Automated Reasoning with Flyspeck | The considerable mathematical knowledge encoded by the Flyspeck project is
combined with external automated theorem provers (ATPs) and machine-learning
premise selection methods trained on the proofs, producing an AI system capable
of answering a wide range of mathematical queries automatically. The
performance of this architecture is evaluated in a bootstrapping scenario
emulating the development of Flyspeck from axioms to the last theorem, each
time using only the previous theorems and proofs. It is shown that 39% of the
14185 theorems could be proved in a push-button mode (without any high-level
advice and user interaction) in 30 seconds of real time on a fourteen-CPU
workstation. The necessary work involves: (i) an implementation of sound
translations of the HOL Light logic to ATP formalisms: untyped first-order,
polymorphic typed first-order, and typed higher-order, (ii) export of the
dependency information from HOL Light and ATP proofs for the machine learners,
and (iii) choice of suitable representations and methods for learning from
previous proofs, and their integration as advisors with HOL Light. This work is
described and discussed here, and an initial analysis of the body of proofs
that were found fully automatically is provided.
| [
"['Cezary Kaliszyk' 'Josef Urban']",
"Cezary Kaliszyk and Josef Urban"
] |
cs.LG math.NA math.OC q-bio.BM | null | 1211.7045 | null | null | http://arxiv.org/pdf/1211.7045v2 | 2013-04-10T13:35:21Z | 2012-11-29T20:39:41Z | Orientation Determination from Cryo-EM images Using Least Unsquared
Deviation | A major challenge in single particle reconstruction from cryo-electron
microscopy is to establish a reliable ab-initio three-dimensional model using
two-dimensional projection images with unknown orientations. Common-lines based
methods estimate the orientations without additional geometric information.
However, such methods fail when the detection rate of common-lines is too low
due to the high level of noise in the images. An approximation to the least
squares global self consistency error was obtained using convex relaxation by
semidefinite programming. In this paper we introduce a more robust global self
consistency error and show that the corresponding optimization problem can be
solved via semidefinite relaxation. In order to prevent artificial clustering
of the estimated viewing directions, we further introduce a spectral norm term
that is added as a constraint or as a regularization term to the relaxed
minimization problem. The resulting problems are solved using either the
alternating direction method of multipliers or an iteratively reweighted least
squares procedure. Numerical experiments with both simulated and real images
demonstrate that the proposed methods significantly reduce the orientation
estimation error when the detection rate of common-lines is low.
| [
"Lanhui Wang, Amit Singer, Zaiwen Wen",
"['Lanhui Wang' 'Amit Singer' 'Zaiwen Wen']"
] |
cs.CV cs.LG stat.ML | null | 1211.7219 | null | null | http://arxiv.org/pdf/1211.7219v1 | 2012-11-30T11:50:21Z | 2012-11-30T11:50:21Z | A recursive divide-and-conquer approach for sparse principal component
analysis | In this paper, a new method is proposed for sparse PCA based on the recursive
divide-and-conquer methodology. The main idea is to separate the original
sparse PCA problem into a series of much simpler sub-problems, each having a
closed-form solution. By recursively solving these sub-problems in an
analytical way, an efficient algorithm is constructed to solve the sparse PCA
problem. The algorithm only involves simple computations and is thus easy to
implement. The proposed method can also be very easily extended to other sparse
PCA problems with certain constraints, such as the nonnegative sparse PCA
problem. Furthermore, we have shown that the proposed algorithm converges to a
stationary point of the problem, and its computational complexity is
approximately linear in both data size and dimensionality. The effectiveness of
the proposed method is substantiated by extensive experiments implemented on a
series of synthetic and real data from both the reconstruction-error-minimization
and data-variance-maximization viewpoints.
| [
"Qian Zhao and Deyu Meng and Zongben Xu",
"['Qian Zhao' 'Deyu Meng' 'Zongben Xu']"
] |
cs.IT cs.LG math.IT stat.ML | null | 1211.7276 | null | null | http://arxiv.org/pdf/1211.7276v1 | 2012-11-26T15:01:15Z | 2012-11-26T15:01:15Z | Efficient algorithms for robust recovery of images from compressed data | Compressed sensing (CS) is an important theory for sub-Nyquist sampling and
recovery of compressible data. Recently, it has been extended by Pham and
Venkatesh to cope with the case where corruption to the CS data is modeled as
impulsive noise. The new formulation, termed as robust CS, combines robust
statistics and CS into a single framework to suppress outliers in the CS
recovery. To solve the newly formulated robust CS problem, Pham and Venkatesh
suggested a scheme that iteratively solves a number of CS problems, the
solutions from which converge to the true robust compressed sensing solution.
However, this scheme is rather inefficient as it has to use existing CS solvers
as a proxy. To overcome this limitation of the original robust CS algorithm, we
propose in this paper to solve the robust CS problem directly and derive more
computationally efficient algorithms by following the latest advances in
large-scale convex optimization for non-smooth regularization. Furthermore, we
also extend the robust CS formulation to various settings, including additional
affine constraints, $\ell_1$-norm loss function, mixed-norm regularization, and
multi-tasking, so as to further improve robust CS. We also derive simple but
effective algorithms to solve these extensions. We demonstrate that the new
algorithms provide much better computational advantage over the original robust
CS formulation, and effectively solve more sophisticated extensions where the
original methods simply cannot. We demonstrate the usefulness of the extensions
on several CS imaging tasks.
| [
"Duc Son Pham and Svetha Venkatesh",
"['Duc Son Pham' 'Svetha Venkatesh']"
] |
stat.ML cs.LG math.NA | null | 1211.7369 | null | null | http://arxiv.org/pdf/1211.7369v1 | 2012-11-30T20:50:40Z | 2012-11-30T20:50:40Z | Approximate Rank-Detecting Factorization of Low-Rank Tensors | We present an algorithm, AROFAC2, which detects the (CP-)rank of a degree 3
tensor and calculates its factorization into rank-one components. We provide
generative conditions for the algorithm to work and demonstrate on both
synthetic and real-world data that AROFAC2 is a potentially outperforming
alternative to the gold-standard PARAFAC, over which it has the advantages that
it can intrinsically detect the true rank, avoids spurious components, and is
stable with respect to outliers and non-Gaussian noise.
| [
"Franz J. Kir\\'aly and Andreas Ziehe",
"['Franz J. Király' 'Andreas Ziehe']"
] |
cs.LG stat.ML | null | 1212.0139 | null | null | http://arxiv.org/pdf/1212.0139v1 | 2012-12-01T17:46:34Z | 2012-12-01T17:46:34Z | Cumulative Step-size Adaptation on Linear Functions | The CSA-ES is an Evolution Strategy with Cumulative Step size Adaptation,
where the step size is adapted measuring the length of a so-called cumulative
path. The cumulative path is a combination of the previous steps realized by
the algorithm, where the importance of each step decreases with time. This
article studies the CSA-ES on composites of strictly increasing functions with
affine linear functions through the investigation of its underlying Markov
chains. Rigorous results on the change and the variation of the step size are
derived with and without cumulation. The step-size diverges geometrically fast
in most cases. Furthermore, the influence of the cumulation parameter is
studied.
| [
"['Alexandre Chotard' 'Anne Auger' 'Nikolaus Hansen']",
"Alexandre Chotard (INRIA Saclay - Ile de France, LRI), Anne Auger\n (INRIA Saclay - Ile de France), Nikolaus Hansen (INRIA Saclay - Ile de\n France)"
] |
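A compact (1,lambda)-ES with cumulative step-size adaptation, the mechanism analysed above; the constants are common textbook defaults, not necessarily those used in the paper:

```python
import numpy as np

def csa_es(f, x, sigma, iters=300, lam=10, seed=0):
    """(1,lambda)-ES with cumulative step-size adaptation (sketch).

    The cumulative path p accumulates the selected steps with decaying
    importance; the step size grows when ||p|| exceeds its expected
    length under random selection and shrinks otherwise.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    c, d = 1.0 / np.sqrt(n), 1.0              # cumulation and damping
    chi_n = np.sqrt(n) * (1 - 1 / (4 * n))    # approx. E||N(0, I_n)||
    p = np.zeros(n)
    for _ in range(iters):
        steps = rng.standard_normal((lam, n))
        cand = x + sigma * steps
        best = min(range(lam), key=lambda i: f(cand[i]))
        x = cand[best]
        p = (1 - c) * p + np.sqrt(c * (2 - c)) * steps[best]
        sigma *= np.exp((c / d) * (np.linalg.norm(p) / chi_n - 1))
    return x, sigma

x, sigma = csa_es(lambda z: np.sum(z ** 2), np.ones(10), 1.0)
print(np.sum(x ** 2), sigma)
```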
cs.CV cs.LG | null | 1212.0142 | null | null | http://arxiv.org/pdf/1212.0142v2 | 2013-04-02T18:05:46Z | 2012-12-01T18:13:03Z | Pedestrian Detection with Unsupervised Multi-Stage Feature Learning | Pedestrian detection is a problem of considerable practical interest. Adding
to the list of successful applications of deep learning methods to vision, we
report state-of-the-art and competitive results on all major pedestrian
datasets with a convolutional network model. The model uses a few new twists,
such as multi-stage features, connections that skip layers to integrate global
shape information with local distinctive motif information, and an unsupervised
method based on convolutional sparse coding to pre-train the filters at each
stage.
| [
"Pierre Sermanet and Koray Kavukcuoglu and Soumith Chintala and Yann\n LeCun",
"['Pierre Sermanet' 'Koray Kavukcuoglu' 'Soumith Chintala' 'Yann LeCun']"
] |
cs.IT cs.LG math.IT stat.ML | null | 1212.0171 | null | null | http://arxiv.org/pdf/1212.0171v1 | 2012-12-02T00:34:04Z | 2012-12-02T00:34:04Z | Message-Passing Algorithms for Quadratic Minimization | Gaussian belief propagation (GaBP) is an iterative algorithm for computing
the mean of a multivariate Gaussian distribution, or equivalently, the minimum
of a multivariate positive definite quadratic function. Sufficient conditions,
such as walk-summability, that guarantee the convergence and correctness of
GaBP are known, but GaBP may fail to converge to the correct solution given an
arbitrary positive definite quadratic function. As was observed in previous
work, the GaBP algorithm fails to converge if the computation trees produced by
the algorithm are not positive definite. In this work, we will show that the
failure modes of the GaBP algorithm can be understood via graph covers, and we
prove that a parameterized generalization of the min-sum algorithm can be used
to ensure that the computation trees remain positive definite whenever the
input matrix is positive definite. We demonstrate that the resulting algorithm
is closely related to other iterative schemes for quadratic minimization such
as the Gauss-Seidel and Jacobi algorithms. Finally, we observe, empirically,
that there always exists a choice of parameters such that the above
generalization of the GaBP algorithm converges.
| [
"['Nicholas Ruozzi' 'Sekhar Tatikonda']",
"Nicholas Ruozzi and Sekhar Tatikonda"
] |
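For reference, the classical Jacobi iteration the abstract compares against: each coordinate is updated to minimize the quadratic with the others fixed at their previous values (this is the baseline scheme, not the paper's message-passing algorithm):

```python
import numpy as np

def jacobi_minimize(A, b, iters=200):
    """Jacobi iteration for min (1/2) x^T A x - b^T x, i.e. A x = b,
    with A positive definite. Each coordinate update uses only the
    previous iterate, so all updates can happen in parallel."""
    x = np.zeros_like(b)
    D = np.diag(A)
    R = A - np.diag(D)              # off-diagonal part
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite
b = np.array([1.0, 2.0])
print(jacobi_minimize(A, b), np.linalg.solve(A, b))
```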
stat.ML cs.LG q-bio.QM | null | 1212.0388 | null | null | http://arxiv.org/pdf/1212.0388v1 | 2012-12-03T13:53:39Z | 2012-12-03T13:53:39Z | Hypergraph and protein function prediction with gene expression data | Most network-based protein (or gene) function prediction methods are based on
the assumption that the labels of two adjacent proteins in the network are
likely to be the same. However, the assumption of pairwise relationships between
proteins or genes is not complete, and the information in groups of genes that
show very similar patterns of expression and tend to have similar functions
(i.e., functional modules) is missed. The natural way to overcome the
information loss of the above assumption is to represent the gene expression
data as a hypergraph. Thus, in this paper, three hypergraph Laplacian based
semi-supervised learning methods (un-normalized, random walk, and symmetric
normalized), applied to a hypergraph constructed from the gene expression data
in order to predict the functions of yeast proteins, are introduced.
Experimental results show that the average accuracy of these three hypergraph
Laplacian based semi-supervised learning methods is the same. However, the
average accuracy of these three methods is much greater than that of the
un-normalized graph Laplacian based semi-supervised learning method (i.e., the
baseline method of this paper) applied to a gene co-expression network created
from the gene expression data.
| [
"['Loc Tran']",
"Loc Tran"
] |
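A sketch of one common hypergraph Laplacian construction (a Zhou et al.-style incidence-matrix form); the paper's exact definitions of the three Laplacians may differ:

```python
import numpy as np

def hypergraph_laplacians(H, w=None):
    """Build hypergraph Laplacians from an incidence matrix H (n x m).

    With edge weights w, vertex degrees Dv = H w and edge degrees
    De = 1^T H, one common un-normalized Laplacian is
    L = Dv - H diag(w/De) H^T, and the symmetric normalized one is
    I - Dv^{-1/2} H diag(w/De) H^T Dv^{-1/2}. Sketch only; the paper's
    exact definitions are an assumption here.
    """
    n, m = H.shape
    w = np.ones(m) if w is None else w
    De = H.sum(axis=0)                    # edge degrees
    Dv = H @ w                            # vertex degrees
    A = H @ np.diag(w / De) @ H.T
    L_un = np.diag(Dv) - A
    Dvi = np.diag(1.0 / np.sqrt(Dv))
    L_sym = np.eye(n) - Dvi @ A @ Dvi
    return L_un, L_sym

# Toy incidence matrix: 4 genes, 2 "functional module" hyperedges.
H = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
L_un, L_sym = hypergraph_laplacians(H)
print(np.round(L_un, 2))
```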
math.ST cs.LG stat.ML stat.TH | null | 1212.0463 | null | null | http://arxiv.org/pdf/1212.0463v2 | 2016-09-10T20:05:05Z | 2012-12-03T17:42:45Z | Nonparametric risk bounds for time-series forecasting | We derive generalization error bounds for traditional time-series forecasting
models. Our results hold for many standard forecasting tools including
autoregressive models, moving average models, and, more generally, linear
state-space models. These non-asymptotic bounds need only weak assumptions on
the data-generating process, yet allow forecasters to select among competing
models and to guarantee, with high probability, that their chosen model will
perform well. We motivate our techniques with and apply them to standard
economic and financial forecasting tools---a GARCH model for predicting equity
volatility and a dynamic stochastic general equilibrium model (DSGE), the
standard tool in macroeconomic forecasting. We demonstrate in particular how
our techniques can aid forecasters and policy makers in choosing models which
behave well under uncertainty and mis-specification.
| [
"Daniel J. McDonald and Cosma Rohilla Shalizi and Mark Schervish",
"['Daniel J. McDonald' 'Cosma Rohilla Shalizi' 'Mark Schervish']"
] |
stat.ML cs.LG math.OC | null | 1212.0467 | null | null | http://arxiv.org/pdf/1212.0467v1 | 2012-12-03T17:57:50Z | 2012-12-03T17:57:50Z | Low-rank Matrix Completion using Alternating Minimization | Alternating minimization represents a widely applicable and empirically
successful approach for finding low-rank matrices that best fit the given data.
For example, for the problem of low-rank matrix completion, this method is
believed to be one of the most accurate and efficient, and formed a major
component of the winning entry in the Netflix Challenge.
In the alternating minimization approach, the low-rank target matrix is
written in a bi-linear form, i.e. $X = UV^\dag$; the algorithm then alternates
between finding the best $U$ and the best $V$. Typically, each alternating step
in isolation is convex and tractable. However the overall problem becomes
non-convex and there has been almost no theoretical understanding of when this
approach yields a good result.
In this paper we present the first theoretical analysis of the performance of
alternating minimization for matrix completion, and the related problem of
matrix sensing. For both these problems, celebrated recent results have shown
that they become well-posed and tractable once certain (now standard)
conditions are imposed on the problem. We show that alternating minimization
also succeeds under similar conditions. Moreover, compared to existing results,
our paper shows that alternating minimization guarantees faster (in particular,
geometric) convergence to the true matrix, while allowing a simpler analysis.
| [
"['Prateek Jain' 'Praneeth Netrapalli' 'Sujay Sanghavi']",
"Prateek Jain, Praneeth Netrapalli and Sujay Sanghavi"
] |
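A plain alternating least squares sketch of the approach analysed above: fix V and solve for U on the observed entries, then swap. The analysed algorithm includes ingredients (e.g., careful initialization) omitted here:

```python
import numpy as np

def altmin_complete(M, mask, r, iters=50, lam=1e-3):
    """Alternating least squares for matrix completion (sketch).

    Writes the estimate as U V^T and alternately solves the two
    ridge-regularized least-squares problems restricted to the
    observed entries indicated by the boolean mask.
    """
    m, n = M.shape
    rng = np.random.default_rng(0)
    U, V = rng.standard_normal((m, r)), rng.standard_normal((n, r))
    for _ in range(iters):
        for i in range(m):                         # update rows of U
            idx = mask[i]
            G = V[idx].T @ V[idx] + lam * np.eye(r)
            U[i] = np.linalg.solve(G, V[idx].T @ M[i, idx])
        for j in range(n):                         # update rows of V
            idx = mask[:, j]
            G = U[idx].T @ U[idx] + lam * np.eye(r)
            V[j] = np.linalg.solve(G, U[idx].T @ M[idx, j])
    return U @ V.T

rng = np.random.default_rng(1)
M = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 20))
mask = rng.random(M.shape) < 0.5                   # 50% of entries observed
err = np.linalg.norm(altmin_complete(M, mask, 4) - M) / np.linalg.norm(M)
print(f"relative error: {err:.3f}")
```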
q-bio.GN cs.CE cs.LG q-bio.CB | 10.1371/journal.pone.0061318 | 1212.0504 | null | null | http://arxiv.org/abs/1212.0504v3 | 2013-03-18T18:07:47Z | 2012-12-03T19:38:09Z | Machine learning prediction of cancer cell sensitivity to drugs based on
genomic and chemical properties | Predicting the response of a specific cancer to a therapy is a major goal in
modern oncology that should ultimately lead to a personalised treatment.
High-throughput screenings of potentially active compounds against a panel of
genomically heterogeneous cancer cell lines have unveiled multiple
relationships between genomic alterations and drug responses. Various
computational approaches have been proposed to predict sensitivity based on
genomic features, while others have used the chemical properties of the drugs
to ascertain their effect. In an effort to integrate these complementary
approaches, we developed machine learning models to predict the response of
cancer cell lines to drug treatment, quantified through IC50 values, based on
both the genomic features of the cell lines and the chemical properties of the
considered drugs. Models predicted IC50 values in an 8-fold cross-validation and
an independent blind test with coefficient of determination R2 of 0.72 and 0.64
respectively. Furthermore, models were able to predict with comparable accuracy
(R2 of 0.61) IC50s of cell lines from a tissue not used in the training stage.
Our in silico models can be used to optimise the experimental design of
drug-cell screenings by estimating a large proportion of missing IC50 values
rather than experimentally measure them. The implications of our results go
beyond virtual drug screening design: potentially thousands of drugs could be
probed in silico to systematically test their potential efficacy as anti-tumour
agents based on their structure, thus providing a computational framework to
identify new drug repositioning opportunities as well as ultimately be useful
for personalized medicine by linking the genomic traits of patients to drug
sensitivity.
| [
"['Michael P. Menden' 'Francesco Iorio' 'Mathew Garnett' 'Ultan McDermott'\n 'Cyril Benes' 'Pedro J. Ballester' 'Julio Saez-Rodriguez']",
"Michael P. Menden, Francesco Iorio, Mathew Garnett, Ultan McDermott,\n Cyril Benes, Pedro J. Ballester, Julio Saez-Rodriguez"
] |
cs.AI cs.LG | null | 1212.0692 | null | null | http://arxiv.org/pdf/1212.0692v2 | 2014-01-05T02:25:04Z | 2012-12-04T12:00:54Z | An Empirical Evaluation of Portfolios Approaches for solving CSPs | Recent research in areas such as SAT solving and Integer Linear Programming
has shown that a single arbitrarily efficient solver can be significantly
outperformed by a portfolio of possibly slower-on-average
solvers. We report an empirical evaluation and comparison of portfolio
approaches applied to Constraint Satisfaction Problems (CSPs). We compared
models developed on top of off-the-shelf machine learning algorithms with
respect to approaches used in the SAT field and adapted for CSPs, considering
different portfolio sizes and using as evaluation metrics the number of solved
problems and the time taken to solve them. Results indicate that the best SAT
approaches have top performances also in the CSP field and are slightly more
competitive than simple models built on top of classification algorithms.
| [
"Roberto Amadini, Maurizio Gabbrielli, Jacopo Mauro",
"['Roberto Amadini' 'Maurizio Gabbrielli' 'Jacopo Mauro']"
] |
cs.LG cs.CV math.OC stat.ML | 10.1142/S0218001413600033 | 1212.0695 | null | null | http://arxiv.org/abs/1212.0695v1 | 2012-12-04T12:05:31Z | 2012-12-04T12:05:31Z | Training Support Vector Machines Using Frank-Wolfe Optimization Methods | Training a Support Vector Machine (SVM) requires the solution of a quadratic
programming problem (QP) whose computational cost becomes prohibitively
expensive for large-scale datasets. Traditional optimization methods cannot be
directly applied in these cases, mainly due to memory restrictions.
By adopting a slightly different objective function and under mild conditions
on the kernel used within the model, efficient algorithms to train SVMs have
been devised under the name of Core Vector Machines (CVMs). This framework
exploits the equivalence of the resulting learning problem with the task of
building a Minimal Enclosing Ball (MEB) problem in a feature space, where data
is implicitly embedded by a kernel function.
In this paper, we improve on the CVM approach by proposing two novel methods
to build SVMs based on the Frank-Wolfe algorithm, recently revisited as a fast
method to approximate the solution of a MEB problem. In contrast to CVMs, our
algorithms do not require computing the solutions of a sequence of
increasingly complex QPs and are defined by using only analytic optimization
steps. Experiments on a large collection of datasets show that our methods
scale better than CVMs in most cases, sometimes at the price of a slightly
lower accuracy. As CVMs, the proposed methods can be easily extended to machine
learning problems other than binary classification. However, effective
classifiers are also obtained using kernels which do not satisfy the condition
required by CVMs and can thus be used for a wider set of problems.
| [
"Emanuele Frandi, Ricardo Nanculef, Maria Grazia Gasparo, Stefano Lodi,\n Claudio Sartori",
"['Emanuele Frandi' 'Ricardo Nanculef' 'Maria Grazia Gasparo'\n 'Stefano Lodi' 'Claudio Sartori']"
] |
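The MEB building block the methods above rely on, in its simplest Frank-Wolfe form (the Badoiu-Clarkson update: move the centre toward the farthest point with step 1/(k+2)); the kernelized SVM training loop itself is not shown:

```python
import numpy as np

def meb_frank_wolfe(X, iters=2000):
    """Frank-Wolfe iterations for the minimal enclosing ball (sketch).

    At each step the centre moves toward the current farthest point,
    with the classical diminishing step size 1/(k+2). This is the
    Badoiu-Clarkson update, i.e. Frank-Wolfe applied to the MEB dual.
    """
    c = X.mean(axis=0)
    for k in range(iters):
        far = np.argmax(np.linalg.norm(X - c, axis=1))
        c = c + (X[far] - c) / (k + 2)
    radius = np.linalg.norm(X - c, axis=1).max()
    return c, radius

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
centre, radius = meb_frank_wolfe(X)
print(centre, radius)
```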
cs.LG cs.DB cs.IR | null | 1212.0763 | null | null | http://arxiv.org/pdf/1212.0763v1 | 2012-12-03T13:00:27Z | 2012-12-03T13:00:27Z | Dynamic recommender system : using cluster-based biases to improve the
accuracy of the predictions | It is today accepted that matrix factorization models allow a high quality of
rating prediction in recommender systems. However, a major drawback of matrix
factorization is its static nature, which results in a progressive decline in
the accuracy of the predictions after each factorization. This is due to the
fact that newly obtained ratings are not taken into account until a new
factorization is computed, which cannot be done very often because of the high
cost of matrix factorization.
In this paper, aiming at improving the accuracy of recommender systems, we
propose a cluster-based matrix factorization technique that enables online
integration of new ratings. Thus, we significantly enhance the obtained
predictions between two matrix factorizations. We use finer-grained user biases
by clustering similar items into groups, and allocating in these groups a bias
to each user. The experiments we conducted on large datasets demonstrate the
efficiency of our approach.
| [
"['Modou Gueye' 'Talel Abdessalem' 'Hubert Naacke']",
"Modou Gueye, Talel Abdessalem, Hubert Naacke"
] |
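A hypothetical sketch of the mechanism described above: the factors from the last factorization stay fixed, while a cheap per-(user, item-cluster) bias absorbs new ratings online. The class and update rule are guesses at the abstract's description, not the paper's exact method:

```python
import numpy as np

class ClusterBiasMF:
    """Matrix-factorization scorer with per-(user, item-cluster) biases
    that can be refreshed online between full factorizations (sketch)."""

    def __init__(self, P, Q, item_cluster, n_clusters, mu, lr=0.02):
        self.P, self.Q = P, Q          # fixed factors from last factorization
        self.g = item_cluster          # cluster id of each item
        self.b = np.zeros((P.shape[0], n_clusters))
        self.mu, self.lr = mu, lr

    def predict(self, u, i):
        return self.mu + self.b[u, self.g[i]] + self.P[u] @ self.Q[i]

    def update(self, u, i, rating):
        # Online step: only the cheap cluster bias absorbs the new rating.
        err = rating - self.predict(u, i)
        self.b[u, self.g[i]] += self.lr * err

rng = np.random.default_rng(0)
P, Q = rng.standard_normal((5, 3)), rng.standard_normal((10, 3))
model = ClusterBiasMF(P, Q, item_cluster=rng.integers(0, 2, 10),
                      n_clusters=2, mu=3.5)
model.update(u=0, i=4, rating=5.0)
print(model.predict(0, 4))
```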
cs.LG | null | 1212.0901 | null | null | http://arxiv.org/pdf/1212.0901v2 | 2012-12-14T01:44:53Z | 2012-12-04T23:25:34Z | Advances in Optimizing Recurrent Networks | After a more than decade-long period of relatively little research activity
in the area of recurrent neural networks, several new developments will be
reviewed here that have allowed substantial progress both in understanding and
in technical solutions towards more efficient training of recurrent networks.
These advances have been motivated by and related to the optimization issues
surrounding deep learning. Although recurrent networks are extremely powerful
in what they can in principle represent in terms of modelling sequences, their
training is plagued by two aspects of the same issue regarding the learning of
long-term dependencies. Experiments reported here evaluate the use of clipping
gradients, spanning longer time ranges with leaky integration, advanced
momentum techniques, using more powerful output probability models, and
encouraging sparser gradients to help symmetry breaking and credit assignment.
The experiments are performed on text and music data and show off the combined
effects of these techniques in generally improving both training and test
error.
| [
"Yoshua Bengio, Nicolas Boulanger-Lewandowski and Razvan Pascanu",
"['Yoshua Bengio' 'Nicolas Boulanger-Lewandowski' 'Razvan Pascanu']"
] |
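One of the ingredients listed above, Nesterov-style momentum, in a minimal form (the exact variants and hyperparameters evaluated in the paper may differ):

```python
import numpy as np

def nesterov_step(w, v, grad_fn, lr=0.01, mu=0.9):
    """One Nesterov-momentum update.

    The gradient is evaluated at the look-ahead point w + mu * v,
    which distinguishes it from classical (heavy-ball) momentum.
    """
    g = grad_fn(w + mu * v)
    v = mu * v - lr * g
    return w + v, v

# Toy quadratic: f(w) = 0.5 * ||w||^2, so the gradient is w itself.
w, v = np.ones(3), np.zeros(3)
for _ in range(100):
    w, v = nesterov_step(w, v, lambda x: x)
print(np.linalg.norm(w))   # norm shrinks toward 0
```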
stat.ML cs.LG math.ST physics.data-an stat.TH | null | 1212.0945 | null | null | http://arxiv.org/pdf/1212.0945v1 | 2012-12-05T07:13:54Z | 2012-12-05T07:13:54Z | Multiclass Diffuse Interface Models for Semi-Supervised Learning on
Graphs | We present a graph-based variational algorithm for multiclass classification
of high-dimensional data, motivated by total variation techniques. The energy
functional is based on a diffuse interface model with a periodic potential. We
augment the model by introducing an alternative measure of smoothness that
preserves symmetry among the class labels. Through this modification of the
standard Laplacian, we construct an efficient multiclass method that allows for
sharp transitions between classes. The experimental results demonstrate that
our approach is competitive with the state of the art among other graph-based
algorithms.
| [
"['Cristina Garcia-Cardona' 'Arjuna Flenner' 'Allon G. Percus']",
"Cristina Garcia-Cardona, Arjuna Flenner and Allon G. Percus"
] |
cs.LG cs.IR stat.ML | null | 1212.0960 | null | null | http://arxiv.org/pdf/1212.0960v1 | 2012-12-05T08:15:36Z | 2012-12-05T08:15:36Z | Evaluating Classifiers Without Expert Labels | This paper considers the challenge of evaluating a set of classifiers, as
done in shared task evaluations like the KDD Cup or NIST TREC, without expert
labels. While expert labels provide the traditional cornerstone for evaluating
statistical learners, limited or expensive access to experts represents a
practical bottleneck. Instead, we seek methodology for estimating performance
of the classifiers which is more scalable than expert labeling yet preserves
high correlation with evaluation based on expert labels. We consider both: 1)
using only labels automatically generated by the classifiers (blind
evaluation); and 2) using labels obtained via crowdsourcing. While
crowdsourcing methods are lauded for scalability, using such data for
evaluation raises serious concerns given the prevalence of label noise. In
regard to blind evaluation, two broad strategies are investigated: combine &
score and score & combine. Combine & score methods infer a single pseudo-gold
label set by aggregating classifier labels; classifiers are then evaluated
against this single pseudo-gold label set. On the other hand, score & combine methods: 1)
sample multiple label sets from classifier outputs, 2) evaluate classifiers on
each label set, and 3) average classifier performance across label sets. When
additional crowd labels are also collected, we investigate two alternative
avenues for exploiting them: 1) direct evaluation of classifiers; or 2)
supervision of combine & score methods. To assess generality of our techniques,
classifier performance is measured using four common classification metrics,
with statistical significance tests. Finally, we measure both score and rank
correlations between estimated classifier performance vs. actual performance
according to expert judgments. Rigorous evaluation of classifiers from the TREC
2011 Crowdsourcing Track shows reliable evaluation can be achieved without
reliance on expert labels.
| [
"Hyun Joon Jung and Matthew Lease",
"['Hyun Joon Jung' 'Matthew Lease']"
] |
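A minimal sketch of the combine & score strategy: aggregate the classifiers' own labels into a single pseudo-gold set by majority vote and score each classifier against it. The toy labels are assumptions; the paper also studies weighted aggregation and crowdsourced variants not shown here:

```python
import numpy as np

def combine_and_score(label_matrix):
    # label_matrix: (n_classifiers, n_items) array of binary labels;
    # returns each classifier's accuracy against the majority vote
    pseudo_gold = (label_matrix.mean(axis=0) >= 0.5).astype(int)
    return (label_matrix == pseudo_gold).mean(axis=1)

labels = np.array([[1, 0, 1, 1],
                   [1, 0, 0, 1],
                   [0, 0, 1, 1]])
print(combine_and_score(labels))  # per-classifier accuracy vs. pseudo-gold
```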
cs.AI cs.DB cs.LG stat.ML | null | 1212.0967 | null | null | http://arxiv.org/pdf/1212.0967v1 | 2012-12-05T08:52:33Z | 2012-12-05T08:52:33Z | Compiling Relational Database Schemata into Probabilistic Graphical
Models | Instead of requiring a domain expert to specify the probabilistic
dependencies of the data, in this work we present an approach that uses the
relational DB schema to automatically construct a Bayesian graphical model for
a database. The resulting model contains customized distributions for columns,
latent variables that cluster the data, and factors that reflect and represent
the foreign key links. Experiments demonstrate the accuracy of the model and
the scalability of inference on synthetic and real-world data.
| [
"['Sameer Singh' 'Thore Graepel']",
"Sameer Singh and Thore Graepel"
] |
cs.LG stat.ML | null | 1212.0975 | null | null | http://arxiv.org/pdf/1212.0975v2 | 2015-02-15T11:17:57Z | 2012-12-05T09:24:11Z | Cost-Sensitive Support Vector Machines | A new procedure for learning cost-sensitive SVM (CS-SVM) classifiers is
proposed. The SVM hinge loss is extended to the cost-sensitive setting, and the
CS-SVM is derived as the minimizer of the associated risk. The extension of the
hinge loss draws on recent connections between risk minimization and
probability elicitation. These connections are generalized to cost-sensitive
classification, in a manner that guarantees consistency with the cost-sensitive
Bayes risk, and associated Bayes decision rule. This ensures that optimal
decision rules, under the new hinge loss, implement the Bayes-optimal
cost-sensitive classification boundary. Minimization of the new hinge loss is
shown to be a generalization of the classic SVM optimization problem, and can
be solved by identical procedures. The dual problem of the CS-SVM is carefully
scrutinized by means of regularization theory and sensitivity analysis, and the
CS-SVM algorithm is substantiated. The proposed algorithm is also extended to
cost-sensitive learning with example-dependent costs. The minimum
cost-sensitive risk is proposed as the performance measure and is connected to
ROC analysis through vector optimization. The resulting algorithm avoids the
shortcomings of previous approaches to cost-sensitive SVM design, and is shown
to have superior experimental performance on a large number of cost-sensitive
and imbalanced datasets.
| [
"Hamed Masnadi-Shirazi, Nuno Vasconcelos and Arya Iranmehr",
"['Hamed Masnadi-Shirazi' 'Nuno Vasconcelos' 'Arya Iranmehr']"
] |
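As a hedged illustration, the simplest cost-sensitive extension of the hinge loss weights it by class-dependent misclassification costs; the paper's actual CS-SVM loss additionally reshapes the margin, which this sketch does not capture, and the cost values are assumptions:

```python
import numpy as np

def cost_weighted_hinge(y, f, c_fn=2.0, c_fp=1.0):
    # y in {-1, +1}; f is the real-valued classifier score;
    # c_fn penalizes missed positives, c_fp penalizes false positives
    cost = np.where(y == 1, c_fn, c_fp)
    return cost * np.maximum(0.0, 1.0 - y * f)

y = np.array([1, 1, -1, -1])
f = np.array([0.3, -0.5, -1.2, 0.8])
print(cost_weighted_hinge(y, f))
```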
cs.LG cs.AI stat.ML | null | 1212.1100 | null | null | http://arxiv.org/pdf/1212.1100v1 | 2012-12-05T17:07:39Z | 2012-12-05T17:07:39Z | Making Early Predictions of the Accuracy of Machine Learning
Applications | The accuracy of machine learning systems is a widely studied research topic.
Established techniques such as cross-validation predict the accuracy on unseen
data of the classifier produced by applying a given learning method to a given
training data set. However, they do not predict whether incurring the cost of
obtaining more data and undergoing further training will lead to higher
accuracy. In this paper we investigate techniques for making such early
predictions. We note that when a machine learning algorithm is presented with a
training set, the classifier produced, and hence its error, will depend on the
characteristics of the algorithm, on the training set's size, and also on its
specific composition. In particular, we hypothesise that if a number of
classifiers are produced, and their observed error is decomposed into bias and
variance terms, then although these components may behave differently, their
behaviour may be predictable.
We test our hypothesis by building models that, given a measurement taken
from the classifier created from a limited number of samples, predict the
values that would be measured from the classifier produced when the full data
set is presented. We create separate models for bias, variance and total error.
Our models are built from the results of applying ten different machine
learning algorithms to a range of data sets, and tested with "unseen"
algorithms and datasets. We analyse the results for various numbers of initial
training samples, and total dataset sizes. Results show that our predictions
are very highly correlated with the values observed after undertaking the extra
training. Finally we consider the more complex case where an ensemble of
heterogeneous classifiers is trained, and show how we can accurately estimate
an upper bound on the accuracy achievable after further training.
| [
"['J. E. Smith' 'P. Caleb-Solly' 'M. A. Tahir' 'D. Sannen' 'H. van-Brussel']",
"J. E. Smith, P. Caleb-Solly, M. A. Tahir, D. Sannen, H. van-Brussel"
] |
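One plausible form of the measurement step assumed above: train a learner on several small subsamples and decompose its 0/1 test error into bias-like and variance-like terms (a simplified Domingos-style decomposition; the learner, sample sizes, and binary-label assumption are ours, not the paper's):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bias_variance_estimate(X, y, X_test, y_test, n_samples=200, n_runs=20):
    # y and y_test are assumed to be binary {0, 1} labels
    rng = np.random.default_rng(0)
    preds = []
    for _ in range(n_runs):
        idx = rng.choice(len(X), size=n_samples, replace=True)
        clf = DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx])
        preds.append(clf.predict(X_test))
    preds = np.array(preds)                              # (n_runs, n_test)
    main_pred = (preds.mean(axis=0) >= 0.5).astype(int)  # modal prediction
    bias = (main_pred != y_test).mean()      # error of the "average" model
    variance = (preds != main_pred).mean()   # disagreement with the modal prediction
    return bias, variance
```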
cs.LG cs.AI stat.ML | null | 1212.1108 | null | null | null | null | null | On the Convergence Properties of Optimal AdaBoost | AdaBoost is one of the most popular ML algorithms. It is simple to implement
and often found very effective by practitioners, while still being
mathematically elegant and theoretically sound. AdaBoost's interesting behavior
in practice still puzzles the ML community. We address the algorithm's
stability and establish multiple convergence properties of "Optimal AdaBoost,"
a term coined by Rudin, Daubechies, and Schapire in 2004. We prove, in a
reasonably strong computational sense, the almost universal existence of time
averages, and with that, the convergence of the classifier itself, its
generalization error, and its resulting margins, among many other objects, for
fixed data sets under arguably reasonable conditions. Specifically, we frame
Optimal AdaBoost as a dynamical system and, employing tools from ergodic
theory, prove that, under the condition that Optimal AdaBoost eventually has no
ties for the best weak classifier, a condition for which we provide empirical
evidence from high-dimensional real-world datasets, the algorithm's update
behaves like a continuous map. We provide constructive proofs of several
arbitrarily accurate approximations of Optimal AdaBoost; prove that they
exhibit certain cycling behavior in finite time, and that the resulting
dynamical system is ergodic; and establish sufficient conditions for the same
to hold for the actual Optimal-AdaBoost update. We believe that our results
provide reasonably strong evidence for the affirmative answer to two open
conjectures, at least from a broad computational-theory perspective: AdaBoost
always cycles and is an ergodic dynamical system. We present empirical evidence
that cycles are hard to detect while time averages stabilize quickly. Our
results ground future convergence-rate analysis and may help optimize
generalization ability and alleviate a practitioner's burden of deciding how
long to run the algorithm.
| [
"Joshua Belanich and Luis E. Ortiz"
] |
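A minimal sketch of the Optimal AdaBoost update viewed as a map on the simplex of example weights, the dynamical-system perspective taken above. The matrix representation of the weak learners is an illustrative assumption:

```python
import numpy as np

def optimal_adaboost_step(d, y, H):
    # d: (n,) example weights on the simplex; y: (n,) labels in {-1, +1};
    # H: (n_weak, n) precomputed weak-classifier predictions in {-1, +1};
    # assumes the best weighted error lies strictly in (0, 1/2) and is untied
    errs = (H != y) @ d                 # weighted error of each weak learner
    j = int(np.argmin(errs))            # "optimal" choice: best weak classifier
    alpha = 0.5 * np.log((1 - errs[j]) / errs[j])
    d_next = d * np.exp(-alpha * y * H[j])
    return d_next / d_next.sum(), j, alpha
```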
cs.LG cs.IR stat.ML | null | 1212.1131 | null | null | http://arxiv.org/pdf/1212.1131v1 | 2012-12-05T19:03:39Z | 2012-12-05T19:03:39Z | Using Wikipedia to Boost SVD Recommender Systems | Singular Value Decomposition (SVD) has been used successfully in recent years
in the area of recommender systems. In this paper we present how this model can
be extended to consider both user ratings and information from Wikipedia. By
mapping items to Wikipedia pages and quantifying their similarity, we are able
to use this information in order to improve recommendation accuracy, especially
when the sparsity is high. Another advantage of the proposed approach is the
fact that it can be easily integrated into any other SVD implementation,
regardless of additional parameters that may have been added to it. Preliminary
experimental results on the MovieLens dataset are encouraging.
| [
"['Gilad Katz' 'Guy Shani' 'Bracha Shapira' 'Lior Rokach']",
"Gilad Katz, Guy Shani, Bracha Shapira, Lior Rokach"
] |
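A hedged sketch of one way Wikipedia-based similarity could enter an SVD model: a graph regularizer that pulls the latent factors of similar items toward each other. The penalty form and its placement are our assumptions; the authors' exact integration may differ:

```python
import numpy as np

def similarity_regularizer_grad(Q, sim, lam=0.1):
    # Q: (n_items, k) item factors; sim: symmetric (n_items, n_items)
    # Wikipedia-similarity matrix. Gradient of the penalty
    # lam * sum_{i<j} sim[i, j] * ||Q_i - Q_j||^2 = lam * tr(Q^T L Q).
    laplacian = np.diag(sim.sum(axis=1)) - sim
    return 2.0 * lam * laplacian @ Q   # add to the usual SVD factor gradient
```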
stat.ML cs.LG | null | 1212.1180 | null | null | http://arxiv.org/pdf/1212.1180v1 | 2012-12-05T21:19:35Z | 2012-12-05T21:19:35Z | On Some Integrated Approaches to Inference | We present arguments for the formulation of a unified approach to different
standard continuous inference methods from partial information. It is claimed
that an explicit partition of information into a priori (prior knowledge) and a
posteriori information (data) is an important way of standardizing inference
approaches so that they can be compared on a normative scale, and so that
notions of optimal algorithms become farther-reaching. The inference methods
considered include neural network approaches, information-based complexity, and
Monte Carlo, spline, and regularization methods. The model is an extension of
currently used continuous complexity models, with a class of algorithms in the
form of optimization methods, in which an optimization functional (involving
the data) is minimized. This extends the family of current approaches in
continuous complexity theory, which include the use of interpolatory algorithms
in worst and average case settings.
| [
"Mark A. Kon and Leszek Plaskota",
"['Mark A. Kon' 'Leszek Plaskota']"
] |
cs.GT cs.LG | 10.1109/TSP.2013.2280444 | 1212.1245 | null | null | http://arxiv.org/abs/1212.1245v2 | 2013-09-11T19:12:25Z | 2012-12-06T06:47:55Z | Distributed Adaptive Networks: A Graphical Evolutionary Game-Theoretic
View | Distributed adaptive filtering has been considered as an effective approach
for data processing and estimation over distributed networks. Most existing
distributed adaptive filtering algorithms focus on designing different
information diffusion rules, regardless of the natural evolutionary
characteristics of a distributed network. In this paper, we study the adaptive
network from the game-theoretic perspective and formulate the distributed
adaptive filtering problem as a graphical evolutionary game. In the proposed
formulation, the nodes in the network are regarded as players, and the local
combination of estimation information from different neighbors is regarded as
a selection among different strategies. We show that this graphical evolutionary game
framework is very general and can unify the existing adaptive network
algorithms. Based on this framework, as examples, we further propose two
error-aware adaptive filtering algorithms. Moreover, we use graphical
evolutionary game theory to analyze the information diffusion process over the
adaptive networks and the evolutionarily stable strategy of the system. Finally,
simulation results are shown to verify the effectiveness of our analysis and
proposed methods.
| [
"Chunxiao Jiang and Yan Chen and K. J. Ray Liu",
"['Chunxiao Jiang' 'Yan Chen' 'K. J. Ray Liu']"
] |
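A minimal sketch of the combination step in diffusion adaptation with an "error-aware" flavor: neighbors with lower recent error receive larger combination weights. The specific inverse-error rule is an illustrative assumption, not the paper's exact algorithm:

```python
import numpy as np

def combine_step(W, neighbors, sq_err):
    # W: (n_nodes, dim) local estimates; neighbors[k]: index list of
    # node k's neighborhood (including k itself); sq_err: (n_nodes,)
    # recent mean-squared estimation errors
    W_new = np.empty_like(W)
    for k, nbrs in enumerate(neighbors):
        a = 1.0 / (sq_err[nbrs] + 1e-12)   # trust low-error neighbors more
        a /= a.sum()                       # combination weights sum to one
        W_new[k] = a @ W[nbrs]
    return W_new
```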
stat.ML cs.LG | null | 1212.1496 | null | null | http://arxiv.org/pdf/1212.1496v2 | 2013-01-14T17:55:24Z | 2012-12-06T23:06:32Z | Excess risk bounds for multitask learning with trace norm regularization | Trace norm regularization is a popular method of multitask learning. We give
excess risk bounds with explicit dependence on the number of tasks, the number
of examples per task and properties of the data distribution. The bounds are
independent of the dimension of the input space, which may be infinite as in
the case of reproducing kernel Hilbert spaces. As a byproduct of the proof, we
obtain bounds on the expected norm of sums of random positive semidefinite
matrices with subexponential moments.
| [
"Andreas Maurer and Massimiliano Pontil",
"['Andreas Maurer' 'Massimiliano Pontil']"
] |
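For reference, the trace (nuclear) norm regularizer central to this line of work is the sum of singular values of the stacked task weight matrix; the generic objective below is an illustration, not the paper's bound:

```python
import numpy as np

def trace_norm(W):
    # W: (dim, n_tasks) matrix whose columns are per-task weight vectors
    return np.linalg.svd(W, compute_uv=False).sum()

def multitask_objective(W, task_losses, lam=0.1):
    # task_losses[t] maps the t-th weight vector to its empirical risk
    return sum(loss(W[:, t]) for t, loss in enumerate(task_losses)) \
           + lam * trace_norm(W)
```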
cs.NE cs.LG stat.ML | null | 1212.1524 | null | null | http://arxiv.org/pdf/1212.1524v2 | 2013-02-16T13:24:46Z | 2012-12-07T03:14:50Z | Layer-wise learning of deep generative models | When using deep, multi-layered architectures to build generative models of
data, it is difficult to train all layers at once. We propose a layer-wise
training procedure admitting a performance guarantee compared to the global
optimum. It is based on an optimistic proxy of future performance, the best
latent marginal. We interpret auto-encoders in this setting as generative
models, by showing that they train a lower bound of this criterion. We test the
new learning procedure against a state-of-the-art method (stacked RBMs), and
find it to improve performance. Both theory and experiments highlight the
importance, when training deep architectures, of using an inference model (from
data to hidden variables) richer than the generative model (from hidden
variables to data).
| [
"['Ludovic Arnold' 'Yann Ollivier']",
"Ludovic Arnold and Yann Ollivier"
] |
cs.LG cs.DS | null | 1212.1527 | null | null | http://arxiv.org/pdf/1212.1527v3 | 2013-09-18T04:18:49Z | 2012-12-07T04:03:06Z | Learning Mixtures of Arbitrary Distributions over Large Discrete Domains | We give an algorithm for learning a mixture of {\em unstructured}
distributions. This problem arises in various unsupervised learning scenarios,
for example in learning {\em topic models} from a corpus of documents spanning
several topics. We show how to learn the constituents of a mixture of $k$
arbitrary distributions over a large discrete domain $[n]=\{1,2,\dots,n\}$ and
the mixture weights, using $O(n\polylog n)$ samples. (In the topic-model
learning setting, the mixture constituents correspond to the topic
distributions.) This task is information-theoretically impossible for $k>1$
under the usual sampling process from a mixture distribution. However, there
are situations (such as the above-mentioned topic model case) in which each
sample point consists of several observations from the same mixture
constituent. This number of observations, which we call the {\em "sampling
aperture"}, is a crucial parameter of the problem. We obtain the {\em first}
bounds for this mixture-learning problem {\em without imposing any assumptions
on the mixture constituents.} We show that efficient learning is possible
exactly at the information-theoretically least-possible aperture of $2k-1$.
Thus, we achieve near-optimal dependence on $n$ and optimal aperture. While the
sample size required by our algorithm depends exponentially on $k$, we prove
that such a dependence is {\em unavoidable} when one considers general
mixtures. The algorithm draws on a sequence of tools, including concentration
results for random matrices, dimension reduction, moment estimation, and
sensitivity analysis.
| [
"Yuval Rabani, Leonard Schulman, Chaitanya Swamy",
"['Yuval Rabani' 'Leonard Schulman' 'Chaitanya Swamy']"
] |
cs.LG math.OC stat.ML | null | 1212.1824 | null | null | http://arxiv.org/pdf/1212.1824v2 | 2012-12-28T10:58:48Z | 2012-12-08T18:22:42Z | Stochastic Gradient Descent for Non-smooth Optimization: Convergence
Results and Optimal Averaging Schemes | Stochastic Gradient Descent (SGD) is one of the simplest and most popular
stochastic optimization methods. While it has already been theoretically
studied for decades, the classical analysis usually required non-trivial
smoothness assumptions, which do not apply to many modern applications of SGD
with non-smooth objective functions such as support vector machines. In this
paper, we investigate the performance of SGD without such smoothness
assumptions, as well as a running average scheme to convert the SGD iterates to
a solution with optimal optimization accuracy. In this framework, we prove that
after T rounds, the suboptimality of the last SGD iterate scales as
O(log(T)/\sqrt{T}) for non-smooth convex objective functions, and O(log(T)/T)
in the non-smooth strongly convex case. To the best of our knowledge, these are
the first bounds of this kind, and almost match the minimax-optimal rates
obtainable by appropriate averaging schemes. We also propose a new and simple
averaging scheme, which not only attains optimal rates, but can also be easily
computed on-the-fly (in contrast, the suffix averaging scheme proposed in
Rakhlin et al. (2011) is not as simple to implement). Finally, we provide some
experimental illustrations.
| [
"Ohad Shamir and Tong Zhang",
"['Ohad Shamir' 'Tong Zhang']"
] |
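A hedged sketch of SGD with a running average that weights later iterates more heavily and is computable on-the-fly, in the spirit of the averaging scheme described above; the exact recursion, eta, and step sizes are our assumptions:

```python
import numpy as np

def sgd_with_running_average(grad_fn, w0, T, eta=3, c=1.0):
    # grad_fn returns a (sub)gradient at w; projection steps are omitted
    w, w_avg = w0.copy(), w0.copy()
    for t in range(1, T + 1):
        w = w - (c / np.sqrt(t)) * grad_fn(w)  # ~1/sqrt(t) steps (non-smooth case)
        rho = (eta + 1) / (t + eta)            # decaying mixing weight
        w_avg = (1 - rho) * w_avg + rho * w    # O(1)-memory running average
    return w_avg
```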
cs.LG | null | 1212.1936 | null | null | http://arxiv.org/pdf/1212.1936v1 | 2012-12-09T23:28:02Z | 2012-12-09T23:28:02Z | High-dimensional sequence transduction | We investigate the problem of transforming an input sequence into a
high-dimensional output sequence in order to transcribe polyphonic audio music
into symbolic notation. We introduce a probabilistic model based on a recurrent
neural network that is able to learn realistic output distributions given the
input and we devise an efficient algorithm to search for the global mode of
that distribution. The resulting method produces musically plausible
transcriptions even under high levels of noise and drastically outperforms
previous state-of-the-art approaches on five datasets of synthesized sounds and
real recordings, approximately halving the test error rate.
| [
"Nicolas Boulanger-Lewandowski, Yoshua Bengio and Pascal Vincent",
"['Nicolas Boulanger-Lewandowski' 'Yoshua Bengio' 'Pascal Vincent']"
] |
cs.LG math.OC stat.ML | null | 1212.2002 | null | null | http://arxiv.org/pdf/1212.2002v2 | 2012-12-20T20:55:23Z | 2012-12-10T09:22:06Z | A simpler approach to obtaining an O(1/t) convergence rate for the
projected stochastic subgradient method | In this note, we present a new averaging technique for the projected
stochastic subgradient method. By using a weighted average with a weight of t+1
for each iterate w_t at iteration t, we obtain the convergence rate of O(1/t)
with both an easy proof and an easy implementation. The new scheme is compared
empirically to existing techniques, with similar performance behavior.
| [
"Simon Lacoste-Julien, Mark Schmidt, Francis Bach",
"['Simon Lacoste-Julien' 'Mark Schmidt' 'Francis Bach']"
] |
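A minimal sketch of the averaging described above: giving iterate w_t weight t+1 admits an O(1)-memory recursion. The strongly convex step size, grad_fn, and the omitted projection are illustrative assumptions:

```python
import numpy as np

def sgd_weighted_average(grad_fn, w0, T, lam=0.1):
    # grad_fn returns a (sub)gradient at w; lam is the strong-convexity
    # parameter; the projection onto the feasible set is omitted here
    w = w0.copy()
    w_bar = np.zeros_like(w0)
    for t in range(T):
        w = w - (1.0 / (lam * (t + 1))) * grad_fn(w)
        rho = 2.0 / (t + 2)          # gives iterate w_t overall weight t+1
        w_bar = (1 - rho) * w_bar + rho * w
    return w_bar
```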
cs.LG stat.ML | 10.1109/TPAMI.2013.99 | 1212.2136 | null | null | http://arxiv.org/abs/1212.2136v2 | 2013-06-18T12:03:42Z | 2012-12-10T17:12:51Z | A class of random fields on complete graphs with tractable partition
function | The aim of this short note is to draw attention to a method by which the
partition function and marginal probabilities for a certain class of random
fields on complete graphs can be computed in polynomial time. This class
includes Ising models with homogeneous pairwise potentials but arbitrary
(inhomogeneous) unary potentials. Similarly, the partition function and
marginal probabilities can be computed in polynomial time for random fields on
complete bipartite graphs, provided they have homogeneous pairwise potentials.
We expect that these tractable classes of large-scale random fields can be very
useful for the evaluation of approximation algorithms by providing exact error
estimates.
| [
"Boris Flach",
"['Boris Flach']"
] |
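One plausible realization of the polynomial-time computation: on a complete graph with homogeneous pairwise potentials, the pairwise energy depends only on the number k of variables in state 1, and the sum over all configurations with exactly k ones of the (inhomogeneous) unary factors is the k-th elementary symmetric polynomial, computable by dynamic programming in O(n^2). The concrete parameterization below is our assumption:

```python
import numpy as np
from math import comb

def ising_partition_function(h, psi):
    # h: (n,) inhomogeneous unary log-potentials for state 1 (state 0 scores 0);
    # psi: symmetric 2x2 array of homogeneous pairwise log-potentials
    n = len(h)
    e = np.zeros(n + 1)   # e[k]: k-th elementary symmetric polynomial of exp(h)
    e[0] = 1.0
    for hi in np.exp(h):
        for k in range(n, 0, -1):       # descending, as in subset-sum DP
            e[k] += hi * e[k - 1]
    Z = 0.0
    for k in range(n + 1):              # k = number of variables in state 1
        pair = (psi[1, 1] * comb(k, 2) + psi[0, 0] * comb(n - k, 2)
                + psi[0, 1] * k * (n - k))
        Z += np.exp(pair) * e[k]
    return Z
```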