categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---
cs.LG stat.ML | null | 1306.1840 | null | null | http://arxiv.org/pdf/1306.1840v2 | 2013-06-23T05:32:31Z | 2013-06-07T20:12:17Z | Loss-Proportional Subsampling for Subsequent ERM | We propose a sampling scheme suitable for reducing a data set prior to
selecting a hypothesis with minimum empirical risk. The sampling only considers
a subset of the ultimate (unknown) hypothesis set, but can nonetheless
guarantee that the final excess risk will compare favorably with utilizing the
entire original data set. We demonstrate the practical benefits of our approach
on a large dataset which we subsample and subsequently fit with boosted trees.
| [
"['Paul Mineiro' 'Nikos Karampatziakis']",
"Paul Mineiro, Nikos Karampatziakis"
] |
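An illustrative Python sketch of the scheme this abstract describes: sample each example with probability proportional to a pilot model's loss, then importance-weight the kept points so the subsampled empirical risk stays unbiased. The logistic pilot, the probability floor, and the synthetic data are assumptions, not the authors' exact protocol.

```python
# Hedged sketch of loss-proportional subsampling before ERM; the pilot
# model, probability floor and data below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 5))
y = (X @ rng.normal(size=5) + 0.1 * rng.normal(size=10000) > 0).astype(int)

# 1. Fit a cheap pilot model on a small uniform subsample (assumes both
#    classes appear in it).
pilot = LogisticRegression().fit(X[:500], y[:500])

# 2. Keep probability proportional to the pilot's per-example loss, with a
#    floor so importance weights stay bounded.
p = np.clip(pilot.predict_proba(X)[:, 1], 1e-6, 1 - 1e-6)
loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
prob = np.minimum(1.0, 0.1 + loss / loss.max())

# 3. Subsample, then run weighted ERM with weights 1/prob for unbiasedness.
keep = rng.random(len(prob)) < prob
model = LogisticRegression().fit(X[keep], y[keep],
                                 sample_weight=1.0 / prob[keep])
print(f"kept {keep.mean():.1%} of the data")
```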
cs.CV cs.LG stat.ML | 10.1109/CVPRW.2013.131 | 1306.1913 | null | null | http://arxiv.org/abs/1306.1913v1 | 2013-06-08T12:57:39Z | 2013-06-08T12:57:39Z | Emotional Expression Classification using Time-Series Kernels | Estimation of facial expressions, as spatio-temporal processes, can take
advantage of kernel methods if one considers facial landmark positions and
their motion in 3D space. We applied support vector classification with kernels
derived from dynamic time-warping similarity measures. We achieved over 99%
accuracy - measured by area under ROC curve - using only the 'motion pattern'
of the PCA compressed representation of the marker point vector, the so-called
shape parameters. Beyond the classification of full motion patterns, several
expressions were recognized with over 90% accuracy in as few as 5-6 frames from
their onset, about 200 milliseconds.
| [
"Andras Lorincz, Laszlo Jeni, Zoltan Szabo, Jeffrey Cohn, Takeo Kanade",
"['Andras Lorincz' 'Laszlo Jeni' 'Zoltan Szabo' 'Jeffrey Cohn'\n 'Takeo Kanade']"
] |
stat.ML cs.LG math.ST stat.TH | null | 1306.2035 | null | null | http://arxiv.org/pdf/1306.2035v1 | 2013-06-09T16:28:56Z | 2013-06-09T16:28:56Z | Minimax Theory for High-dimensional Gaussian Mixtures with Sparse Mean
Separation | While several papers have investigated computationally and statistically
efficient methods for learning Gaussian mixtures, precise minimax bounds for
their statistical performance as well as fundamental limits in high-dimensional
settings are not well-understood. In this paper, we provide precise information
theoretic bounds on the clustering accuracy and sample complexity of learning a
mixture of two isotropic Gaussians in high dimensions under small mean
separation. If there is a sparse subset of relevant dimensions that determine
the mean separation, then the sample complexity only depends on the number of
relevant dimensions and mean separation, and can be achieved by a simple
computationally efficient procedure. Our results provide the first step of a
theoretical basis for recent methods that combine feature selection and
clustering.
| [
"Martin Azizyan, Aarti Singh, Larry Wasserman",
"['Martin Azizyan' 'Aarti Singh' 'Larry Wasserman']"
] |
stat.ML cs.LG | null | 1306.2084 | null | null | http://arxiv.org/pdf/1306.2084v1 | 2013-06-10T01:45:49Z | 2013-06-10T01:45:49Z | Logistic Tensor Factorization for Multi-Relational Data | Tensor factorizations have become increasingly popular approaches for various
learning tasks on structured data. In this work, we extend the RESCAL tensor
factorization, which has shown state-of-the-art results for multi-relational
learning, to account for the binary nature of adjacency tensors. We study the
improvements that can be gained via this approach on various benchmark datasets
and show that the logistic extension can improve the prediction results
significantly.
| [
"Maximilian Nickel, Volker Tresp",
"['Maximilian Nickel' 'Volker Tresp']"
] |
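A few lines showing the modeling step this abstract refers to, under assumed shapes and random initializations: the logistic extension scores a triple (i, k, j) as sigmoid(a_i^T R_k a_j), so each adjacency-tensor entry becomes a Bernoulli parameter.

```python
# Sketch of the logistic RESCAL scoring function; sizes and the random
# initialization are placeholder assumptions.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_entities, rank, n_relations = 50, 5, 3
rng = np.random.default_rng(2)
A = rng.normal(scale=0.1, size=(n_entities, rank))         # entity factors
R = rng.normal(scale=0.1, size=(n_relations, rank, rank))  # relation cores

def predict_slice(k):
    """Bernoulli parameters for every entity pair under relation k."""
    return sigmoid(A @ R[k] @ A.T)

print(predict_slice(0).shape)  # (50, 50)
```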
cs.LG stat.AP | null | 1306.2094 | null | null | http://arxiv.org/pdf/1306.2094v1 | 2013-06-10T03:18:25Z | 2013-06-10T03:18:25Z | Predicting Risk-of-Readmission for Congestive Heart Failure Patients: A
Multi-Layer Approach | Mitigating risk-of-readmission of Congestive Heart Failure (CHF) patients
within 30 days of discharge is important because such readmissions are not only
expensive but also a critical indicator of provider care and quality of
treatment. Accurately predicting the risk-of-readmission may allow hospitals to
identify high-risk patients and eventually improve quality of care by
identifying factors that contribute to such readmissions in many scenarios. In
this paper, we investigate the problem of predicting risk-of-readmission as a
supervised learning problem, using a multi-layer classification approach.
Earlier contributions inadequately assessed the risk of 30-day readmission by
building a single direct predictive model, in contrast to our approach. We
first split the problem into stages: (a) at risk in general, (b) risk within
60 days, (c) risk within 30 days, and then build suitable classifiers for
each stage, thereby increasing the ability to accurately predict the risk using
multiple layers of decision. The advantage of our approach is that we can use
different classification models for the subtasks that are more suited for the
respective problems. Moreover, each of the subtasks can be solved using
different features and training data leading to a highly confident diagnosis or
risk compared to a one-shot single layer approach. An experimental evaluation
on actual hospital patient record data from Multicare Health Systems shows that
our model is significantly better at predicting risk-of-readmission of CHF
patients within 30 days after discharge compared to prior attempts.
| [
"Kiyana Zolfaghar, Nele Verbiest, Jayshree Agarwal, Naren Meadem,\n Si-Chi Chin, Senjuti Basu Roy, Ankur Teredesai, David Hazel, Paul Amoroso,\n Lester Reed",
"['Kiyana Zolfaghar' 'Nele Verbiest' 'Jayshree Agarwal' 'Naren Meadem'\n 'Si-Chi Chin' 'Senjuti Basu Roy' 'Ankur Teredesai' 'David Hazel'\n 'Paul Amoroso' 'Lester Reed']"
] |
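A minimal sketch of the multi-layer idea described in this abstract: one classifier per stage, each trained on the subpopulation flagged positive by the previous stage. The random forest, the infinite-horizon label encoding, and the absence of the paper's clinical features are all assumptions.

```python
# Hedged sketch of a staged (multi-layer) readmission classifier; model
# choice and label encoding are illustrative, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_cascade(X, days_to_readmission):
    """days_to_readmission: np.inf where no readmission occurred.
    Returns one classifier per stage: any risk, <=60 days, <=30 days."""
    stages = []
    mask = np.ones(len(X), dtype=bool)
    for horizon in [np.inf, 60, 30]:
        if np.isfinite(horizon):
            y = days_to_readmission[mask] <= horizon
        else:
            y = np.isfinite(days_to_readmission[mask])
        clf = RandomForestClassifier(random_state=0).fit(X[mask], y)
        stages.append(clf)
        new_mask = mask.copy()
        new_mask[mask] = y          # next stage sees current positives only
        mask = new_mask
    return stages
```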
cs.CE cs.LG | null | 1306.2118 | null | null | http://arxiv.org/pdf/1306.2118v1 | 2013-06-10T07:28:51Z | 2013-06-10T07:28:51Z | A Novel Approach for Single Gene Selection Using Clustering and
Dimensionality Reduction | We extend the standard rough set-based approach to deal with huge numbers of
numeric attributes versus a small number of available objects. Here, a novel
approach combining clustering with dimensionality reduction, the Hybrid Fuzzy
C-Means-Quick Reduct (FCMQR) algorithm, is proposed for single gene selection.
Gene selection is the process of selecting the most informative genes. It is
one of the important steps in knowledge discovery. The problem is that not all
genes are important in gene expression data. Some of the genes may be
redundant, and others may be irrelevant and noisy. In this study, the entire
dataset is divided into proper groupings of similar genes by applying the
Fuzzy C Means (FCM) algorithm. Highly class-discriminative genes are selected
based on their degree of dependence by applying the Quick Reduct algorithm,
based on Rough Set Theory, to all the resultant clusters. The Average
Correlation Value (ACV) is calculated for the highly class-discriminative
genes. Clusters whose ACV is 1 are determined to be significant clusters,
whose classification accuracy will be equal to or higher than the accuracy of
the entire dataset.
compared. Finally, experimental results related to the leukemia cancer data
confirm that our approach is quite promising, though it surely requires further
research.
| [
"E.N.Sathishkumar, K.Thangavel, T.Chandrasekhar",
"['E. N. Sathishkumar' 'K. Thangavel' 'T. Chandrasekhar']"
] |
null | null | 1306.2119 | null | null | http://arxiv.org/pdf/1306.2119v1 | 2013-06-10T07:31:10Z | 2013-06-10T07:31:10Z | Non-strongly-convex smooth stochastic approximation with convergence
rate O(1/n) | We consider the stochastic approximation problem where a convex function has to be minimized, given only the knowledge of unbiased estimates of its gradients at certain points, a framework which includes machine learning methods based on the minimization of the empirical risk. We focus on problems without strong convexity, for which all previously known algorithms achieve a convergence rate for function values of O(1/n^{1/2}). We consider and analyze two algorithms that achieve a rate of O(1/n) for classical supervised learning problems. For least-squares regression, we show that averaged stochastic gradient descent with constant step-size achieves the desired rate. For logistic regression, this is achieved by a simple novel stochastic gradient algorithm that (a) constructs successive local quadratic approximations of the loss functions, while (b) preserving the same running time complexity as stochastic gradient descent. For these algorithms, we provide a non-asymptotic analysis of the generalization error (in expectation, and also in high probability for least-squares), and run extensive experiments on standard machine learning benchmarks showing that they often outperform existing approaches. | [
"['Francis Bach' 'Eric Moulines']"
] |
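The least-squares half of this abstract in a few lines: averaged stochastic gradient descent with a constant step size. The step-size constant follows the usual 1/(4R^2) prescription; the synthetic data is an assumption.

```python
# Sketch of constant step-size averaged SGD for least-squares regression.
import numpy as np

rng = np.random.default_rng(1)
n, d = 20000, 10
theta_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ theta_star + 0.5 * rng.normal(size=n)

gamma = 1.0 / (4 * np.mean(np.sum(X ** 2, axis=1)))  # ~ 1 / (4 R^2)
theta = np.zeros(d)
theta_bar = np.zeros(d)
for t in range(n):
    theta -= gamma * (X[t] @ theta - y[t]) * X[t]  # stochastic gradient step
    theta_bar += (theta - theta_bar) / (t + 1)     # Polyak-Ruppert average
print(np.linalg.norm(theta_bar - theta_star))
```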
math.ST cs.LG math.PR stat.TH | null | 1306.2290 | null | null | http://arxiv.org/pdf/1306.2290v1 | 2013-06-10T19:11:25Z | 2013-06-10T19:11:25Z | Asymptotically Optimal Sequential Estimation of the Mean Based on
Inclusion Principle | A large class of problems in science and engineering can be formulated as
the general problem of constructing random intervals with pre-specified
coverage probabilities for the mean. We propose a general approach for
statistical inference of mean values based on accumulated observational data.
We show that the construction of such random intervals can be accomplished by
comparing the endpoints of random intervals with confidence sequences for the
mean. Asymptotic results are obtained for such sequential methods.
| [
"['Xinjia Chen']",
"Xinjia Chen"
] |
cs.AI cs.LG | null | 1306.2295 | null | null | http://arxiv.org/pdf/1306.2295v1 | 2013-06-10T19:36:31Z | 2013-06-10T19:36:31Z | Markov random fields factorization with context-specific independences | Markov random fields provide a compact representation of joint probability
distributions by representing their independence properties in an undirected
graph. The well-known Hammersley-Clifford theorem uses these conditional
independences to factorize a Gibbs distribution into a set of factors. However,
an important issue of using a graph to represent independences is that it
cannot encode some types of independence relations, such as the
context-specific independences (CSIs). These are a particular case of
conditional independences that hold only for a certain assignment of the
conditioning set, in contrast to conditional independences, which must hold
for all assignments. This work presents a method for factorizing a Markov
random field according to CSIs present in a distribution, and formally
guarantees that this factorization is correct. This is presented in our main
contribution, the context-specific Hammersley-Clifford theorem, a
generalization to CSIs of the Hammersley-Clifford theorem, which applies to
conditional independences.
| [
"Alejandro Edera, Facundo Bromberg, and Federico Schl\\\"uter",
"['Alejandro Edera' 'Facundo Bromberg' 'Federico Schlüter']"
] |
cs.SI cs.LG physics.soc-ph stat.ML | 10.1063/1.4840235 | 1306.2298 | null | null | http://arxiv.org/abs/1306.2298v3 | 2014-02-01T10:42:30Z | 2013-06-10T19:42:10Z | Generative Model Selection Using a Scalable and Size-Independent Complex
Network Classifier | Real networks exhibit nontrivial topological features such as heavy-tailed
degree distribution, high clustering, and small-worldness. Researchers have
developed several generative models for synthesizing artificial networks that
are structurally similar to real networks. An important research problem is to
identify the generative model that best fits to a target network. In this
paper, we investigate this problem and our goal is to select the model that is
able to generate graphs similar to a given network instance. By means of
generating synthetic networks with seven outstanding generative models, we have
utilized machine learning methods to develop a decision tree for model
selection. Our proposed method, which is named "Generative Model Selection for
Complex Networks" (GMSCN), outperforms existing methods with respect to
accuracy, scalability and size-independence.
| [
"Sadegh Motallebi, Sadegh Aliakbary, Jafar Habibi",
"['Sadegh Motallebi' 'Sadegh Aliakbary' 'Jafar Habibi']"
] |
cs.LG | null | 1306.2347 | null | null | http://arxiv.org/pdf/1306.2347v4 | 2015-07-12T10:11:57Z | 2013-06-10T20:18:48Z | Auditing: Active Learning with Outcome-Dependent Query Costs | We propose a learning setting in which unlabeled data is free, and the cost
of a label depends on its value, which is not known in advance. We study binary
classification in an extreme case, where the algorithm only pays for negative
labels. Our motivation comes from applications such as fraud detection, in which
investigating an honest transaction should be avoided if possible. We term the
setting auditing, and consider the auditing complexity of an algorithm: the
number of negative labels the algorithm requires in order to learn a hypothesis
with low relative error. We design auditing algorithms for simple hypothesis
classes (thresholds and rectangles), and show that with these algorithms, the
auditing complexity can be significantly lower than the active label
complexity. We also discuss a general competitive approach for auditing and
possible modifications to the framework.
| [
"Sivan Sabato and Anand D. Sarwate and Nathan Srebro",
"['Sivan Sabato' 'Anand D. Sarwate' 'Nathan Srebro']"
] |
cs.LG stat.ML | null | 1306.2533 | null | null | http://arxiv.org/pdf/1306.2533v3 | 2017-02-17T13:37:25Z | 2013-06-11T14:13:46Z | DISCOMAX: A Proximity-Preserving Distance Correlation Maximization
Algorithm | In a regression setting we propose algorithms that reduce the dimensionality
of the features while simultaneously maximizing a statistical measure of
dependence known as distance correlation between the low-dimensional features
and a response variable. This helps in solving the prediction problem with a
low-dimensional set of features. Our setting is different from subset-selection
algorithms where the problem is to choose the best subset of features for
regression. Instead, we attempt to generate a new set of low-dimensional
features as in a feature-learning setting. We attempt to keep our proposed
approach model-free: our algorithm does not assume the application of any
specific regression model in conjunction with the low-dimensional features that
it learns. The algorithm is iterative and is formulated as a combination of the
majorization-minimization and concave-convex optimization procedures. We also
present spectral radius based convergence results for the proposed iterations.
| [
"Praneeth Vepakomma and Ahmed Elgammal",
"['Praneeth Vepakomma' 'Ahmed Elgammal']"
] |
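For reference, the dependence measure being maximized here: the standard empirical distance correlation estimator. This is the textbook statistic, not the paper's optimization algorithm.

```python
# Empirical distance correlation between samples X (n, dx) and Y (n, dy).
import numpy as np

def distance_correlation(X, Y):
    def centered(Z):
        D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
        return D - D.mean(0, keepdims=True) - D.mean(1, keepdims=True) + D.mean()
    A, B = centered(X), centered(Y)
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(max(dcov2, 0.0) / denom) if denom > 0 else 0.0

rng = np.random.default_rng(9)
x = rng.normal(size=(300, 1))
print(distance_correlation(x, x ** 2))  # detects the nonlinear dependence
```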
cs.LG cs.DS stat.ML | null | 1306.2547 | null | null | http://arxiv.org/pdf/1306.2547v3 | 2014-07-10T21:33:44Z | 2013-06-11T15:00:35Z | Efficient Classification for Metric Data | Recent advances in large-margin classification of data residing in general
metric spaces (rather than Hilbert spaces) enable classification under various
natural metrics, such as string edit and earthmover distance. A general
framework developed for this purpose by von Luxburg and Bousquet [JMLR, 2004]
left open the questions of computational efficiency and of providing direct
bounds on generalization error.
We design a new algorithm for classification in general metric spaces, whose
runtime and accuracy depend on the doubling dimension of the data points, and
can thus achieve superior classification performance in many common scenarios.
The algorithmic core of our approach is an approximate (rather than exact)
solution to the classical problems of Lipschitz extension and of Nearest
Neighbor Search. The algorithm's generalization performance is guaranteed via
the fat-shattering dimension of Lipschitz classifiers, and we present
experimental evidence of its superiority to some common kernel methods. As a
by-product, we offer a new perspective on the nearest neighbor classifier,
which yields significantly sharper risk asymptotics than the classic analysis
of Cover and Hart [IEEE Trans. Info. Theory, 1967].
| [
"['Lee-Ad Gottlieb' 'Aryeh Kontorovich' 'Robert Krauthgamer']",
"Lee-Ad Gottlieb and Aryeh Kontorovich and Robert Krauthgamer"
] |
cs.NI cs.IT cs.LG math.IT | null | 1306.2554 | null | null | http://arxiv.org/pdf/1306.2554v1 | 2013-06-11T15:31:25Z | 2013-06-11T15:31:25Z | The association problem in wireless networks: a Policy Gradient
Reinforcement Learning approach | The purpose of this paper is to develop a self-optimized association
algorithm based on PGRL (Policy Gradient Reinforcement Learning), which is
scalable, stable and robust. The term robust means that performance degradation
in the learning phase should be forbidden or limited to predefined thresholds.
The algorithm is model-free (as opposed to Value Iteration) and robust (as
opposed to Q-Learning). The association problem is modeled as a Markov Decision
Process (MDP). The policy space is parameterized. The parameterized family of
policies is then used as expert knowledge for the PGRL. The PGRL converges
towards a local optimum and the average cost decreases monotonically during the
learning process. The properties of the solution make it a good candidate for
practical implementation. Furthermore, the robustness property allows using
the PGRL algorithm in an "always-on" learning mode.
| [
"['Richard Combes' 'Ilham El Bouloumi' 'Stephane Senecal' 'Zwi Altman']",
"Richard Combes and Ilham El Bouloumi and Stephane Senecal and Zwi\n Altman"
] |
cs.LG stat.ML | null | 1306.2557 | null | null | http://arxiv.org/pdf/1306.2557v6 | 2020-01-24T16:44:09Z | 2013-06-11T15:42:00Z | Concentration bounds for temporal difference learning with linear
function approximation: The case of batch data and uniform sampling | We propose a stochastic approximation (SA) based method with randomization of
samples for policy evaluation using the least squares temporal difference
(LSTD) algorithm. Our proposed scheme is equivalent to running regular temporal
difference learning with linear function approximation, albeit with samples
picked uniformly from a given dataset. Our method results in an $O(d)$
improvement in complexity in comparison to LSTD, where $d$ is the dimension of
the data. We provide non-asymptotic bounds for our proposed method, both in
high probability and in expectation, under the assumption that the matrix
underlying the LSTD solution is positive definite. The latter assumption can be
easily satisfied for the pathwise LSTD variant proposed in [23]. Moreover, we
also establish that using our method in place of LSTD does not impact the rate
of convergence of the approximate value function to the true value function.
These rate results coupled with the low computational complexity of our method
make it attractive for implementation in big data settings, where $d$ is large.
A similar low-complexity alternative for least squares regression is well-known
as the stochastic gradient descent (SGD) algorithm. We provide finite-time
bounds for SGD. We demonstrate the practicality of our method as an efficient
alternative for pathwise LSTD empirically by combining it with the least
squares policy iteration (LSPI) algorithm in a traffic signal control
application. We also conduct another set of experiments that combines the SA
based low-complexity variant for least squares regression with the LinUCB
algorithm for contextual bandits, using the large scale news recommendation
dataset from Yahoo.
| [
"L.A. Prashanth, Nathaniel Korda and R\\'emi Munos",
"['L. A. Prashanth' 'Nathaniel Korda' 'Rémi Munos']"
] |
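A sketch of the core idea in this abstract: run O(d) linear TD(0) updates on transitions drawn uniformly with replacement from a fixed batch, instead of solving the LSTD system directly. The step-size schedule and the random features are assumptions, and convergence relies on the positive-definiteness condition the abstract mentions.

```python
# Hedged sketch: batch TD(0) with uniformly sampled transitions.
import numpy as np

rng = np.random.default_rng(6)
d, n, gamma = 4, 1000, 0.9
phi = rng.normal(size=(n, d))        # features of s_i
phi_next = rng.normal(size=(n, d))   # features of s'_i
r = rng.random(n)                    # rewards

theta = np.zeros(d)
for k in range(20000):
    i = rng.integers(n)              # uniform draw from the batch
    td_err = r[i] + gamma * phi_next[i] @ theta - phi[i] @ theta
    theta += td_err * phi[i] / (k + 1) ** 0.75   # O(d) work per update
```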
cs.LG cs.NA | null | 1306.2663 | null | null | http://arxiv.org/pdf/1306.2663v1 | 2013-06-11T21:39:56Z | 2013-06-11T21:39:56Z | Large Margin Low Rank Tensor Analysis | Other than vector representations, the direct objects of human cognition are
generally high-order tensors, such as 2D images and 3D textures. From this
fact, two interesting questions naturally arise: How does the human brain
represent these tensor perceptions in a "manifold" way, and how can they be
recognized on the "manifold"? In this paper, we present a supervised model to
learn the intrinsic structure of the tensors embedded in a high dimensional
Euclidean space. With the fixed point continuation procedures, our model
automatically and jointly discovers the optimal dimensionality and the
representations of the low dimensional embeddings. This makes it an effective
simulation of the cognitive process of the human brain. Furthermore, the
generalization of our model based on similarity between the learned low
dimensional embeddings can be viewed as a counterpart of recognition in the
human brain. Experiments on applications for object recognition and face recognition
demonstrate the superiority of our proposed model over state-of-the-art
approaches.
| [
"['Guoqiang Zhong' 'Mohamed Cheriet']",
"Guoqiang Zhong and Mohamed Cheriet"
] |
cs.IT cs.LG cs.SY math.IT math.OC stat.ML | null | 1306.2665 | null | null | http://arxiv.org/pdf/1306.2665v3 | 2013-08-10T01:14:46Z | 2013-06-11T21:57:47Z | Precisely Verifying the Null Space Conditions in Compressed Sensing: A
Sandwiching Algorithm | In this paper, we propose new efficient algorithms to verify the null space
condition in compressed sensing (CS). Given an $(n-m) \times n$ ($m>0$) CS
matrix $A$ and a positive $k$, we are interested in computing $\displaystyle
\alpha_k = \max_{\{z: Az=0, z\neq 0\}}\max_{\{K: |K|\leq k\}}
\frac{\|z_K\|_{1}}{\|z\|_{1}}$, where $K$ represents subsets of
$\{1,2,...,n\}$, and $|K|$ is the cardinality of $K$. In particular, we are
interested in finding the maximum $k$ such that $\alpha_k < \frac{1}{2}$.
However, computing $\alpha_k$ is
known to be extremely challenging. In this paper, we first propose a series of
new polynomial-time algorithms to compute upper bounds on $\alpha_k$. Based on
these new polynomial-time algorithms, we further design a new sandwiching
algorithm, to compute the \emph{exact} $\alpha_k$ with greatly reduced
complexity. When needed, this new sandwiching algorithm also achieves a smooth
tradeoff between computational complexity and result accuracy. Empirical
results show the performance improvements of our algorithm over existing known
methods; and our algorithm outputs precise values of $\alpha_k$, with much
lower complexity than exhaustive search.
| [
"Myung Cho and Weiyu Xu",
"['Myung Cho' 'Weiyu Xu']"
] |
math.OC cs.LG | null | 1306.2672 | null | null | http://arxiv.org/pdf/1306.2672v2 | 2014-09-20T12:59:58Z | 2013-06-11T22:42:21Z | R3MC: A Riemannian three-factor algorithm for low-rank matrix completion | We exploit the versatile framework of Riemannian optimization on quotient
manifolds to develop R3MC, a nonlinear conjugate-gradient method for low-rank
matrix completion. The underlying search space of fixed-rank matrices is
endowed with a novel Riemannian metric that is tailored to the least-squares
cost. Numerical comparisons suggest that R3MC robustly outperforms
state-of-the-art algorithms across different problem instances, especially
those that combine scarcely sampled and ill-conditioned data.
| [
"['B. Mishra' 'R. Sepulchre']",
"B. Mishra and R. Sepulchre"
] |
stat.ML cs.LG stat.CO | null | 1306.2685 | null | null | http://arxiv.org/pdf/1306.2685v3 | 2013-11-14T15:31:46Z | 2013-06-12T01:13:46Z | Flexible sampling of discrete data correlations without the marginal
distributions | Learning the joint dependence of discrete variables is a fundamental problem
in machine learning, with many applications including prediction, clustering
and dimensionality reduction. More recently, the framework of copula modeling
has gained popularity due to its modular parametrization of joint
distributions. Among other properties, copulas provide a recipe for combining
flexible models for univariate marginal distributions with parametric families
suitable for potentially high dimensional dependence structures. More
radically, the extended rank likelihood approach of Hoff (2007) bypasses
learning marginal models completely when such information is ancillary to the
learning task at hand as in, e.g., standard dimensionality reduction problems
or copula parameter estimation. The main idea is to represent data by their
observable rank statistics, ignoring any other information from the marginals.
Inference is typically done in a Bayesian framework with Gaussian copulas, and
it is complicated by the fact that this implies sampling within a space where the
number of constraints increases quadratically with the number of data points.
The result is slow mixing when using off-the-shelf Gibbs sampling. We present
an efficient algorithm based on recent advances on constrained Hamiltonian
Markov chain Monte Carlo that is simple to implement and does not require
paying for a quadratic cost in sample size.
| [
"['Alfredo Kalaitzis' 'Ricardo Silva']",
"Alfredo Kalaitzis and Ricardo Silva"
] |
cs.LG stat.ML | null | 1306.2733 | null | null | http://arxiv.org/pdf/1306.2733v2 | 2013-10-06T05:51:41Z | 2013-06-12T07:42:15Z | Copula Mixed-Membership Stochastic Blockmodel for Intra-Subgroup
Correlations | The \emph{Mixed-Membership Stochastic Blockmodel (MMSB)} is a popular
framework for modeling social network relationships. It can fully exploit each
individual node's participation (or membership) in a social structure. Despite
its powerful representations, this model makes an assumption that the
distributions of relational membership indicators between two nodes are
independent. Under many social network settings, however, it is possible that
certain known subgroups of people may have high or low correlations in terms of
their membership categories towards each other, and such prior information
should be incorporated into the model. To this end, we introduce a \emph{Copula
Mixed-Membership Stochastic Blockmodel (cMMSB)} where an individual Copula
function is employed to jointly model the membership pairs of those nodes
within the subgroup of interest. The model enables the use of various Copula
functions to suit the scenario, while maintaining the membership's marginal
distribution, as needed, for modeling membership indicators with other nodes
outside of the subgroup of interest. We describe the proposed model and its
inference algorithm in detail for both the finite and infinite cases. In the
experiment section, we compare our algorithms with other popular models in
terms of link prediction, using both synthetic and real world data.
| [
"['Xuhui Fan' 'Longbing Cao' 'Richard Yi Da Xu']",
"Xuhui Fan, Longbing Cao, Richard Yi Da Xu"
] |
cs.LG stat.ML | null | 1306.2759 | null | null | http://arxiv.org/pdf/1306.2759v1 | 2013-06-12T08:57:35Z | 2013-06-12T08:57:35Z | Horizontal and Vertical Ensemble with Deep Representation for
Classification | Representation learning, especially which by using deep learning, has been
widely applied in classification. However, how to use a limited amount of
labeled data to achieve good classification performance with deep neural
networks, and how the learned features can further improve classification,
remain unclear. In this paper, we propose the Horizontal Voting, Vertical
Voting and Horizontal Stacked Ensemble methods to improve the classification
performance of deep
neural networks. In the ICML 2013 Black Box Challenge, via using these methods
independently, Bing Xu achieved 3rd in public leaderboard, and 7th in private
leaderboard; Jingjing Xie achieved 4th in public leaderboard, and 5th in
private leaderboard.
| [
"Jingjing Xie, Bing Xu, Zhang Chuang",
"['Jingjing Xie' 'Bing Xu' 'Zhang Chuang']"
] |
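A minimal sketch of horizontal voting as described in this abstract: average the class probabilities of the models saved over a contiguous range of training epochs. The saved-model list and predict_proba interface are assumptions.

```python
# Hedged sketch of horizontal voting over snapshots from consecutive epochs.
import numpy as np

def horizontal_vote(epoch_snapshots, X):
    """epoch_snapshots: fitted models (e.g. epochs 91-100 of one network)."""
    probs = np.mean([m.predict_proba(X) for m in epoch_snapshots], axis=0)
    return probs.argmax(axis=1)
```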
cs.NE cs.LG stat.ML | null | 1306.2801 | null | null | http://arxiv.org/pdf/1306.2801v4 | 2013-08-18T21:39:12Z | 2013-06-12T12:38:40Z | Understanding Dropout: Training Multi-Layer Perceptrons with Auxiliary
Independent Stochastic Neurons | In this paper, a simple, general method of adding auxiliary stochastic
neurons to a multi-layer perceptron is proposed. It is shown that the proposed
method is a generalization of recently successful methods of dropout (Hinton et
al., 2012), explicit noise injection (Vincent et al., 2010; Bishop, 1995) and
semantic hashing (Salakhutdinov & Hinton, 2009). The proposed framework
naturally yields an extension of dropout that allows separate dropping
probabilities for different hidden neurons or layers. The use of separate
dropping probabilities for different hidden layers is empirically
investigated.
| [
"['Kyunghyun Cho']",
"Kyunghyun Cho"
] |
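A small sketch of the extension discussed in this abstract: inverted dropout with a separate retention probability per hidden layer. Layer sizes and probabilities are illustrative.

```python
# Hedged sketch: forward pass with per-layer dropout retention probabilities.
import numpy as np

rng = np.random.default_rng(3)

def forward(x, weights, keep_probs, train=True):
    """weights: list of (W, b); keep_probs: one retention prob per hidden layer."""
    h = x
    for (W, b), p in zip(weights[:-1], keep_probs):
        h = np.maximum(0.0, h @ W + b)                      # ReLU layer
        if train:
            h *= rng.binomial(1, p, size=h.shape) / p       # inverted dropout
    W, b = weights[-1]
    return h @ W + b
```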
stat.ML cs.LG cs.SY | null | 1306.2861 | null | null | http://arxiv.org/pdf/1306.2861v2 | 2013-12-17T16:10:24Z | 2013-06-12T15:20:28Z | Bayesian Inference and Learning in Gaussian Process State-Space Models
with Particle MCMC | State-space models are successfully used in many areas of science,
engineering and economics to model time series and dynamical systems. We
present a fully Bayesian approach to inference \emph{and learning} (i.e. state
estimation and system identification) in nonlinear nonparametric state-space
models. We place a Gaussian process prior over the state transition dynamics,
resulting in a flexible model able to capture complex dynamical phenomena. To
enable efficient inference, we marginalize over the transition dynamics
function and infer directly the joint smoothing distribution using specially
tailored Particle Markov Chain Monte Carlo samplers. Once a sample from the
smoothing distribution is computed, the state transition predictive
distribution can be formulated analytically. Our approach preserves the full
nonparametric expressivity of the model and can make use of sparse Gaussian
processes to greatly reduce computational complexity.
| [
"Roger Frigola, Fredrik Lindsten, Thomas B. Sch\\\"on, Carl E. Rasmussen",
"['Roger Frigola' 'Fredrik Lindsten' 'Thomas B. Schön' 'Carl E. Rasmussen']"
] |
cs.LG cs.SD stat.ML | null | 1306.2906 | null | null | http://arxiv.org/pdf/1306.2906v1 | 2013-06-12T17:32:02Z | 2013-06-12T17:32:02Z | Robust Support Vector Machines for Speaker Verification Task | An important step in speaker verification is extracting features that best
characterize the speaker's voice. This paper investigates a front-end
processing scheme that aims at improving the performance of speaker
verification based on the SVM classifier, in text-independent mode. This
approach combines features
based on conventional Mel-cepstral Coefficients (MFCCs) and Line Spectral
Frequencies (LSFs) to constitute robust multivariate feature vectors. To reduce
the high dimensionality required for training these feature vectors, we use a
dimension reduction method called principal component analysis (PCA). In order
to evaluate the robustness of these systems, different noisy environments have
been used. The obtained results on the TIMIT database showed that using the
paradigm that combines these spectral cues leads to a significant improvement
in verification accuracy, especially with PCA reduction in low
signal-to-noise-ratio noisy environments.
| [
"Kawthar Yasmine Zergat, Abderrahmane Amrouche",
"['Kawthar Yasmine Zergat' 'Abderrahmane Amrouche']"
] |
cs.GT cs.LG math.PR | null | 1306.2918 | null | null | http://arxiv.org/pdf/1306.2918v1 | 2013-06-12T18:37:10Z | 2013-06-12T18:37:10Z | Reinforcement learning with restrictions on the action set | Consider a 2-player normal-form game repeated over time. We introduce an
adaptive learning procedure, where the players only observe their own realized
payoff at each stage. We assume that agents do not know their own payoff
function, and have no information on the other player. Furthermore, we assume
that they have restrictions on their own action set such that, at each stage,
their choice is limited to a subset of their action set. We prove that the
empirical distributions of play converge to the set of Nash equilibria for
zero-sum and potential games, and games where one player has two actions.
| [
"Mario Bravo (ISCI), Mathieu Faure (AMSE)",
"['Mario Bravo' 'Mathieu Faure']"
] |
stat.ML cs.IT cs.LG math.IT | null | 1306.2979 | null | null | http://arxiv.org/pdf/1306.2979v4 | 2014-07-21T09:48:19Z | 2013-06-12T21:26:00Z | Completing Any Low-rank Matrix, Provably | Matrix completion, i.e., the exact and provable recovery of a low-rank matrix
from a small subset of its elements, is currently only known to be possible if
the matrix satisfies a restrictive structural constraint---known as {\em
incoherence}---on its row and column spaces. In these cases, the subset of
elements is sampled uniformly at random.
In this paper, we show that {\em any} rank-$ r $ $ n$-by-$ n $ matrix can be
exactly recovered from as few as $O(nr \log^2 n)$ randomly chosen elements,
provided this random choice is made according to a {\em specific biased
distribution}: the probability of any element being sampled should be
proportional to the sum of the leverage scores of the corresponding row, and
column. Perhaps equally important, we show that this specific form of sampling
is nearly necessary, in a natural precise sense; this implies that other
perhaps more intuitive sampling schemes fail.
We further establish three ways to use the above result for the setting when
leverage scores are not known \textit{a priori}: (a) a sampling strategy for
the case when only one of the row or column spaces is incoherent, (b) a
two-phase sampling procedure for general matrices that first samples to
estimate leverage scores followed by sampling for exact recovery, and (c) an
analysis showing the advantages of weighted nuclear/trace-norm minimization
over the vanilla un-weighted formulation for the case of non-uniform sampling.
| [
"['Yudong Chen' 'Srinadh Bhojanapalli' 'Sujay Sanghavi' 'Rachel Ward']",
"Yudong Chen, Srinadh Bhojanapalli, Sujay Sanghavi, Rachel Ward"
] |
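A sketch of the biased sampling distribution this abstract describes: entry (i, j) is kept with probability proportional to the sum of the i-th row and j-th column leverage scores. The synthetic matrix and the sampling-budget constant are assumptions.

```python
# Hedged sketch of leverage-score-biased sampling of matrix entries.
import numpy as np

rng = np.random.default_rng(4)
n, r = 100, 3
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))  # rank-r target

U, _, Vt = np.linalg.svd(M)
mu = np.sum(U[:, :r] ** 2, axis=1)    # row leverage scores
nu = np.sum(Vt[:r, :] ** 2, axis=0)   # column leverage scores

P = mu[:, None] + nu[None, :]         # proportional to row + column leverage
P /= P.sum()
budget = 2 * n * r * np.log(n) ** 2   # ~ O(nr log^2 n) expected samples
mask = rng.random((n, n)) < np.minimum(1.0, budget * P)
print(mask.sum(), "observed entries")
```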
cs.SI cs.LG stat.ML | null | 1306.2999 | null | null | http://arxiv.org/pdf/1306.2999v1 | 2013-06-13T00:42:19Z | 2013-06-13T00:42:19Z | Dynamic Infinite Mixed-Membership Stochastic Blockmodel | Directional and pairwise measurements are often used to model
inter-relationships in a social network setting. The Mixed-Membership
Stochastic Blockmodel (MMSB) was a seminal work in this area, and many of its
capabilities were extended since then. In this paper, we propose the
\emph{Dynamic Infinite Mixed-Membership stochastic blockModel (DIM3)}, a
generalised framework that extends the existing work to a potentially infinite
number of communities and mixture memberships for each of the network's nodes.
This model is in a dynamic setting, where additional model parameters are
introduced to reflect the degree of persistence between one's memberships at
consecutive times. Accordingly, two effective posterior sampling strategies and
their results are presented using both synthetic and real data.
| [
"['Xuhui Fan' 'Longbing Cao' 'Richard Yi Da Xu']",
"Xuhui Fan, Longbing Cao, Richard Yi Da Xu"
] |
stat.ML cs.LG | null | 1306.3002 | null | null | http://arxiv.org/pdf/1306.3002v1 | 2013-06-13T01:00:21Z | 2013-06-13T01:00:21Z | A Convergence Theorem for the Graph Shift-type Algorithms | Graph Shift (GS) algorithms are recently focused as a promising approach for
discovering dense subgraphs in noisy data. However, there are no theoretical
foundations for proving the convergence of the GS Algorithm. In this paper, we
propose a generic theoretical framework consisting of three key GS components:
simplex of generated sequence set, monotonic and continuous objective function
and closed mapping. We prove that GS algorithms with such components can be
transformed to fit the Zangwill's convergence theorem, and the sequence set
generated by the GS procedures always terminates at a local maximum, or at
worst, contains a subsequence which converges to a local maximum of the
similarity measure function. The framework is verified by extending it to other
GS-type algorithms and by experimental results.
| [
"['Xuhui Fan' 'Longbing Cao']",
"Xuhui Fan, Longbing Cao"
] |
cs.LG cs.CV stat.ML | null | 1306.3003 | null | null | http://arxiv.org/pdf/1306.3003v1 | 2013-06-13T01:20:50Z | 2013-06-13T01:20:50Z | Non-parametric Power-law Data Clustering | It has always been a great challenge for clustering algorithms to
automatically determine the cluster numbers according to the distribution of
datasets. Several approaches have been proposed to address this issue,
including the recent promising work which incorporate Bayesian Nonparametrics
into the $k$-means clustering procedure. This approach shows simplicity in
implementation and solidity in theory, while it also provides a feasible way to
inference in large-scale datasets. However, several problems remain unsolved
in this pioneering work, including applicability to power-law data, a
mechanism for merging centers to avoid over-fitting, and the clustering order
problem. To address these issues, the Pitman-Yor Process based k-means (namely
\emph{pyp-means}) is proposed in this paper. Taking advantage of the Pitman-Yor
Process, \emph{pyp-means} treats clusters differently by dynamically and
adaptively changing the threshold to guarantee the generation of power-law
clustering results. Also, one center agglomeration procedure is integrated into
the implementation to be able to merge small but close clusters and then
adaptively determine the cluster number. With more discussion on the clustering
order, the convergence proof, complexity analysis and extension to spectral
clustering, our approach is compared with traditional clustering algorithm and
variational inference methods. The advantages and properties of pyp-means are
validated by experiments on both synthetic datasets and real world datasets.
| [
"Xuhui Fan, Yiling Zeng, Longbing Cao",
"['Xuhui Fan' 'Yiling Zeng' 'Longbing Cao']"
] |
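For context, the Bayesian-nonparametric k-means that this abstract builds on, in a DP-means-style form: a point opens a new cluster when it is farther than a threshold lam from every center. pyp-means adapts this threshold per cluster via the Pitman-Yor discount, which this hedged sketch does not reproduce.

```python
# Hedged DP-means-style sketch; lam and the data are assumptions, and the
# cluster-dependent Pitman-Yor threshold of pyp-means is not shown.
import numpy as np

def dp_means(X, lam, n_iter=10):
    centers = [X.mean(axis=0)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        for i, x in enumerate(X):                     # assignment pass
            d2 = [np.sum((x - c) ** 2) for c in centers]
            if min(d2) > lam:                         # far from every center:
                centers.append(x.copy())              # open a new cluster
                labels[i] = len(centers) - 1
            else:
                labels[i] = int(np.argmin(d2))
        centers = [X[labels == k].mean(axis=0) if np.any(labels == k) else c
                   for k, c in enumerate(centers)]    # update pass
    return np.asarray(centers), labels

X = np.random.default_rng(8).normal(size=(200, 2))
centers, labels = dp_means(X, lam=4.0)
print(len(centers), "clusters")
```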
cs.LG cs.CE stat.ML | null | 1306.3058 | null | null | http://arxiv.org/pdf/1306.3058v1 | 2013-06-13T09:05:08Z | 2013-06-13T09:05:08Z | Physeter catodon localization by sparse coding | This paper presents a sperm whale localization architecture that jointly uses
a bag-of-features (BoF) approach and a machine learning framework. BoF methods
are known, especially in computer vision, to produce from a collection of
local features a global representation invariant to principal signal
transformations. Our idea is to regress, in a supervised fashion, two rough
estimates of the distance and azimuth from these local features, thanks to
datasets where both acoustic events and ground-truth positions are now
available. Furthermore, these estimates can feed a particle filter system in
order to obtain a precise sperm whale position even in a mono-hydrophone
configuration. Anti-collision systems and whale watching are considered
applications of this work.
| [
"['Sébastien Paris' 'Yann Doh' 'Hervé Glotin' 'Xanadu Halkias'\n 'Joseph Razik']",
"S\\'ebastien Paris and Yann Doh and Herv\\'e Glotin and Xanadu Halkias\n and Joseph Razik"
] |
cs.LG | null | 1306.3108 | null | null | http://arxiv.org/pdf/1306.3108v2 | 2013-08-29T15:38:27Z | 2013-06-13T13:47:51Z | Guaranteed Classification via Regularized Similarity Learning | Learning an appropriate (dis)similarity function from the available data is a
central problem in machine learning, since the success of many machine learning
algorithms critically depends on the choice of a similarity function to compare
examples. Although many approaches for similarity metric learning have been
proposed, there is little theoretical study on the links between similarity
metric learning and the classification performance of the resulting classifier.
In this paper, we propose a regularized similarity learning formulation
associated with general matrix-norms, and establish their generalization
bounds. We show that the generalization error of the resulting linear separator
can be bounded by the derived generalization bound of similarity learning. This
shows that a good generalization of the learnt similarity function guarantees
a good classification of the resulting linear classifier. Our results extend
and improve those obtained by Bellet et al. [3]. Because the techniques there
depend on the notion of uniform stability [6], the bound obtained holds true
only for Frobenius matrix-norm regularization. Our techniques, based on the
Rademacher complexity [5] and its related Khinchin-type inequality, enable us
to establish bounds for regularized similarity learning formulations
associated with general matrix-norms, including the sparse $L_1$-norm and the
mixed $(2,1)$-norm.
| [
"Zheng-Chu Guo and Yiming Ying",
"['Zheng-Chu Guo' 'Yiming Ying']"
] |
stat.ML cs.LG | 10.1016/j.neunet.2014.02.002 | 1306.3161 | null | null | http://arxiv.org/abs/1306.3161v2 | 2014-03-02T13:57:55Z | 2013-06-13T16:36:07Z | Learning Using Privileged Information: SVM+ and Weighted SVM | Prior knowledge can be used to improve predictive performance of learning
algorithms or reduce the amount of data required for training. The same goal is
pursued within the learning using privileged information paradigm which was
recently introduced by Vapnik et al. and is aimed at utilizing additional
information available only at training time -- a framework implemented by SVM+.
We relate the privileged information to importance weighting and show that the
prior knowledge expressible with privileged features can also be encoded by
weights associated with every training example. We show that a weighted SVM can
always replicate an SVM+ solution, while the converse is not true and we
construct a counterexample highlighting the limitations of SVM+. Finally, we
touch on the problem of choosing weights for weighted SVMs when privileged
features are not available.
| [
"Maksim Lapin, Matthias Hein, Bernt Schiele",
"['Maksim Lapin' 'Matthias Hein' 'Bernt Schiele']"
] |
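The weighted-SVM side of the equivalence in this abstract is directly expressible with standard tooling. The per-example weights below are placeholders; in the paper's setting they would be derived from the privileged features.

```python
# Hedged sketch of a weighted SVM; the weights below are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)
w = np.ones(len(y))
w[:50] = 2.0                          # assumption: up-weight some examples

clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y, sample_weight=w)        # per-example cost C_i = C * w_i
```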
cs.CV cs.LG stat.ML | null | 1306.3162 | null | null | http://arxiv.org/pdf/1306.3162v3 | 2014-02-10T11:19:23Z | 2013-06-13T16:46:03Z | Learning to encode motion using spatio-temporal synchrony | We consider the task of learning to extract motion from videos. To this end,
we show that the detection of spatial transformations can be viewed as the
detection of synchrony between the image sequence and a sequence of features
undergoing the motion we wish to detect. We show that learning about synchrony
is possible using very fast, local learning rules, by introducing
multiplicative "gating" interactions between hidden units across frames. This
makes it possible to achieve competitive performance in a wide variety of
motion estimation tasks, using a small fraction of the time required to learn
features, and to outperform hand-crafted spatio-temporal features by a large
margin. We also show how learning about synchrony can be viewed as performing
greedy parameter estimation in the well-known motion energy model.
| [
"['Kishore Reddy Konda' 'Roland Memisevic' 'Vincent Michalski']",
"Kishore Reddy Konda, Roland Memisevic, Vincent Michalski"
] |
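A sketch of the multiplicative gating interaction this abstract refers to: hidden units respond to synchrony between two frames by multiplying their filter responses. The random filters stand in for learned ones.

```python
# Hedged sketch of gated (cross-correlation) synchrony features.
import numpy as np

rng = np.random.default_rng(7)
D, F = 256, 64                       # pixels per frame, filter pairs
U = rng.normal(size=(F, D))          # filters applied to frame t
V = rng.normal(size=(F, D))          # filters applied to frame t+1

def synchrony_features(x_t, x_next):
    """Large where the two frames' filter responses co-occur."""
    return (U @ x_t) * (V @ x_next)  # elementwise product = gating
```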
stat.ME cs.IT cs.LG math.IT | null | 1306.3171 | null | null | http://arxiv.org/pdf/1306.3171v2 | 2014-04-02T00:29:37Z | 2013-06-13T17:19:39Z | Confidence Intervals and Hypothesis Testing for High-Dimensional
Regression | Fitting high-dimensional statistical models often requires the use of
non-linear parameter estimation procedures. As a consequence, it is generally
impossible to obtain an exact characterization of the probability distribution
of the parameter estimates. This in turn implies that it is extremely
challenging to quantify the \emph{uncertainty} associated with a certain
parameter estimate. Concretely, no commonly accepted procedure exists for
computing classical measures of uncertainty and statistical significance, such
as confidence intervals or $p$-values, for these models.
We consider here the high-dimensional linear regression problem, and propose an
efficient algorithm for constructing confidence intervals and $p$-values. The
resulting confidence intervals have nearly optimal size. When testing for the
null hypothesis that a certain parameter is vanishing, our method has nearly
optimal power.
Our approach is based on constructing a `de-biased' version of regularized
M-estimators. The new construction improves over recent work in the field in
that it does not assume a special structure on the design matrix. We test our
method on synthetic data and a high-throughput genomic data set about
riboflavin production rate.
| [
"['Adel Javanmard' 'Andrea Montanari']",
"Adel Javanmard and Andrea Montanari"
] |
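A minimal sketch of the de-biasing construction this abstract refers to, in the easy regime where (X^T X / n) is invertible; the general method constructs the decorrelating matrix M differently, so this is purely illustrative.

```python
# Hedged sketch of a de-biased lasso point estimate.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n, p = 500, 20
theta = np.zeros(p); theta[:3] = 1.0
X = rng.normal(size=(n, p))
y = X @ theta + rng.normal(size=n)

lasso = Lasso(alpha=0.1).fit(X, y)
M = np.linalg.inv(X.T @ X / n)            # assumption: well-conditioned design
theta_d = lasso.coef_ + M @ X.T @ (y - X @ lasso.coef_) / n
print(np.round(theta_d[:5], 2))           # approximately unbiased for theta
```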
math.OC cs.LG stat.ML | null | 1306.3203 | null | null | http://arxiv.org/pdf/1306.3203v3 | 2014-07-08T03:55:36Z | 2013-06-13T19:22:16Z | Bregman Alternating Direction Method of Multipliers | The mirror descent algorithm (MDA) generalizes gradient descent by using a
Bregman divergence to replace squared Euclidean distance. In this paper, we
similarly generalize the alternating direction method of multipliers (ADMM) to
Bregman ADMM (BADMM), which allows the choice of different Bregman divergences
to exploit the structure of problems. BADMM provides a unified framework for
ADMM and its variants, including generalized ADMM, inexact ADMM and Bethe ADMM.
We establish the global convergence and the $O(1/T)$ iteration complexity for
BADMM. In some cases, BADMM can be faster than ADMM by a factor of
$O(n/\log(n))$. In solving the linear program of the mass transportation problem,
BADMM leads to massive parallelism and can easily run on GPU. BADMM is several
times faster than highly optimized commercial software Gurobi.
| [
"['Huahua Wang' 'Arindam Banerjee']",
"Huahua Wang and Arindam Banerjee"
] |
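For reference, the generalization in this abstract can be written compactly (notation assumed, constraint Ax + Bz = c): standard ADMM's quadratic penalty is the Bregman divergence of phi = (1/2)||.||^2, and BADMM allows a general Bregman divergence in its place.

```latex
% Standard ADMM's x-update penalty \tfrac{\rho}{2}\|Ax + Bz_t - c\|_2^2 equals
% \rho\,B_\phi(c - Bz_t, Ax) for \phi = \tfrac{1}{2}\|\cdot\|_2^2; BADMM
% substitutes a general Bregman divergence B_\phi (a sketch, not the paper's
% exact statement):
x_{t+1} = \operatorname*{arg\,min}_{x}\; f(x)
        + \langle y_t,\, Ax + Bz_t - c\rangle
        + \rho\, B_{\phi}\!\left(c - Bz_t,\; Ax\right)
```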
cs.LG stat.ML | null | 1306.3212 | null | null | http://arxiv.org/pdf/1306.3212v1 | 2013-06-13T19:51:59Z | 2013-06-13T19:51:59Z | Sparse Inverse Covariance Matrix Estimation Using Quadratic
Approximation | The L1-regularized Gaussian maximum likelihood estimator (MLE) has been shown
to have strong statistical guarantees in recovering a sparse inverse covariance
matrix, or alternatively the underlying graph structure of a Gaussian Markov
Random Field, from very limited samples. We propose a novel algorithm for
solving the resulting optimization problem which is a regularized
log-determinant program. In contrast to recent state-of-the-art methods that
largely use first order gradient information, our algorithm is based on
Newton's method and employs a quadratic approximation, but with some
modifications that leverage the structure of the sparse Gaussian MLE problem.
We show that our method is superlinearly convergent, and present experimental
results using synthetic and real-world application data that demonstrate the
considerable improvements in performance of our method when compared to other
state-of-the-art methods.
| [
"['Cho-Jui Hsieh' 'Matyas A. Sustik' 'Inderjit S. Dhillon'\n 'Pradeep Ravikumar']",
"Cho-Jui Hsieh, Matyas A. Sustik, Inderjit S. Dhillon and Pradeep\n Ravikumar"
] |
cs.LG cs.NA stat.ML | null | 1306.3343 | null | null | http://arxiv.org/pdf/1306.3343v3 | 2014-02-12T09:27:57Z | 2013-06-14T09:10:00Z | Relaxed Sparse Eigenvalue Conditions for Sparse Estimation via
Non-convex Regularized Regression | Non-convex regularizers usually improve the performance of sparse estimation
in practice. To prove this fact, we study the conditions of sparse estimation
for sharp concave regularizers, a general family of non-convex regularizers
that includes many existing ones. For the global solutions of
the regularized regression, our sparse-eigenvalue-based conditions are weaker
than those of L1-regularization for parameter estimation and sparseness
estimation. For the approximate global and approximate stationary (AGAS)
solutions, almost the same conditions are also enough. We show that the desired
AGAS solutions can be obtained by coordinate descent (CD) based methods.
Finally, we perform some experiments to show the performance of CD methods on
giving AGAS solutions and the degree of weakness of the estimation conditions
required by the sharp concave regularizers.
| [
"['Zheng Pan' 'Changshui Zhang']",
"Zheng Pan, Changshui Zhang"
] |
stat.ML cs.LG math.OC | null | 1306.3409 | null | null | http://arxiv.org/pdf/1306.3409v1 | 2013-06-14T14:20:29Z | 2013-06-14T14:20:29Z | Constrained fractional set programs and their application in local
clustering and community detection | The (constrained) minimization of a ratio of set functions is a problem
frequently occurring in clustering and community detection. As these
optimization problems are typically NP-hard, one uses convex or spectral
relaxations in practice. While these relaxations can be solved globally
optimally, they are often too loose and thus lead to results far away from the
optimum. In this paper we show that every constrained minimization problem of a
ratio of non-negative set functions allows a tight relaxation into an
unconstrained continuous optimization problem. This result leads to a flexible
framework for solving constrained problems in network analysis. While a
globally optimal solution for the resulting non-convex problem cannot be
guaranteed, we outperform the loose convex or spectral relaxations by a large
margin on constrained local clustering problems.
| [
"['Thomas Bühler' 'Syama Sundar Rangapuram' 'Simon Setzer' 'Matthias Hein']",
"Thomas B\\\"uhler, Syama Sundar Rangapuram, Simon Setzer, Matthias Hein"
] |
cs.LG cs.HC stat.ML | null | 1306.3474 | null | null | http://arxiv.org/pdf/1306.3474v1 | 2013-06-14T18:24:19Z | 2013-06-14T18:24:19Z | Classifying Single-Trial EEG during Motor Imagery with a Small Training
Set | Before the operation of a motor-imagery-based brain-computer interface (BCI)
adopting machine learning techniques, a cumbersome training procedure is
unavoidable. The development of a practical BCI poses the challenge of
classifying single-trial EEG with a small training set. In this letter, we
addressed this problem by employing a series of signal processing and machine
learning approaches to alleviate overfitting and obtained test accuracy similar
to training accuracy on the datasets from BCI Competition III and our own
experiments.
| [
"['Yijun Wang']",
"Yijun Wang"
] |
cs.CV cs.LG stat.ML | null | 1306.3476 | null | null | http://arxiv.org/pdf/1306.3476v1 | 2013-06-14T18:28:52Z | 2013-06-14T18:28:52Z | Hyperparameter Optimization and Boosting for Classifying Facial
Expressions: How good can a "Null" Model be? | One of the goals of the ICML workshop on representation and learning is to
establish benchmark scores for a new data set of labeled facial expressions.
This paper presents the performance of a "Null" model consisting of
convolutions with random weights, PCA, pooling, normalization, and a linear
readout. Our approach focused on hyperparameter optimization rather than novel
model components. On the Facial Expression Recognition Challenge held by the
Kaggle website, our hyperparameter optimization approach achieved a score of
60% accuracy on the test data. This paper also introduces a new ensemble
construction variant that combines hyperparameter optimization with the
construction of ensembles. This algorithm constructed an ensemble of four
models that scored 65.5% accuracy. These scores rank 12th and 5th respectively
among the 56 challenge participants. It is worth noting that our approach was
developed prior to the release of the data set, and applied without
modification; our strong competition performance suggests that the TPE
hyperparameter optimization algorithm and domain expertise encoded in our Null
model can generalize to new image classification data sets.
| [
"James Bergstra and David D. Cox",
"['James Bergstra' 'David D. Cox']"
] |
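The hyperparameter-optimization loop in this abstract maps naturally onto the hyperopt library's TPE implementation; the toy objective and search space below are assumptions standing in for the paper's convolutional "Null" model.

```python
# Hedged sketch of TPE search with hyperopt; objective/space are toys.
from hyperopt import fmin, hp, tpe

def objective(params):
    # stand-in for: train the model with these hyperparameters and
    # return validation error
    return (params["pool_size"] - 3) ** 2 + (params["n_filters"] - 64) ** 2 / 1e4

space = {
    "pool_size": hp.quniform("pool_size", 2, 5, 1),
    "n_filters": hp.quniform("n_filters", 16, 256, 16),
}
best = fmin(objective, space, algo=tpe.suggest, max_evals=50)
print(best)
```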
cs.DS cs.LG | null | 1306.3525 | null | null | http://arxiv.org/pdf/1306.3525v2 | 2013-07-17T19:16:47Z | 2013-06-14T22:24:29Z | Approximation Algorithms for Bayesian Multi-Armed Bandit Problems | In this paper, we consider several finite-horizon Bayesian multi-armed bandit
problems with side constraints which are computationally intractable (NP-Hard)
and for which no optimal (or near optimal) algorithms are known to exist with
sub-exponential running time. All of these problems violate the standard
exchange property, which assumes that the reward from the play of an arm is not
contingent upon when the arm is played. Not only are index policies suboptimal
in these contexts, there has been little analysis of such policies in these
problem settings. We show that if we consider near-optimal policies, in the
sense of approximation algorithms, then there exist (near) index policies.
Conceptually, if we can find policies that satisfy an approximate version of
the exchange property, namely, that the reward from the play of an arm depends
on when the arm is played to within a constant factor, then we have an avenue
towards solving these problems. However, such an approximate version of the
idling bandit property does not hold on a per-play basis and is instead shown
to hold in a global sense. Clearly, such a property is not necessarily true of
arbitrary single arm policies and finding such single arm policies is
nontrivial. We show that by restricting the state spaces of arms we can find
single arm policies and that these single arm policies can be combined into
global (near) index policies where the approximate version of the exchange
property is true in expectation. The number of different bandit problems that
can be addressed by this technique already demonstrates its wide applicability.
| [
"Sudipto Guha and Kamesh Munagala",
"['Sudipto Guha' 'Kamesh Munagala']"
] |
cs.LG cs.DB stat.ML | null | 1306.3558 | null | null | http://arxiv.org/pdf/1306.3558v1 | 2013-06-15T08:52:46Z | 2013-06-15T08:52:46Z | Outlying Property Detection with Numerical Attributes | The outlying property detection problem is the problem of discovering the
properties distinguishing a given object, known in advance to be an outlier in
a database, from the other database objects. In this paper, we analyze the
problem within a context where numerical attributes are taken into account,
which represents a relevant case left open in the literature. We introduce a
measure to quantify the degree of outlierness of an object, which is
associated with the relative likelihood of its value compared to the relative
likelihood of other objects in the database. As a major contribution,
we present an efficient algorithm to compute the outlierness relative to
significant subsets of the data. The latter subsets are characterized in a
"rule-based" fashion, and hence form the basis for the underlying explanation
of the outlierness.
| [
"['Fabrizio Angiulli' 'Fabio Fassetti' 'Luigi Palopoli' 'Giuseppe Manco']",
"Fabrizio Angiulli and Fabio Fassetti and Luigi Palopoli and Giuseppe\n Manco"
] |
cs.LG math.OC | null | 1306.3721 | null | null | http://arxiv.org/pdf/1306.3721v2 | 2013-07-10T18:36:18Z | 2013-06-17T01:27:10Z | Online Alternating Direction Method (longer version) | Online optimization has emerged as powerful tool in large scale optimization.
In this pa- per, we introduce efficient online optimization algorithms based on
the alternating direction method (ADM), which can solve online convex
optimization under linear constraints where the objective could be non-smooth.
We introduce new proof techniques for ADM in the batch setting, which yields a
O(1/T) convergence rate for ADM and forms the basis for regret anal- ysis in
the online setting. We consider two scenarios in the online setting, based on
whether an additional Bregman divergence is needed or not. In both settings, we
establish regret bounds for both the objective function as well as constraints
violation for general and strongly convex functions. We also consider inexact
ADM updates where certain terms are linearized to yield efficient updates and
show the stochastic convergence rates. In addition, we briefly discuss how
online ADM can be used as a projection-free online learning algorithm in some
scenarios. Preliminary results are presented to illustrate the performance of
the proposed algorithms.
| [
"['Huahua Wang' 'Arindam Banerjee']",
"Huahua Wang and Arindam Banerjee"
] |
cs.LG stat.ML | null | 1306.3729 | null | null | http://arxiv.org/pdf/1306.3729v1 | 2013-06-17T03:02:05Z | 2013-06-17T03:02:05Z | Spectral Experts for Estimating Mixtures of Linear Regressions | Discriminative latent-variable models are typically learned using EM or
gradient-based optimization, which suffer from local optima. In this paper, we
develop a new computationally efficient and provably consistent estimator for a
mixture of linear regressions, a simple instance of a discriminative
latent-variable model. Our approach relies on a low-rank linear regression to
recover a symmetric tensor, which can be factorized into the parameters using a
tensor power method. We prove rates of convergence for our estimator and
provide an empirical evaluation illustrating its strengths relative to local
optimization (EM).
| [
"['Arun Tejasvi Chaganty' 'Percy Liang']",
"Arun Tejasvi Chaganty and Percy Liang"
] |
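A toy sketch of the tensor power method step named in the abstract above, for a symmetric third-order tensor; the low-rank regression that recovers the tensor from data, and the paper's precise estimator, are not shown. All data below are synthetic.

```python
# Toy sketch of the power iteration used to factorize a symmetric 3rd-order
# tensor T = sum_k w_k * a_k (x) a_k (x) a_k; the regression step that
# estimates T from data in the paper is not shown.
import numpy as np

def tensor_power_iteration(T, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.normal(size=T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        v = np.einsum('ijk,j,k->i', T, v, v)  # the map v -> T(I, v, v)
        v /= np.linalg.norm(v)
    lam = np.einsum('ijk,i,j,k->', T, v, v, v)  # eigenvalue T(v, v, v)
    return lam, v

# Build a rank-2 symmetric tensor and recover its top component.
a = np.array([1.0, 0.0, 0.0]); b = np.array([0.0, 1.0, 0.0])
T = 3.0 * np.einsum('i,j,k->ijk', a, a, a) + 1.0 * np.einsum('i,j,k->ijk', b, b, b)
lam, v = tensor_power_iteration(T)
print(lam, v)  # approximately 3.0 and +/- a
```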
cs.LG cs.HC | null | 1306.3860 | null | null | http://arxiv.org/pdf/1306.3860v1 | 2013-06-17T13:57:00Z | 2013-06-17T13:57:00Z | Cluster coloring of the Self-Organizing Map: An information
visualization perspective | This paper takes an information visualization perspective to visual
representations in the general SOM paradigm. This involves viewing SOM-based
visualizations through the eyes of Bertin's and Tufte's theories on data
graphics. The regular grid shape of the Self-Organizing Map (SOM), while being
a virtue for linking visualizations to it, restricts representation of cluster
structures. From the viewpoint of information visualization, this paper
provides a general, yet simple, solution to projection-based coloring of the
SOM that reveals structures. First, the proposed color space is easy to
construct and customize to the purpose of use, while aiming at being
perceptually correct and informative through two separable dimensions. Second,
the coloring method is not dependent on any specific method of projection, but
is rather modular to fit any objective function suitable for the task at hand.
The cluster coloring is illustrated on two datasets: the iris data, and welfare
and poverty indicators.
| [
"['Peter Sarlin' 'Samuel Rönnqvist']",
"Peter Sarlin and Samuel R\\\"onnqvist"
] |
cs.LG | null | 1306.3895 | null | null | http://arxiv.org/pdf/1306.3895v2 | 2014-05-09T05:28:39Z | 2013-06-17T15:29:00Z | On-line PCA with Optimal Regrets | We carefully investigate the on-line version of PCA, where in each trial a
learning algorithm plays a k-dimensional subspace, and suffers the compression
loss on the next instance when projected into the chosen subspace. In this
setting, we analyze two popular on-line algorithms, Gradient Descent (GD) and
Exponentiated Gradient (EG). We show that both algorithms are essentially
optimal in the worst-case. This comes as a surprise, since EG is known to
perform sub-optimally when the instances are sparse. This different behavior of
EG for PCA is mainly related to the non-negativity of the loss in this case,
which makes the PCA setting qualitatively different from other settings studied
in the literature. Furthermore, we show that when considering regret bounds as
a function of a loss budget, EG remains optimal and strictly outperforms GD.
Next, we study an extension of the PCA setting, in which Nature is allowed
to play with dense instances, which are positive matrices with bounded largest
eigenvalue. Again we can show that EG is optimal and strictly better than GD in
this setting.
| [
"Jiazhong Nie and Wojciech Kotlowski and Manfred K. Warmuth",
"['Jiazhong Nie' 'Wojciech Kotlowski' 'Manfred K. Warmuth']"
] |
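For concreteness, a rough sketch of a matrix Exponentiated Gradient update in a density-matrix formulation of online PCA. The capping/decomposition step that extracts an actual k-dimensional subspace and the exact loss bookkeeping are omitted; the step size and data below are illustrative, not the paper's settings.

```python
# Rough sketch of the matrix Exponentiated Gradient (EG) update used in
# density-matrix formulations of online PCA. Here W parameterizes the
# discarded directions; the capping step that extracts a k-dimensional
# subspace is omitted. Step size and data are illustrative.
import numpy as np

def sym_logm(W, eps=1e-12):
    vals, vecs = np.linalg.eigh(W)
    return vecs @ np.diag(np.log(np.maximum(vals, eps))) @ vecs.T

def sym_expm(S):
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(np.exp(vals)) @ vecs.T

def eg_update(W, x, eta=0.1):
    """Multiplicative update in the matrix-log domain for loss tr(W x x')."""
    W_new = sym_expm(sym_logm(W) - eta * np.outer(x, x))
    return W_new / np.trace(W_new)          # renormalize to trace one

d = 5
W = np.eye(d) / d                           # uniform density matrix
rng = np.random.default_rng(0)
for _ in range(200):
    x = rng.normal(size=d); x[0] += 3.0     # variance concentrated on axis 0
    x /= np.linalg.norm(x)
    W = eg_update(W, x)
print(np.round(np.diag(W), 3))  # weight drains from the high-variance axis,
                                # which would therefore end up in the subspace
```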
cs.LG stat.ML | null | 1306.3905 | null | null | http://arxiv.org/pdf/1306.3905v1 | 2013-06-17T15:44:30Z | 2013-06-17T15:44:30Z | Stability of Multi-Task Kernel Regression Algorithms | We study the stability properties of nonlinear multi-task regression in
reproducing kernel Hilbert spaces with operator-valued kernels. Such kernels,
a.k.a. multi-task kernels, are appropriate for learning problems with
nonscalar outputs like multi-task learning and structured output prediction.
We show that multi-task kernel regression algorithms are uniformly stable in
the general case of infinite-dimensional output spaces. We then derive, under
mild assumptions on the kernel, generalization bounds for such algorithms, and
we show their consistency even with non-Hilbert-Schmidt operator-valued
kernels. We demonstrate how to apply the results to various multi-task kernel
regression methods such as vector-valued SVR and functional ridge regression.
| [
"Julien Audiffren (LIF), Hachem Kadri (LIF)",
"['Julien Audiffren' 'Hachem Kadri']"
] |
stat.ML cs.LG | null | 1306.3917 | null | null | http://arxiv.org/pdf/1306.3917v1 | 2013-06-17T16:24:13Z | 2013-06-17T16:24:13Z | On Finding the Largest Mean Among Many | Sampling from distributions to find the one with the largest mean arises in a
broad range of applications, and it can be mathematically modeled as a
multi-armed bandit problem in which each distribution is associated with an
arm. This paper studies the sample complexity of identifying the best arm
(largest mean) in a multi-armed bandit problem. Motivated by large-scale
applications, we are especially interested in identifying situations where the
total number of samples that are necessary and sufficient to find the best arm
scales linearly with the number of arms. We present a single-parameter
multi-armed bandit model that spans the range from linear to superlinear sample
complexity. We also give a new algorithm for best arm identification, called
PRISM, with linear sample complexity for a wide range of mean distributions.
The algorithm, like most exploration procedures for multi-armed bandits, is
adaptive in the sense that the next arms to sample are selected based on
previous samples. We compare the sample complexity of adaptive procedures with
simpler non-adaptive procedures using new lower bounds. For many problem
instances, the increased sample complexity required by non-adaptive procedures
is a polynomial factor of the number of arms.
| [
"['Kevin Jamieson' 'Matthew Malloy' 'Robert Nowak' 'Sebastien Bubeck']",
"Kevin Jamieson, Matthew Malloy, Robert Nowak, Sebastien Bubeck"
] |
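The PRISM algorithm itself is not reproduced here; as a generic illustration of the kind of adaptive sampling the abstract describes, below is a standard successive-elimination sketch with a Hoeffding-style confidence radius on synthetic Gaussian arms.

```python
# Generic successive-elimination sketch for best-arm identification; this is
# a standard adaptive baseline, not the paper's PRISM algorithm.
import numpy as np

def successive_elimination(means, delta=0.05, seed=0):
    rng = np.random.default_rng(seed)
    K = len(means)
    active = list(range(K))
    sums = np.zeros(K); counts = np.zeros(K)
    t = 0
    while len(active) > 1:
        t += 1
        for a in active:                 # sample every active arm once
            sums[a] += rng.normal(means[a], 1.0)
            counts[a] += 1
        mu = sums[active] / counts[active]
        # Hoeffding-style confidence radius (all active arms have count t)
        rad = np.sqrt(2 * np.log(4 * K * t * t / delta) / t)
        best = mu.max()
        active = [a for a, m in zip(active, mu) if m + 2 * rad >= best]
    return active[0], int(counts.sum())

arm, total = successive_elimination([0.0, 0.2, 0.5, 0.7])
print(arm, total)   # identifies arm 3 with high probability
```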
cs.LG cs.NA | null | 1306.4080 | null | null | http://arxiv.org/pdf/1306.4080v4 | 2017-12-07T09:16:27Z | 2013-06-18T07:03:16Z | Parallel Coordinate Descent Newton Method for Efficient
$\ell_1$-Regularized Minimization | The recent years have witnessed advances in parallel algorithms for large
scale optimization problems. Notwithstanding demonstrated success, existing
algorithms that parallelize over features are usually limited by divergence
issues under high parallelism or require data preprocessing to alleviate these
problems. In this work, we propose a Parallel Coordinate Descent Newton
algorithm using multidimensional approximate Newton steps (PCDN), where the
off-diagonal elements of the Hessian are set to zero to enable parallelization.
It randomly partitions the feature set into $b$ bundles/subsets of size
$P$, and sequentially processes each bundle by first computing the descent
directions for each feature in parallel and then conducting $P$-dimensional
line search to obtain the step size. We show that: (1) PCDN is guaranteed to
converge globally despite increasing parallelism; (2) PCDN converges to the
specified accuracy $\epsilon$ within a bounded number of iterations
$T_\epsilon$, and $T_\epsilon$ decreases with increasing parallelism (bundle
size $P$). Using the implementation technique of maintaining intermediate
quantities, we minimize the data transfer and synchronization cost of the
$P$-dimensional line search. For concreteness, the proposed PCDN algorithm is
applied to $\ell_1$-regularized logistic regression and $\ell_2$-loss SVM.
Experimental evaluations on six benchmark datasets show that the proposed PCDN
algorithm exploits parallelism well and outperforms the state-of-the-art
methods in speed without losing accuracy.
| [
"['An Bian' 'Xiong Li' 'Yuncai Liu' 'Ming-Hsuan Yang']",
"An Bian, Xiong Li, Yuncai Liu, Ming-Hsuan Yang"
] |
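A simplified, sequential sketch of one bundle step of the coordinate descent Newton idea for $\ell_1$-regularized logistic regression: per-coordinate soft-thresholded Newton directions, then a $P$-dimensional backtracking line search along the combined direction. The parallel execution, the paper's exact line-search rule, and the intermediate-quantity bookkeeping are abstracted away.

```python
# Simplified (sequential) sketch of a bundle-wise Coordinate Descent Newton
# step for l1-regularized logistic regression; PCDN's parallel execution and
# exact line-search rule are abstracted away.
import numpy as np

def objective(w, X, y, lam):
    return np.sum(np.logaddexp(0, -y * (X @ w))) + lam * np.abs(w).sum()

def cdn_bundle_step(w, X, y, lam, bundle):
    z = y * (X @ w)
    sig = 1.0 / (1.0 + np.exp(z))            # derivative pieces of the loss
    d = np.zeros_like(w)
    for j in bundle:                          # in PCDN this loop is parallel
        g = -np.dot(y * sig, X[:, j])         # first derivative wrt w_j
        h = np.dot(sig * (1 - sig), X[:, j] ** 2) + 1e-12  # second derivative
        if g + lam <= h * w[j]:
            d[j] = -(g + lam) / h             # soft-thresholded Newton step
        elif g - lam >= h * w[j]:
            d[j] = -(g - lam) / h
        else:
            d[j] = -w[j]
    # P-dimensional backtracking line search along the combined direction.
    step, f0 = 1.0, objective(w, X, y, lam)
    while objective(w + step * d, X, y, lam) > f0 and step > 1e-8:
        step *= 0.5
    return w + step * d

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20)); y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))
w = np.zeros(20)
for it in range(50):
    bundle = rng.choice(20, size=5, replace=False)  # random bundle, P = 5
    w = cdn_bundle_step(w, X, y, lam=1.0, bundle=bundle)
print(objective(w, X, y, 1.0))
```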
cs.LG stat.ML | null | 1306.4152 | null | null | http://arxiv.org/pdf/1306.4152v1 | 2013-06-18T11:42:03Z | 2013-06-18T11:42:03Z | Bioclimating Modelling: A Machine Learning Perspective | Many machine learning (ML) approaches are widely used to generate bioclimatic
models for predicting the geographic range of organisms as a function of
climate. Applications such as predicting range shifts in organisms or the
range of invasive species influenced by climate change are important in
understanding the impact of climate change. However, the success of machine
learning-based
approaches depends on a number of factors. While it can be safely said that no
particular ML technique can be effective in all applications and success of a
technique is predominantly dependent on the application or the type of the
problem, it is useful to understand their behaviour to ensure informed choice
of techniques. This paper presents a comprehensive review of machine
learning-based bioclimatic model generation and analyses the factors
influencing success of such models. Considering the wide use of statistical
techniques, in our discussion we also include conventional statistical
techniques used in bioclimatic modelling.
| [
"Maumita Bhattacharya",
"['Maumita Bhattacharya']"
] |
stat.ML cs.LG | null | 1306.4410 | null | null | http://arxiv.org/pdf/1306.4410v1 | 2013-06-19T01:56:29Z | 2013-06-19T01:56:29Z | Joint estimation of sparse multivariate regression and conditional
graphical models | The multivariate regression model is a natural generalization of the classical
univariate regression model for fitting multiple responses. In this paper, we
propose a high-dimensional multivariate conditional regression model for
constructing sparse estimates of the multivariate regression coefficient matrix
that accounts for the dependency structure among the multiple responses. The
proposed method decomposes the multivariate regression problem into a series of
penalized conditional log-likelihood of each response conditioned on the
covariates and other responses. It allows simultaneous estimation of the sparse
regression coefficient matrix and the sparse inverse covariance matrix. The
asymptotic selection consistency and normality are established for the
diverging dimension of the covariates and number of responses. The
effectiveness of the proposed method is also demonstrated in a variety of
simulated examples as well as an application to the Glioblastoma multiforme
cancer data.
| [
"Junhui Wang",
"['Junhui Wang']"
] |
cs.CR cs.LG stat.ML | null | 1306.4447 | null | null | http://arxiv.org/pdf/1306.4447v1 | 2013-06-19T07:51:49Z | 2013-06-19T07:51:49Z | Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data
from Machine Learning Classifiers | Machine Learning (ML) algorithms are used to train computers to perform a
variety of complex tasks and improve with experience. Computers learn how to
recognize patterns, make unintended decisions, or react to a dynamic
environment. Certain trained machines may be more effective than others because
they are based on more suitable ML algorithms or because they were trained
through superior training sets. Although ML algorithms are known and publicly
released, training sets may not be reasonably ascertainable and, indeed, may be
guarded as trade secrets. While much research has been performed about the
privacy of the elements of training sets, in this paper we focus our attention
on ML classifiers and on the statistical information that can be unconsciously
or maliciously revealed from them. We show that it is possible to infer
unexpected but useful information from ML classifiers. In particular, we build
a novel meta-classifier and train it to hack other classifiers, obtaining
meaningful information about their training sets. This kind of information
leakage can be exploited, for example, by a vendor to build more effective
classifiers or to simply acquire trade secrets from a competitor's apparatus,
potentially violating its intellectual property rights.
| [
"['Giuseppe Ateniese' 'Giovanni Felici' 'Luigi V. Mancini'\n 'Angelo Spognardi' 'Antonio Villani' 'Domenico Vitali']",
"Giuseppe Ateniese, Giovanni Felici, Luigi V. Mancini, Angelo\n Spognardi, Antonio Villani, Domenico Vitali"
] |
cs.LG cs.DL cs.IR | null | 1306.4631 | null | null | http://arxiv.org/pdf/1306.4631v1 | 2013-06-06T08:08:22Z | 2013-06-06T08:08:22Z | Table of Content detection using Machine Learning | Table of content (TOC) detection has drawn attention nowadays because it
plays an important role in the digitization of multipage documents. Book
documents are generally multipage documents, so it becomes necessary to detect
the table of content page for easy navigation of a multipage document and to
make retrieval of the desired information from it faster. Table of content
pages follow different layouts and different ways of presenting the contents
of the document, such as chapters, sections, and subsections. This paper
introduces a new method to detect the table of content using machine learning
techniques with different features. The main aim of detecting table of
content pages is to structure the document according to its contents.
| [
"['Rachana Parikh' 'Avani R. Vasant']",
"Rachana Parikh and Avani R. Vasant"
] |
cs.LG cs.IR | null | 1306.4633 | null | null | http://arxiv.org/pdf/1306.4633v1 | 2013-06-06T07:35:23Z | 2013-06-06T07:35:23Z | A Fuzzy Based Approach to Text Mining and Document Clustering | Fuzzy logic deals with degrees of truth. In this paper, we have shown how to
apply fuzzy logic in text mining in order to perform document clustering. We
took an example of document clustering where the documents had to be clustered
into two categories. The method involved cleaning up the text and stemming of
words. Then, we chose m features which differ significantly in their
word frequencies (WF), normalized by document length, between documents
belonging to these two clusters. The documents to be clustered were represented
as a collection of m normalized WF values. Fuzzy c-means (FCM) algorithm was
used to cluster these documents into two clusters. After the FCM execution
finished, the documents in the two clusters were analysed for the values of
their respective m features. It was known that documents belonging to a
document type, say X, tend to have higher WF values for some particular
features. If the documents belonging to a cluster had higher WF values for
those same features, then that cluster was said to represent X. By fuzzy logic,
we not only get the cluster name, but also the degree to which a document
belongs to a cluster.
| [
"Sumit Goswami and Mayank Singh Shishodia",
"['Sumit Goswami' 'Mayank Singh Shishodia']"
] |
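A minimal from-scratch fuzzy c-means sketch on toy "normalized word frequency" vectors; the cleaning, stemming, and feature selection described above are assumed to have already produced the m-dimensional representation.

```python
# Minimal fuzzy c-means (FCM) sketch on toy normalized word-frequency
# vectors; text cleaning, stemming, and feature selection are assumed done.
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X))); U /= U.sum(axis=0)  # random fuzzy memberships
    for _ in range(n_iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        # u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
        U = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1)), axis=1)
    return U, centers

# Toy: 6 documents, 2 features (normalized WFs); two obvious groups.
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.85, 0.15],
              [0.1, 0.9], [0.2, 0.8], [0.15, 0.85]])
U, centers = fuzzy_cmeans(X)
print(np.round(U, 2))       # degree to which each document belongs to a cluster
print(np.round(centers, 2))
```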
stat.ML cs.LG math.OC | null | 1306.4650 | null | null | http://arxiv.org/pdf/1306.4650v2 | 2013-09-10T12:29:41Z | 2013-06-19T19:21:48Z | Stochastic Majorization-Minimization Algorithms for Large-Scale
Optimization | Majorization-minimization algorithms consist of iteratively minimizing a
majorizing surrogate of an objective function. Because of its simplicity and
its wide applicability, this principle has been very popular in statistics and
in signal processing. In this paper, we intend to make this principle scalable.
We introduce a stochastic majorization-minimization scheme which is able to
deal with large-scale or possibly infinite data sets. When applied to convex
optimization problems under suitable assumptions, we show that it achieves an
expected convergence rate of $O(1/\sqrt{n})$ after $n$ iterations, and of
$O(1/n)$ for strongly convex functions. Equally important, our scheme almost
surely converges to stationary points for a large class of non-convex problems.
We develop several efficient algorithms based on our framework. First, we
propose a new stochastic proximal gradient method, which experimentally matches
state-of-the-art solvers for large-scale $\ell_1$-logistic regression. Second,
we develop an online DC programming algorithm for non-convex sparse estimation.
Finally, we demonstrate the effectiveness of our approach for solving
large-scale structured matrix factorization problems.
| [
"Julien Mairal (INRIA Grenoble Rh\\^one-Alpes / LJK Laboratoire Jean\n Kuntzmann)",
"['Julien Mairal']"
] |
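A minimal sketch of the stochastic proximal gradient instance mentioned above, applied to $\ell_1$-regularized logistic regression; the step-size schedule and the surrogate/averaging machinery of the paper are simplified away.

```python
# Sketch of a stochastic proximal gradient step for l1-regularized logistic
# regression, one instance of the stochastic majorization-minimization idea.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sgd_prox_l1_logistic(X, y, lam=0.01, n_epochs=5, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w = np.zeros(p)
    t = 0
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / np.sqrt(t)                        # decaying step size
            margin = y[i] * (X[i] @ w)
            grad = -y[i] * X[i] / (1.0 + np.exp(margin))  # one-sample gradient
            w = soft_threshold(w - eta * grad, eta * lam) # proximal step
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10)); y = np.sign(X[:, 0] - X[:, 1])
print(np.round(sgd_prox_l1_logistic(X, y), 2))  # weight on features 0 and 1
```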
cs.LG | null | 1306.4653 | null | null | http://arxiv.org/pdf/1306.4653v4 | 2013-07-08T19:05:49Z | 2013-06-19T19:25:51Z | Multiarmed Bandits With Limited Expert Advice | We solve the COLT 2013 open problem of \citet{SCB} on minimizing regret in
the setting of advice-efficient multiarmed bandits with expert advice. We give
an algorithm for the setting of K arms and N experts out of which we are
allowed to query and use only M experts' advices in each round, which has a
regret bound of $\tilde{O}\left(\sqrt{\frac{\min\{K, M\} N}{M} T}\right)$ after T
rounds. We also prove that any algorithm for this problem must have expected
regret at least $\tilde{\Omega}\left(\sqrt{\frac{\min\{K, M\} N}{M} T}\right)$, thus
showing that our upper bound is nearly tight.
| [
"['Satyen Kale']",
"Satyen Kale"
] |
cs.LG cs.AI math.OC | null | 1306.4753 | null | null | http://arxiv.org/pdf/1306.4753v1 | 2013-06-20T04:48:37Z | 2013-06-20T04:48:37Z | Galerkin Methods for Complementarity Problems and Variational
Inequalities | Complementarity problems and variational inequalities arise in a wide variety
of areas, including machine learning, planning, game theory, and physical
simulation. In all of these areas, to handle large-scale problem instances, we
need fast approximate solution methods. One promising idea is Galerkin
approximation, in which we search for the best answer within the span of a
given set of basis functions. Bertsekas proposed one possible Galerkin method
for variational inequalities. However, this method can exhibit two problems in
practice: its approximation error is worse than might be expected based on the
ability of the basis to represent the desired solution, and each iteration
requires a projection step that is not always easy to implement efficiently.
So, in this paper, we present a new Galerkin method with improved behavior: our
new error bounds depend directly on the distance from the true solution to the
subspace spanned by our basis, and the only projections we require are onto the
feasible region or onto the span of our basis.
| [
"['Geoffrey J. Gordon']",
"Geoffrey J. Gordon"
] |
cs.NA cs.LG | 10.1016/j.jcss.2015.06.002 | 1306.4905 | null | null | http://arxiv.org/abs/1306.4905v1 | 2013-06-20T15:19:22Z | 2013-06-20T15:19:22Z | From-Below Approximations in Boolean Matrix Factorization: Geometry and
New Algorithm | We present new results on Boolean matrix factorization and a new algorithm
based on these results. The results emphasize the significance of
factorizations that provide from-below approximations of the input matrix.
While the previously proposed algorithms do not consider the possibly different
significance of different matrix entries, our results help measure such
significance and suggest where to focus when computing factors. An experimental
evaluation of the new algorithm on both synthetic and real data demonstrates
its good performance in terms of good coverage by the first k factors as well
as a small number of factors needed for exact decomposition and indicates that
the algorithm outperforms the available ones in these terms. We also propose
future research topics.
| [
"Radim Belohlavek, Martin Trnecka",
"['Radim Belohlavek' 'Martin Trnecka']"
] |
cs.LG | null | 1306.4947 | null | null | http://arxiv.org/pdf/1306.4947v2 | 2013-10-03T17:15:45Z | 2013-06-20T18:04:24Z | Machine Teaching for Bayesian Learners in the Exponential Family | What if there is a teacher who knows the learning goal and wants to design
good training data for a machine learner? We propose an optimal teaching
framework aimed at learners who employ Bayesian models. Our framework is
expressed as an optimization problem over teaching examples that balance the
future loss of the learner and the effort of the teacher. This optimization
problem is in general hard. In the case where the learner employs conjugate
exponential family models, we present an approximate algorithm for finding the
optimal teaching set. Our algorithm optimizes the aggregate sufficient
statistics, then unpacks them into actual teaching examples. We give several
examples to illustrate our framework.
| [
"Xiaojin Zhu",
"['Xiaojin Zhu']"
] |
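A toy brute-force instance of the teaching problem for a conjugate Bayesian learner: a Beta-Bernoulli learner and a teacher who searches over aggregate sufficient statistics (head/tail counts), trading off the learner's posterior loss against teaching effort. The loss and effort terms below are illustrative choices, not the paper's.

```python
# Toy instance of optimal teaching for a conjugate Bayesian (Beta-Bernoulli)
# learner: search over aggregate sufficient statistics for the teaching set
# that balances the learner's loss and the teacher's effort (both illustrative).
import numpy as np

def teach_beta_bernoulli(theta_star, alpha=1.0, beta=1.0, effort=0.01, max_n=50):
    best = None
    for n in range(max_n + 1):           # teaching-set sizes
        for heads in range(n + 1):       # aggregate sufficient statistics
            post_mean = (alpha + heads) / (alpha + beta + n)
            cost = (post_mean - theta_star) ** 2 + effort * n
            if best is None or cost < best[0]:
                best = (cost, heads, n - heads)
    return best  # (cost, #heads to show, #tails to show)

cost, heads, tails = teach_beta_bernoulli(theta_star=0.3)
print(heads, tails)  # a small set whose ratio pulls the posterior toward 0.3
```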
stat.ML cs.LG | null | 1306.5056 | null | null | http://arxiv.org/pdf/1306.5056v3 | 2014-02-22T08:58:10Z | 2013-06-21T06:25:54Z | Class Proportion Estimation with Application to Multiclass Anomaly
Rejection | This work addresses two classification problems that fall under the heading
of domain adaptation, wherein the distributions of training and testing
examples differ. The first problem studied is that of class proportion
estimation, which is the problem of estimating the class proportions in an
unlabeled testing data set given labeled examples of each class. Compared to
previous work on this problem, our approach has the novel feature that it does
not require labeled training data from one of the classes. This property allows
us to address the second domain adaptation problem, namely, multiclass anomaly
rejection. Here, the goal is to design a classifier that has the option of
assigning a "reject" label, indicating that the instance did not arise from a
class present in the training data. We establish consistent learning strategies
for both of these domain adaptation problems, which to our knowledge are the
first of their kind. We also implement the class proportion estimation
technique and demonstrate its performance on several benchmark data sets.
| [
"Tyler Sanderson and Clayton Scott",
"['Tyler Sanderson' 'Clayton Scott']"
] |
cs.LG | null | 1306.5349 | null | null | http://arxiv.org/pdf/1306.5349v1 | 2013-06-22T19:32:05Z | 2013-06-22T19:32:05Z | Song-based Classification techniques for Endangered Bird Conservation | The work presented in this paper is part of a global framework whose long-term
goal is to design a wireless sensor network able to support the
observation of a population of endangered birds. We present the first stage for
which we have conducted a knowledge discovery approach on a sample of
acoustical data. We use MFCC features extracted from bird songs and we exploit
two knowledge discovery techniques. One relies on clustering-based approaches
and highlights the homogeneity in the songs of the species. The other, based
on predictive modeling, demonstrates the good performance of various machine
learning techniques for the identification process. The knowledge elicited
provides promising results that motivate a more widespread study and suggest
guidelines for designing a first version of the automatic approach for data
collection based on acoustic sensors.
| [
"Erick Stattner and Wilfried Segretier and Martine Collard and Philippe\n Hunel and Nicolas Vidot",
"['Erick Stattner' 'Wilfried Segretier' 'Martine Collard' 'Philippe Hunel'\n 'Nicolas Vidot']"
] |
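A sketch of the clustering-based side of the study: MFCC features averaged per recording, then k-means. The file paths are placeholders, and librosa's default MFCC settings stand in for the paper's feature pipeline.

```python
# Sketch of the clustering-based analysis: MFCC features per recording,
# then k-means. File paths are placeholders; librosa defaults stand in for
# the paper's actual feature extraction settings.
import numpy as np
import librosa
from sklearn.cluster import KMeans

def song_features(path, n_mfcc=13):
    y, sr = librosa.load(path)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return mfcc.mean(axis=1)      # one vector per recording (frame average)

paths = ["song_01.wav", "song_02.wav", "song_03.wav"]   # placeholder files
X = np.stack([song_features(p) for p in paths])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # homogeneity of songs shows up as consistent cluster labels
```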
stat.ME cs.LG stat.ML | null | 1306.5362 | null | null | http://arxiv.org/pdf/1306.5362v1 | 2013-06-23T00:31:15Z | 2013-06-23T00:31:15Z | A Statistical Perspective on Algorithmic Leveraging | One popular method for dealing with large-scale data sets is sampling. For
example, by using the empirical statistical leverage scores as an importance
sampling distribution, the method of algorithmic leveraging samples and
rescales rows/columns of data matrices to reduce the data size before
performing computations on the subproblem. This method has been successful in
improving computational efficiency of algorithms for matrix problems such as
least-squares approximation, least absolute deviations approximation, and
low-rank matrix approximation. Existing work has focused on algorithmic issues
such as worst-case running times and numerical issues associated with providing
high-quality implementations, but none of it addresses statistical aspects of
this method.
In this paper, we provide a simple yet effective framework to evaluate the
statistical properties of algorithmic leveraging in the context of estimating
parameters in a linear regression model with a fixed number of predictors. We
show that from the statistical perspective of bias and variance, neither
leverage-based sampling nor uniform sampling dominates the other. This result
is particularly striking, given the well-known result that, from the
algorithmic perspective of worst-case analysis, leverage-based sampling
provides uniformly superior worst-case algorithmic results, when compared with
uniform sampling. Based on these theoretical results, we propose and analyze
two new leveraging algorithms. A detailed empirical evaluation of existing
leverage-based methods as well as these two new methods is carried out on both
synthetic and real data sets. The empirical results indicate that our theory is
a good predictor of practical performance of existing and new leverage-based
algorithms and that the new algorithms achieve improved performance.
| [
"['Ping Ma' 'Michael W. Mahoney' 'Bin Yu']",
"Ping Ma and Michael W. Mahoney and Bin Yu"
] |
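A compact sketch of basic algorithmic leveraging for least squares, the procedure analyzed above: exact leverage scores from a thin SVD, importance sampling of rows, and a rescaled subproblem. The paper's proposed modified leveraging estimators are not reproduced.

```python
# Sketch of algorithmic leveraging for least squares: exact leverage scores
# from a thin SVD, importance sampling of rows, and a rescaled subproblem.
import numpy as np

def leverage_subsample_ls(X, y, r, seed=0):
    rng = np.random.default_rng(seed)
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    lev = np.sum(U ** 2, axis=1)          # leverage score of each row
    probs = lev / lev.sum()               # importance sampling distribution
    idx = rng.choice(len(y), size=r, replace=True, p=probs)
    w = 1.0 / np.sqrt(r * probs[idx])     # rescaling weights
    beta, *_ = np.linalg.lstsq(w[:, None] * X[idx], w * y[idx], rcond=None)
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 5)); beta_true = np.arange(1.0, 6.0)
y = X @ beta_true + rng.normal(size=10000)
print(np.round(leverage_subsample_ls(X, y, r=500), 2))  # close to 1..5
```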
cs.LG | null | 1306.5487 | null | null | http://arxiv.org/pdf/1306.5487v1 | 2013-06-23T23:36:40Z | 2013-06-23T23:36:40Z | Model Reframing by Feature Context Change | The feature space (including both input and output variables) characterises a
data mining problem. In predictive (supervised) problems, the quality and
availability of features determines the predictability of the dependent
variable, and the performance of data mining models in terms of
misclassification or regression error. Good features, however, are usually
difficult to obtain. It is usual that many instances come with missing values,
either because the actual value for a given attribute was not available or
because it was too expensive. This is usually interpreted as a utility or
cost-sensitive learning dilemma, in this case between misclassification (or
regression error) costs and attribute tests costs. Both misclassification cost
(MC) and test cost (TC) can be integrated into a single measure, known as joint
cost (JC). We introduce methods and plots (such as the so-called JROC plots)
that can work with any off-the-shelf predictive technique, including ensembles,
such that we re-frame the model to use the appropriate subset of attributes
(the feature configuration) during deployment time. In other words, models are
trained with the available attributes (once and for all) and then deployed by
setting missing values on the attributes that are deemed ineffective for
reducing the joint cost. As the number of feature configuration combinations
grows exponentially with the number of features, we introduce quadratic methods
that are able to approximate the optimal configuration and model choices, as
shown by the experimental results.
| [
"Celestine-Periale Maguedong-Djoumessi",
"['Celestine-Periale Maguedong-Djoumessi']"
] |
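A toy sketch of the joint-cost idea above: the same trained model is deployed under each feature configuration, with unused attributes treated as missing (here crudely imputed by training means), and the attribute test costs are added to the misclassification cost. The cost values, dataset, and imputation rule are illustrative, not the paper's.

```python
# Toy sketch of joint cost JC = MC + TC over feature configurations: deploy
# one trained model with unused attributes set to "missing" (mean-imputed
# here) and add the test costs. All cost values are illustrative.
import numpy as np
from itertools import combinations, chain
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3)); y = (X[:, 0] + X[:, 1] > 0).astype(int)
test_cost = np.array([1.0, 5.0, 0.5])    # cost of measuring each attribute
mc_cost = 10.0                           # cost per misclassification
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
means = X.mean(axis=0)

def joint_cost(config):
    Xd = np.tile(means, (len(X), 1))     # missing attributes -> mean value
    Xd[:, list(config)] = X[:, list(config)]
    mc = mc_cost * np.mean(model.predict(Xd) != y)
    return mc + test_cost[list(config)].sum()

configs = chain.from_iterable(combinations(range(3), r) for r in range(4))
best = min(configs, key=joint_cost)
print(best, joint_cost(best))  # cheapest feature configuration at deployment
```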
cs.LG stat.ML | null | 1306.5532 | null | null | http://arxiv.org/pdf/1306.5532v2 | 2015-06-25T17:26:01Z | 2013-06-24T07:52:45Z | Deep Learning by Scattering | We introduce general scattering transforms as mathematical models of deep
neural networks with l2 pooling. Scattering networks iteratively apply complex
valued unitary operators, and the pooling is performed by a complex modulus. An
expected scattering defines a contractive representation of a high-dimensional
probability distribution, which preserves its mean-square norm. We show that
unsupervised learning can be cast as an optimization of the space contraction
to preserve the volume occupied by unlabeled examples, at each layer of the
network. Supervised learning and classification are performed with an averaged
scattering, which provides scattering estimations for multiple classes.
| [
"['Stéphane Mallat' 'Irène Waldspurger']",
"St\\'ephane Mallat and Ir\\`ene Waldspurger"
] |
stat.ML cs.LG | null | 1306.5554 | null | null | http://arxiv.org/pdf/1306.5554v2 | 2013-11-05T11:28:33Z | 2013-06-24T09:49:08Z | Correlated random features for fast semi-supervised learning | This paper presents Correlated Nystrom Views (XNV), a fast semi-supervised
algorithm for regression and classification. The algorithm draws on two main
ideas. First, it generates two views consisting of computationally inexpensive
random features. Second, XNV applies multiview regression using Canonical
Correlation Analysis (CCA) on unlabeled data to bias the regression towards
useful features. It has been shown that, if the views contain accurate
estimators, CCA regression can substantially reduce variance with a minimal
increase in bias. Random views are justified by recent theoretical and
empirical work showing that regression with random features closely
approximates kernel regression, implying that random views can be expected to
contain accurate estimators. We show that XNV consistently outperforms a
state-of-the-art algorithm for semi-supervised learning: substantially
improving predictive performance and reducing the variability of performance on
a wide variety of real-world datasets, whilst also reducing runtime by orders
of magnitude.
| [
"['Brian McWilliams' 'David Balduzzi' 'Joachim M. Buhmann']",
"Brian McWilliams, David Balduzzi and Joachim M. Buhmann"
] |
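A rough sketch of the XNV recipe under simplifying assumptions: two views of random Fourier features, CCA fit on all (mostly unlabeled) data, then ridge regression in the canonical-correlation basis on the few labeled points. Widths, dimensions, and the penalization scheme are illustrative, not the paper's settings.

```python
# Rough sketch of the XNV recipe: two random-Fourier-feature views, CCA on
# unlabeled data, ridge regression in the CCA basis. Hyperparameters are
# illustrative simplifications of the paper's method.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import Ridge

def random_fourier_view(X, D=200, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], D))
    b = rng.uniform(0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)  # approximates an RBF kernel

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5)); y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=1000)
labeled = slice(0, 50)                    # few labels, many unlabeled points
V1 = random_fourier_view(X, seed=10); V2 = random_fourier_view(X, seed=20)
cca = CCA(n_components=10).fit(V1, V2)    # fit on all (unlabeled) data
Z1 = cca.transform(V1)                    # canonical-correlation features
model = Ridge(alpha=1.0).fit(Z1[labeled], y[labeled])
print(model.score(Z1, y))                 # R^2 using the unlabeled geometry
```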
cs.RO cs.AI cs.LG | 10.1109/IROS.2014.6942972 | 1306.5707 | null | null | http://arxiv.org/abs/1306.5707v2 | 2014-06-24T05:10:50Z | 2013-06-24T18:48:54Z | Synthesizing Manipulation Sequences for Under-Specified Tasks using
Unrolled Markov Random Fields | Many tasks in human environments require performing a sequence of navigation
and manipulation steps involving objects. In unstructured human environments,
the location and configuration of the objects involved often change in
unpredictable ways. This requires a high-level planning strategy that is robust
and flexible in an uncertain environment. We propose a novel dynamic planning
strategy, which can be trained from a set of example sequences. High level
tasks are expressed as a sequence of primitive actions or controllers (with
appropriate parameters). Our score function, based on Markov Random Field
(MRF), captures the relations between environment, controllers, and their
arguments. By expressing the environment using sets of attributes, the approach
generalizes well to unseen scenarios. We train the parameters of our MRF using
a maximum margin learning method. We provide a detailed empirical validation of
our overall framework demonstrating successful plan strategies for a variety of
tasks.
| [
"['Jaeyong Sung' 'Bart Selman' 'Ashutosh Saxena']",
"Jaeyong Sung, Bart Selman, Ashutosh Saxena"
] |
cs.LG cs.DS stat.ML | null | 1306.5825 | null | null | http://arxiv.org/pdf/1306.5825v5 | 2014-06-27T20:37:17Z | 2013-06-25T01:44:46Z | Fourier PCA and Robust Tensor Decomposition | Fourier PCA is Principal Component Analysis of a matrix obtained from higher
order derivatives of the logarithm of the Fourier transform of a
distribution. We make this method algorithmic by developing a tensor
decomposition method for a pair of tensors sharing the same vectors in rank-$1$
decompositions. Our main application is the first provably polynomial-time
algorithm for underdetermined ICA, i.e., learning an $n \times m$ matrix $A$
from observations $y=Ax$ where $x$ is drawn from an unknown product
distribution with arbitrary non-Gaussian components. The number of component
distributions $m$ can be arbitrarily higher than the dimension $n$ and the
columns of $A$ only need to satisfy a natural and efficiently verifiable
nondegeneracy condition. As a second application, we give an alternative
algorithm for learning mixtures of spherical Gaussians with linearly
independent means. These results also hold in the presence of Gaussian noise.
| [
"Navin Goyal, Santosh Vempala and Ying Xiao",
"['Navin Goyal' 'Santosh Vempala' 'Ying Xiao']"
] |
cs.AI cs.HC cs.LG | null | 1306.5884 | null | null | http://arxiv.org/pdf/1306.5884v2 | 2014-01-01T11:02:00Z | 2013-06-25T08:56:58Z | Design of an Agent for Answering Back in Smart Phones | The objective of the paper is to design an agent which provides efficient
response to the caller when a call goes unanswered in smartphones. The agent
provides responses through text messages, email, etc., stating the most likely
reason as to why the callee is unable to answer a call. Responses are composed
taking into consideration the importance of the present call and the situation
the callee is in at the moment, like driving, sleeping, or being at work. The
agent makes decisions in the composition of response messages based on the
patterns it has come across in the learning environment. Initially the user
helps the agent to compose response messages. The agent associates each
message with the percept it receives with respect to the environment the
callee is in. The user may thereafter either choose to make the response
system automatic or choose to receive suggestions from the agent for response
messages and confirm what is to be sent to the caller.
| [
"Sandeep Venkatesh, Meera V Patil, Nanditha Swamy",
"['Sandeep Venkatesh' 'Meera V Patil' 'Nanditha Swamy']"
] |
math.OC cs.LG cs.NA math.NA stat.ML | null | 1306.5918 | null | null | http://arxiv.org/pdf/1306.5918v2 | 2015-03-21T01:11:56Z | 2013-06-25T11:11:42Z | A Randomized Nonmonotone Block Proximal Gradient Method for a Class of
Structured Nonlinear Programming | We propose a randomized nonmonotone block proximal gradient (RNBPG) method
for minimizing the sum of a smooth (possibly nonconvex) function and a
block-separable (possibly nonconvex nonsmooth) function. At each iteration,
this method randomly picks a block according to any prescribed probability
distribution and typically solves several associated proximal subproblems that
usually have a closed-form solution, until a certain progress on objective
value is achieved. In contrast to the usual randomized block coordinate descent
method [23,20], our method has a nonmonotone flavor and uses variable stepsizes
that can partially utilize the local curvature information of the smooth
component of objective function. We show that any accumulation point of the
solution sequence of the method is a stationary point of the problem {\it
almost surely} and the method is capable of finding an approximate stationary
point with high probability. We also establish a sublinear rate of convergence
for the method in terms of the minimal expected squared norm of certain
proximal gradients over the iterations. When the problem under consideration is
convex, we show that the expected objective values generated by RNBPG converge
to the optimal value of the problem. Under some assumptions, we further
establish a sublinear and linear rate of convergence on the expected objective
values generated by a monotone version of RNBPG. Finally, we conduct some
preliminary experiments to test the performance of RNBPG on the
$\ell_1$-regularized least-squares problem and a dual SVM problem in machine
learning. The computational results demonstrate that our method substantially
outperforms the randomized block coordinate {\it descent} method with fixed or
variable stepsizes.
| [
"['Zhaosong Lu' 'Lin Xiao']",
"Zhaosong Lu and Lin Xiao"
] |
cs.SI cs.LG physics.soc-ph stat.AP stat.ML | null | 1306.6111 | null | null | http://arxiv.org/pdf/1306.6111v2 | 2013-08-23T20:13:27Z | 2013-06-26T00:58:39Z | Understanding the Predictive Power of Computational Mechanics and Echo
State Networks in Social Media | There is a large amount of interest in understanding users of social media in
order to predict their behavior in this space. Despite this interest, user
predictability in social media is not well-understood. To examine this
question, we consider a network of fifteen thousand users on Twitter over a
seven week period. We apply two contrasting modeling paradigms: computational
mechanics and echo state networks. Both methods attempt to model the behavior
of users on the basis of their past behavior. We demonstrate that the behavior
of users on Twitter can be well-modeled as processes with self-feedback. We
find that the two modeling approaches perform very similarly for most users,
but that they differ in performance on a small subset of the users. By
exploring the properties of these performance-differentiated users, we
highlight the challenges faced in applying predictive models to dynamic social
data.
| [
"David Darmon, Jared Sylvester, Michelle Girvan, William Rand",
"['David Darmon' 'Jared Sylvester' 'Michelle Girvan' 'William Rand']"
] |
cs.LG stat.ML | null | 1306.6189 | null | null | http://arxiv.org/pdf/1306.6189v1 | 2013-06-26T09:52:51Z | 2013-06-26T09:52:51Z | Scaling Up Robust MDPs by Reinforcement Learning | We consider large-scale Markov decision processes (MDPs) with parameter
uncertainty, under the robust MDP paradigm. Previous studies showed that robust
MDPs, based on a minimax approach to handle uncertainty, can be solved using
dynamic programming for small to medium sized problems. However, due to the
"curse of dimensionality", MDPs that model real-life problems are typically
prohibitively large for such approaches. In this work we employ a reinforcement
learning approach to tackle this planning problem: we develop a robust
approximate dynamic programming method based on a projected fixed point
equation to approximately solve large scale robust MDPs. We show that the
proposed method provably succeeds under certain technical conditions, and
demonstrate its effectiveness through simulation of an option pricing problem.
To the best of our knowledge, this is the first attempt to scale up the robust
MDP paradigm.
| [
"Aviv Tamar, Huan Xu, Shie Mannor",
"['Aviv Tamar' 'Huan Xu' 'Shie Mannor']"
] |
cs.AI cs.LG | null | 1306.6302 | null | null | http://arxiv.org/pdf/1306.6302v2 | 2013-06-27T13:57:19Z | 2013-06-26T17:59:49Z | Solving Relational MDPs with Exogenous Events and Additive Rewards | We formalize a simple but natural subclass of service domains for relational
planning problems with object-centered, independent exogenous events and
additive rewards capturing, for example, problems in inventory control.
Focusing on this subclass, we present a new symbolic planning algorithm which
is the first algorithm that has explicit performance guarantees for relational
MDPs with exogenous events. In particular, under some technical conditions, our
planning algorithm provides a monotonic lower bound on the optimal value
function. To support this algorithm we present novel evaluation and reduction
techniques for generalized first order decision diagrams, a knowledge
representation for real-valued functions over relational world states. Our
planning algorithm uses a set of focus states, which serves as a training set,
to simplify and approximate the symbolic solution, and can thus be seen to
perform learning for planning. A preliminary experimental evaluation
demonstrates the validity of our approach.
| [
"['S. Joshi' 'R. Khardon' 'P. Tadepalli' 'A. Raghavan' 'A. Fern']",
"S. Joshi, R. Khardon, P. Tadepalli, A. Raghavan, A. Fern"
] |
stat.ML cond-mat.dis-nn cs.LG | 10.1088/0266-5611/30/2/025003 | 1306.6482 | null | null | http://arxiv.org/abs/1306.6482v1 | 2013-06-27T12:43:09Z | 2013-06-27T12:43:09Z | Traffic data reconstruction based on Markov random field modeling | We consider the traffic data reconstruction problem. Suppose we have the
traffic data of an entire city that are incomplete because some road data are
unobserved. The problem is to reconstruct the unobserved parts of the data. In
this paper, we propose a new method to reconstruct incomplete traffic data
collected from various traffic sensors. Our approach is based on Markov random
field modeling of road traffic. The reconstruction is achieved by using a
mean-field method and a machine learning method. We numerically verify the
performance of our method using realistic simulated traffic data for the real
road network of Sendai, Japan.
| [
"['Shun Kataoka' 'Muneki Yasuda' 'Cyril Furtlehner' 'Kazuyuki Tanaka']",
"Shun Kataoka, Muneki Yasuda, Cyril Furtlehner and Kazuyuki Tanaka"
] |
cs.LG cs.AI stat.ML | null | 1306.6709 | null | null | http://arxiv.org/pdf/1306.6709v4 | 2014-02-12T07:45:11Z | 2013-06-28T03:56:15Z | A Survey on Metric Learning for Feature Vectors and Structured Data | The need for appropriate ways to measure the distance or similarity between
data is ubiquitous in machine learning, pattern recognition and data mining,
but handcrafting such good metrics for specific problems is generally
difficult. This has led to the emergence of metric learning, which aims at
automatically learning a metric from data and has attracted a lot of interest
in machine learning and related fields for the past ten years. This survey
paper proposes a systematic review of the metric learning literature,
highlighting the pros and cons of each approach. We pay particular attention to
Mahalanobis distance metric learning, a well-studied and successful framework,
but additionally present a wide range of methods that have recently emerged as
powerful alternatives, including nonlinear metric learning, similarity learning
and local metric learning. Recent trends and extensions, such as
semi-supervised metric learning, metric learning for histogram data and the
derivation of generalization guarantees, are also covered. Finally, this survey
addresses metric learning for structured data, in particular edit distance
learning, and attempts to give an overview of the remaining challenges in
metric learning for the years to come.
| [
"['Aurélien Bellet' 'Amaury Habrard' 'Marc Sebban']",
"Aur\\'elien Bellet, Amaury Habrard and Marc Sebban"
] |
cs.AI cs.LG | 10.1007/s10618-014-0382-x | 1306.6802 | null | null | http://arxiv.org/abs/1306.6802v2 | 2013-07-01T17:33:58Z | 2013-06-28T11:49:53Z | Evaluation Measures for Hierarchical Classification: a unified view and
novel approaches | Hierarchical classification addresses the problem of classifying items into a
hierarchy of classes. An important issue in hierarchical classification is the
evaluation of different classification algorithms, which is complicated by the
hierarchical relations among the classes. Several evaluation measures have been
proposed for hierarchical classification using the hierarchy in different ways.
This paper studies the problem of evaluation in hierarchical classification by
analyzing and abstracting the key components of the existing performance
measures. It also proposes two alternative generic views of hierarchical
evaluation and introduces two corresponding novel measures. The proposed
measures, along with the state-of-the-art ones, are empirically tested on three
large datasets from the domain of text classification. The empirical results
illustrate the undesirable behavior of existing approaches and how the proposed
methods overcome most of these methods across a range of cases.
| [
"Aris Kosmopoulos, Ioannis Partalas, Eric Gaussier, Georgios Paliouras,\n Ion Androutsopoulos",
"['Aris Kosmopoulos' 'Ioannis Partalas' 'Eric Gaussier'\n 'Georgios Paliouras' 'Ion Androutsopoulos']"
] |
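To make the evaluation setting concrete, here is a sketch of a standard set-based hierarchical precision/recall pair, in which labels are augmented with their ancestors before comparison; this is a common existing measure, not necessarily one of the novel measures proposed in the paper. The toy hierarchy is illustrative.

```python
# Sketch of set-based hierarchical precision/recall: predicted and true
# labels are augmented with all ancestors before set comparison. A common
# baseline measure, not necessarily one of the paper's proposals.
def ancestors(label, parent):
    out = set()
    while label in parent:          # walk up to the root
        out.add(label)
        label = parent[label]
    out.add(label)
    return out

def hierarchical_pr(pred, true, parent):
    P = set().union(*(ancestors(l, parent) for l in pred))
    T = set().union(*(ancestors(l, parent) for l in true))
    inter = len(P & T)
    return inter / len(P), inter / len(T)   # (hP, hR)

# Toy hierarchy: root -> animal -> {dog, cat}; root -> vehicle -> car
parent = {"animal": "root", "vehicle": "root",
          "dog": "animal", "cat": "animal", "car": "vehicle"}
print(hierarchical_pr({"dog"}, {"cat"}, parent))  # partial credit via 'animal'
```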
stat.ML cs.IT cs.LG math.IT | null | 1307.0032 | null | null | http://arxiv.org/pdf/1307.0032v1 | 2013-06-28T21:38:17Z | 2013-06-28T21:38:17Z | Memory Limited, Streaming PCA | We consider streaming, one-pass principal component analysis (PCA), in the
high-dimensional regime, with limited memory. Here, $p$-dimensional samples are
presented sequentially, and the goal is to produce the $k$-dimensional subspace
that best approximates these points. Standard algorithms require $O(p^2)$
memory; meanwhile no algorithm can do better than $O(kp)$ memory, since this is
what the output itself requires. Memory (or storage) complexity is most
meaningful when understood in the context of computational and sample
complexity. Sample complexity for high-dimensional PCA is typically studied in
the setting of the {\em spiked covariance model}, where $p$-dimensional points
are generated from a population covariance equal to the identity (white noise)
plus a low-dimensional perturbation (the spike) which is the signal to be
recovered. It is now well-understood that the spike can be recovered when the
number of samples, $n$, scales proportionally with the dimension, $p$. Yet, all
algorithms that provably achieve this, have memory complexity $O(p^2)$.
Meanwhile, algorithms with memory-complexity $O(kp)$ do not have provable
bounds on sample complexity comparable to $p$. We present an algorithm that
achieves both: it uses $O(kp)$ memory (meaning storage of any kind) and is able
to compute the $k$-dimensional spike with $O(p \log p)$ sample-complexity --
the first algorithm of its kind. While our theoretical analysis focuses on the
spiked covariance model, our simulations show that our algorithm is successful
on much more general models for the data.
| [
"['Ioannis Mitliagkas' 'Constantine Caramanis' 'Prateek Jain']",
"Ioannis Mitliagkas, Constantine Caramanis, Prateek Jain"
] |
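As a generic illustration of O(kp)-memory streaming PCA, below is a block Oja-style sketch on a spiked-covariance stream; this is a standard baseline, not the specific algorithm analyzed in the paper.

```python
# Illustrative streaming PCA baseline in O(kp) memory: Oja's rule with
# periodic re-orthonormalization, run on a spiked-covariance stream. This is
# a generic sketch, not the paper's algorithm.
import numpy as np

def streaming_pca(sample_stream, p, k, eta=0.05):
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.normal(size=(p, k)))     # O(kp) state
    for t, x in enumerate(sample_stream, start=1):
        Q += (eta / np.sqrt(t)) * np.outer(x, x @ Q) # Oja update
        if t % 50 == 0:
            Q, _ = np.linalg.qr(Q)                   # keep columns orthonormal
    Q, _ = np.linalg.qr(Q)
    return Q

# Spiked-covariance stream: identity noise plus a rank-1 spike.
p, k = 50, 1
u = np.zeros(p); u[0] = 1.0
rng = np.random.default_rng(1)
stream = (rng.normal(size=p) + 3.0 * rng.normal() * u for _ in range(5000))
Q = streaming_pca(stream, p, k)
print(abs(Q[:, 0] @ u))   # close to 1: the spike direction is recovered
```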
stat.ML cs.DC cs.LG | null | 1307.0048 | null | null | http://arxiv.org/pdf/1307.0048v3 | 2016-04-14T01:55:55Z | 2013-06-28T23:32:11Z | Simple one-pass algorithm for penalized linear regression with
cross-validation on MapReduce | In this paper, we propose a one-pass algorithm on MapReduce for penalized
linear regression
\[f_\lambda(\alpha, \beta) = \|Y - \alpha\mathbf{1} - X\beta\|_2^2 +
p_{\lambda}(\beta)\] where $\alpha$ is the intercept which can be omitted
depending on application; $\beta$ is the coefficients and $p_{\lambda}$ is the
penalized function with penalizing parameter $\lambda$. $f_\lambda(\alpha,
\beta)$ includes interesting classes such as Lasso, Ridge regression and
Elastic-net. Compared to latest iterative distributed algorithms requiring
multiple MapReduce jobs, our algorithm achieves huge performance improvement;
moreover, our algorithm is exact compared to approximate algorithms such as
parallel stochastic gradient descent. Moreover, what distinguishes our
algorithm from others is that it trains the model with cross-validation to
choose the optimal $\lambda$ instead of a user-specified one.
Key words: penalized linear regression, lasso, elastic-net, ridge, MapReduce
| [
"['Kun Yang']",
"Kun Yang"
] |
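A sketch of the one-pass idea for the ridge special case: each "mapper" summarizes its chunk by sufficient statistics, and the "reducer" solves the normal equations for every candidate $\lambda$, with leave-one-chunk-out statistics used for cross-validation. MapReduce plumbing is replaced by a plain loop, and the lasso/elastic-net cases, which need more machinery, are not shown.

```python
# One-pass ridge regression via sufficient statistics: mappers emit
# (X'X, X'y) per chunk; the reducer solves the normal equations for each
# candidate lambda and cross-validates by leaving one chunk out.
import numpy as np

def map_stats(X, y):
    return X.T @ X, X.T @ y, X, y   # keep chunk for held-out evaluation

def ridge_cv_one_pass(chunks, lambdas):
    stats = [map_stats(X, y) for X, y in chunks]                # "map" phase
    G = sum(s[0] for s in stats); b = sum(s[1] for s in stats)  # "reduce"
    p = G.shape[0]
    best = None
    for lam in lambdas:
        err = 0.0
        for Gi, bi, Xi, yi in stats:  # leave one chunk out by subtraction
            beta = np.linalg.solve(G - Gi + lam * np.eye(p), b - bi)
            err += np.sum((yi - Xi @ beta) ** 2)
        if best is None or err < best[0]:
            best = (err, lam)
    lam = best[1]
    return np.linalg.solve(G + lam * np.eye(p), b), lam

rng = np.random.default_rng(0)
beta_true = np.array([1.0, -2.0, 0.0, 0.0])
chunks = []
for _ in range(10):
    X = rng.normal(size=(100, 4))
    chunks.append((X, X @ beta_true + rng.normal(size=100)))
beta, lam = ridge_cv_one_pass(chunks, lambdas=[0.01, 0.1, 1.0, 10.0])
print(np.round(beta, 2), lam)
```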
cs.LG stat.ML | null | 1307.0127 | null | null | http://arxiv.org/pdf/1307.0127v1 | 2013-06-29T16:36:30Z | 2013-06-29T16:36:30Z | Concentration and Confidence for Discrete Bayesian Sequence Predictors | Bayesian sequence prediction is a simple technique for predicting future
symbols sampled from an unknown measure on infinite sequences over a countable
alphabet. While strong bounds on the expected cumulative error are known, there
are only limited results on the distribution of this error. We prove tight
high-probability bounds on the cumulative error, which is measured in terms of
the Kullback-Leibler (KL) divergence. We also consider the problem of
constructing upper confidence bounds on the KL and Hellinger errors similar to
those constructed from Hoeffding-like bounds in the i.i.d. case. The new
results are applied to show that Bayesian sequence prediction can be used in
the Knows What It Knows (KWIK) framework with bounds that match the
state-of-the-art.
| [
"Tor Lattimore and Marcus Hutter and Peter Sunehag",
"['Tor Lattimore' 'Marcus Hutter' 'Peter Sunehag']"
] |
stat.ME cs.LG stat.ML | 10.1002/wics.1270 | 1307.0252 | null | null | http://arxiv.org/abs/1307.0252v1 | 2013-07-01T00:51:07Z | 2013-07-01T00:51:07Z | Semi-supervised clustering methods | Cluster analysis methods seek to partition a data set into homogeneous
subgroups. It is useful in a wide variety of applications, including document
processing and modern genetics. Conventional clustering methods are
unsupervised, meaning that there is no outcome variable nor is anything known
about the relationship between the observations in the data set. In many
situations, however, information about the clusters is available in addition to
the values of the features. For example, the cluster labels of some
observations may be known, or certain observations may be known to belong to
the same cluster. In other cases, one may wish to identify clusters that are
associated with a particular outcome variable. This review describes several
clustering algorithms (known as "semi-supervised clustering" methods) that can
be applied in these situations. The majority of these methods are modifications
of the popular k-means clustering method, and several of them will be described
in detail. A brief description of some other semi-supervised clustering
algorithms is also provided.
| [
"Eric Bair",
"['Eric Bair']"
] |
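A minimal sketch of one semi-supervised clustering variant in the spirit of this review (seeded k-means): labeled seed points initialize the centroids and stay pinned to their known clusters during the iterations. Details differ across the algorithms the review covers.

```python
# Minimal "seeded k-means" sketch: labeled seeds initialize the centroids
# and are pinned to their known clusters in every iteration.
import numpy as np

def seeded_kmeans(X, seeds, k, n_iters=20):
    # seeds: dict mapping point index -> known cluster label in {0..k-1}
    centers = np.stack([X[[i for i, c in seeds.items() if c == j]].mean(axis=0)
                        for j in range(k)])
    for _ in range(n_iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for i, c in seeds.items():
            labels[i] = c                       # enforce the known labels
        centers = np.stack([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
labels, _ = seeded_kmeans(X, seeds={0: 0, 1: 0, 50: 1, 51: 1}, k=2)
print(labels[:5], labels[50:55])
```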
cs.LG | null | 1307.0253 | null | null | http://arxiv.org/pdf/1307.0253v1 | 2013-07-01T01:09:25Z | 2013-07-01T01:09:25Z | Exploratory Learning | In multiclass semi-supervised learning (SSL), it is sometimes the case that
the number of classes present in the data is not known, and hence no labeled
examples are provided for some classes. In this paper we present variants of
well-known semi-supervised multiclass learning methods that are robust when the
data contains an unknown number of classes. In particular, we present an
"exploratory" extension of expectation-maximization (EM) that explores
different numbers of classes while learning. "Exploratory" SSL greatly improves
performance on three datasets in terms of F1 on the classes with seed examples,
i.e., the classes which are expected to be in the data. Our Exploratory EM
algorithm also outperforms an SSL method based on non-parametric Bayesian
clustering.
| [
"['Bhavana Dalvi' 'William W. Cohen' 'Jamie Callan']",
"Bhavana Dalvi, William W. Cohen, Jamie Callan"
] |
cs.LG cs.CL cs.IR | null | 1307.0261 | null | null | http://arxiv.org/pdf/1307.0261v1 | 2013-07-01T02:49:08Z | 2013-07-01T02:49:08Z | WebSets: Extracting Sets of Entities from the Web Using Unsupervised
Information Extraction | We describe an open-domain information extraction method for extracting
concept-instance pairs from an HTML corpus. Most earlier approaches to this
problem rely on combining clusters of distributionally similar terms and
concept-instance pairs obtained with Hearst patterns. In contrast, our method
relies on a novel approach for clustering terms found in HTML tables, and then
assigning concept names to these clusters using Hearst patterns. The method can
be efficiently applied to a large corpus, and experimental results on several
datasets show that our method can accurately extract large numbers of
concept-instance pairs.
| [
"Bhavana Dalvi, William W. Cohen, and Jamie Callan",
"['Bhavana Dalvi' 'William W. Cohen' 'Jamie Callan']"
] |
cs.LG cs.IR stat.ML | null | 1307.0317 | null | null | http://arxiv.org/pdf/1307.0317v1 | 2013-07-01T10:03:58Z | 2013-07-01T10:03:58Z | Algorithms of the LDA model [REPORT] | We review three algorithms for Latent Dirichlet Allocation (LDA). Two of them
are variational inference algorithms: Variational Bayesian inference and Online
Variational Bayesian inference and one is Markov Chain Monte Carlo (MCMC)
algorithm -- Collapsed Gibbs sampling. We compare their time complexity and
performance. We find that online variational Bayesian inference is the fastest
algorithm and still returns reasonably good results.
| [
"Jaka \\v{S}peh, Andrej Muhi\\v{c}, Jan Rupnik",
"['Jaka Špeh' 'Andrej Muhič' 'Jan Rupnik']"
] |
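For reference, a compact collapsed Gibbs sampler for LDA, the MCMC algorithm compared in the report; the hyperparameters and toy corpus are illustrative.

```python
# Compact collapsed Gibbs sampler for LDA. Hyperparameters and the toy
# corpus are illustrative choices.
import numpy as np

def lda_gibbs(docs, V, K=2, alpha=0.1, beta=0.01, n_iters=200, seed=0):
    rng = np.random.default_rng(seed)
    z = [rng.integers(K, size=len(d)) for d in docs]  # topic assignments
    ndk = np.zeros((len(docs), K)); nkw = np.zeros((K, V)); nk = np.zeros(K)
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]; ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                      # remove current assignment
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum()) # collapsed conditional
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return nkw + beta   # unnormalized topic-word distributions

# Toy corpus over a 4-word vocabulary: two clear "topics" {0,1} and {2,3}.
docs = [[0, 1, 0, 1], [0, 0, 1], [2, 3, 2], [3, 3, 2, 2]]
topics = lda_gibbs(docs, V=4)
print(np.round(topics / topics.sum(axis=1, keepdims=True), 2))
```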
math.ST cs.LG stat.TH | null | 1307.0366 | null | null | http://arxiv.org/pdf/1307.0366v4 | 2019-07-27T12:33:21Z | 2013-07-01T13:41:40Z | Learning directed acyclic graphs based on sparsest permutations | We consider the problem of learning a Bayesian network or directed acyclic
graph (DAG) model from observational data. A number of constraint-based,
score-based and hybrid algorithms have been developed for this purpose. For
constraint-based methods, statistical consistency guarantees typically rely on
the faithfulness assumption, which has been shown to be restrictive especially
for graphs with cycles in the skeleton. However, there is only limited work on
consistency guarantees for score-based and hybrid algorithms and it has been
unclear whether consistency guarantees can be proven under weaker conditions
than the faithfulness assumption. In this paper, we propose the sparsest
permutation (SP) algorithm. This algorithm is based on finding the causal
ordering of the variables that yields the sparsest DAG. We prove that this new
score-based method is consistent under strictly weaker conditions than the
faithfulness assumption. We also demonstrate through simulations on small DAGs
that the SP algorithm compares favorably to the constraint-based PC and SGS
algorithms as well as the score-based Greedy Equivalence Search and hybrid
Max-Min Hill-Climbing method. In the Gaussian setting, we prove that our
algorithm boils down to finding the permutation of the variables with the
sparsest Cholesky decomposition of the inverse covariance matrix. Using this
connection, we show that in the oracle setting, where the true covariance
matrix is known, the SP algorithm is in fact equivalent to $\ell_0$-penalized
maximum likelihood estimation.
| [
"['Garvesh Raskutti' 'Caroline Uhler']",
"Garvesh Raskutti and Caroline Uhler"
] |
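A brute-force sketch of the Gaussian, oracle-setting version described above: for each variable ordering, permute the true inverse covariance, take its Cholesky factor, and count the nonzero fill; the sparsest ordering is returned. This is only feasible for very small p and ignores the finite-sample and scoring details of the SP algorithm.

```python
# Brute-force sketch of the Gaussian sparsest-permutation idea in the oracle
# setting: the ordering whose permuted precision matrix has the sparsest
# Cholesky factor wins. Feasible only for very small p.
from itertools import permutations
import numpy as np

def sparsest_permutation(Theta, tol=1e-8):
    p = Theta.shape[0]
    best = None
    for perm in permutations(range(p)):
        L = np.linalg.cholesky(Theta[np.ix_(perm, perm)])
        nnz = np.sum(np.abs(L[np.tril_indices(p, k=-1)]) > tol)  # edge count
        if best is None or nnz < best[0]:
            best = (nnz, perm)
    return best

# Precision matrix of a chain DAG x1 -> x2 -> x3 with unit noise variances.
Theta = np.array([[2.0, -1.0, 0.0],
                  [-1.0, 2.0, -1.0],
                  [0.0, -1.0, 1.0]])
print(sparsest_permutation(Theta))   # a causal ordering with 2 edges
```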
stat.ML cs.LG | null | 1307.0414 | null | null | http://arxiv.org/pdf/1307.0414v1 | 2013-07-01T15:53:22Z | 2013-07-01T15:53:22Z | Challenges in Representation Learning: A report on three machine
learning contests | The ICML 2013 Workshop on Challenges in Representation Learning focused on
three challenges: the black box learning challenge, the facial expression
recognition challenge, and the multimodal learning challenge. We describe the
datasets created for these challenges and summarize the results of the
competitions. We provide suggestions for organizers of future challenges and
some comments on what kind of knowledge can be gained from machine learning
competitions.
| [
"['Ian J. Goodfellow' 'Dumitru Erhan' 'Pierre Luc Carrier'\n 'Aaron Courville' 'Mehdi Mirza' 'Ben Hamner' 'Will Cukierski'\n 'Yichuan Tang' 'David Thaler' 'Dong-Hyun Lee' 'Yingbo Zhou'\n 'Chetan Ramaiah' 'Fangxiang Feng' 'Ruifan Li' 'Xiaojie Wang'\n 'Dimitris Athanasakis' 'John Shawe-Taylor' 'Maxim Milakov' 'John Park'\n 'Radu Ionescu' 'Marius Popescu' 'Cristian Grozea' 'James Bergstra'\n 'Jingjing Xie' 'Lukasz Romaszko' 'Bing Xu' 'Zhang Chuang' 'Yoshua Bengio']",
"Ian J. Goodfellow, Dumitru Erhan, Pierre Luc Carrier, Aaron Courville,\n Mehdi Mirza, Ben Hamner, Will Cukierski, Yichuan Tang, David Thaler,\n Dong-Hyun Lee, Yingbo Zhou, Chetan Ramaiah, Fangxiang Feng, Ruifan Li,\n Xiaojie Wang, Dimitris Athanasakis, John Shawe-Taylor, Maxim Milakov, John\n Park, Radu Ionescu, Marius Popescu, Cristian Grozea, James Bergstra, Jingjing\n Xie, Lukasz Romaszko, Bing Xu, Zhang Chuang, and Yoshua Bengio"
] |
cs.CV cs.AI cs.LG | 10.1109/TIP.2016.2544703 | 1307.0426 | null | null | http://arxiv.org/abs/1307.0426v3 | 2016-04-26T11:05:18Z | 2013-07-01T16:16:40Z | An Empirical Study into Annotator Agreement, Ground Truth Estimation,
and Algorithm Evaluation | Although agreement between annotators has been studied in the past from a
statistical viewpoint, little work has attempted to quantify the extent to
which this phenomenon affects the evaluation of computer vision (CV) object
detection algorithms. Many researchers utilise ground truth (GT) in experiments
and more often than not this GT is derived from one annotator's opinion. How
does the difference in opinion affect an algorithm's evaluation? Four examples
of typical CV problems are chosen, and a methodology is applied to each to
quantify the inter-annotator variance and to offer insight into the mechanisms
behind agreement and the use of GT. It is found that when detecting linear
objects annotator agreement is very low. The agreement in object position,
linear or otherwise, can be partially explained through basic image properties.
Automatic object detectors are compared to annotator agreement and it is found
that a clear relationship exists. Several methods for calculating GTs from a
number of annotations are applied and the resulting differences in the
performance of the object detectors are quantified. It is found that the rank
of a detector is highly dependent upon the method used to form the GT. It is
also found that although the STAPLE and LSML GT estimation methods appear to
represent the mean of the performance measured using the individual
annotations, when there are few annotations, or there is a large variance in
them, these estimates tend to degrade. Furthermore, one of the most commonly
adopted annotation combination methods--consensus voting--accentuates more
obvious features, which results in an overestimation of the algorithm's
performance. Finally, it is concluded that in some datasets it may not be
possible to state with any confidence that one algorithm outperforms another
when evaluating upon one GT and a method for calculating confidence bounds is
discussed.
| [
"Thomas A. Lampert, Andr\\'e Stumpf, Pierre Gan\\c{c}arski",
"['Thomas A. Lampert' 'André Stumpf' 'Pierre Gançarski']"
] |
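
The consensus-voting effect described in this abstract is easy to reproduce. The sketch below (all data synthetic; a 3-of-5 majority threshold and the IoU metric are illustrative assumptions) shows how majority voting keeps only the pixels most annotators agree on, so a detector that finds only the obvious part of an object scores higher against the consensus GT than against most individual annotators:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 annotators label the same 100 pixels. An
# "obvious" core (pixels 0-39) is marked by everyone; a faint
# extension (pixels 40-69) is marked inconsistently.
truth_core = np.zeros(100, dtype=bool)
truth_core[:40] = True
annotations = []
for _ in range(5):
    a = truth_core.copy()
    a[40:70] = rng.random(30) < 0.4   # disputed faint region
    annotations.append(a)
annotations = np.stack(annotations)

# Consensus voting: keep a pixel only if a majority marked it.
consensus_gt = annotations.sum(axis=0) >= 3

def iou(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# A detector that only finds the obvious core scores highly against
# the consensus GT, while its score varies across individual
# annotators' GTs -- the overestimation effect the abstract describes.
detector = truth_core
print("IoU vs consensus:", round(iou(detector, consensus_gt), 3))
print("IoU vs individuals:",
      [round(iou(detector, a), 3) for a in annotations])
```
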
quant-ph cs.LG | 10.1103/PhysRevLett.113.130503 | 1307.0471 | null | null | http://arxiv.org/abs/1307.0471v3 | 2014-07-10T04:33:52Z | 2013-07-01T18:35:53Z | Quantum support vector machine for big data classification | Supervised machine learning is the classification of new data based on
already classified training examples. In this work, we show that the support
vector machine, an optimized binary classifier, can be implemented on a quantum
computer, with complexity logarithmic in the size of the vectors and the number
of training examples. In cases when classical sampling algorithms require
polynomial time, an exponential speed-up is obtained. At the core of this
quantum big data algorithm is a non-sparse matrix exponentiation technique for
efficiently performing a matrix inversion of the training data inner-product
(kernel) matrix.
| [
"Patrick Rebentrost, Masoud Mohseni, Seth Lloyd",
"['Patrick Rebentrost' 'Masoud Mohseni' 'Seth Lloyd']"
] |
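
The quantum algorithm builds on the least-squares SVM formulation, in which training reduces to solving a single linear system in the kernel matrix; that inversion is the step the non-sparse matrix exponentiation technique accelerates. A classical sketch of the formulation (RBF kernel and regularization constant are arbitrary choices):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared distances, then Gaussian kernel.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, gamma=1.0, C=10.0):
    """Least-squares SVM: solve the (M+1)x(M+1) linear system
    [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y].
    This kernel-matrix inversion is what the quantum algorithm
    performs in time logarithmic in M, under its stated assumptions."""
    M = len(y)
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((M + 1, M + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(M) / C
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_test, gamma=1.0):
    return np.sign(rbf_kernel(X_test, X_train, gamma) @ alpha + b)

# Tiny smoke test on two separable blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
b, alpha = lssvm_fit(X, y)
print((lssvm_predict(X, b, alpha, X) == y).mean())
```
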
math.OC cs.DC cs.LG | null | 1307.0473 | null | null | http://arxiv.org/pdf/1307.0473v2 | 2015-01-29T04:20:48Z | 2013-07-01T18:46:06Z | Online discrete optimization in social networks in the presence of
Knightian uncertainty | We study a model of collective real-time decision-making (or learning) in a
social network operating in an uncertain environment, for which no a priori
probabilistic model is available. Instead, the environment's impact on the
agents in the network is seen through a sequence of cost functions, revealed to
the agents in a causal manner only after all the relevant actions are taken.
There are two kinds of costs: individual costs incurred by each agent and
local-interaction costs incurred by each agent and its neighbors in the social
network. Moreover, agents have inertia: each agent has a default mixed strategy
that stays fixed regardless of the state of the environment, and must expend
effort to deviate from this strategy in order to respond to cost signals coming
from the environment. We construct a decentralized strategy, wherein each agent
selects its action based only on the costs directly affecting it and on the
decisions made by its neighbors in the network. In this setting, we quantify
social learning in terms of regret, which is given by the difference between
the realized network performance over a given time horizon and the best
performance that could have been achieved in hindsight by a fictitious
centralized entity with full knowledge of the environment's evolution. We show
that our strategy achieves regret that scales polylogarithmically with the
time horizon and polynomially with the number of agents and the maximum number
of neighbors of any agent in the social network.
| [
"Maxim Raginsky and Angelia Nedi\\'c",
"['Maxim Raginsky' 'Angelia Nedić']"
] |
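
The regret benchmark here is the standard one from online learning. For a single agent with no inertia or local-interaction costs, the classic exponential-weights (Hedge) learner already achieves sublinear regret; the sketch below shows that single-agent building block only, not the paper's decentralized strategy:

```python
import numpy as np

def hedge(cost_seq, eta=0.1):
    """Exponential-weights learner: maintain a mixed strategy over
    actions, observe a cost vector each round, reweight. Returns the
    realized cost and the best fixed action's cost in hindsight."""
    T, n = cost_seq.shape
    w = np.ones(n)
    realized = 0.0
    for t in range(T):
        p = w / w.sum()
        realized += p @ cost_seq[t]          # expected cost this round
        w *= np.exp(-eta * cost_seq[t])      # multiplicative update
    best_fixed = cost_seq.sum(axis=0).min()
    return realized, best_fixed

rng = np.random.default_rng(0)
costs = rng.random((1000, 5))
costs[:, 2] -= 0.2                            # action 2 is slightly better
realized, best = hedge(costs)
print("regret:", realized - best)             # grows sublinearly in T
```
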
stat.ML cs.LG | null | 1307.0578 | null | null | http://arxiv.org/pdf/1307.0578v1 | 2013-07-02T02:54:09Z | 2013-07-02T02:54:09Z | A non-parametric conditional factor regression model for
high-dimensional input and response | In this paper, we propose a non-parametric conditional factor regression
(NCFR) model for domains with high-dimensional input and response. NCFR enhances
linear regression in two ways: a) introducing low-dimensional latent factors
leading to dimensionality reduction and b) integrating an Indian Buffet Process
as a prior for the latent factors to derive unlimited sparse dimensions.
Experimental results comparing NCFR to several alternatives give evidence of
its remarkable prediction performance.
| [
"['Ava Bargi' 'Richard Yi Da Xu' 'Massimo Piccardi']",
"Ava Bargi, Richard Yi Da Xu, Massimo Piccardi"
] |
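
The Indian Buffet Process prior is what yields the unlimited sparse latent dimensions. A sketch of its standard generative process (the concentration parameter is an arbitrary choice):

```python
import numpy as np

def sample_ibp(n_customers, alpha=2.0, seed=0):
    """Generative Indian Buffet Process: customer n takes each existing
    dish k with probability m_k / n, then tries Poisson(alpha / n) new
    dishes. Rows = data points, columns = unbounded sparse features."""
    rng = np.random.default_rng(seed)
    dishes = []                                # popularity m_k per dish
    rows = []
    for n in range(1, n_customers + 1):
        row = [rng.random() < m / n for m in dishes]
        k_new = rng.poisson(alpha / n)
        for k, taken in enumerate(row):
            dishes[k] += int(taken)
        dishes.extend([1] * k_new)
        row.extend([True] * k_new)
        rows.append(row)
    K = len(dishes)
    Z = np.zeros((n_customers, K), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

Z = sample_ibp(10)
print(Z.shape, Z.sum(axis=0))   # the number of active factors is unbounded
```
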
cs.LG cs.DB cs.SD | null | 1307.0589 | null | null | http://arxiv.org/pdf/1307.0589v1 | 2013-07-02T04:59:19Z | 2013-07-02T04:59:19Z | The Orchive : Data mining a massive bioacoustic archive | The Orchive is a large collection of over 20,000 hours of audio recordings
from the OrcaLab research facility located off the northern tip of Vancouver
Island. It contains orca vocalizations recorded from 1980 to the present
and is one of the largest resources of bioacoustic data in the world. We
have developed a web-based interface that allows researchers to listen to these
recordings, view waveform and spectral representations of the audio, label
clips with annotations, and view the results of machine learning classifiers
based on automatic audio feature extraction. In this paper we describe such
classifiers that discriminate between background noise, orca calls, and the
voice notes that are present in most of the tapes. Furthermore we show
classification results for individual calls based on a previously existing orca
call catalog. We have also experimentally investigated the scalability of
classifiers over the entire Orchive.
| [
"['Steven Ness' 'Helena Symonds' 'Paul Spong' 'George Tzanetakis']",
"Steven Ness, Helena Symonds, Paul Spong, George Tzanetakis"
] |
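
The abstract does not spell out the feature set or classifier, so the following is only a generic stand-in for the kind of pipeline described: clip-level MFCC summaries fed to a random forest, with synthetic audio standing in for the three classes (the librosa and scikit-learn choices, and all signal parameters, are assumptions):

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

SR = 22050

def clip_features(y, sr=SR):
    # Mean and std of MFCCs over the clip: a common, simple summary
    # for clip-level bioacoustic classification.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Synthetic stand-ins for the three classes named in the abstract:
# background noise, tonal orca calls, and spoken voice notes.
rng = np.random.default_rng(0)
def synth(kind):
    t = np.linspace(0, 1.0, SR)
    if kind == "noise":
        return rng.normal(0, 0.3, SR)
    if kind == "call":                      # high-frequency tone sweep
        return np.sin(2 * np.pi * (4000 + 2000 * t) * t)
    return np.sin(2 * np.pi * 150 * t) * rng.normal(1, 0.2, SR)  # voice-ish

X = np.array([clip_features(synth(k)) for k in ["noise", "call", "voice"] * 20])
y = np.array(["noise", "call", "voice"] * 20)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))
```
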
cs.IT cs.LG math.IT | null | 1307.0643 | null | null | http://arxiv.org/pdf/1307.0643v1 | 2013-07-02T09:35:16Z | 2013-07-02T09:35:16Z | Discovering the Markov network structure | In this paper a new proof is given for the supermodularity of information
content. Using the decomposability of the information content, an algorithm is
given for discovering the Markov network graph structure endowed by the
pairwise Markov property of a given probability distribution. A discrete
probability distribution is given for which the equivalence of the
Hammersley-Clifford theorem holds even though some of the possible vector
realizations are taken on with zero probability. Our algorithm for discovering
the pairwise Markov network is illustrated on this example, too.
| [
"['Edith Kovács' 'Tamás Szántai']",
"Edith Kov\\'acs and Tam\\'as Sz\\'antai"
] |
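
For intuition about the pairwise Markov property the algorithm targets: in the Gaussian case, an edge between two variables corresponds to a nonzero entry of the precision matrix. The sketch below uses that fact only as an illustration; the paper's actual algorithm works on discrete distributions via the decomposability of information content:

```python
import numpy as np

def pairwise_markov_graph(samples, thresh=0.1):
    """For Gaussian data, variables i and j are conditionally
    independent given all others iff the (i, j) entry of the precision
    (inverse covariance) matrix is zero -- an instance of the pairwise
    Markov property the abstract refers to."""
    prec = np.linalg.inv(np.cov(samples, rowvar=False))
    d = np.sqrt(np.diag(prec))
    partial_corr = -prec / np.outer(d, d)     # partial correlations
    np.fill_diagonal(partial_corr, 0.0)
    return np.abs(partial_corr) > thresh      # adjacency matrix

# Chain x0 -> x1 -> x2: only the (0, 2) edge should be absent.
rng = np.random.default_rng(0)
x0 = rng.normal(size=5000)
x1 = x0 + rng.normal(size=5000)
x2 = x1 + rng.normal(size=5000)
print(pairwise_markov_graph(np.column_stack([x0, x1, x2])).astype(int))
```
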
cs.LG stat.ML | null | 1307.0781 | null | null | http://arxiv.org/pdf/1307.0781v1 | 2013-07-02T18:09:59Z | 2013-07-02T18:09:59Z | Distributed Online Big Data Classification Using Context Information | Distributed, online data mining systems have emerged as a result of
applications requiring analysis of large amounts of correlated and
high-dimensional data produced by multiple distributed data sources. We propose
a distributed online data classification framework where data is gathered by
distributed data sources and processed by a heterogeneous set of distributed
learners which learn online, at run-time, how to classify the different data
streams either by using their locally available classification functions or by
helping each other by classifying each other's data. Importantly, since the
data is gathered at different locations, sending the data to another learner to
process incurs additional costs such as delays, and hence this will be only
beneficial if the benefits obtained from a better classification will exceed
the costs. We model the problem of joint classification by the distributed and
heterogeneous learners from multiple data sources as a distributed contextual
bandit problem where each data instance is characterized by a specific context. We
develop a distributed online learning algorithm for which we can prove
sublinear regret. Compared to prior work in distributed online data mining, our
work is the first to provide analytic regret results characterizing the
performance of the proposed algorithm.
| [
"['Cem Tekin' 'Mihaela van der Schaar']",
"Cem Tekin, Mihaela van der Schaar"
] |
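
The contextual-bandit machinery can be illustrated with a single learner and discretized contexts; the paper's distributed algorithm additionally lets learners forward data to each other and accounts for the resulting costs. A minimal sketch (the context and arm counts, and the accuracy table, are illustrative):

```python
import numpy as np

class ContextualUCB:
    """UCB1 kept separately per discretized context: the simplest
    version of the contextual-bandit machinery the abstract builds on.
    The paper's algorithm additionally routes data between distributed
    learners and accounts for forwarding costs; this sketch omits that."""
    def __init__(self, n_contexts, n_arms):
        self.counts = np.zeros((n_contexts, n_arms))
        self.values = np.zeros((n_contexts, n_arms))

    def select(self, ctx, t):
        untried = np.where(self.counts[ctx] == 0)[0]
        if len(untried):
            return untried[0]
        bonus = np.sqrt(2 * np.log(t + 1) / self.counts[ctx])
        return int(np.argmax(self.values[ctx] + bonus))

    def update(self, ctx, arm, reward):
        self.counts[ctx, arm] += 1
        n = self.counts[ctx, arm]
        self.values[ctx, arm] += (reward - self.values[ctx, arm]) / n

# The accuracy of arm (classifier) a depends on the stream's context.
rng = np.random.default_rng(0)
true_acc = np.array([[0.9, 0.5], [0.4, 0.8]])   # context x arm
bandit = ContextualUCB(n_contexts=2, n_arms=2)
hits = 0
for t in range(5000):
    ctx = rng.integers(2)
    arm = bandit.select(ctx, t)
    r = float(rng.random() < true_acc[ctx, arm])
    bandit.update(ctx, arm, r)
    hits += r
print("mean reward:", hits / 5000)   # approaches the best-per-context mean
```
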
cs.LG cs.AI cs.DB stat.ML | 10.1109/TPAMI.2014.2343973 | 1307.0803 | null | null | http://arxiv.org/abs/1307.0803v2 | 2015-02-06T16:15:38Z | 2013-07-02T19:35:21Z | Data Fusion by Matrix Factorization | For most problems in science and engineering we can obtain data sets that
describe the observed system from various perspectives and record the behavior
of its individual components. Heterogeneous data sets can be collectively mined
by data fusion. Fusion can focus on a specific target relation and exploit
directly associated data together with contextual data and data about the system's
constraints. In the paper we describe a data fusion approach with penalized
matrix tri-factorization (DFMF) that simultaneously factorizes data matrices to
reveal hidden associations. The approach can directly consider any data that
can be expressed in a matrix, including those from feature-based
representations, ontologies, associations and networks. We demonstrate the
utility of DFMF for gene function prediction task with eleven different data
sources and for prediction of pharmacologic actions by fusing six data sources.
Our data fusion algorithm compares favorably to alternative data integration
approaches and achieves higher accuracy than can be obtained from any single
data source alone.
| [
"['Marinka Žitnik' 'Blaž Zupan']",
"Marinka \\v{Z}itnik and Bla\\v{z} Zupan"
] |
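
The core operation is a tri-factorization X ~ F S G^T. DFMF factorizes many relation matrices simultaneously with shared factors and penalties; the sketch below shows only the basic decomposition, by plain gradient descent rather than the paper's update rules:

```python
import numpy as np

def tri_factorize(X, k1, k2, steps=3000, lr=1e-3, seed=0):
    """Plain tri-factorization X ~ F S G^T by gradient descent on
    squared error. DFMF instead factorizes a whole collection of
    relation matrices at once with shared factors and penalties;
    this shows only the core decomposition."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    F = rng.normal(0, 0.1, (n, k1))
    S = rng.normal(0, 0.1, (k1, k2))
    G = rng.normal(0, 0.1, (m, k2))
    for _ in range(steps):
        R = X - F @ S @ G.T                  # residual
        F += lr * R @ G @ S.T                # gradient steps on each factor
        S += lr * F.T @ R @ G
        G += lr * R.T @ F @ S
    return F, S, G

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4)) @ rng.normal(size=(4, 40))  # rank-4 matrix
F, S, G = tri_factorize(X, k1=4, k2=4)
print("relative error:",
      np.linalg.norm(X - F @ S @ G.T) / np.linalg.norm(X))
```
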
stat.ML cs.AI cs.LG cs.RO | null | 1307.0813 | null | null | http://arxiv.org/pdf/1307.0813v2 | 2014-02-12T09:17:52Z | 2013-07-02T07:59:32Z | Multi-Task Policy Search | Learning policies that generalize across multiple tasks is an important and
challenging research topic in reinforcement learning and robotics. Training
individual policies for every single potential task is often impractical,
especially for continuous task variations, requiring more principled approaches
to share and transfer knowledge among similar tasks. We present a novel
approach for learning a nonlinear feedback policy that generalizes across
multiple tasks. The key idea is to define a parametrized policy as a function
of both the state and the task, which allows learning a single policy that
generalizes across multiple known and unknown tasks. Applications of our novel
approach to reinforcement and imitation learning in real-robot experiments are
shown.
| [
"['Marc Peter Deisenroth' 'Peter Englert' 'Jan Peters' 'Dieter Fox']",
"Marc Peter Deisenroth, Peter Englert, Jan Peters and Dieter Fox"
] |
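
The key idea, a single policy parametrized by both state and task, can be seen in a toy imitation setting: one policy trained on a few target-reaching tasks generalizes to an unseen target because the task descriptor enters as an input. The paper learns nonlinear feedback policies on real robots; everything below is an illustrative linear simplification:

```python
import numpy as np

# The "task" here is a 1-D target position tau, and a hypothetical
# expert acts as a = 2.0 * (tau - s). We fit a single linear policy on
# the concatenated input [s, tau, 1]; because the task is an input,
# the policy transfers to targets never seen during training.
rng = np.random.default_rng(0)
s = rng.uniform(-1, 1, 500)
tau = rng.choice([-0.5, 0.0, 0.5], size=500)      # training tasks
a_expert = 2.0 * (tau - s)

Phi = np.column_stack([s, tau, np.ones_like(s)])  # features of (state, task)
w, *_ = np.linalg.lstsq(Phi, a_expert, rcond=None)

def policy(state, task):
    return np.array([state, task, 1.0]) @ w

# Evaluate on a task never seen in training (tau = 0.8).
state = -0.3
print("policy:", policy(state, 0.8), " expert:", 2.0 * (0.8 - state))
```
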
stat.ML cs.IR cs.LG | null | 1307.0846 | null | null | http://arxiv.org/pdf/1307.0846v1 | 2013-07-02T20:51:40Z | 2013-07-02T20:51:40Z | Semi-supervised Ranking Pursuit | We propose a novel sparse preference learning/ranking algorithm. Our
algorithm approximates the true utility function by a weighted sum of basis
functions using the squared loss on pairs of data points, and is a
generalization of the kernel matching pursuit method. It can operate both in a
supervised and a semi-supervised setting and allows efficient search for
multiple, near-optimal solutions. Furthermore, we describe an extension of the
algorithm suitable for combined ranking and regression tasks. In our
experiments we demonstrate that the proposed algorithm outperforms several
state-of-the-art learning methods when taking into account unlabeled data and
performs comparably in a supervised learning scenario, while providing sparser
solutions.
| [
"['Evgeni Tsivtsivadze' 'Tom Heskes']",
"Evgeni Tsivtsivadze and Tom Heskes"
] |
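
The kernel matching pursuit skeleton that the algorithm generalizes works greedily on pairwise residuals. A sketch under squared pairwise loss (the semi-supervised extension and multi-solution search are omitted; the RBF kernel and unit margin target are assumptions):

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def ranking_pursuit(X, prefs, n_basis=5, gamma=1.0):
    """Greedy matching pursuit on pairs: f(x) = sum_k w_k K(x, x_k),
    with squared loss ((f(x_i) - f(x_j)) - 1)^2 over preferences
    i > j. Each round picks the kernel basis function (and weight)
    that most reduces the pairwise residual."""
    K = rbf(X, X, gamma)
    i, j = prefs[:, 0], prefs[:, 1]
    D = K[i] - K[j]                  # pairwise basis responses
    r = np.ones(len(prefs))          # residual of target margin 1
    chosen, weights = [], []
    for _ in range(n_basis):
        proj = D.T @ r               # correlation of each basis with residual
        norms = (D ** 2).sum(axis=0)
        gains = proj ** 2 / np.maximum(norms, 1e-12)
        c = int(np.argmax(gains))
        w = proj[c] / norms[c]       # optimal weight for that basis
        chosen.append(c)
        weights.append(w)
        r = r - w * D[:, c]
    return chosen, np.array(weights), r

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (40, 1))
utility = X[:, 0]                                  # true utility: x itself
pairs = rng.integers(0, 40, (200, 2))
pairs = np.array([(a, b) if utility[a] > utility[b] else (b, a)
                  for a, b in pairs if a != b])
idx, w, resid = ranking_pursuit(X, pairs)
print("picked bases:", idx, " residual norm:", np.linalg.norm(resid))
```
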
null | null | 1307.0995 | null | null | http://arxiv.org/pdf/1307.0995v1 | 2013-07-03T12:54:25Z | 2013-07-03T12:54:25Z | An Efficient Model Selection for Gaussian Mixture Model in a Bayesian
Framework | In order to cluster or partition data, we often use Expectation-and-Maximization (EM) or Variational approximation with a Gaussian Mixture Model (GMM), which is a parametric probability density function represented as a weighted sum of $hat{K}$ Gaussian component densities. However, model selection to find underlying $hat{K}$ is one of the key concerns in GMM clustering, since we can obtain the desired clusters only when $hat{K}$ is known. In this paper, we propose a new model selection algorithm to explore $hat{K}$ in a Bayesian framework. The proposed algorithm builds the density of the model order which any information criterions such as AIC and BIC basically fail to reconstruct. In addition, this algorithm reconstructs the density quickly as compared to the time-consuming Monte Carlo simulation. | [
"['Ji Won Yoon']"
] |
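
The information-criterion baseline the abstract argues against is a loop over candidate model orders. A sketch with BIC (the paper's contribution is instead a density over the model order, which this baseline cannot produce):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit a GMM for each candidate K and pick the K minimizing BIC.
# This yields only a point estimate of the model order.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-4, 1, (200, 2)),
               rng.normal(0, 1, (200, 2)),
               rng.normal(4, 1, (200, 2))])       # true K = 3

bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 8)}
print("BIC per K:", {k: round(v) for k, v in bics.items()})
print("selected K:", min(bics, key=bics.get))
```
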
math.CO cs.LG math.NT | 10.1137/140978090 | 1307.1058 | null | null | http://arxiv.org/abs/1307.1058v2 | 2014-07-19T07:10:33Z | 2013-07-03T16:05:10Z | On the minimal teaching sets of two-dimensional threshold functions | It is known that a minimal teaching set of any threshold function on the
two-dimensional rectangular grid consists of 3 or 4 points. We derive exact
formulae for the numbers of functions corresponding to these values and further
refine them in the case of a minimal teaching set of size 3. We also prove that
the average cardinality of the minimal teaching sets of threshold functions is
asymptotically 7/2.
We further present corollaries of these results concerning some special
arrangements of lines in the plane.
| [
"['Max A. Alekseyev' 'Marina G. Basova' 'Nikolai Yu. Zolotykh']",
"Max A. Alekseyev, Marina G. Basova, Nikolai Yu. Zolotykh"
] |
cs.CE cs.LG | null | 1307.1078 | null | null | http://arxiv.org/pdf/1307.1078v1 | 2013-07-03T16:55:32Z | 2013-07-03T16:55:32Z | Investigating the Detection of Adverse Drug Events in a UK General
Practice Electronic Health-Care Database | Data-mining techniques have frequently been developed for spontaneous
reporting databases. These techniques aim to find adverse drug events
accurately and efficiently. Spontaneous reporting databases are prone to
missing information, under-reporting, and incorrect entries. This often results
in a detection lag or prevents the detection of some adverse drug events. These
limitations do not occur in electronic health-care databases. In this paper,
existing methods developed for spontaneous reporting databases are implemented
on both a spontaneous reporting database and a general practice electronic
health-care database and compared. The results suggest that the application of
existing methods to the general practice database may help find signals that
have gone undetected when using the spontaneous reporting system database. In
addition the general practice database provides far more supplementary
information, that if incorporated in analysis could provide a wealth of
information for identifying adverse events more accurately.
| [
"Jenna Reps, Jan Feyereisl, Jonathan M. Garibaldi, Uwe Aickelin, Jack\n E. Gibson, Richard B. Hubbard",
"['Jenna Reps' 'Jan Feyereisl' 'Jonathan M. Garibaldi' 'Uwe Aickelin'\n 'Jack E. Gibson' 'Richard B. Hubbard']"
] |
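
The abstract does not name the signal-detection methods used; one standard statistic for spontaneous reporting data is the proportional reporting ratio (PRR), shown here purely for illustration with made-up counts:

```python
import math

def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 report table:
        a = reports with (drug, event)   b = drug, other events
        c = other drugs, event           d = other drugs, other events
    A PRR above ~2 with enough cases is a common signal heuristic."""
    ratio = (a / (a + b)) / (c / (c + d))
    # Approximate 95% confidence interval on the log scale.
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(ratio) - 1.96 * se)
    hi = math.exp(math.log(ratio) + 1.96 * se)
    return ratio, (lo, hi)

print(prr(a=30, b=970, c=120, d=98880))   # elevated reporting rate
```
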
cs.CE cs.LG | null | 1307.1079 | null | null | http://arxiv.org/pdf/1307.1079v1 | 2013-07-03T17:03:31Z | 2013-07-03T17:03:31Z | Application of a clustering framework to UK domestic electricity data | This paper takes an approach to clustering domestic electricity load profiles
that has been successfully used with data from Portugal and applies it to UK
data. Clustering techniques are applied and it is found that the preferred
technique in the Portuguese work (a two-stage process combining Self-Organising
Maps and k-means) is not appropriate for the UK data. The work shows that up to
nine clusters of households can be identified with the differences in usage
profiles being visually striking. This demonstrates the appropriateness of
breaking the electricity usage patterns down to more detail than the two load
profiles currently published by the electricity industry. The paper details
initial results using data collected in Milton Keynes around 1990. Further work
is described and will concentrate on building accurate and meaningful clusters
of similar electricity users in order to better direct demand side management
initiatives to the most relevant target customers.
| [
"['Ian Dent' 'Uwe Aickelin' 'Tom Rodden']",
"Ian Dent, Uwe Aickelin, Tom Rodden"
] |
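
The two-stage scheme under test, a SOM followed by k-means on the SOM codebook, looks roughly as follows (synthetic daily profiles; MiniSom is a stand-in SOM implementation, and all hyperparameters are illustrative):

```python
import numpy as np
from minisom import MiniSom          # stand-in SOM implementation
from sklearn.cluster import KMeans

# Synthetic 24-point daily load curves with morning and evening peaks.
rng = np.random.default_rng(0)
hours = np.arange(24)
morning = np.exp(-0.5 * ((hours - 8) / 2.0) ** 2)
evening = np.exp(-0.5 * ((hours - 19) / 2.0) ** 2)
profiles = np.vstack(
    [m * morning + e * evening + rng.normal(0, 0.05, 24)
     for m, e in rng.uniform(0.2, 1.5, (300, 2))])

# Stage 1: train a SOM on the profiles.
som = MiniSom(8, 8, 24, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(profiles, 5000)
codebook = som.get_weights().reshape(-1, 24)      # 64 prototype profiles

# Stage 2: k-means on the SOM codebook vectors.
km = KMeans(n_clusters=9, n_init=10, random_state=0).fit(codebook)

# Assign each household to the cluster of its best-matching SOM unit.
units = np.array([np.argmin(((codebook - p) ** 2).sum(1)) for p in profiles])
labels = km.labels_[units]
print(np.bincount(labels))
```
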
stat.ML cs.LG math.OC | null | 1307.1192 | null | null | http://arxiv.org/pdf/1307.1192v1 | 2013-07-04T03:17:23Z | 2013-07-04T03:17:23Z | AdaBoost and Forward Stagewise Regression are First-Order Convex
Optimization Methods | Boosting methods are highly popular and effective supervised learning methods
which combine weak learners into a single accurate model with good statistical
performance. In this paper, we analyze two well-known boosting methods,
AdaBoost and Incremental Forward Stagewise Regression (FS$_\varepsilon$), by
establishing their precise connections to the Mirror Descent algorithm, which
is a first-order method in convex optimization. As a consequence of these
connections we obtain novel computational guarantees for these boosting
methods. In particular, we characterize convergence bounds of AdaBoost, related
to both the margin and log-exponential loss function, for any step-size
sequence. Furthermore, this paper presents, for the first time, precise
computational complexity results for FS$_\varepsilon$.
| [
"Robert M. Freund, Paul Grigas, Rahul Mazumder",
"['Robert M. Freund' 'Paul Grigas' 'Rahul Mazumder']"
] |
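
FS$_\varepsilon$ itself is only a few lines: each step moves the coefficient of the predictor most correlated with the residual by a small fixed amount. A sketch (step size and iteration count are arbitrary):

```python
import numpy as np

def forward_stagewise(X, y, eps=0.01, n_steps=2000):
    """Incremental Forward Stagewise Regression (FS_eps): at each step,
    find the predictor most correlated with the residual and nudge its
    coefficient by eps in the direction of that correlation. The paper
    relates exactly this update to Mirror Descent."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()
    for _ in range(n_steps):
        corr = X.T @ r
        j = int(np.argmax(np.abs(corr)))
        beta[j] += eps * np.sign(corr[j])
        r -= eps * np.sign(corr[j]) * X[:, j]
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
X /= np.linalg.norm(X, axis=0)             # standardize columns
beta_true = np.zeros(10)
beta_true[:3] = [4.0, -3.0, 2.0]
y = X @ beta_true + rng.normal(0, 0.1, 100)
print(np.round(forward_stagewise(X, y), 2))
```
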
cs.LG cs.NE | null | 1307.1275 | null | null | http://arxiv.org/pdf/1307.1275v1 | 2013-07-04T11:10:45Z | 2013-07-04T11:10:45Z | Constructing Hierarchical Image-tags Bimodal Representations for Word
Tags Alternative Choice | This paper describes our solution to the multi-modal learning challenge of
ICML. This solution comprises constructing three-level representations in three
consecutive stages and choosing correct tag words with a data-specific
strategy. Firstly, we use typical methods to obtain level-1 representations.
Each image is represented using MPEG-7 and gist descriptors with additional
features released by the contest organizers. The corresponding word tags
are represented by a bag-of-words model with a dictionary of 4000 words.
Secondly, we learn the level-2 representations using two stacked RBMs for each
modality. Thirdly, we propose a bimodal auto-encoder to learn the
similarities/dissimilarities between the pairwise image-tags as level-3
representations. Finally, during the test phase, based on one observation of
the dataset, we devise a data-specific strategy for choosing the correct tag
words, leading to a marked improvement in overall performance. Our final average
accuracy on the private test set is 100%, which ranks the first place in this
challenge.
| [
"Fangxiang Feng and Ruifan Li and Xiaojie Wang",
"['Fangxiang Feng' 'Ruifan Li' 'Xiaojie Wang']"
] |
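
The level-3 bimodal auto-encoder is described only at a high level, so the following PyTorch sketch is a loose interpretation: two encoders map each modality into a shared code from which both modalities are reconstructed. The dictionary size of 4000 matches the abstract; all other dimensions and training details are assumptions:

```python
import torch
import torch.nn as nn

IMG_DIM, TAG_DIM, CODE = 256, 4000, 64   # IMG_DIM and CODE are assumed

class BimodalAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_img = nn.Sequential(nn.Linear(IMG_DIM, 128), nn.ReLU(),
                                     nn.Linear(128, CODE))
        self.enc_tag = nn.Sequential(nn.Linear(TAG_DIM, 128), nn.ReLU(),
                                     nn.Linear(128, CODE))
        self.dec_img = nn.Linear(CODE, IMG_DIM)
        self.dec_tag = nn.Linear(CODE, TAG_DIM)

    def forward(self, img, tag):
        code = self.enc_img(img) + self.enc_tag(tag)   # fused shared code
        return self.dec_img(code), self.dec_tag(code)

model = BimodalAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

img = torch.randn(32, IMG_DIM)                     # stand-in level-2 features
tag = (torch.rand(32, TAG_DIM) < 0.01).float()     # sparse bag-of-words
for _ in range(100):
    rec_img, rec_tag = model(img, tag)
    loss = mse(rec_img, img) + mse(rec_tag, tag)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```
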