categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---|
cs.CV cs.LG | null | 1403.1024 | null | null | http://arxiv.org/pdf/1403.1024v4 | 2014-05-15T22:08:59Z | 2014-03-05T07:21:20Z | On learning to localize objects with minimal supervision | Learning to localize objects with minimal supervision is an important problem
in computer vision, since large fully annotated datasets are extremely costly
to obtain. In this paper, we propose a new method that achieves this goal with
only image-level labels of whether the objects are present or not. Our approach
combines a discriminative submodular cover problem for automatically
discovering a set of positive object windows with a smoothed latent SVM
formulation. The latter allows us to leverage efficient quasi-Newton
optimization techniques. Our experiments demonstrate that the proposed approach
provides a 50% relative improvement in mean average precision over the current
state-of-the-art on PASCAL VOC 2007 detection.
| [
"['Hyun Oh Song' 'Ross Girshick' 'Stefanie Jegelka' 'Julien Mairal'\n 'Zaid Harchaoui' 'Trevor Darrell']",
"Hyun Oh Song, Ross Girshick, Stefanie Jegelka, Julien Mairal, Zaid\n Harchaoui, Trevor Darrell"
]
|
stat.ME cs.LG math.ST stat.ML stat.TH | null | 1403.1124 | null | null | http://arxiv.org/pdf/1403.1124v2 | 2014-07-02T08:12:09Z | 2014-03-05T13:40:29Z | Estimating complex causal effects from incomplete observational data | Despite the major advances made in causal modeling, causality is still an
unfamiliar topic for many statisticians. In this paper, it is demonstrated from
beginning to end how causal effects can be estimated from observational
data assuming that the causal structure is known. To make the problem more
challenging, the causal effects are highly nonlinear and the data are missing
at random. The tools used in the estimation include causal models with design,
causal calculus, multiple imputation and generalized additive models. The main
message is that a trained statistician can estimate causal effects by
judiciously combining existing tools.
| [
"Juha Karvanen",
"['Juha Karvanen']"
]
|
cs.LG cs.CL cs.SI | null | 1403.1252 | null | null | http://arxiv.org/pdf/1403.1252v2 | 2014-06-27T17:36:43Z | 2014-03-06T01:36:53Z | Inducing Language Networks from Continuous Space Word Representations | Recent advancements in unsupervised feature learning have developed powerful
latent representations of words. However, it is still not clear what makes one
representation better than another and how we can learn the ideal
representation. Understanding the structure of the latent spaces attained is key to
any future advancement in unsupervised learning. In this work, we introduce a
new view of continuous space word representations as language networks. We
explore two techniques to create language networks from learned features by
inducing them for two popular word representation methods and examining the
properties of their resulting networks. We find that the induced networks
differ from other methods of creating language networks, and that they contain
meaningful community structure.
| [
"['Bryan Perozzi' 'Rami Al-Rfou' 'Vivek Kulkarni' 'Steven Skiena']",
"Bryan Perozzi, Rami Al-Rfou, Vivek Kulkarni, Steven Skiena"
]
|
cs.LG | null | 1403.1329 | null | null | http://arxiv.org/pdf/1403.1329v1 | 2014-03-06T02:42:22Z | 2014-03-06T02:42:22Z | Integer Programming Relaxations for Integrated Clustering and Outlier
Detection | In this paper we present methods for exemplar based clustering with outlier
selection based on the facility location formulation. Given a distance function
and the number of outliers to be found, the methods automatically determine the
number of clusters and outliers. We formulate the problem as an integer program
to which we present relaxations that allow for solutions that scale to large
data sets. The advantages of combining clustering and outlier selection
include: (i) the resulting clusters tend to be compact and semantically
coherent; (ii) the clusters are more robust against data perturbations; and
(iii) the outliers are contextualised by the clusters and more interpretable,
i.e. it is easier to distinguish outliers that are the result of data errors
from those that may be indicative of a new pattern emerging in the data. We
present and contrast three relaxations to the integer program formulation: (i)
a linear programming formulation (LP), (ii) an extension of affinity
propagation to outlier detection (APOC), and (iii) a Lagrangian duality based
formulation (LD). Evaluation on synthetic as well as real data shows the
quality and
scalability of these different methods.
| [
"['Lionel Ott' 'Linsey Pang' 'Fabio Ramos' 'David Howe' 'Sanjay Chawla']",
"Lionel Ott, Linsey Pang, Fabio Ramos, David Howe, Sanjay Chawla"
]
|
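The facility-location objective described in the abstract above can be illustrated with a small greedy sketch. This is an illustration of the objective only, not the paper's LP, APOC, or LD relaxations; the function name and the facility cost `f` are my own assumptions. Opening an exemplar costs `f`, the farthest points are discarded as outliers, and exemplars are added while the total cost decreases, so the number of clusters emerges automatically:

```python
import numpy as np

def greedy_facility_outliers(D, f, n_outliers):
    """Greedy illustration of exemplar clustering with outlier selection
    under a facility-location objective: opening an exemplar costs f, the
    n_outliers points farthest from their nearest exemplar are discarded,
    and exemplars are added while the total cost keeps decreasing, so the
    number of clusters is determined automatically.

    D : (n, n) pairwise distance matrix.
    """
    n = D.shape[0]

    def cost(exemplars):
        d = D[:, exemplars].min(axis=1)          # distance to nearest exemplar
        inliers = np.sort(d)[: n - n_outliers]   # drop the farthest points
        return f * len(exemplars) + inliers.sum()

    chosen, best = [], np.inf
    while True:
        remaining = [j for j in range(n) if j not in chosen]
        if not remaining:
            break
        cand = min(remaining, key=lambda j: cost(chosen + [j]))
        if cost(chosen + [cand]) >= best:
            break
        chosen.append(cand)
        best = cost(chosen)
    return chosen, best
```

On a toy 1D dataset with two tight groups and one far point, the loop opens two exemplars and the far point is excluded as the outlier.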
cs.CE cs.LG | null | 1403.1336 | null | null | http://arxiv.org/pdf/1403.1336v1 | 2014-03-06T03:46:38Z | 2014-03-06T03:46:38Z | An Extensive Report on the Efficiency of AIS-INMACA (A Novel Integrated
MACA based Clonal Classifier for Protein Coding and Promoter Region
Prediction) | This paper exclusively reports the efficiency of AIS-INMACA. AIS-INMACA has
made a strong impact on major problems in bioinformatics, such as protein
region identification and promoter region prediction, solving them in less
time (Pokkuluri Kiran Sree, 2014). AIS-INMACA now comes with several
variations (Pokkuluri Kiran Sree, 2014) aimed at establishing it as a
bioinformatics tool for solving many problems in the field. This paper should
therefore be useful to the many researchers working on bioinformatics with
cellular automata.
| [
"['Pokkuluri Kiran Sree' 'Inampudi Ramesh Babu']",
"Pokkuluri Kiran Sree, Inampudi Ramesh Babu"
]
|
q-bio.QM cs.CE cs.LG | null | 1403.1347 | null | null | http://arxiv.org/pdf/1403.1347v1 | 2014-03-06T05:18:26Z | 2014-03-06T05:18:26Z | Deep Supervised and Convolutional Generative Stochastic Network for
Protein Secondary Structure Prediction | Predicting protein secondary structure is a fundamental problem in protein
structure prediction. Here we present a new supervised generative stochastic
network (GSN) based method to predict local secondary structure with deep
hierarchical representations. GSN is a recently proposed deep learning
technique (Bengio & Thibodeau-Laufer, 2013) for globally training deep
generative models. We present a supervised extension of GSN, which learns a
Markov chain to sample from a conditional distribution, and apply it to
protein structure prediction. To scale the model to full-sized,
high-dimensional data, like
protein sequences with hundreds of amino acids, we introduce a convolutional
architecture, which allows efficient learning across multiple layers of
hierarchical representations. Our architecture uniquely focuses on predicting
structured low-level labels informed with both low and high-level
representations learned by the model. In our application this corresponds to
labeling the secondary structure state of each amino-acid residue. We trained
and tested the model on separate sets of non-homologous proteins sharing less
than 30% sequence identity. Our model achieves 66.4% Q8 accuracy on the CB513
dataset, better than the previously reported best performance 64.9% (Wang et
al., 2011) for this challenging secondary structure prediction problem.
| [
"['Jian Zhou' 'Olga G. Troyanskaya']",
"Jian Zhou and Olga G. Troyanskaya"
]
|
cs.CV cs.AI cs.LG | null | 1403.1353 | null | null | http://arxiv.org/pdf/1403.1353v1 | 2014-03-06T05:44:32Z | 2014-03-06T05:44:32Z | Collaborative Representation for Classification, Sparse or Non-sparse? | Sparse representation based classification (SRC) has been proved to be a
simple, effective and robust solution to face recognition. As it gets popular,
doubts about the necessity of enforcing sparsity have arisen, and preliminary
experimental results showed that simply changing the $l_1$-norm based
regularization to the computationally much more efficient $l_2$-norm based
non-sparse version would lead to a similar or even better performance. However,
that is not always the case. Given a new classification task, it is still unclear
which regularization strategy (i.e., making the coefficients sparse or
non-sparse) is a better choice without trying both for comparison. In this
paper, we present what is, to the best of our knowledge, the first study of
this issue, based on a broad set of diverse classification experiments. We
propose a scoring
function for pre-selecting the regularization strategy using only the dataset
size, the feature dimensionality and a discrimination score derived from a
given feature representation. Moreover, we show that when dictionary learning
is taken into account, non-sparse representation has a more pronounced
advantage over sparse representation. This work is expected to enrich our
understanding of sparse/non-sparse collaborative representation for
classification and motivate further research activities.
| [
"['Yang Wu' 'Vansteenberge Jarich' 'Masayuki Mukunoki' 'Michihiko Minoh']",
"Yang Wu, Vansteenberge Jarich, Masayuki Mukunoki, and Michihiko Minoh"
]
|
stat.AP cs.IT cs.LG math.IT | null | 1403.1412 | null | null | http://arxiv.org/pdf/1403.1412v5 | 2014-08-08T11:10:18Z | 2014-03-06T11:32:00Z | Rate Prediction and Selection in LTE systems using Modified Source
Encoding Techniques | In current wireless systems, the base station (eNodeB) tries to serve its
user-equipment (UE) at the highest possible rate that the UE can reliably
decode. The eNodeB obtains this rate information as quantized feedback from
the UE at time n and uses it for rate selection until the next feedback is
received at time n + {\delta}. The feedback received at n can become outdated
before n + {\delta}, because of a) Doppler fading, and b) Change in the set of
active interferers for a UE. Therefore, rate prediction becomes essential.
Since the rates belong to a discrete set, we propose a discrete sequence
prediction approach wherein frequency trees for the discrete sequences are
built using source encoding algorithms like Prediction by Partial Match (PPM).
Finding the optimal depth of the frequency tree used for prediction is cast as
a model order selection problem. The rate sequence complexity is analysed to
provide an upper bound on model order. Information-theoretic criteria are then
used to solve the model order problem. Finally, two prediction algorithms are
proposed using PPM with the optimal model order, and system-level simulations
demonstrate the improvement in packet loss and throughput due to these
algorithms.
| [
"K.P. Saishankar, Sheetal Kalyani, K. Narendran",
"['K. P. Saishankar' 'Sheetal Kalyani' 'K. Narendran']"
]
|
cs.LG cs.CV stat.ML | null | 1403.1430 | null | null | http://arxiv.org/pdf/1403.1430v2 | 2014-05-01T04:05:18Z | 2014-03-06T12:37:49Z | Sparse Principal Component Analysis via Rotation and Truncation | Sparse principal component analysis (sparse PCA) aims at finding a sparse
basis to improve the interpretability over the dense basis of PCA, meanwhile
the sparse basis should cover the data subspace as much as possible. In
contrast to most existing work, which deals with the problem by adding
sparsity penalties to various objectives of PCA, in this paper we propose a
new method SPCArt, whose motivation is to find a rotation matrix and a sparse
basis such that the sparse basis approximates the basis of PCA after the
rotation. The algorithm of SPCArt consists of three alternating steps: rotate
PCA basis, truncate small entries, and update the rotation matrix. Its
performance bounds are also given. SPCArt is efficient, with each iteration
scaling linearly with the data dimension. It is easy to choose parameters in
SPCArt, due to its explicit physical explanations. Besides, we give a unified
view to several existing sparse PCA methods and discuss the connection with
SPCArt. Some ideas in SPCArt are extended to GPower, a popular sparse PCA
algorithm, to overcome its drawback. Experimental results demonstrate that
SPCArt achieves the state-of-the-art performance. It also achieves a good
tradeoff among various criteria, including sparsity, explained variance,
orthogonality, balance of sparsity among loadings, and computational speed.
| [
"Zhenfang Hu, Gang Pan, Yueming Wang, and Zhaohui Wu",
"['Zhenfang Hu' 'Gang Pan' 'Yueming Wang' 'Zhaohui Wu']"
]
|
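The three alternating steps named in the abstract above (rotate the PCA basis, truncate small entries, update the rotation matrix) can be sketched as follows. This is a simplified reading of SPCArt, not the authors' exact algorithm; the threshold `lam`, the random initialization, and the orthogonal Procrustes update are my assumptions:

```python
import numpy as np

def spcart_sketch(V, lam=0.4, iters=30, seed=0):
    """Alternate the three steps from the abstract on a PCA basis V (d, k):
    rotate the basis, hard-threshold (truncate) small entries to get a
    sparse basis X, then update the rotation R by orthogonal Procrustes so
    that X approximates V @ R. A simplified sketch, not exact SPCArt.
    """
    rng = np.random.default_rng(seed)
    d, k = V.shape
    R = np.linalg.qr(rng.standard_normal((k, k)))[0]   # random orthogonal init
    X = np.zeros((d, k))
    for _ in range(iters):
        Z = V @ R                                      # rotate PCA basis
        X = np.where(np.abs(Z) > lam, Z, 0.0)          # truncate small entries
        norms = np.linalg.norm(X, axis=0)
        X = X / np.where(norms > 0, norms, 1.0)        # renormalize columns
        P, _, Qt = np.linalg.svd(V.T @ X)              # Procrustes step
        R = P @ Qt                                     # best orthogonal R
    return X, R
```

Each iteration costs one thin SVD of a k-by-k matrix plus products that scale linearly with the data dimension d, consistent with the efficiency claim in the abstract.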
stat.ML cs.IT cs.LG math.IT | null | 1403.1600 | null | null | http://arxiv.org/pdf/1403.1600v1 | 2014-03-06T21:51:48Z | 2014-03-06T21:51:48Z | Collaborative Filtering with Information-Rich and Information-Sparse
Entities | In this paper, we consider a popular model for collaborative filtering in
recommender systems where some users of a website rate some items, such as
movies, and the goal is to recover the ratings of some or all of the unrated
items of each user. In particular, we consider both the clustering model, where
only users (or items) are clustered, and the co-clustering model, where both
users and items are clustered, and further, we assume that some users rate many
items (information-rich users) and some users rate only a few items
(information-sparse users). When users (or items) are clustered, our algorithm
can recover the rating matrix with $\omega(MK \log M)$ noisy entries while $MK$
entries are necessary, where $K$ is the number of clusters and $M$ is the
number of items. In the case of co-clustering, we prove that $K^2$ entries are
necessary for recovering the rating matrix, and our algorithm achieves this
lower bound within a logarithmic factor when $K$ is sufficiently large. We
compare our algorithms with a well-known algorithm called alternating
minimization (AM), and a similarity score-based algorithm known as the
popularity-among-friends (PAF) algorithm by applying all three to the MovieLens
and Netflix data sets. Our co-clustering algorithm and AM have similar overall
error rates when recovering the rating matrix, both of which are lower than the
error rate under PAF. But more importantly, the error rate of our co-clustering
algorithm is significantly lower than AM and PAF in the scenarios of interest
in recommender systems: when recommending a few items to each user or when
recommending items to users who only rated a few items (these users are the
majority of the total user population). The performance difference increases
even more when noise is added to the datasets.
| [
"Kai Zhu, Rui Wu, Lei Ying, R. Srikant",
"['Kai Zhu' 'Rui Wu' 'Lei Ying' 'R. Srikant']"
]
|
cs.LG cs.SY | null | 1403.1863 | null | null | http://arxiv.org/pdf/1403.1863v1 | 2014-03-07T20:26:09Z | 2014-03-07T20:26:09Z | Statistical Structure Learning, Towards a Robust Smart Grid | Robust control and maintenance of the grid relies on accurate data. Both PMUs
and state estimators are prone to false data injection attacks. Thus, it is
crucial to have a mechanism for fast and accurate detection of an agent
maliciously tampering with the data---for both preventing attacks that may lead
to blackouts, and for routine monitoring and control tasks of current and
future grids. We propose a decentralized false data injection detection scheme
based on Markov graph of the bus phase angles. We utilize the Conditional
Covariance Test (CCT) to learn the structure of the grid. Using the DC power
flow model, we show that under normal circumstances, and because of
walk-summability of the grid graph, the Markov graph of the voltage angles can
be determined by the power grid graph. Therefore, a discrepancy between
calculated Markov graph and learned structure should trigger the alarm. Local
grid topology is available online from the protection system and we exploit it
to check for mismatch. Should a mismatch be detected, we use correlation
anomaly score to detect the set of attacked nodes. Our method can detect the
most recent stealthy deception attack on the power grid that assumes knowledge
of bus-branch model of the system and is capable of deceiving the state
estimator, damaging power network observability, control, monitoring, demand
response and pricing schemes. Specifically, under the stealthy deception
attack, the Markov graph of phase angles changes. In addition to detecting a
state of attack, our method can identify the set of attacked nodes. To the best of our
knowledge, our remedy is the first to comprehensively detect this sophisticated
attack, and it does not need additional hardware. Moreover, our detection
scheme succeeds regardless of the size of the attacked subset. Simulation of
various
power networks confirms our claims.
| [
"Hanie Sedghi and Edmond Jonckheere",
"['Hanie Sedghi' 'Edmond Jonckheere']"
]
|
cs.LG cs.AI stat.AP stat.ML | null | 1403.1891 | null | null | http://arxiv.org/pdf/1403.1891v2 | 2014-03-12T06:36:02Z | 2014-03-07T22:54:52Z | Counterfactual Estimation and Optimization of Click Metrics for Search
Engines | Optimizing an interactive system against a predefined online metric is
particularly challenging, when the metric is computed from user feedback such
as clicks and payments. The key challenge is the counterfactual nature: in the
case of Web search, any change to a component of the search engine may result
in a different search result page for the same query, but we normally cannot
infer reliably from search logs how users would react to the new result page.
Consequently, it appears impossible to accurately estimate online metrics that
depend on user feedback, unless the new engine is run to serve users and
compared with a baseline in an A/B test. This approach, while valid and
successful, is unfortunately expensive and time-consuming. In this paper, we
propose to address this problem using causal inference techniques, under the
contextual-bandit framework. This approach effectively allows one to run
(potentially infinitely) many A/B tests offline from search logs, making it
possible to estimate and optimize online metrics quickly and inexpensively.
Focusing on an important component in a commercial search engine, we show how
these ideas can be instantiated and applied, and obtain very promising results
that suggest the wide applicability of these techniques.
| [
"Lihong Li and Shunbao Chen and Jim Kleban and Ankur Gupta",
"['Lihong Li' 'Shunbao Chen' 'Jim Kleban' 'Ankur Gupta']"
]
|
stat.ML cs.AI cs.LG | null | 1403.1893 | null | null | http://arxiv.org/pdf/1403.1893v1 | 2014-03-07T22:58:48Z | 2014-03-07T22:58:48Z | Becoming More Robust to Label Noise with Classifier Diversity | It is widely known in the machine learning community that class noise can be
(and often is) detrimental to inducing a model of the data. Many current
approaches use a single, often biased, measurement to determine if an instance
is noisy. A biased measure may work well on certain data sets, but it can also
be less effective on a broader set of data sets. In this paper, we present
noise identification using classifier diversity (NICD) -- a method for deriving
a less biased noise measurement and integrating it into the learning process.
To lessen the bias of the noise measure, NICD selects a diverse set of
classifiers (based on their predictions of novel instances) to determine which
instances are noisy. We examine NICD as a technique for filtering, instance
weighting, and selecting the base classifiers of a voting ensemble. We compare
NICD with several other noise handling techniques that do not consider
classifier diversity on a set of 54 data sets and 5 learning algorithms. NICD
significantly increases the classification accuracy over the other considered
approaches and is effective across a broad set of data sets and learning
algorithms.
| [
"['Michael R. Smith' 'Tony Martinez']",
"Michael R. Smith and Tony Martinez"
]
|
cs.LG | null | 1403.1942 | null | null | http://arxiv.org/pdf/1403.1942v2 | 2014-12-01T20:04:30Z | 2014-03-08T07:07:12Z | Predictive Overlapping Co-Clustering | In the past few years co-clustering has emerged as an important data mining
tool for two-way data analysis. Co-clustering has many advantages over
traditional one-dimensional clustering, such as the ability to find highly
correlated sub-groups of rows and columns. However, one of its overlooked
benefits is that it can be used to extract meaningful knowledge for various
other knowledge extraction purposes. For example, building predictive models
with high-dimensional data and a heterogeneous population is a non-trivial
task. Co-clusters extracted from such data, which show similar patterns in
both dimensions, enable more accurate predictive model building. Several
applications, such as finding
patient-disease cohorts in health care analysis, finding user-genre groups in
recommendation systems and community detection problems can benefit from
a co-clustering technique that utilizes the predictive power of the data to
generate co-clusters for improved data analysis.
In this paper, we present the novel idea of Predictive Overlapping
Co-Clustering (POCC) as an optimization problem for a more effective and
improved predictive analysis. Our algorithm generates optimal co-clusters by
maximizing predictive power of the co-clusters subject to the constraints on
the number of row and column clusters. In this paper, precision, recall and
f-measure are used as evaluation measures for the resulting co-clusters.
Results of our algorithm have been compared with two other well-known
techniques, K-means and spectral co-clustering, on four real data sets:
Leukemia, Internet-Ads, Ovarian cancer and MovieLens. The results demonstrate
the effectiveness and utility of our algorithm, POCC, in practice.
| [
"Chandrima Sarkar, Jaideep Srivastava",
"['Chandrima Sarkar' 'Jaideep Srivastava']"
]
|
cs.LG cs.CV stat.ML | 10.1016/j.ins.2012.07.066 | 1403.1944 | null | null | http://arxiv.org/abs/1403.1944v1 | 2014-03-08T07:20:05Z | 2014-03-08T07:20:05Z | Multi-label ensemble based on variable pairwise constraint projection | Multi-label classification has attracted an increasing amount of attention in
recent years. To this end, many algorithms have been developed to classify
multi-label data in an effective manner. However, they usually do not consider
the pairwise relations indicated by sample labels, which actually play
important roles in multi-label classification. Inspired by this, we naturally
extend the traditional pairwise constraints to the multi-label scenario via a
flexible thresholding scheme. Moreover, to improve the generalization ability
of the classifier, we adopt a boosting-like strategy to construct a multi-label
ensemble from a group of base classifiers. To achieve these goals, this paper
presents a novel multi-label classification framework named Variable Pairwise
Constraint projection for Multi-label Ensemble (VPCME). Specifically, we take
advantage of the variable pairwise constraint projection to learn a
lower-dimensional data representation, which preserves the correlations between
samples and labels. Thereafter, the base classifiers are trained in the new
data space. For the boosting-like strategy, we employ both the variable
pairwise constraints and the bootstrap steps to diversify the base classifiers.
Empirical studies have shown the superiority of the proposed method in
comparison with other approaches.
| [
"Ping Li and Hong Li and Min Wu",
"['Ping Li' 'Hong Li' 'Min Wu']"
]
|
cs.LG | null | 1403.1946 | null | null | http://arxiv.org/pdf/1403.1946v1 | 2014-03-08T07:47:44Z | 2014-03-08T07:47:44Z | Improving Performance of a Group of Classification Algorithms Using
Resampling and Feature Selection | In recent years, finding a meaningful
pattern in huge datasets has become increasingly challenging. Data miners try
to adopt innovative methods to face this problem by applying feature selection
methods. In this paper we propose a new hybrid method that combines
resampling, filtering of the sample domain, and wrapper subset evaluation with
genetic search to reduce the dimensionality of the Lung-Cancer dataset from
the UCI Repository of Machine Learning databases. Finally, we apply some
well-known classification algorithms (Na\"ive Bayes, Logistic, Multilayer
Perceptron, Best First Decision Tree and JRIP) to the resulting dataset and
compare the results and prediction rates before and after the application of
our feature selection method on that dataset. The results show a substantial
progress in the average performance of five classification algorithms
simultaneously and the classification error for these classifiers decreases
considerably. The experiments also show that this method outperforms other
feature selection methods with a lower cost.
| [
"['Mehdi Naseriparsa' 'Amir-masoud Bidgoli' 'Touraj Varaee']",
"Mehdi Naseriparsa, Amir-masoud Bidgoli, Touraj Varaee"
]
|
cs.LG cs.CE | 10.5120/13376-0987 | 1403.1949 | null | null | http://arxiv.org/abs/1403.1949v1 | 2014-03-08T08:12:54Z | 2014-03-08T08:12:54Z | Combination of PCA with SMOTE Resampling to Boost the Prediction Rate in
Lung Cancer Dataset | Classification algorithms are unable to build reliable
models on very large datasets. These datasets contain many irrelevant and
redundant features that mislead the classifiers. Furthermore, many huge
datasets have an imbalanced class distribution, which biases the
classification process toward the majority class. In this paper, a combination
of unsupervised
dimensionality reduction methods with resampling is proposed and the results
are tested on Lung-Cancer dataset. In the first step PCA is applied on
Lung-Cancer dataset to compact the dataset and eliminate irrelevant features
and in the second step SMOTE resampling is carried out to balance the class
distribution and increase the variety of the sample domain. Finally, the Naive Bayes
classifier is applied on the resulting dataset and the results are compared and
evaluation metrics are calculated. The experiments show the effectiveness of
the proposed method across four evaluation metrics: Overall accuracy, False
Positive Rate, Precision, Recall.
| [
"Mehdi Naseriparsa, Mohammad Mansour Riahi Kashani",
"['Mehdi Naseriparsa' 'Mohammad Mansour Riahi Kashani']"
]
|
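The two-step pipeline in the abstract above (PCA to compact the feature space, then SMOTE to balance and enrich the sample domain) can be sketched in plain NumPy. `pca_reduce` and `smote_like` are illustrative stand-ins of my own; a real experiment would likely use scikit-learn's `PCA` together with imbalanced-learn's `SMOTE`:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Step 1: compact the dataset by projecting onto the top principal
    components, computed from the SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def smote_like(X_min, n_new, k=3, seed=0):
    """Step 2: naive SMOTE-style oversampling of the minority class --
    each synthetic sample is a random interpolation between a minority
    point and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(seed)
    d2 = ((X_min[:, None, :] - X_min[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)                # exclude self-distance
    nn = np.argsort(d2, axis=1)[:, :k]          # k nearest neighbours
    out = np.empty((n_new, X_min.shape[1]))
    for t in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i, rng.integers(k)]
        out[t] = X_min[i] + rng.random() * (X_min[j] - X_min[i])
    return out
```

Because each synthetic point is a convex combination of two minority samples, the oversampled data stays inside the per-dimension range of the minority class.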
cs.LG | null | 1403.2065 | null | null | http://arxiv.org/pdf/1403.2065v8 | 2016-01-14T22:54:32Z | 2014-03-09T14:51:53Z | Categorization Axioms for Clustering Results | Cluster analysis has attracted more and more attention in the field of
machine learning and data mining. Numerous clustering algorithms have been
proposed and are being developed due to diverse theories and various
requirements of emerging applications. Therefore, it is well worth
establishing a unified axiomatic framework for data clustering. In the
literature, this is an open problem that has proved very challenging. In this
paper, clustering results are axiomatized by assuming that a proper clustering
result should
satisfy categorization axioms. The proposed axioms not only introduce
classification of clustering results and inequalities of clustering results,
but also are consistent with prototype theory and exemplar theory of
categorization models in cognitive science. Moreover, the proposed axioms lead
to three principles for designing clustering algorithms and cluster validity
indices, principles that many popular clustering algorithms and cluster
validity indices already follow.
| [
"['Jian Yu' 'Zongben Xu']",
"Jian Yu, Zongben Xu"
]
|
cs.LG cs.CV | null | 1403.2295 | null | null | http://arxiv.org/pdf/1403.2295v1 | 2014-03-10T16:36:23Z | 2014-03-10T16:36:23Z | Sublinear Models for Graphs | This contribution extends linear models for feature vectors to sublinear
models for graphs and analyzes their properties. The results are (i) a
geometric interpretation of sublinear classifiers, (ii) a generic learning rule
based on the principle of empirical risk minimization, (iii) a convergence
theorem for the margin perceptron in the sublinearly separable case, and (iv)
the VC-dimension of sublinear functions. Empirical results on graph data show
that sublinear models on graphs have similar properties as linear models for
feature vectors.
| [
"['Brijnesh J. Jain']",
"Brijnesh J. Jain"
]
|
cs.LG | 10.5120/12065-8172 | 1403.2372 | null | null | http://arxiv.org/abs/1403.2372v1 | 2014-03-08T08:04:29Z | 2014-03-08T08:04:29Z | A Hybrid Feature Selection Method to Improve Performance of a Group of
Classification Algorithms | In this paper, a hybrid feature selection method is proposed which takes
advantage of wrapper subset evaluation at a lower cost and improves the
performance of a group of classifiers. The method uses a combination of sample
domain filtering and resampling to refine the sample domain and two feature
subset evaluation methods to select reliable features. This method utilizes
both feature space and sample domain in two phases. The first phase filters and
resamples the sample domain and the second phase adopts a hybrid procedure by
information gain, wrapper subset evaluation and genetic search to find the
optimal feature space. Experiments were carried out on different types of
datasets from the UCI Repository of Machine Learning databases, and the
results show a rise
in the average performance of five classifiers (Naive Bayes, Logistic,
Multilayer Perceptron, Best First Decision Tree and JRIP) simultaneously and
the classification error for these classifiers decreases considerably. The
experiments also show that this method outperforms other feature selection
methods with a lower cost.
| [
"Mehdi Naseriparsa, Amir-Masoud Bidgoli, Touraj Varaee",
"['Mehdi Naseriparsa' 'Amir-Masoud Bidgoli' 'Touraj Varaee']"
]
|
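The information-gain filter used in the second phase of the abstract above can be sketched as follows. Only the filter step is shown (the wrapper subset evaluation and genetic search are omitted), and the function names are mine:

```python
import numpy as np

def entropy(y):
    """Shannon entropy (bits) of a discrete label vector."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_gain(x, y):
    """IG(y; x) = H(y) - H(y | x) for one discrete feature column x."""
    gain = entropy(y)
    for v in np.unique(x):
        mask = x == v
        gain -= mask.mean() * entropy(y[mask])   # weighted conditional entropy
    return gain

def rank_features(X, y):
    """Rank discrete features by information gain, highest first."""
    gains = np.array([information_gain(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(gains)[::-1], gains
```

A feature that determines the label exactly gets gain H(y), while a constant feature gets gain 0, so the ranking surfaces the most informative columns for the later wrapper phase.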
cs.LG stat.ML | null | 1403.2433 | null | null | http://arxiv.org/pdf/1403.2433v1 | 2014-03-10T22:55:11Z | 2014-03-10T22:55:11Z | Generalised Mixability, Constant Regret, and Bayesian Updating | Mixability of a loss is known to characterise when constant regret bounds are
achievable in games of prediction with expert advice through the use of Vovk's
aggregating algorithm. We provide a new interpretation of mixability via convex
analysis that highlights the role of the Kullback-Leibler divergence in its
definition. This naturally generalises to what we call $\Phi$-mixability where
the Bregman divergence $D_\Phi$ replaces the KL divergence. We prove that
losses that are $\Phi$-mixable also enjoy constant regret bounds via a
generalised aggregating algorithm that is similar to mirror descent.
| [
"Mark D. Reid and Rafael M. Frongillo and Robert C. Williamson",
"['Mark D. Reid' 'Rafael M. Frongillo' 'Robert C. Williamson']"
]
|
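A concrete instance of the classical result the abstract above generalises: log loss is mixable, and for it Vovk's aggregating algorithm reduces to Bayesian mixing, with regret to the best of N experts bounded by ln N. A sketch under that reading (variable names are mine):

```python
import numpy as np

def aggregating_log_loss(expert_probs, outcomes):
    """Vovk-style aggregation for the mixable log loss: with learning
    rate 1 the aggregating algorithm is exactly Bayesian mixing, and its
    cumulative loss exceeds the best expert's by at most ln(N).

    expert_probs : (T, N) each expert's predicted P(outcome = 1).
    outcomes     : (T,) binary outcomes.
    Returns (algorithm_loss, best_expert_loss).
    """
    T, N = expert_probs.shape
    log_w = np.zeros(N)                   # log-weights: uniform prior
    alg_loss = 0.0
    for t in range(T):
        w = np.exp(log_w - log_w.max())
        w /= w.sum()                      # posterior over experts
        p = float(w @ expert_probs[t])    # mixture prediction
        y = int(outcomes[t])
        alg_loss += -np.log(p if y == 1 else 1.0 - p)
        lik = expert_probs[t] if y == 1 else 1.0 - expert_probs[t]
        log_w += np.log(lik)              # Bayes update: weight *= exp(-loss)
    expert_losses = -np.sum(
        np.where(outcomes[:, None] == 1,
                 np.log(expert_probs), np.log(1.0 - expert_probs)), axis=0)
    return alg_loss, float(expert_losses.min())
```

Since the algorithm's cumulative loss equals the negative log marginal likelihood of the sequence, the constant-regret bound alg_loss <= best_loss + ln N holds exactly, independent of T.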
cs.LG cs.SI | 10.1109/ICDM.2013.116 | 1403.2484 | null | null | http://arxiv.org/abs/1403.2484v1 | 2014-03-11T06:49:56Z | 2014-03-11T06:49:56Z | Transfer Learning across Networks for Collective Classification | This paper addresses the problem of transferring useful knowledge from a
source network to predict node labels in a newly formed target network. While
existing transfer learning research has primarily focused on vector-based data,
in which the instances are assumed to be independent and identically
distributed, how to effectively transfer knowledge across different information
networks has not been well studied, mainly because networks may have their
distinct node features and link relationships between nodes. In this paper, we
propose a new transfer learning algorithm that attempts to transfer common
latent structure features across the source and target networks. The proposed
algorithm discovers these latent features by constructing label propagation
matrices in the source and target networks, and mapping them into a shared
latent feature space. The latent features capture common structure patterns
shared by two networks, and serve as domain-independent features to be
transferred between networks. Together with domain-dependent node features, we
thereafter propose an iterative classification algorithm that leverages label
correlations to predict node labels in the target network. Experiments on
real-world networks demonstrate that our proposed algorithm can successfully
achieve knowledge transfer between networks to help improve the accuracy of
classifying nodes in the target network.
| [
"['Meng Fang' 'Jie Yin' 'Xingquan Zhu']",
"Meng Fang, Jie Yin, Xingquan Zhu"
]
|
cs.IT cs.LG math.IT | null | 1403.2485 | null | null | http://arxiv.org/pdf/1403.2485v2 | 2014-05-26T03:48:59Z | 2014-03-11T06:52:04Z | Optimal interval clustering: Application to Bregman clustering and
statistical mixture learning | We present a generic dynamic programming method to compute the optimal
clustering of $n$ scalar elements into $k$ pairwise disjoint intervals. This
case includes 1D Euclidean $k$-means, $k$-medoids, $k$-medians, $k$-centers,
etc. We extend the method to incorporate cluster size constraints and show how
to choose the appropriate $k$ by model selection. Finally, we illustrate and
refine the method on two case studies: Bregman clustering and statistical
mixture learning maximizing the complete likelihood.
| [
"Frank Nielsen and Richard Nock",
"['Frank Nielsen' 'Richard Nock']"
]
|
cs.LG cs.CE | null | 1403.2654 | null | null | http://arxiv.org/pdf/1403.2654v1 | 2014-03-11T18:36:39Z | 2014-03-11T18:36:39Z | Flying Insect Classification with Inexpensive Sensors | The ability to use inexpensive, noninvasive sensors to accurately classify
flying insects would have significant implications for entomological research,
and allow for the development of many useful applications in vector control for
both medical and agricultural entomology. Given this, the last sixty years have
seen many research efforts on this task. To date, however, none of this
research has had a lasting impact. In this work, we explain this lack of
progress. We attribute the stagnation on this problem to several factors,
including the use of acoustic sensing devices, the over-reliance on the single
feature of wingbeat frequency, and the attempts to learn complex models with
relatively little data. In contrast, we show that pseudo-acoustic optical
sensors can produce vastly superior data, that we can exploit additional
features, both intrinsic and extrinsic to the insect's flight behavior, and
that a Bayesian classification approach allows us to efficiently learn
classification models that are very robust to over-fitting. We demonstrate our
findings with large scale experiments that dwarf all previous works combined,
as measured by the number of insects and the number of species considered.
| [
"Yanping Chen, Adena Why, Gustavo Batista, Agenor Mafra-Neto, Eamonn\n Keogh",
"['Yanping Chen' 'Adena Why' 'Gustavo Batista' 'Agenor Mafra-Neto'\n 'Eamonn Keogh']"
]
|
math.ST cs.DC cs.LG stat.TH | null | 1403.2660 | null | null | http://arxiv.org/pdf/1403.2660v3 | 2016-06-02T00:59:28Z | 2014-03-11T17:37:18Z | Robust and Scalable Bayes via a Median of Subset Posterior Measures | We propose a novel approach to Bayesian analysis that is provably robust to
outliers in the data and often has computational advantages over standard
methods. Our technique is based on splitting the data into non-overlapping
subgroups, evaluating the posterior distribution given each independent
subgroup, and then combining the resulting measures. The main novelty of our
approach is the proposed aggregation step, which is based on the evaluation of
a median in the space of probability measures equipped with a suitable
collection of distances that can be quickly and efficiently evaluated in
practice. We present both theoretical and numerical evidence illustrating the
improvements achieved by our method.
| [
"['Stanislav Minsker' 'Sanvesh Srivastava' 'Lizhen Lin' 'David B. Dunson']",
"Stanislav Minsker, Sanvesh Srivastava, Lizhen Lin and David B. Dunson"
]
|
cs.CV cs.LG | null | 1403.2802 | null | null | http://arxiv.org/pdf/1403.2802v1 | 2014-03-12T03:47:18Z | 2014-03-12T03:47:18Z | Learning Deep Face Representation | Face representation is a crucial step of face recognition systems. An optimal
face representation should be discriminative, robust, compact, and very
easy-to-implement. While numerous hand-crafted and learning-based
representations have been proposed, considerable room for improvement is still
present. In this paper, we present a very easy-to-implement deep learning
framework for face representation. Our method is based on a new deep network
structure (called Pyramid CNN). The proposed Pyramid CNN adopts a
greedy-filter-and-down-sample operation, which enables the training procedure
to be very fast and computationally efficient. In addition, the structure of
Pyramid CNN can naturally incorporate feature sharing across multi-scale face
representations, increasing the discriminative ability of the resulting
representation. Our basic network is capable of achieving high recognition
accuracy ($85.8\%$ on the LFW benchmark) with only an 8-dimensional
representation. When extended to the feature-sharing Pyramid CNN, our system
achieves state-of-the-art performance ($97.3\%$) on the LFW benchmark. We also
introduce a new benchmark of realistic face images from social networks and
validate that our proposed representation generalizes well.
| [
"['Haoqiang Fan' 'Zhimin Cao' 'Yuning Jiang' 'Qi Yin' 'Chinchilla Doudou']",
"Haoqiang Fan, Zhimin Cao, Yuning Jiang, Qi Yin, Chinchilla Doudou"
]
|
stat.ML cs.LG q-bio.QM | null | 1403.2877 | null | null | http://arxiv.org/pdf/1403.2877v1 | 2014-03-12T10:35:15Z | 2014-03-12T10:35:15Z | A survey of dimensionality reduction techniques | Experimental life sciences like biology or chemistry have seen in the recent
decades an explosion of the data available from experiments. Laboratory
instruments become more and more complex and report hundreds or thousands of
measurements for a single experiment, and therefore statistical methods face
challenging tasks when dealing with such high dimensional data. However, much
of the data is highly redundant and can be efficiently brought down to a much
smaller number of variables without a significant loss of information. The
mathematical procedures making possible this reduction are called
dimensionality reduction techniques; they have been widely developed by fields
such as statistics and machine learning, and are currently a hot research topic. In
this review we categorize the plethora of dimension reduction techniques
available and give the mathematical insight behind them.
| [
"C.O.S. Sorzano, J. Vargas, A. Pascual Montano",
"['C. O. S. Sorzano' 'J. Vargas' 'A. Pascual Montano']"
]
|
cs.LG | 10.5121/ijscai.2014.3102 | 1403.2950 | null | null | http://arxiv.org/abs/1403.2950v1 | 2014-03-12T14:33:43Z | 2014-03-12T14:33:43Z | Cancer Prognosis Prediction Using Balanced Stratified Sampling | High accuracy in cancer prediction is important to improve the quality of the
treatment and to improve the survival rate of patients. As the data
volume is increasing rapidly in healthcare research, the analytical
challenge is twofold. The use of an effective sampling technique in
classification algorithms always yields good prediction accuracy. The SEER
public use cancer database provides various prominent class labels for
prognosis prediction. The main objective of this paper is to find the effect of
sampling techniques in classifying the prognosis variable and propose an ideal
sampling method based on the outcome of the experimentation. In the first phase
of this work the traditional random sampling and stratified sampling techniques
have been used. At the next level, balanced stratified sampling with
variations according to the choice of prognosis class labels has been tested.
Much of the initial effort was focused on pre-processing
the SEER data set. The classification model for experimentation has been built
using the breast cancer, respiratory cancer and mixed cancer data sets with
three traditional classifiers namely Decision Tree, Naive Bayes and K-Nearest
Neighbor. The three prognosis factors survival, stage and metastasis have been
used as class labels for experimental comparisons. The results show a steady
increase in the prediction accuracy of balanced stratified model as the sample
size increases, but the traditional approach fluctuates before reaching the optimum
results.
| [
"['J S Saleema' 'N Bhagawathi' 'S Monica' 'P Deepa Shenoy' 'K R Venugopal'\n 'L M Patnaik']",
"J S Saleema, N Bhagawathi, S Monica, P Deepa Shenoy, K R Venugopal and\n L M Patnaik"
]
|
cs.LG math.OC stat.ML | null | 1403.3080 | null | null | http://arxiv.org/pdf/1403.3080v2 | 2014-04-24T08:52:28Z | 2014-03-12T19:55:00Z | Statistical Decision Making for Optimal Budget Allocation in Crowd
Labeling | In crowd labeling, a large amount of unlabeled data instances are outsourced
to a crowd of workers. Workers will be paid for each label they provide, but
the labeling requester usually has only a limited amount of the budget. Since
data instances have different levels of labeling difficulty and workers have
different reliability, it is desirable to have an optimal policy to allocate
the budget among all instance-worker pairs such that the overall labeling
accuracy is maximized. We consider categorical labeling tasks and formulate the
budget allocation problem as a Bayesian Markov decision process (MDP), which
simultaneously conducts learning and decision making. Using the dynamic
programming (DP) recurrence, one can obtain the optimal allocation policy.
However, DP quickly becomes computationally intractable when the size of the
problem increases. To address this challenge, we propose a computationally
efficient approximate policy, called optimistic knowledge gradient policy. Our
MDP is a quite general framework, which applies to both pull crowdsourcing
marketplaces with homogeneous workers and push marketplaces with heterogeneous
workers. It can also incorporate the contextual information of instances when
they are available. The experiments on both simulated and real data show that
the proposed policy achieves a higher labeling accuracy than other existing
policies at the same budget level.
| [
"Xi Chen, Qihang Lin, Dengyong Zhou",
"['Xi Chen' 'Qihang Lin' 'Dengyong Zhou']"
]
|
cs.IT cs.LG math.IT math.ST stat.TH | null | 1403.3109 | null | null | http://arxiv.org/pdf/1403.3109v1 | 2014-03-12T20:32:02Z | 2014-03-12T20:32:02Z | Sparse Recovery with Linear and Nonlinear Observations: Dependent and
Noisy Data | We formulate sparse support recovery as a salient set identification problem
and use information-theoretic analyses to characterize the recovery performance
and sample complexity. We consider a very general model where we are not
restricted to linear models or specific distributions. We state non-asymptotic
bounds on recovery probability and a tight mutual information formula for
sample complexity. We evaluate our bounds for applications such as sparse
linear regression and explicitly characterize effects of correlation or noisy
features on recovery performance. We show improvements upon previous work and
identify gaps between the performance of recovery algorithms and fundamental
information.
| [
"['Cem Aksoylar' 'Venkatesh Saligrama']",
"Cem Aksoylar and Venkatesh Saligrama"
]
|
stat.ML cs.LG | null | 1403.3342 | null | null | http://arxiv.org/pdf/1403.3342v1 | 2014-03-13T17:48:19Z | 2014-03-13T17:48:19Z | The Potential Benefits of Filtering Versus Hyper-Parameter Optimization | The quality of an induced model by a learning algorithm is dependent on the
quality of the training data and the hyper-parameters supplied to the learning
algorithm. Prior work has shown that improving the quality of the training data
(i.e., by removing low quality instances) or tuning the learning algorithm
hyper-parameters can significantly improve the quality of an induced model. A
comparison of the two methods is lacking though. In this paper, we estimate and
compare the potential benefits of filtering and hyper-parameter optimization.
Estimating the potential benefit gives an overly optimistic estimate but also
empirically demonstrates an approximation of the maximum potential benefit of
each method. We find that, while both significantly improve the induced model,
improving the quality of the training set has a greater potential effect than
hyper-parameter optimization.
| [
"['Michael R. Smith' 'Tony Martinez' 'Christophe Giraud-Carrier']",
"Michael R. Smith and Tony Martinez and Christophe Giraud-Carrier"
]
|
stat.OT cs.LG stat.AP | null | 1403.3371 | null | null | http://arxiv.org/pdf/1403.3371v2 | 2014-04-09T16:25:31Z | 2014-03-13T19:01:28Z | Spectral Correlation Hub Screening of Multivariate Time Series | This chapter discusses correlation analysis of stationary multivariate
Gaussian time series in the spectral or Fourier domain. The goal is to identify
the hub time series, i.e., those that are highly correlated with a specified
number of other time series. We show that Fourier components of the time series
at different frequencies are asymptotically statistically independent. This
property permits independent correlation analysis at each frequency,
alleviating the computational and statistical challenges of high-dimensional
time series. To detect correlation hubs at each frequency, an existing
correlation screening method is extended to the complex numbers to accommodate
complex-valued Fourier components. We characterize the number of hub
discoveries at specified correlation and degree thresholds in the regime of
increasing dimension and fixed sample size. The theory specifies appropriate
thresholds to apply to sample correlation matrices to detect hubs and also
allows statistical significance to be attributed to hub discoveries. Numerical
results illustrate the accuracy of the theory and the usefulness of the
proposed spectral framework.
| [
"Hamed Firouzi, Dennis Wei, Alfred O. Hero III",
"['Hamed Firouzi' 'Dennis Wei' 'Alfred O. Hero III']"
]
|
stat.ML cs.LG | null | 1403.3378 | null | null | http://arxiv.org/pdf/1403.3378v2 | 2014-06-07T15:01:07Z | 2014-03-13T19:28:48Z | Box Drawings for Learning with Imbalanced Data | The vast majority of real world classification problems are imbalanced,
meaning there are far fewer data from the class of interest (the positive
class) than from other classes. We propose two machine learning algorithms to
handle highly imbalanced classification problems. The classifiers constructed
by both methods are created as unions of parallel axis rectangles around the
positive examples, and thus have the benefit of being interpretable. The first
algorithm uses mixed integer programming to optimize a weighted balance between
positive and negative class accuracies. Regularization is introduced to improve
generalization performance. The second method uses an approximation in order to
assist with scalability. Specifically, it follows a \textit{characterize then
discriminate} approach, where the positive class is characterized first by
boxes, and then each box boundary becomes a separate discriminative classifier.
This method has the computational advantages that it can be easily
parallelized, and considers only the relevant regions of feature space.
| [
"['Siong Thye Goh' 'Cynthia Rudin']",
"Siong Thye Goh, Cynthia Rudin"
]
|
cs.LG cs.CL cs.DB cs.IR | null | 1403.3460 | null | null | http://arxiv.org/pdf/1403.3460v1 | 2014-03-13T23:22:21Z | 2014-03-13T23:22:21Z | Scalable and Robust Construction of Topical Hierarchies | Automated generation of high-quality topical hierarchies for a text
collection is a dream problem in knowledge engineering with many valuable
applications. In this paper a scalable and robust algorithm is proposed for
constructing a hierarchy of topics from a text collection. We divide and
conquer the problem using a top-down recursive framework, based on a tensor
orthogonal decomposition technique. We solve a critical challenge to perform
scalable inference for our newly designed hierarchical topic model. Experiments
with various real-world datasets illustrate its ability to generate robust,
high-quality hierarchies efficiently. Our method reduces the time of
construction by several orders of magnitude, and its robustness makes it
possible for users to interactively revise the hierarchy.
| [
"Chi Wang, Xueqing Liu, Yanglei Song, Jiawei Han",
"['Chi Wang' 'Xueqing Liu' 'Yanglei Song' 'Jiawei Han']"
]
|
cs.LG | null | 1403.3465 | null | null | http://arxiv.org/pdf/1403.3465v3 | 2015-11-09T17:32:51Z | 2014-03-14T00:25:03Z | A Survey of Algorithms and Analysis for Adaptive Online Learning | We present tools for the analysis of Follow-The-Regularized-Leader (FTRL),
Dual Averaging, and Mirror Descent algorithms when the regularizer
(equivalently, prox-function or learning rate schedule) is chosen adaptively
based on the data. Adaptivity can be used to prove regret bounds that hold on
every round, and also allows for data-dependent regret bounds as in
AdaGrad-style algorithms (e.g., Online Gradient Descent with adaptive
per-coordinate learning rates). We present results from a large number of prior
works in a unified manner, using a modular and tight analysis that isolates the
key arguments in easily re-usable lemmas. This approach strengthens previously
known FTRL analysis techniques to produce bounds as tight as those achieved by
potential functions or primal-dual analysis. Further, we prove a general and
exact equivalence between an arbitrary adaptive Mirror Descent algorithm and a
corresponding FTRL update, which allows us to analyze any Mirror Descent
algorithm in the same framework. The key to bridging the gap between Dual
Averaging and Mirror Descent algorithms lies in an analysis of the
FTRL-Proximal algorithm family. Our regret bounds are proved in the most
general form, holding for arbitrary norms and non-smooth regularizers with
time-varying weight.
| [
"['H. Brendan McMahan']",
"H. Brendan McMahan"
]
|
cs.LG | 10.1016/j.neucom.2014.09.081 | 1403.3610 | null | null | http://arxiv.org/abs/1403.3610v2 | 2015-09-10T06:33:57Z | 2014-03-14T15:30:23Z | Making Risk Minimization Tolerant to Label Noise | In many applications, the training data, from which one needs to learn a
classifier, is corrupted with label noise. Many standard algorithms such as SVM
perform poorly in presence of label noise. In this paper we investigate the
robustness of risk minimization to label noise. We prove a sufficient condition
on a loss function for the risk minimization under that loss to be tolerant to
uniform label noise. We show that the $0-1$ loss, sigmoid loss, ramp loss and
probit loss satisfy this condition though none of the standard convex loss
functions satisfy it. We also prove that, by choosing a sufficiently large
value of a parameter in the loss function, the sigmoid loss, ramp loss and
probit loss can also be made tolerant to non-uniform label noise if we can
assume the classes to be separable under the noise-free data distribution. Through
extensive empirical studies, we show that risk minimization under the $0-1$
loss, the sigmoid loss and the ramp loss has much better robustness to label
noise when compared to the SVM algorithm.
| [
"Aritra Ghosh, Naresh Manwani and P. S. Sastry",
"['Aritra Ghosh' 'Naresh Manwani' 'P. S. Sastry']"
]
|
cs.LG | null | 1403.3628 | null | null | http://arxiv.org/pdf/1403.3628v1 | 2014-03-14T16:15:24Z | 2014-03-14T16:15:24Z | Mixed-norm Regularization for Brain Decoding | This work investigates the use of mixed-norm regularization for sensor
selection in Event-Related Potential (ERP) based Brain-Computer Interfaces
(BCI). The classification problem is cast as a discriminative optimization
framework where sensor selection is induced through the use of mixed-norms.
This framework is extended to the multi-task learning situation where several
similar classification tasks related to different subjects are learned
simultaneously. In this case, multi-task learning helps mitigate the data
scarcity issue, yielding more robust classifiers. For this purpose, we have
introduced a regularizer that induces both sensor selection and classifier
similarities. The different regularization approaches are compared on three ERP
datasets, showing the benefit of mixed-norm regularization in terms of sensor
selection. The multi-task approaches are evaluated when a small number of
learning examples are available, yielding significant performance
improvements, especially for subjects performing poorly.
| [
"['Rémi Flamary' 'Nisrine Jrad' 'Ronald Phlypo' 'Marco Congedo'\n 'Alain Rakotomamonjy']",
"R\\'emi Flamary (LAGRANGE), Nisrine Jrad (GIPSA-lab), Ronald Phlypo\n (GIPSA-lab), Marco Congedo (GIPSA-lab), Alain Rakotomamonjy (LITIS)"
]
|
cs.SI cs.LG physics.soc-ph stat.ML | null | 1403.3707 | null | null | http://arxiv.org/pdf/1403.3707v1 | 2014-03-14T20:37:06Z | 2014-03-14T20:37:06Z | Learning the Latent State Space of Time-Varying Graphs | From social networks to Internet applications, a wide variety of electronic
communication tools are producing streams of graph data; where the nodes
represent users and the edges represent the contacts between them over time.
This has led to an increased interest in mechanisms to model the dynamic
structure of time-varying graphs. In this work, we develop a framework for
learning the latent state space of a time-varying email graph. We show how the
framework can be used to find subsequences that correspond to global real-time
events in the Email graph (e.g., vacations, breaks, etc.). These events
impact the underlying graph process to make its characteristics non-stationary.
Within the framework, we compare two different representations of the temporal
relationships; discrete vs. probabilistic. We use the two representations as
inputs to a mixture model to learn the latent state transitions that correspond
to important changes in the Email graph structure over time.
| [
"['Nesreen K. Ahmed' 'Christopher Cole' 'Jennifer Neville']",
"Nesreen K. Ahmed, Christopher Cole, Jennifer Neville"
]
|
stat.ML cs.LG | null | 1403.3741 | null | null | http://arxiv.org/pdf/1403.3741v3 | 2014-10-31T23:34:32Z | 2014-03-15T01:56:02Z | Near-optimal Reinforcement Learning in Factored MDPs | Any reinforcement learning algorithm that applies to all Markov decision
processes (MDPs) will suffer $\Omega(\sqrt{SAT})$ regret on some MDP, where $T$
is the elapsed time and $S$ and $A$ are the cardinalities of the state and
action spaces. This implies $T = \Omega(SA)$ time to guarantee a near-optimal
policy. In many settings of practical interest, due to the curse of
dimensionality, $S$ and $A$ can be so enormous that this learning time is
unacceptable. We establish that, if the system is known to be a \emph{factored}
MDP, it is possible to achieve regret that scales polynomially in the number of
\emph{parameters} encoding the factored MDP, which may be exponentially smaller
than $S$ or $A$. We provide two algorithms that satisfy near-optimal regret
bounds in this context: posterior sampling reinforcement learning (PSRL) and an
upper confidence bound algorithm (UCRL-Factored).
| [
"Ian Osband, Benjamin Van Roy",
"['Ian Osband' 'Benjamin Van Roy']"
]
|
stat.ML cs.LG | null | 1403.4017 | null | null | http://arxiv.org/pdf/1403.4017v1 | 2014-03-17T08:04:41Z | 2014-03-17T08:04:41Z | Multi-task Feature Selection based Anomaly Detection | Network anomaly detection is still a vibrant research area. As the fast
growth of network bandwidth and the tremendous traffic on the network continue,
there arises an extremely challenging question: how can we efficiently and
accurately detect anomalies in multiple traffic streams? In multi-task
learning, the traffic consisting of flows at different time periods is
considered a task. Multiple tasks at different time periods are performed
simultaneously to detect anomalies.
In this paper, we apply multi-task feature selection to the network anomaly
detection area, which provides a powerful method to gather information from
multiple traffic streams and detect anomalies in them simultaneously. In particular, the
multi-task feature selection includes the well-known l1-norm based feature
selection as a special case given only one task. Moreover, we show that the
multi-task feature selection is more accurate by utilizing more information
simultaneously than the l1-norm based method. At the evaluation stage, we
preprocess the raw data trace from the trans-Pacific backbone link between Japan
and the United States, label it with anomaly communities, and generate a
248-feature dataset. We show empirically that the multi-task feature selection
outperforms independent l1-norm based feature selection on real traffic
dataset.
| [
"Longqi Yang, Yibing Wang, Zhisong Pan and Guyu Hu",
"['Longqi Yang' 'Yibing Wang' 'Zhisong Pan' 'Guyu Hu']"
]
|
cs.LG | null | 1403.4224 | null | null | http://arxiv.org/pdf/1403.4224v2 | 2014-09-19T05:32:30Z | 2014-03-17T19:35:06Z | Learning Negative Mixture Models by Tensor Decompositions | This work considers the problem of estimating the parameters of negative
mixture models, i.e. mixture models that possibly involve negative weights. The
contributions of this paper are as follows. (i) We show that every rational
probability distribution on strings, a representation which occurs naturally
in spectral learning, can be computed by a negative mixture of at most two
probabilistic automata (or HMMs). (ii) We propose a method to estimate the
parameters of negative mixture models having a specific tensor structure in
their low order observable moments. Building upon a recent paper on tensor
decompositions for learning latent variable models, we extend this work to the
broader setting of tensors having a symmetric decomposition with positive and
negative weights. We introduce a generalization of the tensor power method for
complex valued tensors, and establish theoretical convergence guarantees. (iii)
We show how our approach applies to negative Gaussian mixture models, for which
we provide some experiments.
| [
"['Guillaume Rabusseau' 'François Denis']",
"Guillaume Rabusseau and Fran\\c{c}ois Denis"
]
|
cs.NA cs.LG | null | 1403.4267 | null | null | http://arxiv.org/pdf/1403.4267v2 | 2014-03-19T14:07:22Z | 2014-03-17T20:34:18Z | Balancing Sparsity and Rank Constraints in Quadratic Basis Pursuit | We investigate the methods that simultaneously enforce sparsity and low-rank
structure in a matrix as often employed for sparse phase retrieval problems or
phase calibration problems in compressive sensing. We propose a new approach
for analyzing the trade-off between the sparsity and low-rank constraints in
these approaches, which not only helps to provide guidelines for adjusting the
weights between the aforementioned constraints, but also enables new simulation
strategies for evaluating performance. We then provide simulation results for
phase retrieval and phase calibration cases both to demonstrate the consistency
of the proposed method with other approaches and to evaluate the change of
performance with different weights for the sparsity and low rank structure
constraints.
| [
"Cagdas Bilen (INRIA - IRISA), Gilles Puy, R\\'emi Gribonval (INRIA -\n IRISA), Laurent Daudet",
"['Cagdas Bilen' 'Gilles Puy' 'Rémi Gribonval' 'Laurent Daudet']"
]
|
cs.LG | 10.1109/CVPR.2014.191 | 1403.4378 | null | null | http://arxiv.org/abs/1403.4378v1 | 2014-03-18T09:04:02Z | 2014-03-18T09:04:02Z | Spectral Clustering with Jensen-type kernels and their multi-point
extensions | Motivated by multi-distribution divergences, which originate in information
theory, we propose a notion of `multi-point' kernels, and study their
applications. We study a class of kernels based on Jensen type divergences and
show that these can be extended to measure similarity among multiple points. We
study tensor flattening methods and develop a multi-point (kernel) spectral
clustering (MSC) method. We further emphasize a special case of the proposed
kernels, which is a multi-point extension of the linear (dot-product) kernel,
and show the existence of a cubic-time tensor flattening algorithm in this case.
Finally, we illustrate the usefulness of our contributions using standard data
sets and image segmentation tasks.
| [
"['Debarghya Ghoshdastidar' 'Ambedkar Dukkipati' 'Ajay P. Adsul'\n 'Aparna S. Vijayan']",
"Debarghya Ghoshdastidar, Ambedkar Dukkipati, Ajay P. Adsul, Aparna S.\n Vijayan"
]
|
math.OC cs.LG | null | 1403.4514 | null | null | http://arxiv.org/pdf/1403.4514v2 | 2014-03-31T23:02:53Z | 2014-03-18T15:57:48Z | Simultaneous Perturbation Algorithms for Batch Off-Policy Search | We propose novel policy search algorithms in the context of off-policy, batch
mode reinforcement learning (RL) with continuous state and action spaces. Given
a batch collection of trajectories, we perform off-line policy evaluation using
an algorithm similar to that by [Fonteneau et al., 2010]. Using this
Monte-Carlo like policy evaluator, we perform policy search in a class of
parameterized policies. We propose both first order policy gradient and second
order policy Newton algorithms. All our algorithms incorporate simultaneous
perturbation estimates for the gradient as well as the Hessian of the
cost-to-go vector, since the latter is unknown and only biased estimates are
available. We demonstrate their practicality on a simple 1-dimensional
continuous state space problem.
| [
"Raphael Fonteneau and L.A. Prashanth",
"['Raphael Fonteneau' 'L. A. Prashanth']"
]
|
cs.LG cs.NE | null | 1403.4540 | null | null | http://arxiv.org/pdf/1403.4540v1 | 2014-03-18T17:15:21Z | 2014-03-18T17:15:21Z | Similarity networks for classification: a case study in the Horse Colic
problem | This paper develops a two-layer neural network in which the neuron model
computes a user-defined similarity function between inputs and weights. The
neuron transfer function is formed by composition of an adapted logistic
function with the mean of the partial input-weight similarities. The resulting
neuron model is capable of dealing directly with variables of potentially
different nature (continuous, fuzzy, ordinal, categorical). There is also
provision for missing values. The network is trained using a two-stage
procedure very similar to that used to train a radial basis function (RBF)
neural network. The network is compared to two types of RBF networks in a
non-trivial dataset: the Horse Colic problem, taken as a case study and
analyzed in detail.
| [
"Llu\\'is Belanche and Jer\\'onimo Hern\\'andez",
"['Lluís Belanche' 'Jerónimo Hernández']"
]
|
cs.LG stat.ML | null | 1403.4781 | null | null | http://arxiv.org/pdf/1403.4781v1 | 2014-03-19T12:16:17Z | 2014-03-19T12:16:17Z | A Split-and-Merge Dictionary Learning Algorithm for Sparse
Representation | In big data image/video analytics, we encounter the problem of learning an
overcomplete dictionary for sparse representation from a large training
dataset, which cannot be processed at once because of storage and
computational constraints. To tackle the problem of dictionary learning in such
scenarios, we propose an algorithm for parallel dictionary learning. The
fundamental idea behind the algorithm is to learn a sparse representation in
two phases. In the first phase, the whole training dataset is partitioned into
small non-overlapping subsets, and a dictionary is trained independently on
each small database. In the second phase, the dictionaries are merged to form a
global dictionary. We show that the proposed algorithm is efficient in its
usage of memory and computational complexity, and performs on par with the
standard learning strategy operating on the entire data at a time. As an
application, we consider the problem of image denoising. We present a
comparative analysis of our algorithm with the standard learning techniques,
that use the entire database at a time, in terms of training and denoising
performance. We observe that the split-and-merge algorithm results in a
remarkable reduction of training time, without significantly affecting the
denoising performance.
| [
"['Subhadip Mukherjee' 'Chandra Sekhar Seelamantula']",
"Subhadip Mukherjee and Chandra Sekhar Seelamantula"
]
|
cs.SI cs.LG physics.soc-ph | 10.1145/2700399 | 1403.4997 | null | null | http://arxiv.org/abs/1403.4997v1 | 2014-03-19T22:58:13Z | 2014-03-19T22:58:13Z | Universal and Distinct Properties of Communication Dynamics: How to
Generate Realistic Inter-event Times | With the advancement of information systems, means of communications are
becoming cheaper, faster and more available. Today, millions of people carrying
smart-phones or tablets are able to communicate at practically any time and
anywhere they want. Among others, they can access their e-mails, comment on
weblogs, watch and post comments on videos, make phone calls or text messages
almost ubiquitously. Given this scenario, in this paper we tackle a fundamental
aspect of this new era of communication: how do the time intervals between
communication events behave for different technologies and means of
communication? Are there universal patterns in the inter-event time
distribution (IED)? In which ways do inter-event times behave differently among
particular technologies? To answer these questions, we analyze eight different
datasets from real and modern communication data and find four well-defined
patterns that appear in all eight datasets. Moreover, we propose the use
of the Self-Feeding Process (SFP) to generate inter-event times between
communications. The SFP is an extremely parsimonious point process that requires
at most two parameters and is able to generate inter-event times with all the
universal properties we observed in the data. We show the potential application
of SFP by proposing a framework to generate a synthetic dataset containing
realistic communication events of any one of the analyzed means of
communications (e.g. phone calls, e-mails, comments on blogs) and an algorithm
to detect anomalies.
| [
"['Pedro O. S. Vaz de Melo' 'Christos Faloutsos' 'Renato Assunção'\n 'Rodrigo Alves' 'Antonio A. F. Loureiro']",
"Pedro O.S. Vaz de Melo, Christos Faloutsos, Renato Assun\\c{c}\\~ao,\n Rodrigo Alves and Antonio A.F. Loureiro"
]
|
cs.CE cs.AI cs.LG | 10.1371/journal.pcbi.1004465 | 1403.5029 | null | null | http://arxiv.org/abs/1403.5029v3 | 2015-09-15T15:48:46Z | 2014-03-20T02:35:15Z | Network-based Isoform Quantification with RNA-Seq Data for Cancer
Transcriptome Analysis | High-throughput mRNA sequencing (RNA-Seq) is widely used for transcript
quantification of gene isoforms. Since RNA-Seq data alone is often not
sufficient to accurately identify the read origins from the isoforms for
quantification, we propose to explore protein domain-domain interactions as
prior knowledge for integrative analysis with RNA-seq data. We introduce a
Network-based method for RNA-Seq-based Transcript Quantification (Net-RSTQ) to
integrate a protein domain-domain interaction network with short read alignments
for transcript abundance estimation. Based on our observation that the
abundances of the neighboring isoforms by domain-domain interactions in the
network are positively correlated, Net-RSTQ models the expression of the
neighboring transcripts as Dirichlet priors on the likelihood of the observed
read alignments against the transcripts in one gene. The transcript abundances
of all the genes are then jointly estimated with alternating optimization of
multiple EM problems. In simulations, Net-RSTQ effectively improved isoform
transcript quantifications when isoform co-expressions correlate with their
interactions. qRT-PCR results on 25 multi-isoform genes in a stem cell line, an
ovarian cancer cell line, and a breast cancer cell line also showed that
Net-RSTQ estimated more consistent isoform proportions with RNA-Seq data. In
the experiments on the RNA-Seq data in The Cancer Genome Atlas (TCGA), the
transcript abundances estimated by Net-RSTQ are more informative for patient
sample classification of ovarian cancer, breast cancer and lung cancer. All
experimental results collectively support that Net-RSTQ is a promising approach
for isoform quantification.
| [
"['Wei Zhang' 'Jae-Woong Chang' 'Lilong Lin' 'Kay Minn' 'Baolin Wu'\n 'Jeremy Chien' 'Jeongsik Yong' 'Hui Zheng' 'Rui Kuang']",
"Wei Zhang, Jae-Woong Chang, Lilong Lin, Kay Minn, Baolin Wu, Jeremy\n Chien, Jeongsik Yong, Hui Zheng, Rui Kuang"
]
|
cs.LG cs.AI cs.SY stat.ML | null | 1403.5045 | null | null | http://arxiv.org/pdf/1403.5045v3 | 2014-06-16T20:23:34Z | 2014-03-20T05:52:43Z | Matroid Bandits: Fast Combinatorial Optimization with Learning | A matroid is a notion of independence in combinatorial optimization which is
closely related to computational efficiency. In particular, it is well known
that the maximum of a constrained modular function can be found greedily if and
only if the constraints are associated with a matroid. In this paper, we bring
together the ideas of bandits and matroids, and propose a new class of
combinatorial bandits, matroid bandits. The objective in these problems is to
learn how to maximize a modular function on a matroid. This function is
stochastic and initially unknown. We propose a practical algorithm for solving
our problem, Optimistic Matroid Maximization (OMM), and prove two upper bounds,
gap-dependent and gap-free, on its regret. Both bounds are sublinear in time
and at most linear in all other quantities of interest. The gap-dependent upper
bound is tight and we prove a matching lower bound on a partition matroid
bandit. Finally, we evaluate our method on three real-world problems and show
that it is practical.
| [
"Branislav Kveton, Zheng Wen, Azin Ashkan, Hoda Eydgahi, Brian Eriksson",
"['Branislav Kveton' 'Zheng Wen' 'Azin Ashkan' 'Hoda Eydgahi'\n 'Brian Eriksson']"
]
|
cs.LG | null | 1403.5115 | null | null | http://arxiv.org/pdf/1403.5115v1 | 2014-03-20T12:46:33Z | 2014-03-20T12:46:33Z | Unconfused Ultraconservative Multiclass Algorithms | We tackle the problem of learning linear classifiers from noisy datasets in a
multiclass setting. The two-class version of this problem was studied a few
years ago by, e.g. Bylander (1994) and Blum et al. (1996): in these
contributions, the proposed approaches to fight the noise revolve around a
Perceptron learning scheme fed with peculiar examples computed through a
weighted average of points from the noisy training set. We propose to build
upon these approaches and we introduce a new algorithm called UMA (for
Unconfused Multiclass additive Algorithm) which may be seen as a generalization
to the multiclass setting of the previous approaches. In order to characterize
the noise we use the confusion matrix as a multiclass extension of the
classification noise studied in the aforementioned literature. Theoretically
well-founded, UMA furthermore displays very good empirical noise robustness, as
evidenced by numerical simulations conducted on both synthetic and real data.
Keywords: Multiclass classification, Perceptron, Noisy labels, Confusion Matrix
| [
"['Ugo Louche' 'Liva Ralaivola']",
"Ugo Louche (LIF), Liva Ralaivola (LIF)"
]
|
cs.LG | null | 1403.5287 | null | null | http://arxiv.org/pdf/1403.5287v1 | 2014-03-20T20:36:18Z | 2014-03-20T20:36:18Z | Online Local Learning via Semidefinite Programming | In many online learning problems we are interested in predicting local
information about some universe of items. For example, we may want to know
whether two items are in the same cluster rather than computing an assignment
of items to clusters; we may want to know which of two teams will win a game
rather than computing a ranking of teams. Although finding the optimal
clustering or ranking is typically intractable, it may be possible to predict
the relationships between items as well as if you could solve the global
optimization problem exactly.
Formally, we consider an online learning problem in which a learner
repeatedly guesses a pair of labels (l(x), l(y)) and receives an adversarial
payoff depending on those labels. The learner's goal is to receive a payoff
nearly as good as the best fixed labeling of the items. We show that a simple
algorithm based on semidefinite programming can obtain asymptotically optimal
regret in the case where the number of possible labels is O(1), resolving an
open problem posed by Hazan, Kale, and Shalev-Shwartz. Our main technical
contribution is a novel use and analysis of the log determinant regularizer,
exploiting the observation that log det(A + I) upper bounds the entropy of any
distribution with covariance matrix A.
| [
"Paul Christiano",
"['Paul Christiano']"
]
|
cs.LG | null | 1403.5341 | null | null | http://arxiv.org/pdf/1403.5341v2 | 2015-06-08T19:05:44Z | 2014-03-21T01:42:53Z | An Information-Theoretic Analysis of Thompson Sampling | We provide an information-theoretic analysis of Thompson sampling that
applies across a broad range of online optimization problems in which a
decision-maker must learn from partial feedback. This analysis inherits the
simplicity and elegance of information theory and leads to regret bounds that
scale with the entropy of the optimal-action distribution. This strengthens
preexisting results and yields new insight into how information improves
performance.
| [
"['Daniel Russo' 'Benjamin Van Roy']",
"Daniel Russo, Benjamin Van Roy"
]
|
stat.ML cs.CV cs.LG | null | 1403.5370 | null | null | http://arxiv.org/pdf/1403.5370v1 | 2014-03-21T05:23:17Z | 2014-03-21T05:23:17Z | Using n-grams models for visual semantic place recognition | The aim of this paper is to present a new method for visual place
recognition. Our system combines global image characterization and visual
words, which allows us to use efficient Bayesian filtering methods to integrate
several images. More precisely, we extend the classical HMM model with
techniques inspired by the field of Natural Language Processing. This paper
presents our system and the Bayesian filtering algorithm. The performance of
our system and the influence of the main parameters are evaluated on a standard
database. The discussion highlights the interest of using such models and
proposes improvements.
| [
"Mathieu Dubois (LIMSI), Frenoux Emmanuelle (LIMSI), Philippe Tarroux\n (LIMSI)",
"['Mathieu Dubois' 'Frenoux Emmanuelle' 'Philippe Tarroux']"
]
|
cs.NE cs.LG | null | 1403.5488 | null | null | http://arxiv.org/pdf/1403.5488v1 | 2014-03-21T15:11:52Z | 2014-03-21T15:11:52Z | Missing Data Prediction and Classification: The Use of Auto-Associative
Neural Networks and Optimization Algorithms | This paper presents methods which are aimed at finding approximations to
missing data in a dataset by using optimization algorithms to optimize the
network parameters after which prediction and classification tasks can be
performed. The optimization methods that are considered are genetic algorithm
(GA), simulated annealing (SA), particle swarm optimization (PSO), random
forest (RF) and negative selection (NS) and these methods are individually used
in combination with auto-associative neural networks (AANN) for missing data
estimation and the results obtained are compared. The methods suggested use the
optimization algorithms to minimize an error function derived from training the
auto-associative neural network during which the interrelationships between the
inputs and the outputs are obtained and stored in the weights connecting the
different layers of the network. The error function is expressed as the square
of the difference between the actual observations and predicted values from an
auto-associative neural network. In the event of missing data, not all the values
of the actual observations are known; hence, the error function is
decomposed to depend on the known and unknown variable values. A multi-layer
perceptron (MLP) neural network is employed, and the networks are trained using
the scaled conjugate gradient (SCG) method. Prediction accuracy is determined
by mean squared error (MSE), root mean squared error (RMSE), mean absolute
error (MAE), and correlation coefficient (r) computations. Accuracy in
classification is obtained by plotting ROC curves and calculating the areas
under these. Analysis of the results shows that the approach using RF with AANN
produces the most accurate predictions and classifications while on the other
end of the scale is the approach which entails using NS with AANN.
| [
"['Collins Leke' 'Bhekisipho Twala' 'T. Marwala']",
"Collins Leke, Bhekisipho Twala, and T. Marwala"
]
|
cs.LG | null | 1403.5556 | null | null | http://arxiv.org/pdf/1403.5556v7 | 2017-07-07T05:51:15Z | 2014-03-21T02:02:25Z | Learning to Optimize via Information-Directed Sampling | We propose information-directed sampling -- a new approach to online
optimization problems in which a decision-maker must balance between
exploration and exploitation while learning from partial feedback. Each action
is sampled in a manner that minimizes the ratio between squared expected
single-period regret and a measure of information gain: the mutual information
between the optimal action and the next observation. We establish an expected
regret bound for information-directed sampling that applies across a very
general class of models and scales with the entropy of the optimal action
distribution. We illustrate through simple analytic examples how
information-directed sampling accounts for kinds of information that
alternative approaches do not adequately address and that this can lead to
dramatic performance gains. For the widely studied Bernoulli, Gaussian, and
linear bandit problems, we demonstrate state-of-the-art simulation performance.
| [
"['Daniel Russo' 'Benjamin Van Roy']",
"Daniel Russo and Benjamin Van Roy"
]
|
cs.LG cs.SI | 10.1109/JSTSP.2014.2370942 | 1403.5603 | null | null | http://arxiv.org/abs/1403.5603v1 | 2014-03-22T02:15:39Z | 2014-03-22T02:15:39Z | Forecasting Popularity of Videos using Social Media | This paper presents a systematic online prediction method (Social-Forecast)
that is capable of accurately forecasting the popularity of videos promoted by
social media. Social-Forecast explicitly considers the dynamically changing and
evolving propagation patterns of videos in social media when making popularity
forecasts, thereby being situation and context aware. Social-Forecast aims to
maximize the forecast reward, which is defined as a tradeoff between the
popularity prediction accuracy and the timeliness with which a prediction is
issued. The forecasting is performed online and requires no training phase or a
priori knowledge. We analytically bound the prediction performance loss of
Social-Forecast as compared to that obtained by an omniscient oracle and prove
that the bound is sublinear in the number of video arrivals, thereby
guaranteeing its short-term performance as well as its asymptotic convergence
to the optimal performance. In addition, we conduct extensive experiments using
real-world data traces collected from the videos shared in RenRen, one of the
largest online social networks in China. These experiments show that our
proposed method outperforms existing view-based approaches for popularity
prediction (which are not context-aware) by more than 30% in terms of
prediction rewards.
| [
"['Jie Xu' 'Mihaela van der Schaar' 'Jiangchuan Liu' 'Haitao Li']",
"Jie Xu, Mihaela van der Schaar, Jiangchuan Liu and Haitao Li"
]
|
stat.ML cs.LG | null | 1403.5607 | null | null | http://arxiv.org/pdf/1403.5607v1 | 2014-03-22T03:35:00Z | 2014-03-22T03:35:00Z | Bayesian Optimization with Unknown Constraints | Recent work on Bayesian optimization has shown its effectiveness in global
optimization of difficult black-box objective functions. Many real-world
optimization problems of interest also have constraints which are unknown a
priori. In this paper, we study Bayesian optimization for constrained problems
in the general case that noise may be present in the constraint functions, and
the objective and constraints may be evaluated independently. We provide
motivating practical examples, and present a general framework to solve such
problems. We demonstrate the effectiveness of our approach on optimizing the
performance of online latent Dirichlet allocation subject to topic sparsity
constraints, tuning a neural network given test-time memory constraints, and
optimizing Hamiltonian Monte Carlo to achieve maximal effectiveness in a fixed
time, subject to passing standard convergence diagnostics.
| [
"Michael A. Gelbart, Jasper Snoek, Ryan P. Adams",
"['Michael A. Gelbart' 'Jasper Snoek' 'Ryan P. Adams']"
]
|
cs.LG stat.ML | null | 1403.5647 | null | null | http://arxiv.org/pdf/1403.5647v1 | 2014-03-22T11:15:01Z | 2014-03-22T11:15:01Z | CUR Algorithm with Incomplete Matrix Observation | CUR matrix decomposition is a randomized algorithm that can efficiently
compute a low rank approximation for a given rectangular matrix. One limitation
of the existing CUR algorithms is that they require access to the full
matrix A for computing U. In this work, we aim to alleviate this limitation. In
particular, we assume that besides having access to randomly sampled d rows
and d columns from A, we only observe a subset of randomly sampled entries from
A. Our goal is to develop a low rank approximation algorithm, similar to CUR,
based on (i) randomly sampled rows and columns from A, and (ii) randomly
sampled entries from A. The proposed algorithm is able to perfectly recover the
target matrix A with only O(rn log n) observed entries. In addition,
instead of having to solve an optimization problem involving trace norm
regularization, the proposed algorithm only needs to solve a standard
regression problem. Finally, unlike most matrix completion theories that hold
only when the target matrix is of low rank, we show a strong guarantee for the
proposed algorithm even when the target matrix is not low rank.
| [
"Rong Jin, Shenghuo Zhu",
"['Rong Jin' 'Shenghuo Zhu']"
]
|
stat.ML cs.LG stat.CO | null | 1403.5693 | null | null | http://arxiv.org/pdf/1403.5693v1 | 2014-03-22T18:21:29Z | 2014-03-22T18:21:29Z | Firefly Monte Carlo: Exact MCMC with Subsets of Data | Markov chain Monte Carlo (MCMC) is a popular and successful general-purpose
tool for Bayesian inference. However, MCMC cannot be practically applied to
large data sets because of the prohibitive cost of evaluating every likelihood
term at every iteration. Here we present Firefly Monte Carlo (FlyMC), an
auxiliary variable MCMC algorithm that only queries the likelihoods of a
potentially small subset of the data at each iteration yet simulates from the
exact posterior distribution, in contrast to recent proposals that are
approximate even in the asymptotic limit. FlyMC is compatible with a wide
variety of modern MCMC algorithms, and only requires a lower bound on the
per-datum likelihood factors. In experiments, we find that FlyMC generates
samples from the posterior more than an order of magnitude faster than regular
MCMC, opening up MCMC methods to larger datasets than were previously
considered feasible.
| [
"Dougal Maclaurin and Ryan P. Adams",
"['Dougal Maclaurin' 'Ryan P. Adams']"
]
|
stat.ML cs.IT cs.LG math.IT stat.AP | null | 1403.5877 | null | null | http://arxiv.org/pdf/1403.5877v1 | 2014-03-24T08:26:19Z | 2014-03-24T08:26:19Z | Non-uniform Feature Sampling for Decision Tree Ensembles | We study the effectiveness of non-uniform randomized feature selection in
decision tree classification. We experimentally evaluate two feature selection
methodologies, based on information extracted from the provided dataset: $(i)$
\emph{leverage scores-based} and $(ii)$ \emph{norm-based} feature selection.
Experimental evaluation of the proposed feature selection techniques indicates
that such approaches might be more effective than naive uniform feature
selection while having performance comparable to the random forest
algorithm [3].
| [
"Anastasios Kyrillidis and Anastasios Zouzias",
"['Anastasios Kyrillidis' 'Anastasios Zouzias']"
]
|
cs.CE cs.LG | null | 1403.5933 | null | null | http://arxiv.org/pdf/1403.5933v1 | 2014-03-24T12:37:11Z | 2014-03-24T12:37:11Z | AIS-INMACA: A Novel Integrated MACA Based Clonal Classifier for Protein
Coding and Promoter Region Prediction | Most of the problems in bioinformatics are now challenges in computing.
This paper aims at building a classifier based on Multiple Attractor Cellular
Automata (MACA) which uses fuzzy logic. It is strengthened with an Artificial
Immune System (AIS) technique, the Clonal algorithm, for identifying protein
coding and promoter regions in a given DNA sequence. The proposed classifier,
named AIS-INMACA, introduces a novel concept of combining CA with an artificial
immune system to produce a better classifier which can address major problems
in bioinformatics. This will be the first integrated algorithm which can
predict both promoter and protein coding regions. To obtain good fitness rules
the basic concept of Clonal selection algorithm was used. The proposed
classifier can handle DNA sequences of lengths 54,108,162,252,354. This
classifier gives the exact boundaries of both protein and promoter regions with
an average accuracy of 89.6%. This classifier was tested with 97,000 data
components which were taken from Fickett & Toung, MPromDb, and other sequences
from a renowned medical university. This proposed classifier can handle huge
data sets and can find protein and promoter regions even in mixed and
overlapping DNA sequences. This work also aims at identifying the logical
connections between the major problems in bioinformatics and tries to obtain a
common framework for addressing major problems in bioinformatics like protein
structure prediction, RNA structure prediction, predicting the splicing pattern
of any primary transcript and analysis of information content in DNA, RNA,
protein sequences and structure. This work will attract more researchers
towards the application of CA as a potential pattern classifier to many important
problems in bioinformatics.
| [
"['Pokkuluri Kiran Sree' 'Inampudi Ramesh Babu']",
"Pokkuluri Kiran Sree, Inampudi Ramesh Babu"
]
|
stat.ML cs.LG stat.AP | null | 1403.5997 | null | null | http://arxiv.org/pdf/1403.5997v3 | 2014-06-10T08:18:06Z | 2014-03-24T15:25:59Z | Bayesian calibration for forensic evidence reporting | We introduce a Bayesian solution for the problem in forensic speaker
recognition, where there may be very little background material for estimating
score calibration parameters. We work within the Bayesian paradigm of evidence
reporting and develop a principled probabilistic treatment of the problem,
which results in a Bayesian likelihood-ratio as the vehicle for reporting
weight of evidence. We show, in contrast, that reporting a likelihood-ratio
distribution does not solve this problem. Our solution is experimentally
exercised on a simulated forensic scenario, using NIST SRE'12 scores, which
demonstrates a clear advantage for the proposed method compared to the
traditional plugin calibration recipe.
| [
"Niko Br\\\"ummer and Albert Swart",
"['Niko Brümmer' 'Albert Swart']"
]
|
cs.CL cs.LG | null | 1403.6023 | null | null | http://arxiv.org/pdf/1403.6023v1 | 2014-03-24T16:21:04Z | 2014-03-24T16:21:04Z | Ensemble Detection of Single & Multiple Events at Sentence-Level | Event classification at sentence level is an important Information Extraction
task with applications in several NLP, IR, and personalization systems.
Multi-label binary relevance (BR) methods are the state of the art. In this work,
we explored new multi-label methods known for capturing relations between event
types. These new methods, such as the ensemble Chain of Classifiers, improve
the F1 on average across the 6 labels by 2.8% over the Binary Relevance. The
low occurrence of multi-label sentences motivated the reduction of the hard
imbalanced multi-label classification problem, with a low number of occurrences of
multiple labels per instance, to a more tractable imbalanced multiclass problem,
with better results (+4.6%). We report the results of adding new features,
such as sentiment strength, rhetorical signals, domain-id (source-id and date),
and key-phrases in both single-label and multi-label event classification
scenarios.
| [
"Lu\\'is Marujo, Anatole Gershman, Jaime Carbonell, Jo\\~ao P. Neto,\n David Martins de Matos",
"['Luís Marujo' 'Anatole Gershman' 'Jaime Carbonell' 'João P. Neto'\n 'David Martins de Matos']"
]
|
cs.IR cs.CY cs.LG | null | 1403.6248 | null | null | http://arxiv.org/pdf/1403.6248v1 | 2014-03-25T07:11:03Z | 2014-03-25T07:11:03Z | Classroom Video Assessment and Retrieval via Multiple Instance Learning | We propose a multiple instance learning approach to content-based retrieval
of classroom video for the purpose of supporting humans in assessing the learning
environment. The key element of our approach is a mapping between the semantic
concepts of the assessment system and features of the video that can be
measured using techniques from the fields of computer vision and speech
analysis. We report on a formative experiment in content-based video retrieval
involving trained experts in the Classroom Assessment Scoring System, a widely
used framework for assessment and improvement of learning environments. The
results of this experiment suggest that our approach has potential application
to productivity enhancement in assessment and to broader retrieval tasks.
| [
"['Qifeng Qiao' 'Peter A. Beling']",
"Qifeng Qiao and Peter A. Beling"
]
|
cs.AI cs.LG | null | 1403.6348 | null | null | http://arxiv.org/pdf/1403.6348v6 | 2016-07-30T10:10:10Z | 2014-03-25T14:07:21Z | Updating Formulas and Algorithms for Computing Entropy and Gini Index
from Time-Changing Data Streams | Despite growing interest in data stream mining, the most successful
incremental learners, such as VFDT, still use periodic recomputation to update
attribute information gains and Gini indices. This note provides simple
incremental formulas and algorithms for computing entropy and Gini index from
time-changing data streams.
| [
"Blaz Sovdat",
"['Blaz Sovdat']"
]
|
cs.LG cs.CL cs.IR | null | 1403.6397 | null | null | http://arxiv.org/pdf/1403.6397v1 | 2014-03-25T15:44:14Z | 2014-03-25T15:44:14Z | Evaluating topic coherence measures | Topic models extract representative word sets - called topics - from word
counts in documents without requiring any semantic annotations. Topics are not
guaranteed to be well interpretable; therefore, coherence measures have been
proposed to distinguish between good and bad topics. Studies of topic coherence
so far are limited to measures that score pairs of individual words. For the
first time, we include coherence measures from scientific philosophy that score
pairs of more complex word subsets and apply them to topic scoring.
| [
"['Frank Rosner' 'Alexander Hinneburg' 'Michael Röder' 'Martin Nettling'\n 'Andreas Both']",
"Frank Rosner, Alexander Hinneburg, Michael R\\\"oder, Martin Nettling,\n Andreas Both"
]
|
cs.GT cs.AI cs.LG | 10.1109/TCIAIG.2017.2679115 | 1403.6508 | null | null | http://arxiv.org/abs/1403.6508v3 | 2019-07-30T01:28:36Z | 2014-03-25T21:03:57Z | Multi-agent Inverse Reinforcement Learning for Two-person Zero-sum Games | The focus of this paper is a Bayesian framework for solving a class of
problems termed multi-agent inverse reinforcement learning (MIRL). Compared to
the well-known inverse reinforcement learning (IRL) problem, MIRL is formalized
in the context of stochastic games, which generalize Markov decision processes
to game theoretic scenarios. We establish a theoretical foundation for
competitive two-agent zero-sum MIRL problems and propose a Bayesian solution
approach in which the generative model is based on an assumption that the two
agents follow a minimax bi-policy. Numerical results are presented comparing
the Bayesian MIRL method with two existing methods in the context of an
abstract soccer game. Investigation centers on relationships between the extent
of prior information and the quality of learned rewards. Results suggest that
covariance structure is more important than mean value in reward priors.
| [
"['Xiaomin Lin' 'Peter A. Beling' 'Randy Cogill']",
"Xiaomin Lin and Peter A. Beling and Randy Cogill"
]
|
cs.LG math.OC stat.ML | null | 1403.6530 | null | null | http://arxiv.org/pdf/1403.6530v2 | 2015-03-18T15:42:31Z | 2014-03-25T23:00:50Z | Variance-Constrained Actor-Critic Algorithms for Discounted and Average
Reward MDPs | In many sequential decision-making problems we may want to manage risk by
minimizing some measure of variability in rewards in addition to maximizing a
standard criterion. Variance related risk measures are among the most common
risk-sensitive criteria in finance and operations research. However, optimizing
many such criteria is known to be a hard problem. In this paper, we consider
both discounted and average reward Markov decision processes. For each
formulation, we first define a measure of variability for a policy, which in
turn gives us a set of risk-sensitive criteria to optimize. For each of these
criteria, we derive a formula for computing its gradient. We then devise
actor-critic algorithms that operate on three timescales - a TD critic on the
fastest timescale, a policy gradient (actor) on the intermediate timescale, and
a dual ascent for Lagrange multipliers on the slowest timescale. In the
discounted setting, we point out the difficulty in estimating the gradient of
the variance of the return and incorporate simultaneous perturbation approaches
to alleviate this. The average setting, on the other hand, allows for an actor
update using compatible features to estimate the gradient of the variance. We
establish the convergence of our algorithms to locally risk-sensitive optimal
policies. Finally, we demonstrate the usefulness of our algorithms in a traffic
signal control application.
| [
"Prashanth L.A. and Mohammad Ghavamzadeh",
"['Prashanth L. A.' 'Mohammad Ghavamzadeh']"
]
|
cs.SI cs.LG | 10.1145/2623330.2623732 | 1403.6652 | null | null | http://arxiv.org/abs/1403.6652v2 | 2014-06-27T17:17:25Z | 2014-03-26T12:30:07Z | DeepWalk: Online Learning of Social Representations | We present DeepWalk, a novel approach for learning latent representations of
vertices in a network. These latent representations encode social relations in
a continuous vector space, which is easily exploited by statistical models.
DeepWalk generalizes recent advancements in language modeling and unsupervised
feature learning (or deep learning) from sequences of words to graphs. DeepWalk
uses local information obtained from truncated random walks to learn latent
representations by treating walks as the equivalent of sentences. We
demonstrate DeepWalk's latent representations on several multi-label network
classification tasks for social networks such as BlogCatalog, Flickr, and
YouTube. Our results show that DeepWalk outperforms challenging baselines which
are allowed a global view of the network, especially in the presence of missing
information. DeepWalk's representations can provide $F_1$ scores up to 10%
higher than competing methods when labeled data is sparse. In some experiments,
DeepWalk's representations are able to outperform all baseline methods while
using 60% less training data. DeepWalk is also scalable. It is an online
learning algorithm which builds useful incremental results, and is trivially
parallelizable. These qualities make it suitable for a broad class of real
world applications such as network classification and anomaly detection.
| [
"['Bryan Perozzi' 'Rami Al-Rfou' 'Steven Skiena']",
"Bryan Perozzi, Rami Al-Rfou and Steven Skiena"
]
|
stat.ML cs.CV cs.LG math.OC | null | 1403.6706 | null | null | http://arxiv.org/pdf/1403.6706v1 | 2014-03-26T15:16:56Z | 2014-03-26T15:16:56Z | Beyond L2-Loss Functions for Learning Sparse Models | Incorporating sparsity priors in learning tasks can give rise to simple and
interpretable models for complex high dimensional data. Sparse models have
found widespread use in structure discovery, recovering data from corruptions,
and a variety of large scale unsupervised and supervised learning problems.
Assuming the availability of sufficient data, these methods infer dictionaries
for sparse representations by optimizing for high-fidelity reconstruction. In
most scenarios, the reconstruction quality is measured using the squared
Euclidean distance, and efficient algorithms have been developed for both batch
and online learning cases. However, new application domains motivate looking
beyond conventional loss functions. For example, robust loss functions such as
$\ell_1$ and Huber are useful in learning outlier-resilient models, and the
quantile loss is beneficial in discovering structures that are the
representative of a particular quantile. These new applications motivate our
work in generalizing sparse learning to a broad class of convex loss functions.
In particular, we consider the class of piecewise linear quadratic (PLQ) cost
functions that includes Huber, as well as $\ell_1$, quantile, Vapnik, hinge
loss, and smoothed variants of these penalties. We propose an algorithm to
learn dictionaries and obtain sparse codes when the data reconstruction
fidelity is measured using any smooth PLQ cost function. We provide convergence
guarantees for the proposed algorithm, and demonstrate the convergence behavior
using empirical experiments. Furthermore, we present three case studies that
require the use of PLQ cost functions: (i) robust image modeling, (ii) tag
refinement for image annotation and retrieval and (iii) computing empirical
confidence limits for subspace clustering.
| [
"['Karthikeyan Natesan Ramamurthy' 'Aleksandr Y. Aravkin'\n 'Jayaraman J. Thiagarajan']",
"Karthikeyan Natesan Ramamurthy, Aleksandr Y. Aravkin, Jayaraman J.\n Thiagarajan"
]
|
cs.LG cs.GT | null | 1403.6822 | null | null | http://arxiv.org/pdf/1403.6822v1 | 2014-03-26T15:27:27Z | 2014-03-26T15:27:27Z | Comparison of Multi-agent and Single-agent Inverse Learning on a
Simulated Soccer Example | We compare the performance of Inverse Reinforcement Learning (IRL) with the
relatively new model of Multi-agent Inverse Reinforcement Learning (MIRL). Before
comparing the methods, we extend a published Bayesian IRL approach, previously
applicable only when the reward depends solely on the state, to a general one
capable of handling rewards that depend on both state and
action. Comparison between IRL and MIRL is made in the context of an abstract
soccer game, using both a game model in which the reward depends only on state
and one in which it depends on both state and action. Results suggest that the
IRL approach performs much worse than the MIRL approach. We speculate that the
underperformance of IRL is because it fails to capture equilibrium information
in the manner possible in MIRL.
| [
"['Xiaomin Lin' 'Peter A. Beling' 'Randy Cogill']",
"Xiaomin Lin and Peter A. Beling and Randy Cogill"
]
|
cs.LG | null | 1403.6863 | null | null | http://arxiv.org/pdf/1403.6863v1 | 2014-03-26T21:17:05Z | 2014-03-26T21:17:05Z | Online Learning of k-CNF Boolean Functions | This paper revisits the problem of learning a k-CNF Boolean function from
examples in the context of online learning under the logarithmic loss. In doing
so, we give a Bayesian interpretation to one of Valiant's celebrated PAC
learning algorithms, which we then build upon to derive two efficient, online,
probabilistic, supervised learning algorithms for predicting the output of an
unknown k-CNF Boolean function. We analyze the loss of our methods, and show
that the cumulative log-loss can be upper bounded, ignoring logarithmic
factors, by a polynomial function of the size of each example.
| [
"['Joel Veness' 'Marcus Hutter']",
"Joel Veness and Marcus Hutter"
]
|
cs.SD cs.LG cs.MM | null | 1403.6901 | null | null | http://arxiv.org/pdf/1403.6901v1 | 2014-03-27T01:32:09Z | 2014-03-27T01:32:09Z | Automatic Segmentation of Broadcast News Audio using Self Similarity
Matrix | Generally, audio news broadcast on radio is composed of music, commercials,
news from correspondents and recorded statements in addition to the actual news
read by the newsreader. When news transcripts are available, automatically
segmenting the audio news broadcast to time-align the audio with the text
transcription is essential for building frugal speech corpora. We address the
problem of identifying the segments of the audio news broadcast corresponding
to the news read by the newsreader, so that they can be mapped to the text
transcripts. Existing techniques produce sub-optimal solutions when used to
extract newsreader-read segments. In this paper, we propose a new technique
which is able to identify the acoustic change points reliably using an acoustic
Self Similarity Matrix (SSM). We describe the two-pass technique in detail and
verify its performance on real audio news broadcast of All India Radio for
different languages.
| [
"['Sapna Soni' 'Ahmed Imran' 'Sunil Kumar Kopparapu']",
"Sapna Soni and Ahmed Imran and Sunil Kumar Kopparapu"
]
|
cs.LG cs.CV | null | 1403.7057 | null | null | http://arxiv.org/pdf/1403.7057v1 | 2014-03-27T14:38:23Z | 2014-03-27T14:38:23Z | Closed-Form Training of Conditional Random Fields for Large Scale Image
Segmentation | We present LS-CRF, a new method for very efficient large-scale training of
Conditional Random Fields (CRFs). It is inspired by existing closed-form
expressions for the maximum likelihood parameters of a generative graphical
model with tree topology. LS-CRF training requires only solving a set of
independent regression problems, for which closed-form expressions as well as
efficient iterative solvers are available. This makes it orders of magnitude
faster than conventional maximum likelihood learning for CRFs that require
repeated runs of probabilistic inference. At the same time, the models learned
by our method still allow for joint inference at test time. We apply LS-CRF to
the task of semantic image segmentation, showing that it is highly efficient,
even for loopy models where probabilistic inference is problematic. It allows
the training of image segmentation models from significantly larger training
sets than had been used previously. We demonstrate this on two new datasets
that form a second contribution of this paper. They consist of over 180,000
images with figure-ground segmentation annotations. Our large-scale experiments
show that the possibilities of CRF-based image segmentation are far from
exhausted, indicating, for example, that semi-supervised learning and the use
of non-linear predictors are promising directions for achieving higher
segmentation accuracy in the future.
| [
"Alexander Kolesnikov, Matthieu Guillaumin, Vittorio Ferrari and\n Christoph H. Lampert",
"['Alexander Kolesnikov' 'Matthieu Guillaumin' 'Vittorio Ferrari'\n 'Christoph H. Lampert']"
]
|
cs.LG cs.CY physics.data-an | null | 1403.7087 | null | null | http://arxiv.org/pdf/1403.7087v1 | 2014-02-20T03:12:51Z | 2014-02-20T03:12:51Z | Conclusions from a NAIVE Bayes Operator Predicting the Medicare 2011
Transaction Data Set | Introduction: The United States Federal Government operates one of the world's
largest medical insurance programs, Medicare, to ensure payment for clinical
services for the elderly, illegal aliens and those without the ability to pay
for their care directly. This paper evaluates the Medicare 2011 Transaction
Data Set which details the transfer of funds from Medicare to private and
public clinical care facilities for specific clinical services for the
operational year 2011. Methods: Data mining was conducted to establish the
relationships between reported and computed transaction values in the data set
to better understand the drivers of Medicare transactions at a programmatic
level. Results: The models averaged 88 for average model accuracy and 38 for
average Kappa during training. Some reported classes are highly independent
from the available data as their predictability remains stable regardless of
redaction of supporting and contradictory evidence. DRG or procedure type
appears to be unpredictable from the available financial transaction values.
Conclusions: Overlay hypotheses, such as charges being driven by the volume
served or DRG being related to charges or payments, are readily falsified in
this analysis, despite 28 million Americans being billed through Medicare in
2011 and the program distributing over 70 billion in this transaction set
alone. It may be impossible to predict the dependencies and data structures of
the payer of last resort without data from payers of first and second resort. Political concerns
about Medicare would be better served focusing on these first and second order
payer systems as what Medicare costs is not dependent on Medicare itself.
| [
"Nick Williams",
"['Nick Williams']"
]
|
cs.LG | null | 1403.7100 | null | null | http://arxiv.org/pdf/1403.7100v1 | 2014-03-26T05:43:12Z | 2014-03-26T05:43:12Z | A study on cost behaviors of binary classification measures in
class-imbalanced problems | This work investigates into cost behaviors of binary classification measures
in a background of class-imbalanced problems. Twelve performance measures are
studied, such as F measure, G-means in terms of accuracy rates, and of recall
and precision, balanced error rate (BER), Matthews correlation coefficient
(MCC), Kappa coefficient, etc. A new perspective is presented for those
measures by revealing their cost functions with respect to the class imbalance
ratio. Basically, they are described by four types of cost functions. These
functions provide a theoretical understanding of why some measures are suitable
for dealing with class-imbalanced problems. Based on their cost functions, we
are able to conclude that G-means of accuracy rates and BER are suitable
measures because they show "proper" cost behaviors in terms of "a
misclassification from a small class will cause a greater cost than that from a
large class". On the contrary, F1 measure, G-means of recall and precision, MCC
and Kappa coefficient measures do not exhibit such behaviors and are therefore
unsuitable for dealing with these problems properly.
| [
"['Bao-Gang Hu' 'Wei-Ming Dong']",
"Bao-Gang Hu and Wei-Ming Dong"
]
|
stat.ML cs.AI cs.LG | 10.1109/TNNLS.2015.2429711 | 1403.7308 | null | null | null | null | null | Data Generators for Learning Systems Based on RBF Networks | There are plenty of problems where the data available is scarce and
expensive. We propose a generator of semi-artificial data with similar
properties to the original data which enables development and testing of
different data mining algorithms and optimization of their parameters. The
generated data allow a large scale experimentation and simulations without
danger of overfitting. The proposed generator is based on RBF networks, which
learn sets of Gaussian kernels. These Gaussian kernels can be used in a
generative mode to generate new data from the same distributions. To assess
quality of the generated data we evaluated the statistical properties of the
generated data, structural similarity and predictive similarity using
supervised and unsupervised learning techniques. To determine usability of the
proposed generator we conducted a large scale evaluation using 51 UCI data
sets. The results show a considerable similarity between the original and
generated data and indicate that the method can be useful in several
development and simulation scenarios. We analyze possible improvements in
classification performance by adding different amounts of generated data to the
training set, performance on high dimensional data sets, and conditions when
the proposed approach is successful.
| [
"Marko Robnik-\\v{S}ikonja"
]
|
math.OC cs.DC cs.LG cs.SY | null | 1403.7429 | null | null | http://arxiv.org/pdf/1403.7429v1 | 2014-03-28T16:11:57Z | 2014-03-28T16:11:57Z | Distributed Reconstruction of Nonlinear Networks: An ADMM Approach | In this paper, we present a distributed algorithm for the reconstruction of
large-scale nonlinear networks. In particular, we focus on the identification
from time-series data of the nonlinear functional forms and associated
parameters of large-scale nonlinear networks. Recently, a nonlinear network
reconstruction problem was formulated as a nonconvex optimisation problem based
on the combination of a marginal likelihood maximisation procedure with
sparsity inducing priors. Using a convex-concave procedure (CCCP), an iterative
reweighted lasso algorithm was derived to solve the initial nonconvex
optimisation problem. By exploiting the structure of the objective function of
this reweighted lasso algorithm, a distributed algorithm can be designed. To
this end, we apply the alternating direction method of multipliers (ADMM) to
decompose the original problem into several subproblems. To illustrate the
effectiveness of the proposed methods, we use our approach to identify a
network of interconnected Kuramoto oscillators with different network sizes
(500 to 100,000 nodes).
| [
"['Wei Pan' 'Aivar Sootla' 'Guy-Bart Stan']",
"Wei Pan, Aivar Sootla and Guy-Bart Stan"
]
|
cs.LG | null | 1403.7471 | null | null | http://arxiv.org/pdf/1403.7471v3 | 2014-06-12T13:11:53Z | 2014-03-28T18:07:21Z | Approximate Decentralized Bayesian Inference | This paper presents an approximate method for performing Bayesian inference
in models with conditional independence over a decentralized network of
learning agents. The method first employs variational inference on each
individual learning agent to generate a local approximate posterior, the agents
transmit their local posteriors to other agents in the network, and finally
each agent combines its set of received local posteriors. The key insight in
this work is that, for many Bayesian models, approximate inference schemes
destroy symmetry and dependencies in the model that are crucial to the correct
application of Bayes' rule when combining the local posteriors. The proposed
method addresses this issue by including an additional optimization step in the
combination procedure that accounts for these broken dependencies. Experiments
on synthetic and real data demonstrate that the decentralized method provides
advantages in computational performance and predictive test likelihood over
previous batch and distributed methods.
| [
"['Trevor Campbell' 'Jonathan P. How']",
"Trevor Campbell and Jonathan P. How"
]
|
cs.DB cs.LG math.OC stat.ML | null | 1403.7550 | null | null | http://arxiv.org/pdf/1403.7550v3 | 2014-07-07T17:20:20Z | 2014-03-28T21:48:00Z | DimmWitted: A Study of Main-Memory Statistical Analytics | We perform the first study of the tradeoff space of access methods and
replication to support statistical analytics using first-order methods executed
in the main memory of a Non-Uniform Memory Access (NUMA) machine. Statistical
analytics systems differ from conventional SQL-analytics in the amount and
types of memory incoherence they can tolerate. Our goal is to understand
tradeoffs in accessing the data in row- or column-order and at what granularity
one should share the model and data for a statistical task. We study this new
tradeoff space, and discover there are tradeoffs between hardware and
statistical efficiency. We argue that our tradeoff study may provide valuable
information for designers of analytics engines: for each system we consider,
our prototype engine can run at least one popular task at least 100x faster. We
conduct our study across five architectures using popular models including
SVMs, logistic regression, Gibbs sampling, and neural networks.
| [
"Ce Zhang and Christopher R\\'e",
"['Ce Zhang' 'Christopher Ré']"
]
|
cs.CR cs.LG | 10.14445/22315381/IJETT-V9P296 | 1403.7726 | null | null | http://arxiv.org/abs/1403.7726v1 | 2014-03-30T09:41:17Z | 2014-03-30T09:41:17Z | Relevant Feature Selection Model Using Data Mining for Intrusion
Detection System | Network intrusions have become a significant threat in recent years as a
result of the increased demand of computer networks for critical systems.
Intrusion detection system (IDS) has been widely deployed as a defense measure
for computer networks. Features extracted from network traffic can be used as
signs to detect anomalies. However, with the huge amount of network traffic,
collected data contains irrelevant and redundant features that affect the
detection rate of the IDS, consumes a high amount of system resources, and
slows down the training and testing processes of the IDS. In this paper, a new
feature selection model is proposed; this model can effectively select the most
relevant features for intrusion detection. Our goal is to build a lightweight
intrusion detection system by using a reduced features set. Deleting irrelevant
and redundant features helps to build a faster training and testing process, to
have less resource consumption as well as to maintain high detection rates. The
effectiveness and the feasibility of our feature selection model were verified
by several experiments on KDD intrusion detection dataset. The experimental
results strongly showed that our model is not only able to yield high detection
rates but also to speed up the detection process.
| [
"Ayman I. Madbouly, Amr M. Gody, Tamer M. Barakat",
"['Ayman I. Madbouly' 'Amr M. Gody' 'Tamer M. Barakat']"
]
|
cs.NI cs.IT cs.LG math.IT | null | 1403.7735 | null | null | http://arxiv.org/pdf/1403.7735v2 | 2014-07-08T21:40:13Z | 2014-03-30T10:59:58Z | Optimal Cooperative Cognitive Relaying and Spectrum Access for an Energy
Harvesting Cognitive Radio: Reinforcement Learning Approach | In this paper, we consider a cognitive setting under the context of
cooperative communications, where the cognitive radio (CR) user is assumed to
be a self-organized relay for the network. The CR user and the primary user
(PU) are assumed to be energy harvesters. The CR user cooperatively relays some
of the undelivered packets of the PU. Specifically, the CR user stores
a fraction of the undelivered primary packets in a relaying queue (buffer). It
manages the flow of the undelivered primary packets to its relaying queue using
the appropriate actions over time slots. Moreover, it decides which queue to
use for channel access at idle time slots (slots where
the PU's queue is empty). It is assumed that one data packet transmission
dissipates one energy packet. The optimal policy changes according to the
primary and CR users arrival rates to the data and energy queues as well as the
channels connectivity. The CR user saves energy for the PU by taking the
responsibility of relaying the undelivered primary packets. It optimally
organizes its own energy packets to maximize its payoff as time progresses.
| [
"['Ahmed El Shafie' 'Tamer Khattab' 'Hussien Saad' 'Amr Mohamed']",
"Ahmed El Shafie and Tamer Khattab and Hussien Saad and Amr Mohamed"
]
|
cs.LG cs.NA stat.ML | null | 1403.7737 | null | null | http://arxiv.org/pdf/1403.7737v2 | 2014-04-05T05:56:04Z | 2014-03-30T11:21:39Z | Sharpened Error Bounds for Random Sampling Based $\ell_2$ Regression | Given a data matrix $X \in R^{n\times d}$ and a response vector $y \in
R^{n}$, suppose $n>d$, it costs $O(n d^2)$ time and $O(n d)$ space to solve the
least squares regression (LSR) problem. When $n$ and $d$ are both large,
exactly solving the LSR problem is very expensive. When $n \gg d$, one feasible
approach to speeding up LSR is to randomly embed $y$ and all columns of $X$
into a smaller subspace $R^c$; the induced LSR problem has the same number of
columns but far fewer rows, and it can be solved in $O(c d^2)$ time
and $O(c d)$ space.
We discuss in this paper two random sampling based methods for solving LSR
more efficiently. Previous work showed that the leverage scores based sampling
based LSR achieves $1+\epsilon$ accuracy when $c \geq O(d \epsilon^{-2} \log
d)$. In this paper we sharpen this error bound, showing that $c = O(d \log d +
d \epsilon^{-1})$ is enough for achieving $1+\epsilon$ accuracy. We also show
that when $c \geq O(\mu d \epsilon^{-2} \log d)$, the uniform sampling based
LSR attains a $2+\epsilon$ bound with positive probability.
| [
"Shusen Wang",
"['Shusen Wang']"
]
|
cs.LG cs.SD | null | 1403.7746 | null | null | http://arxiv.org/pdf/1403.7746v1 | 2014-03-30T12:22:36Z | 2014-03-30T12:22:36Z | Multi-label Ferns for Efficient Recognition of Musical Instruments in
Recordings | In this paper we introduce multi-label ferns, and apply this technique for
automatic classification of musical instruments in audio recordings. We compare
the performance of our proposed method to a set of binary random ferns, using
jazz recordings as input data. Our main result is obtaining much faster
classification and higher F-score. We also achieve substantial reduction of the
model size.
| [
"Miron B. Kursa, Alicja A. Wieczorkowska",
"['Miron B. Kursa' 'Alicja A. Wieczorkowska']"
]
|
cs.NE cs.IT cs.LG math.IT | null | 1403.7752 | null | null | http://arxiv.org/pdf/1403.7752v2 | 2015-01-23T19:12:05Z | 2014-03-30T13:11:55Z | Auto-encoders: reconstruction versus compression | We discuss the similarities and differences between training an auto-encoder
to minimize the reconstruction error, and training the same auto-encoder to
compress the data via a generative model. Minimizing a codelength for the data
using an auto-encoder is equivalent to minimizing the reconstruction error plus
some correcting terms which have an interpretation as either a denoising or
contractive property of the decoding function. These terms are related but not
identical to those used in denoising or contractive auto-encoders [Vincent et
al. 2010, Rifai et al. 2011]. In particular, the codelength viewpoint fully
determines an optimal noise level for the denoising criterion.
| [
"Yann Ollivier",
"['Yann Ollivier']"
]
|
stat.ML cs.LG stat.ME | null | 1403.7890 | null | null | http://arxiv.org/pdf/1403.7890v1 | 2014-03-31T07:18:55Z | 2014-03-31T07:18:55Z | Sparse K-Means with $\ell_{\infty}/\ell_0$ Penalty for High-Dimensional
Data Clustering | Sparse clustering, which aims to find a proper partition of an extremely
high-dimensional data set with redundant noise features, has attracted more and
more interest in recent years. Existing studies commonly solve
the problem in a framework of maximizing the weighted feature contributions
subject to a $\ell_2/\ell_1$ penalty. Nevertheless, this framework has two
serious drawbacks: One is that the solution of the framework unavoidably
involves a considerable portion of redundant noise features in many situations,
and the other is that the framework neither offers intuitive explanations on
why this framework can select relevant features nor leads to any theoretical
guarantee for feature selection consistency.
In this article, we attempt to overcome those drawbacks through developing a
new sparse clustering framework which uses a $\ell_{\infty}/\ell_0$ penalty.
First, we introduce new concepts on optimal partitions and noise features for
the high-dimensional data clustering problems, based on which the previously
known framework can be intuitively explained in principle. Then, we apply the
suggested $\ell_{\infty}/\ell_0$ framework to formulate a new sparse k-means
model with the $\ell_{\infty}/\ell_0$ penalty ($\ell_0$-k-means for short). We
propose an efficient iterative algorithm for solving the $\ell_0$-k-means. To
deeply understand the behavior of $\ell_0$-k-means, we prove that the solution
yielded by the $\ell_0$-k-means algorithm has feature selection consistency
whenever the data matrix is generated from a high-dimensional Gaussian mixture
model. Finally, we provide experiments with both synthetic data and the Allen
Developing Mouse Brain Atlas data to support that the proposed $\ell_0$-k-means
exhibits better noise feature detection capacity over the previously known
sparse k-means with the $\ell_2/\ell_1$ penalty ($\ell_1$-k-means for short).
| [
"['Xiangyu Chang' 'Yu Wang' 'Rongjian Li' 'Zongben Xu']",
"Xiangyu Chang, Yu Wang, Rongjian Li, Zongben Xu"
]
|
cs.CR cs.LG | null | 1403.8084 | null | null | http://arxiv.org/pdf/1403.8084v1 | 2014-03-31T16:53:04Z | 2014-03-31T16:53:04Z | Privacy Tradeoffs in Predictive Analytics | Online services routinely mine user data to predict user preferences, make
recommendations, and place targeted ads. Recent research has demonstrated that
several private user attributes (such as political affiliation, sexual
orientation, and gender) can be inferred from such data. Can a
privacy-conscious user benefit from personalization while simultaneously
protecting her private attributes? We study this question in the context of a
rating prediction service based on matrix factorization. We construct a
protocol of interactions between the service and users that has remarkable
optimality properties: it is privacy-preserving, in that no inference algorithm
can succeed in inferring a user's private attribute with a probability better
than random guessing; it has maximal accuracy, in that no other
privacy-preserving protocol improves rating prediction; and, finally, it
involves a minimal disclosure, as the prediction accuracy strictly decreases
when the service reveals less information. We extensively evaluate our protocol
using several rating datasets, demonstrating that it successfully blocks the
inference of gender, age and political affiliation, while incurring less than
5% decrease in the accuracy of rating prediction.
| [
"Stratis Ioannidis, Andrea Montanari, Udi Weinsberg, Smriti Bhagat,\n Nadia Fawaz, Nina Taft",
"['Stratis Ioannidis' 'Andrea Montanari' 'Udi Weinsberg' 'Smriti Bhagat'\n 'Nadia Fawaz' 'Nina Taft']"
]
|
cs.LG cs.DB cs.DS stat.CO | null | 1403.8144 | null | null | http://arxiv.org/pdf/1403.8144v1 | 2014-03-31T19:43:53Z | 2014-03-31T19:43:53Z | Coding for Random Projections and Approximate Near Neighbor Search | This technical note compares two coding (quantization) schemes for random
projections in the context of sub-linear time approximate near neighbor search.
The first scheme is based on uniform quantization while the second scheme
utilizes a uniform quantization plus a uniformly random offset (which has been
popular in practice). The prior work compared the two schemes in the context of
similarity estimation and training linear classifiers, with the conclusion that
the step of random offset is not necessary and may hurt the performance
(depending on the similarity level). The task of near neighbor search is
related to similarity estimation but with important distinctions, and thus
requires its own study. In this paper, we demonstrate that in the context of near neighbor
search, the step of random offset is not needed either and may hurt the
performance (sometimes significantly so, depending on the similarity and other
parameters).
| [
"Ping Li, Michael Mitzenmacher, Anshumali Shrivastava",
"['Ping Li' 'Michael Mitzenmacher' 'Anshumali Shrivastava']"
]
|
cs.GT cs.IR cs.LG | 10.4204/EPTCS.144.6 | 1404.0086 | null | null | http://arxiv.org/abs/1404.0086v1 | 2014-04-01T00:39:19Z | 2014-04-01T00:39:19Z | Using HMM in Strategic Games | In this paper we describe an approach to resolve strategic games in which
players can assume different types along the game. Our goal is to infer which
type the opponent is adopting at each moment so that we can increase the
player's odds. To achieve that we use Markov games combined with hidden Markov
model. We discuss a hypothetical example of a tennis game whose solution can be
applied to any game with similar characteristics.
| [
"['Mario Benevides' 'Isaque Lima' 'Rafael Nader' 'Pedro Rougemont']",
"Mario Benevides (Federal University of Rio de Janeiro), Isaque Lima\n (Federal University of Rio de Janeiro), Rafael Nader (Federal University of\n Rio de Janeiro), Pedro Rougemont (Federal University of Rio de Janeiro)"
]
|
cs.LG | null | 1404.0138 | null | null | http://arxiv.org/pdf/1404.0138v1 | 2014-04-01T06:26:55Z | 2014-04-01T06:26:55Z | Efficient Algorithms and Error Analysis for the Modified Nystrom Method | Many kernel methods suffer from high time and space complexities and are thus
prohibitive in big-data applications. To tackle the computational challenge,
the Nystr\"om method has been extensively used to reduce time and space
complexities by sacrificing some accuracy. The Nystr\"om method speeds up
computation by constructing an approximation of the kernel matrix using only a
few columns of the matrix. Recently, a variant of the Nystr\"om method called
the modified Nystr\"om method has demonstrated significant improvement over the
standard Nystr\"om method in approximation accuracy, both theoretically and
empirically.
In this paper, we propose two algorithms that make the modified Nystr\"om
method practical. First, we devise a simple column selection algorithm with a
provable error bound. Our algorithm is more efficient and easier to implement
than the state-of-the-art algorithm, while being nearly as accurate. Second, with the
selected columns at hand, we propose an algorithm that computes the
approximation in lower time complexity than the approach in the previous work.
Furthermore, we prove that the modified Nystr\"om method is exact under certain
conditions, and we establish a lower error bound for the modified Nystr\"om
method.
| [
"['Shusen Wang' 'Zhihua Zhang']",
"Shusen Wang, Zhihua Zhang"
]
|
cs.LG stat.AP | null | 1404.0200 | null | null | http://arxiv.org/pdf/1404.0200v1 | 2014-04-01T11:32:53Z | 2014-04-01T11:32:53Z | Household Electricity Demand Forecasting -- Benchmarking
State-of-the-Art Methods | The increasing use of renewable energy sources with variable output, such as
solar photovoltaic and wind power generation, calls for Smart Grids that
effectively manage flexible loads and energy storage. The ability to forecast
consumption at different locations in distribution systems will be a key
capability of Smart Grids. The goal of this paper is to benchmark
state-of-the-art methods for forecasting electricity demand on the household
level across different granularities and time scales in an explorative way,
thereby revealing potential shortcomings and finding promising directions for
future research in this area. We apply a number of forecasting methods
including ARIMA, neural networks, and exponential smoothing, using several
strategies for training data selection, in particular day type and sliding
window based strategies. We consider forecasting horizons ranging between 15
minutes and 24 hours. Our evaluation is based on two data sets containing the
power usage of individual appliances at second time granularity collected over
the course of several months. The results indicate that forecasting accuracy
varies significantly depending on the choice of forecasting methods/strategy
and the parameter configuration. Measured by the Mean Absolute Percentage Error
(MAPE), the considered state-of-the-art forecasting methods rarely beat
corresponding persistence forecasts. Overall, we observed MAPEs in the range
between 5 and >100%. The average MAPE for the first data set was ~30%, while it
was ~85% for the other data set. These results show big room for improvement.
Based on the identified trends and experiences from our experiments, we
contribute a detailed discussion of promising future research.
| [
"['Andreas Veit' 'Christoph Goebel' 'Rohit Tidke' 'Christoph Doblander'\n 'Hans-Arno Jacobsen']",
"Andreas Veit, Christoph Goebel, Rohit Tidke, Christoph Doblander and\n Hans-Arno Jacobsen"
]
|
cs.CV cs.LG | null | 1404.0334 | null | null | http://arxiv.org/pdf/1404.0334v2 | 2014-04-02T19:00:29Z | 2014-04-01T18:07:58Z | Active Deformable Part Models | This paper presents an active approach for part-based object detection, which
optimizes the order of part filter evaluations and the time at which to stop
and make a prediction. Statistics, describing the part responses, are learned
from training data and are used to formalize the part scheduling problem as an
offline optimization. Dynamic programming is applied to obtain a policy, which
balances the number of part evaluations with the classification accuracy.
During inference, the policy is used as a look-up table to choose the part
order and the stopping time based on the observed filter responses. The method
is faster than cascade detection with deformable part models (which does not
optimize the part order) with negligible loss in accuracy when evaluated on the
PASCAL VOC 2007 and 2010 datasets.
| [
"['Menglong Zhu' 'Nikolay Atanasov' 'George J. Pappas' 'Kostas Daniilidis']",
"Menglong Zhu, Nikolay Atanasov, George J. Pappas, Kostas Daniilidis"
]
|
cs.SD cs.LG stat.ML | 10.1109/ICASSP.2014.6854954 | 1404.0400 | null | null | http://arxiv.org/abs/1404.0400v1 | 2014-04-01T21:15:32Z | 2014-04-01T21:15:32Z | A Deep Representation for Invariance And Music Classification | Representations in the auditory cortex might be based on mechanisms similar
to the visual ventral stream: modules for building invariance to
transformations and multiple layers for compositionality and selectivity. In
this paper we propose the use of such computational modules for extracting
invariant and discriminative audio representations. Building on a theory of
invariance in hierarchical architectures, we propose a novel, mid-level
representation for acoustical signals, using the empirical distributions of
projections on a set of templates and their transformations. Under the
assumption that, by construction, this dictionary of templates is composed of
similar classes and samples the orbit of variance-inducing signal
transformations (such as shift and scale), the resulting signature is
theoretically guaranteed to be unique, invariant to transformations and stable
to deformations. Modules of projection and pooling can then constitute layers
of deep networks, for learning composite representations. We present the main
theoretical and computational aspects of a framework for unsupervised learning
of invariant audio representations, empirically evaluated on music genre
classification.
| [
"Chiyuan Zhang, Georgios Evangelopoulos, Stephen Voinea, Lorenzo\n Rosasco, Tomaso Poggio",
"['Chiyuan Zhang' 'Georgios Evangelopoulos' 'Stephen Voinea'\n 'Lorenzo Rosasco' 'Tomaso Poggio']"
]
|
q-bio.MN cs.LG | 10.1007/978-3-319-08123-6_2 | 1404.0427 | null | null | http://arxiv.org/abs/1404.0427v2 | 2014-07-13T04:18:00Z | 2014-04-02T01:00:48Z | Learning Two-input Linear and Nonlinear Analog Functions with a Simple
Chemical System | Current biochemical information processing systems behave in a
predetermined manner because all features are defined during the design phase.
To make such unconventional computing systems reusable and programmable for
biomedical applications, adaptation, learning, and self-modification based on
external stimuli would be highly desirable. However, so far, it has been too
challenging to implement these in wet chemistries. In this paper we extend the
chemical perceptron, a model previously proposed by the authors, to function as
an analog instead of a binary system. The new analog asymmetric signal
perceptron learns through feedback and supports Michaelis-Menten kinetics. The
results show that our perceptron is able to learn linear and nonlinear
(quadratic) functions of two inputs. To the best of our knowledge, it is the
first simulated chemical system capable of doing so. The small number of
species and reactions and their simplicity allows for a mapping to an actual
wet implementation using DNA-strand displacement or deoxyribozymes. Our results
are an important step toward actual biochemical systems that can learn and
adapt.
| [
"Peter Banda, Christof Teuscher",
"['Peter Banda' 'Christof Teuscher']"
]
|
cs.CE cs.LG | null | 1404.0453 | null | null | http://arxiv.org/pdf/1404.0453v1 | 2014-04-02T04:18:06Z | 2014-04-02T04:18:06Z | Cellular Automata and Its Applications in Bioinformatics: A Review | This paper aims at providing a survey on the problems that can be easily
addressed by cellular automata in bioinformatics. Although several authors
have proposed algorithms for specific problems in bioinformatics, the
application of cellular automata in bioinformatics remains a largely
unexplored field of research, and no one has yet attempted to relate the major
problems in bioinformatics and find a common solution. We conducted an
extensive literature survey, considering papers from various journals and
conferences. This paper offers intuition for relating various problems in
bioinformatics logically and works toward a common framework for addressing
them.
| [
"['Pokkuluri Kiran Sree' 'Inampudi Ramesh Babu' 'SSSN Usha Devi N']",
"Pokkuluri Kiran Sree, Inampudi Ramesh Babu, SSSN Usha Devi N"
]
|
cs.LG cs.NA | null | 1404.0466 | null | null | http://arxiv.org/pdf/1404.0466v2 | 2015-06-10T18:20:16Z | 2014-04-02T05:33:41Z | piCholesky: Polynomial Interpolation of Multiple Cholesky Factors for
Efficient Approximate Cross-Validation | The dominant cost in solving least-square problems using Newton's method is
often that of factorizing the Hessian matrix over multiple values of the
regularization parameter ($\lambda$). We propose an efficient way to
interpolate the Cholesky factors of the Hessian matrix computed over a small
set of $\lambda$ values. This approximation enables us to optimally minimize
the hold-out error while incurring only a fraction of the cost compared to
exact cross-validation. We provide a formal error bound for our approximation
scheme and present solutions to a set of key implementation challenges that
allow our approach to maximally exploit the compute power of modern
architectures. We present a thorough empirical analysis over multiple datasets
to show the effectiveness of our approach.
| [
"Da Kuang, Alex Gittens, Raffay Hamid",
"['Da Kuang' 'Alex Gittens' 'Raffay Hamid']"
]
|
cs.LG | null | 1404.0649 | null | null | http://arxiv.org/pdf/1404.0649v1 | 2014-03-30T20:49:33Z | 2014-03-30T20:49:33Z | A probabilistic estimation and prediction technique for dynamic
continuous social science models: The evolution of the attitude of the Basque
Country population towards ETA as a case study | In this paper, we present a computational technique to deal with uncertainty
in dynamic continuous models in Social Sciences. Considering data from surveys,
the method consists of determining the probability distribution of the survey
output, which allows us to sample data and fit the model to the sampled data
using a goodness-of-fit criterion based on the chi-square-test. Taking the
fitted parameters non-rejected by the chi-square-test, substituting them into
the model and computing their outputs, we build 95% confidence intervals in
each time instant capturing uncertainty of the survey data (probabilistic
estimation). Using the same set of obtained model parameters, we also provide a
prediction over the next few years with 95% confidence intervals (probabilistic
prediction). This technique is applied to a dynamic social model describing the
evolution of the attitude of the Basque Country population towards the
revolutionary organization ETA.
| [
"Juan-Carlos Cort\\'es, Francisco-J. Santonja, Ana-C. Tarazona,\n Rafael-J. Villanueva, Javier Villanueva-Oller",
"['Juan-Carlos Cortés' 'Francisco-J. Santonja' 'Ana-C. Tarazona'\n 'Rafael-J. Villanueva' 'Javier Villanueva-Oller']"
]
|
cs.CV cs.LG | null | 1404.0736 | null | null | http://arxiv.org/pdf/1404.0736v2 | 2014-06-09T15:53:55Z | 2014-04-02T23:31:12Z | Exploiting Linear Structure Within Convolutional Networks for Efficient
Evaluation | We present techniques for speeding up the test-time evaluation of large
convolutional networks, designed for object recognition tasks. These models
deliver impressive accuracy but each image evaluation requires millions of
floating point operations, making their deployment on smartphones and
Internet-scale clusters problematic. The computation is dominated by the
convolution operations in the lower layers of the model. We exploit the linear
structure present within the convolutional filters to derive approximations
that significantly reduce the required computation. Using large
state-of-the-art models, we demonstrate speedups of
convolutional layers on both CPU and GPU by a factor of 2x, while keeping the
accuracy within 1% of the original model.
| [
"Emily Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, Rob Fergus",
"['Remi Denton' 'Wojciech Zaremba' 'Joan Bruna' 'Yann LeCun' 'Rob Fergus']"
]
|
stat.ML cs.LG | null | 1404.0751 | null | null | http://arxiv.org/pdf/1404.0751v2 | 2016-12-12T15:36:21Z | 2014-04-03T02:58:37Z | Subspace Learning from Extremely Compressed Measurements | We consider learning the principal subspace of a large set of vectors from an
extremely small number of compressive measurements of each vector. Our
theoretical results show that even a constant number of measurements per column
suffices to approximate the principal subspace to arbitrary precision, provided
that the number of vectors is large. This result is achieved by a simple
algorithm that computes the eigenvectors of an estimate of the covariance
matrix. The main insight is to exploit an averaging effect that arises from
applying a different random projection to each vector. We provide a number of
simulations confirming our theoretical results.
| [
"Akshay Krishnamurthy, Martin Azizyan, Aarti Singh",
"['Akshay Krishnamurthy' 'Martin Azizyan' 'Aarti Singh']"
]
|
cs.LG | null | 1404.0789 | null | null | http://arxiv.org/pdf/1404.0789v3 | 2014-04-17T19:53:46Z | 2014-04-03T07:41:46Z | The Least Wrong Model Is Not in the Data | The true process that generated data cannot be determined when multiple
explanations are possible. Prediction requires a model of the probability that
a process, chosen randomly from the set of candidate explanations, generates
some future observation. The best model includes all of the information
contained in the minimal description of the data that is not contained in the
data. It is closely related to the Halting Problem and is logarithmic in the
size of the data. Prediction is difficult because the ideal model is not
computable, and the best computable model is not "findable." However, the error
from any approximation can be bounded by the size of the description using the
model.
| [
"Oscar Stiffelman",
"['Oscar Stiffelman']"
]
|