categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---
cs.LG | null | 1404.0933 | null | null | http://arxiv.org/pdf/1404.0933v1 | 2014-04-03T14:34:47Z | 2014-04-03T14:34:47Z | Bayes and Naive Bayes Classifier | Bayesian classification represents a supervised learning method as well
as a statistical method for classification. It assumes an underlying
probabilistic model, which allows us to capture uncertainty about the model in
a principled way by determining probabilities of the outcomes. The
classification is named after Thomas Bayes (1702-1761), who proposed Bayes'
Theorem. Bayesian classification provides practical learning algorithms in
which prior knowledge and observed data can be combined, and it provides a
useful perspective for understanding and evaluating many learning algorithms.
It calculates explicit probabilities for hypotheses and is robust to noise in
the input data. In statistical classification the Bayes classifier minimises
the probability of misclassification. The paper gives a visual intuition for a
simple case of the Bayes classifier, also called: 1) Idiot Bayes, 2) Naive
Bayes, 3) Simple Bayes.
| [
"['Vikramkumar' 'Vijaykumar B' 'Trilochan']",
"Vikramkumar (B092633), Vijaykumar B (B091956), Trilochan (B092654)"
]
|
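As a companion to the abstract above, here is a minimal Gaussian Naive Bayes classifier written from scratch; the class name and the toy two-cluster data are our own illustration, not the paper's.

```python
import numpy as np

class GaussianNaiveBayes:
    """Minimal Gaussian Naive Bayes: class-conditional features are
    modeled as independent Gaussians and combined via Bayes' theorem."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.priors = np.array([np.mean(y == c) for c in self.classes])
        self.means = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.vars = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        return self

    def predict(self, X):
        # log P(c | x) is proportional to log P(c) + sum_j log N(x_j | mu_cj, var_cj)
        log_lik = -0.5 * (np.log(2 * np.pi * self.vars[:, None, :])
                          + (X[None, :, :] - self.means[:, None, :]) ** 2
                          / self.vars[:, None, :]).sum(axis=2)
        log_post = np.log(self.priors)[:, None] + log_lik
        return self.classes[np.argmax(log_post, axis=0)]

# toy usage on two synthetic Gaussian clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(GaussianNaiveBayes().fit(X, y).predict(X[:5]))
```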
cs.NI cs.LG stat.ML | 10.1109/TVT.2015.2453391 | 1404.0979 | null | null | http://arxiv.org/abs/1404.0979v4 | 2019-10-10T17:56:35Z | 2014-04-03T15:46:54Z | Kernel-Based Adaptive Online Reconstruction of Coverage Maps With Side
Information | In this paper, we address the problem of reconstructing coverage maps from
path-loss measurements in cellular networks. We propose and evaluate two
kernel-based adaptive online algorithms as an alternative to typical offline
methods. The proposed algorithms are application-tailored extensions of
powerful iterative methods such as the adaptive projected subgradient method
and a state-of-the-art adaptive multikernel method. Assuming that the moving
trajectories of users are available, it is shown how side information can be
incorporated in the algorithms to improve their convergence performance and the
quality of the estimation. The complexity is significantly reduced by imposing
sparsity-awareness in the sense that the algorithms exploit the compressibility
of the measurement data to reduce the amount of data which is saved and
processed. Finally, we present extensive simulations based on realistic data to
show that our algorithms provide fast, robust estimates of coverage maps in
real-world scenarios. Envisioned applications include path-loss prediction
along trajectories of mobile users as a building block for anticipatory
buffering or traffic offloading.
| [
"['Martin Kasparick' 'Renato L. G. Cavalcante' 'Stefan Valentin'\n 'Slawomir Stanczak' 'Masahiro Yukawa']",
"Martin Kasparick, Renato L. G. Cavalcante, Stefan Valentin, Slawomir\n Stanczak, Masahiro Yukawa"
]
|
cs.LG | null | 1404.1066 | null | null | http://arxiv.org/pdf/1404.1066v1 | 2014-04-03T19:49:57Z | 2014-04-03T19:49:57Z | Parallel Support Vector Machines in Practice | In this paper, we evaluate the performance of various parallel optimization
methods for Kernel Support Vector Machines on multicore CPUs and GPUs. In
particular, we provide the first comparison of algorithms with explicit and
implicit parallelization. Most existing parallel implementations for multi-core
or GPU architectures are based on explicit parallelization of Sequential
Minimal Optimization (SMO)---the programmers identified parallelizable
components and hand-parallelized them, specifically tuned for a particular
architecture. We compare these approaches with each other and with implicitly
parallelized algorithms---where the algorithm is expressed such that most of
the work is done within few iterations with large dense linear algebra
operations. These can be computed with highly-optimized libraries that are
carefully parallelized for a large variety of parallel platforms. We highlight
the advantages and disadvantages of both approaches and compare them on various
benchmark data sets. We find an approximate implicitly parallel algorithm which
is surprisingly efficient, permits a much simpler implementation, and leads to
unprecedented speedups in SVM training.
| [
"Stephen Tyree, Jacob R. Gardner, Kilian Q. Weinberger, Kunal Agrawal,\n John Tran",
"['Stephen Tyree' 'Jacob R. Gardner' 'Kilian Q. Weinberger' 'Kunal Agrawal'\n 'John Tran']"
]
|
cs.LG stat.ML | null | 1404.1100 | null | null | http://arxiv.org/pdf/1404.1100v1 | 2014-04-03T21:16:49Z | 2014-04-03T21:16:49Z | A Tutorial on Principal Component Analysis | Principal component analysis (PCA) is a mainstay of modern data analysis - a
black box that is widely used but (sometimes) poorly understood. The goal of
this paper is to dispel the magic behind this black box. This manuscript
focuses on building a solid intuition for how and why principal component
analysis works. This manuscript crystallizes this knowledge by deriving, from
simple intuitions, the mathematics behind PCA. This tutorial does not shy away
from explaining the ideas informally, nor does it shy away from the
mathematics. The hope is that by addressing both aspects, readers of all levels
will be able to gain a better understanding of PCA as well as the when, the how
and the why of applying this technique.
| [
"Jonathon Shlens",
"['Jonathon Shlens']"
]
|
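The tutorial's core recipe (center the data, diagonalize the covariance, project) fits in a few lines; a minimal sketch with synthetic data:

```python
import numpy as np

def pca(X, k):
    """PCA by eigendecomposition of the sample covariance matrix:
    center, diagonalize, project onto the top-k eigenvectors."""
    Xc = X - X.mean(axis=0)                      # remove the mean
    C = Xc.T @ Xc / (len(X) - 1)                 # sample covariance
    eigvals, eigvecs = np.linalg.eigh(C)         # symmetric -> eigh
    order = np.argsort(eigvals)[::-1]            # sort by explained variance
    components = eigvecs[:, order[:k]]
    return Xc @ components, components, eigvals[order]

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))  # correlated data
scores, comps, var = pca(X, k=2)
print(scores.shape, np.round(var[:2], 3))
```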
cs.AI cs.LG | null | 1404.1140 | null | null | http://arxiv.org/pdf/1404.1140v2 | 2014-12-20T03:28:34Z | 2014-04-04T03:02:44Z | Scalable Planning and Learning for Multiagent POMDPs: Extended Version | Online, sample-based planning algorithms for POMDPs have shown great promise
in scaling to problems with large state spaces, but they become intractable for
large action and observation spaces. This is particularly problematic in
multiagent POMDPs where the action and observation space grows exponentially
with the number of agents. To combat this intractability, we propose a novel
scalable approach based on sample-based planning and factored value functions
that exploits structure present in many multiagent settings. This approach
applies not only in the planning case, but also in the Bayesian reinforcement
learning setting. Experimental results show that we are able to provide high
quality solutions to large multiagent planning and learning problems.
| [
"Christopher Amato, Frans A. Oliehoek",
"['Christopher Amato' 'Frans A. Oliehoek']"
]
|
cs.CE cs.LG | null | 1404.1144 | null | null | http://arxiv.org/pdf/1404.1144v1 | 2014-04-04T03:31:29Z | 2014-04-04T03:31:29Z | AIS-MACA- Z: MACA based Clonal Classifier for Splicing Site, Protein
Coding and Promoter Region Identification in Eukaryotes | Bioinformatics incorporates information regarding biological data storage,
accessing mechanisms and presentation of characteristics within this data. Most
of the problems in bioinformatics can be addressed efficiently by computer
techniques. This paper aims at building a classifier based on Multiple
Attractor Cellular Automata (MACA) which uses fuzzy logic with version Z to
predict splicing sites, protein coding and promoter regions in
eukaryotes. It is strengthened with an artificial immune system technique
(AIS), the Clonal algorithm, for choosing rules of best fitness. The proposed
classifier can handle DNA sequences of lengths 54, 108, 162, 252 and 354. It
gives the exact boundaries of both protein and promoter regions with
an average accuracy of 90.6%, and it can predict splicing sites with 97%
accuracy. The classifier was tested with 197,000 data components
taken from Fickett & Tung, EPDnew, and other sequences from a
renowned medical university.
| [
"['Pokkuluri Kiran Sree' 'Inampudi Ramesh Babu' 'SSSN Usha Devi N']",
"Pokkuluri Kiran Sree, Inampudi Ramesh Babu, SSSN Usha Devi N"
]
|
cs.LG | 10.1007/s10994-016-5621-5 | 1404.1282 | null | null | http://arxiv.org/abs/1404.1282v3 | 2015-02-11T05:17:27Z | 2014-03-22T06:25:51Z | Hierarchical Dirichlet Scaling Process | We present the \textit{hierarchical Dirichlet scaling process} (HDSP), a
Bayesian nonparametric mixed membership model. The HDSP generalizes the
hierarchical Dirichlet process (HDP) to model the correlation structure between
metadata in the corpus and mixture components. We construct the HDSP based on
the normalized gamma representation of the Dirichlet process, and this
construction allows incorporating a scaling function that controls the
membership probabilities of the mixture components. We develop two scaling
methods to demonstrate that different modeling assumptions can be expressed in
the HDSP. We also derive the corresponding approximate posterior inference
algorithms using variational Bayes. Through experiments on datasets of
newswire, medical journal articles, conference proceedings, and product
reviews, we show that the HDSP results in a better predictive performance than
labeled LDA, partially labeled LDA, and author topic model and a better
negative review classification performance than the supervised topic model and
SVM.
| [
"['Dongwoo Kim' 'Alice Oh']",
"Dongwoo Kim, Alice Oh"
]
|
physics.chem-ph cs.LG physics.comp-ph stat.ML | null | 1404.1333 | null | null | http://arxiv.org/pdf/1404.1333v2 | 2014-05-27T01:23:13Z | 2014-04-04T18:20:23Z | Understanding Machine-learned Density Functionals | Kernel ridge regression is used to approximate the kinetic energy of
non-interacting fermions in a one-dimensional box as a functional of their
density. The properties of different kernels and methods of cross-validation
are explored, and highly accurate energies are achieved. Accurate {\em
constrained optimal densities} are found via a modified Euler-Lagrange
constrained minimization of the total energy. A projected gradient descent
algorithm is derived using local principal component analysis. Additionally, a
sparse grid representation of the density can be used without degrading the
performance of the methods. The implications for machine-learned density
functional approximations are discussed.
| [
"['Li Li' 'John C. Snyder' 'Isabelle M. Pelaschier' 'Jessica Huang'\n 'Uma-Naresh Niranjan' 'Paul Duncan' 'Matthias Rupp' 'Klaus-Robert Müller'\n 'Kieron Burke']",
"Li Li, John C. Snyder, Isabelle M. Pelaschier, Jessica Huang,\n Uma-Naresh Niranjan, Paul Duncan, Matthias Rupp, Klaus-Robert M\\\"uller,\n Kieron Burke"
]
|
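The kinetic-energy regression in the abstract above rests on kernel ridge regression; a minimal sketch with a Gaussian kernel follows, where the synthetic "density" inputs and the surrogate "energy" functional are purely illustrative stand-ins:

```python
import numpy as np

def krr_fit(X, y, sigma=1.0, lam=1e-6):
    """Gaussian-kernel ridge regression: solve (K + lam*I) alpha = y."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_test, sigma=1.0):
    sq = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2)) @ alpha

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (100, 3))       # toy stand-in for discretized densities
y = (X ** 2).sum(1)                    # illustrative "energy" functional
alpha = krr_fit(X, y, sigma=0.7, lam=1e-8)
print(np.round(krr_predict(X, alpha, X[:3], sigma=0.7), 3), np.round(y[:3], 3))
```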
stat.ML cs.LG math.ST stat.TH | null | 1404.1356 | null | null | http://arxiv.org/pdf/1404.1356v5 | 2016-09-13T14:23:48Z | 2014-04-04T19:33:55Z | Optimal learning with Bernstein Online Aggregation | We introduce a new recursive aggregation procedure called Bernstein Online
Aggregation (BOA). The exponential weights include an accuracy term and a
second order term that is a proxy of the quadratic variation as in Hazan and
Kale (2010). This second term stabilizes the procedure that is optimal in
different senses. We first obtain optimal regret bounds in the deterministic
context. Then, an adaptive version is the first exponential weights algorithm
that exhibits a second order bound with excess losses that appears first in
Gaillard et al. (2014). The second order bounds in the deterministic context
are extended to a general stochastic context using the cumulative predictive
risk. Such conversion provides the main result of the paper, an inequality of a
novel type comparing the procedure with any deterministic aggregation procedure
for an integrated criterion. Then we obtain an observable estimate of the excess
of risk of the BOA procedure. To assert the optimality, we consider finally the
iid case for strongly convex and Lipschitz continuous losses and we prove that
the optimal rate of aggregation of Tsybakov (2003) is achieved. The batch
version of the BOA procedure is then the first adaptive explicit algorithm that
satisfies an optimal oracle inequality with high probability.
| [
"Olivier Wintenberger (LSTA)",
"['Olivier Wintenberger']"
]
|
cs.LG math.NA stat.ML | null | 1404.1377 | null | null | http://arxiv.org/pdf/1404.1377v2 | 2014-04-16T19:09:09Z | 2014-04-04T20:00:30Z | Orthogonal Rank-One Matrix Pursuit for Low Rank Matrix Completion | In this paper, we propose an efficient and scalable low rank matrix
completion algorithm. The key idea is to extend orthogonal matching pursuit
method from the vector case to the matrix case. We further propose an economic
version of our algorithm by introducing a novel weight updating rule to reduce
the time and storage complexity. Both versions are computationally inexpensive
for each matrix pursuit iteration, and find satisfactory results in a few
iterations. Another advantage of our proposed algorithm is that it has only one
tunable parameter, which is the rank. It is easy to understand and to use by
the user. This becomes especially important in large-scale learning problems.
In addition, we rigorously show that both versions achieve a linear convergence
rate, which is significantly better than the previously known results. We also
empirically compare the proposed algorithms with several state-of-the-art
matrix completion algorithms on many real-world datasets, including the
large-scale recommendation dataset Netflix as well as the MovieLens datasets.
Numerical results show that our proposed algorithm is more efficient than
competing algorithms while achieving similar or better prediction performance.
| [
"['Zheng Wang' 'Ming-Jun Lai' 'Zhaosong Lu' 'Wei Fan' 'Hasan Davulcu'\n 'Jieping Ye']",
"Zheng Wang, Ming-Jun Lai, Zhaosong Lu, Wei Fan, Hasan Davulcu and\n Jieping Ye"
]
|
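A simplified sketch of the greedy pursuit idea as we read the abstract: take the top singular pair of the observed residual as the next rank-one atom, then refit all atom weights by least squares on the observed entries (the paper's economic variant updates weights more cheaply; everything below is a toy setup):

```python
import numpy as np

def rank_one_pursuit(M, mask, iters=10):
    """Greedy rank-one matrix pursuit (simplified sketch) for matrix
    completion on the entries where mask is True."""
    bases, R = [], mask * M
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        bases.append(np.outer(U[:, 0], Vt[0]))           # new rank-one atom
        A = np.stack([(mask * B).ravel() for B in bases], axis=1)
        w, *_ = np.linalg.lstsq(A, (mask * M).ravel(), rcond=None)
        X = sum(wi * B for wi, B in zip(w, bases))       # current estimate
        R = mask * (M - X)                               # observed residual
    return X

rng = np.random.default_rng(3)
M = rng.normal(size=(30, 4)) @ rng.normal(size=(4, 30))  # rank-4 target
mask = rng.random(M.shape) < 0.5                          # observed entries
X = rank_one_pursuit(M, mask, iters=8)
print(np.abs(X - M)[~mask].mean())                        # held-out error
```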
cs.LG | null | 1404.1491 | null | null | http://arxiv.org/pdf/1404.1491v1 | 2014-03-24T16:05:26Z | 2014-03-24T16:05:26Z | An Efficient Feature Selection in Classification of Audio Files | In this paper we have focused on an efficient feature selection method in
classification of audio files. The main objective is feature selection and
extraction. We have selected a set of features for further analysis, which
represents the elements in feature vector. By extraction method we can compute
a numerical representation that can be used to characterize the audio using the
existing toolbox. In this study Gain Ratio (GR) is used as a feature selection
measure. GR is used to select splitting attribute which will separate the
tuples into different classes. The pulse clarity is considered as a subjective
measure and it is used to calculate the gain of features of audio files. The
splitting criterion is employed in the application to identify the class or the
music genre of a specific audio file from testing database. Experimental
results indicate that by using GR the application can produce a satisfactory
result for music genre classification. After dimensionality reduction, the best
three features are selected from the various features of the audio files, and
with this technique we obtain a successful classification rate of more than 90%.
| [
"['Jayita Mitra' 'Diganta Saha']",
"Jayita Mitra and Diganta Saha"
]
|
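The Gain Ratio criterion used above is easy to state concretely; a sketch for a discrete-valued feature (continuous features would first be binned or thresholded; the toy feature/label vectors are ours, and pulse clarity and the audio toolbox are outside this snippet's scope):

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def gain_ratio(feature, labels):
    """Gain Ratio = information gain of the split / split information."""
    values, counts = np.unique(feature, return_counts=True)
    w = counts / counts.sum()
    cond_H = sum(wi * entropy(labels[feature == v])
                 for wi, v in zip(w, values))
    gain = entropy(labels) - cond_H          # information gain
    split_info = -(w * np.log2(w)).sum()     # penalizes many-valued splits
    return gain / split_info if split_info > 0 else 0.0

# toy example: a 3-valued feature vs. a binary genre label
feature = np.array([0, 0, 1, 1, 2, 2, 2, 0])
labels = np.array([0, 0, 1, 1, 1, 0, 1, 0])
print(round(gain_ratio(feature, labels), 3))
```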
stat.ML cs.LG | null | 1404.1492 | null | null | http://arxiv.org/pdf/1404.1492v1 | 2014-04-05T17:09:05Z | 2014-04-05T17:09:05Z | Ensemble Committees for Stock Return Classification and Prediction | This paper considers a portfolio trading strategy formulated by algorithms in
the field of machine learning. The profitability of the strategy is measured by
the algorithm's capability to consistently and accurately identify stock
indices with positive or negative returns, and to generate a preferred
portfolio allocation on the basis of a learned model. Stocks are characterized
by time series data sets consisting of technical variables that reflect market
conditions in a previous time interval, which are utilized to produce binary
classification decisions in subsequent intervals. The learned model is
constructed as a committee of random forest classifiers, a non-linear support
vector machine classifier, a relevance vector machine classifier, and a
constituent ensemble of k-nearest neighbors classifiers. The Global Industry
Classification Standard (GICS) is used to explore the ensemble model's efficacy
within the context of various fields of investment including Energy, Materials,
Financials, and Information Technology. Data from 2006 to 2012, inclusive, are
considered, which are chosen for providing a range of market circumstances for
evaluating the model. The model is observed to achieve an accuracy of
approximately 70% when predicting stock price returns three months in advance.
| [
"James Brofos",
"['James Brofos']"
]
|
cs.LG stat.ML | null | 1404.1504 | null | null | http://arxiv.org/pdf/1404.1504v1 | 2014-04-05T18:58:12Z | 2014-04-05T18:58:12Z | A Compression Technique for Analyzing Disagreement-Based Active Learning | We introduce a new and improved characterization of the label complexity of
disagreement-based active learning, in which the leading quantity is the
version space compression set size. This quantity is defined as the size of the
smallest subset of the training data that induces the same version space. We
show various applications of the new characterization, including a tight
analysis of CAL and refined label complexity bounds for linear separators under
mixtures of Gaussians and axis-aligned rectangles under product densities. The
version space compression set size, as well as the new characterization of the
label complexity, can be naturally extended to agnostic learning problems, for
which we show new speedup results for two well known active learning
algorithms.
| [
"Yair Wiener, Steve Hanneke, Ran El-Yaniv",
"['Yair Wiener' 'Steve Hanneke' 'Ran El-Yaniv']"
]
|
cs.LG cs.CL | null | 1404.1521 | null | null | http://arxiv.org/pdf/1404.1521v3 | 2014-04-15T13:18:37Z | 2014-04-05T21:25:54Z | Exploring the power of GPU's for training Polyglot language models | One of the major research trends currently is the evolution of heterogeneous
parallel computing. GP-GPU computing is being widely used and several
applications have been designed to exploit the massive parallelism that
GP-GPU's have to offer. While GPU's have always been widely used in areas of
computer vision for image processing, little has been done to investigate
whether the massive parallelism provided by GP-GPU's can be utilized
effectively for Natural Language Processing(NLP) tasks. In this work, we
investigate and explore the power of GP-GPU's in the task of learning language
models. More specifically, we investigate the performance of training Polyglot
language models using deep belief neural networks. We evaluate the performance
of training the model on the GPU and present optimizations that boost the
performance on the GPU. One of the key optimizations we propose increases the
performance of a function involved in calculating and updating the gradient by
approximately 50 times on the GPU for sufficiently large batch sizes. We show
that with the above optimizations, the GP-GPU's performance on the task
increases by a factor of approximately 3-4. The optimizations we made are
generic Theano optimizations and hence potentially boost the performance of
other models which rely on these operations. We also show that these
optimizations result in the GPU's performance at this task being now comparable to that on
the CPU. We conclude by presenting a thorough evaluation of the applicability
of GP-GPU's for this task and highlight the factors limiting the performance of
training a Polyglot model on the GPU.
| [
"Vivek Kulkarni, Rami Al-Rfou', Bryan Perozzi, Steven Skiena",
"['Vivek Kulkarni' \"Rami Al-Rfou'\" 'Bryan Perozzi' 'Steven Skiena']"
]
|
cs.LG cs.NE | 10.1109/WCCCT.2014.69 | 1404.1559 | null | null | http://arxiv.org/abs/1404.1559v1 | 2014-04-06T09:50:45Z | 2014-04-06T09:50:45Z | Sparse Coding: A Deep Learning using Unlabeled Data for High - Level
Representation | The sparse coding algorithm is a learning algorithm, mainly for unsupervised
feature learning, aimed at finding succinct, high-level representations of
inputs, and it has successfully paved a way for deep learning. Our objective is
to use high-level representations of data in the form of unlabeled categories
to help the unsupervised learning task. Compared with labeled data, unlabeled
data is easier to acquire because, unlike labeled data, it does not require
particular class labels. This makes deep learning broader and more applicable
to practical problems and learning. The main problem with sparse coding is
that it uses a quadratic loss function and a Gaussian noise model, so its
performance is very poor when binary, integer-valued or other non-Gaussian
data is applied. Thus we first propose an algorithm for solving the L1-
regularized convex optimization problem to allow high-level representation of
unlabeled data. Through this we derive an optimal solution describing an
approach to a deep learning algorithm using sparse coding.
| [
"R. Vidya, Dr.G.M.Nasira, R. P. Jaia Priyankka",
"['R. Vidya' 'Dr. G. M. Nasira' 'R. P. Jaia Priyankka']"
]
|
cs.CV cs.LG | 10.1109/CVPR.2014.253 | 1404.1561 | null | null | http://arxiv.org/abs/1404.1561v2 | 2014-05-28T00:25:43Z | 2014-04-06T10:42:36Z | Fast Supervised Hashing with Decision Trees for High-Dimensional Data | Supervised hashing aims to map the original features to compact binary codes
that are able to preserve label based similarity in the Hamming space.
Non-linear hash functions have demonstrated the advantage over linear ones due
to their powerful generalization capability. In the literature, kernel
functions are typically used to achieve non-linearity in hashing, which achieve
encouraging retrieval performance at the price of slow evaluation and training
time. Here we propose to use boosted decision trees for achieving non-linearity
in hashing, which are fast to train and evaluate, hence more suitable for
hashing with high dimensional data. In our approach, we first propose
sub-modular formulations for the hashing binary code inference problem and an
efficient GraphCut based block search method for solving large-scale inference.
Then we learn hash functions by training boosted decision trees to fit the
binary codes. Experiments demonstrate that our proposed method significantly
outperforms most state-of-the-art methods in retrieval precision and training
time. Especially for high-dimensional data, our method is orders of magnitude
faster than many methods in terms of training time.
| [
"['Guosheng Lin' 'Chunhua Shen' 'Qinfeng Shi' 'Anton van den Hengel'\n 'David Suter']",
"Guosheng Lin, Chunhua Shen, Qinfeng Shi, Anton van den Hengel, David\n Suter"
]
|
math.OC cs.LG cs.SY | null | 1404.1592 | null | null | http://arxiv.org/pdf/1404.1592v2 | 2014-07-29T15:48:24Z | 2014-04-06T15:59:05Z | The Power of Online Learning in Stochastic Network Optimization | In this paper, we investigate the power of online learning in stochastic
network optimization with unknown system statistics {\it a priori}. We are
interested in understanding how information and learning can be efficiently
incorporated into system control techniques, and what are the fundamental
benefits of doing so. We propose two \emph{Online Learning-Aided Control}
techniques, $\mathtt{OLAC}$ and $\mathtt{OLAC2}$, that explicitly utilize the
past system information in current system control via a learning procedure
called \emph{dual learning}. We prove strong performance guarantees of the
proposed algorithms: $\mathtt{OLAC}$ and $\mathtt{OLAC2}$ achieve the
near-optimal $[O(\epsilon), O([\log(1/\epsilon)]^2)]$ utility-delay tradeoff
and $\mathtt{OLAC2}$ possesses an $O(\epsilon^{-2/3})$ convergence time.
$\mathtt{OLAC}$ and $\mathtt{OLAC2}$ are probably the first algorithms that
simultaneously possess explicit near-optimal delay guarantee and sub-linear
convergence time. Simulation results also confirm the superior performance of
the proposed algorithms in practice. To the best of our knowledge, our attempt
is the first to explicitly incorporate online learning into stochastic network
optimization and to demonstrate its power in both theory and practice.
| [
"['Longbo Huang' 'Xin Liu' 'Xiaohong Hao']",
"Longbo Huang, Xin Liu, Xiaohong Hao"
]
|
cs.NE cs.LG | null | 1404.1614 | null | null | http://arxiv.org/pdf/1404.1614v1 | 2014-04-06T20:10:37Z | 2014-04-06T20:10:37Z | A Denoising Autoencoder that Guides Stochastic Search | An algorithm is described that adaptively learns a non-linear mutation
distribution. It works by training a denoising autoencoder (DA) online at each
generation of a genetic algorithm to reconstruct a slowly decaying memory of
the best genotypes so far. A compressed hidden layer forces the autoencoder to
learn hidden features in the training set that can be used to accelerate search
on novel problems with similar structure. Its output neurons define a
probability distribution that we sample from to produce offspring solutions.
The algorithm outperforms a canonical genetic algorithm on several
combinatorial optimisation problems, e.g. multidimensional 0/1 knapsack
problem, MAXSAT, HIFF, and on parameter optimisation problems, e.g. Rastrigin
and Rosenbrock functions.
| [
"['Alexander W. Churchill' 'Siddharth Sigtia' 'Chrisantha Fernando']",
"Alexander W. Churchill and Siddharth Sigtia and Chrisantha Fernando"
]
|
cs.NE cs.LG q-bio.NC | null | 1404.1999 | null | null | http://arxiv.org/pdf/1404.1999v1 | 2014-04-08T03:41:50Z | 2014-04-08T03:41:50Z | Notes on Generalized Linear Models of Neurons | Experimental neuroscience increasingly requires tractable models for
analyzing and predicting the behavior of neurons and networks. The generalized
linear model (GLM) is an increasingly popular statistical framework for
analyzing neural data that is flexible, exhibits rich dynamic behavior and is
computationally tractable (Paninski, 2004; Pillow et al., 2008; Truccolo et
al., 2005). What follows is a brief summary of the primary equations governing
the application of GLM's to spike trains with a few sentences linking this work
to the larger statistical literature. Latter sections include extensions of a
basic GLM to model spatio-temporal receptive fields as well as network activity
in an arbitrary number of neurons.
| [
"Jonathon Shlens",
"['Jonathon Shlens']"
]
|
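The "primary equations" summarized in these notes reduce, for spike counts, to a Poisson GLM with an exponential link; a minimal maximum-likelihood sketch on simulated data (the design matrix and true weights below are made up):

```python
import numpy as np

def fit_poisson_glm(X, spikes, lr=0.1, iters=500):
    """Poisson GLM for spike counts: rate = exp(X @ w), fit by gradient
    ascent on the log-likelihood sum(y * Xw - exp(Xw))."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        rate = np.exp(X @ w)
        grad = X.T @ (spikes - rate)    # d logL / dw
        w += lr * grad / len(X)
    return w

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 5))               # stimulus design matrix
w_true = np.array([0.5, -0.3, 0.2, 0.0, 0.1])
spikes = rng.poisson(np.exp(X @ w_true))     # simulated spike counts
print(np.round(fit_poisson_glm(X, spikes), 2))
```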
cs.LG q-bio.NC | null | 1404.2078 | null | null | http://arxiv.org/pdf/1404.2078v2 | 2015-02-03T14:09:51Z | 2014-04-08T10:26:27Z | Optimistic Risk Perception in the Temporal Difference error Explains the
Relation between Risk-taking, Gambling, Sensation-seeking and Low Fear | Understanding the affective, cognitive and behavioural processes involved in
risk taking is essential for treatment and for setting environmental conditions
to limit damage. Using Temporal Difference Reinforcement Learning (TDRL) we
computationally investigated the effect of optimism in risk perception in a
variety of goal-oriented tasks. Optimism in risk perception was studied by
varying the calculation of the Temporal Difference error, i.e., delta, in three
ways: realistic (stochastically correct), optimistic (assuming action control),
and overly optimistic (assuming outcome control). We show that for the gambling
task individuals with 'healthy' perception of control, i.e., action optimism,
do not develop gambling behaviour while individuals with 'unhealthy' perception
of control, i.e., outcome optimism, do. We show that high intensity of
sensations and low levels of fear co-occur due to optimistic risk perception.
We found that overly optimistic risk perception (outcome optimism) results in
risk taking and in persistent gambling behaviour in addition to high intensity
of sensations. We discuss how our results replicate risk-taking related
phenomena.
| [
"Joost Broekens and Tim Baarslag",
"['Joost Broekens' 'Tim Baarslag']"
]
|
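A schematic of how the TD error delta can encode different risk perceptions, under our paraphrase of the abstract (the function names and the toy state space are ours, not the authors'): the realistic variant backs up from the sampled next state, while the optimistic variants replace the sampled outcome with the best one the agent believes it controls.

```python
import numpy as np

def td0_step(V, s, r, s_next, alpha=0.1, gamma=0.95, optimistic=False,
             candidate_next_states=()):
    """One TD(0) update; `optimistic=True` models perceived control by
    backing up from the best believed next state instead of the sample."""
    if optimistic and candidate_next_states:
        next_value = max(V[s2] for s2 in candidate_next_states)
    else:
        next_value = V[s_next]                 # realistic (sampled) outcome
    delta = r + gamma * next_value - V[s]      # temporal-difference error
    V[s] += alpha * delta
    return delta

V = np.zeros(3)
print(td0_step(V, s=0, r=1.0, s_next=1))       # realistic update
print(td0_step(V, s=0, r=1.0, s_next=1, optimistic=True,
               candidate_next_states=[1, 2]))  # optimistic update
```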
cs.LG stat.ML | null | 1404.2083 | null | null | http://arxiv.org/pdf/1404.2083v1 | 2014-04-08T10:49:08Z | 2014-04-08T10:49:08Z | Efficiency of conformalized ridge regression | Conformal prediction is a method of producing prediction sets that can be
applied on top of a wide range of prediction algorithms. The method has a
guaranteed coverage probability under the standard IID assumption regardless of
whether the assumptions (often considerably more restrictive) of the underlying
algorithm are satisfied. However, for the method to be really useful it is
desirable that in the case where the assumptions of the underlying algorithm
are satisfied, the conformal predictor loses little in efficiency as compared
with the underlying algorithm (while, being a conformal predictor, it has the
stronger guarantee of validity). In this paper we explore the degree to which
this additional requirement of efficiency is satisfied in the case of Bayesian
ridge regression; we find that asymptotically conformal prediction sets differ
little from ridge regression prediction intervals when the standard Bayesian
assumptions are satisfied.
| [
"Evgeny Burnaev and Vladimir Vovk",
"['Evgeny Burnaev' 'Vladimir Vovk']"
]
|
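For concreteness, here is a split-conformal variant on top of ridge regression, a simpler relative of the full conformal predictor analyzed in the paper (the data and parameters are made up):

```python
import numpy as np

def split_conformal_ridge(X, y, X_test, alpha=0.1, lam=1.0):
    """Split-conformal prediction intervals around ridge predictions:
    fit on one half, calibrate on held-out absolute residuals."""
    n = len(X) // 2
    Xt, yt, Xc, yc = X[:n], y[:n], X[n:], y[n:]
    w = np.linalg.solve(Xt.T @ Xt + lam * np.eye(X.shape[1]), Xt.T @ yt)
    scores = np.abs(yc - Xc @ w)                     # nonconformity scores
    k = int(np.ceil((1 - alpha) * (len(scores) + 1))) - 1
    q = np.sort(scores)[min(k, len(scores) - 1)]     # conformal quantile
    pred = X_test @ w
    return pred - q, pred + q

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.3, 200)
lo, hi = split_conformal_ridge(X, y, X[:5])
print(np.round(lo, 2), np.round(hi, 2))
```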
cs.RO cs.LG | 10.1109/ROMAN.2014.6926328 | 1404.2229 | null | null | http://arxiv.org/abs/1404.2229v3 | 2014-06-04T23:59:27Z | 2014-04-08T17:44:40Z | Towards the Safety of Human-in-the-Loop Robotics: Challenges and
Opportunities for Safety Assurance of Robotic Co-Workers | The success of the human-robot co-worker team in a flexible manufacturing
environment where robots learn from demonstration heavily relies on the correct
and safe operation of the robot. How this can be achieved is a challenge that
requires addressing both technical as well as human-centric research questions.
In this paper we discuss the state of the art in safety assurance, existing as
well as emerging standards in this area, and the need for new approaches to
safety assurance in the context of learning machines. We then focus on robotic
learning from demonstration, the challenges these techniques pose to safety
assurance and indicate opportunities to integrate safety considerations into
algorithms "by design". Finally, from a human-centric perspective, we stipulate
that, to achieve high levels of safety and ultimately trust, the robotic
co-worker must meet the innate expectations of the humans it works with. It is
our aim to stimulate a discussion focused on the safety aspects of
human-in-the-loop robotics, and to foster multidisciplinary collaboration to
address the research challenges identified.
| [
"Kerstin Eder, Chris Harper, Ute Leonards",
"['Kerstin Eder' 'Chris Harper' 'Ute Leonards']"
]
|
cs.LG stat.ML | null | 1404.2353 | null | null | http://arxiv.org/pdf/1404.2353v1 | 2014-04-09T02:11:17Z | 2014-04-09T02:11:17Z | Power System Parameters Forecasting Using Hilbert-Huang Transform and
Machine Learning | A novel hybrid data-driven approach is developed for forecasting power system
parameters with the goal of increasing the efficiency of short-term forecasting
studies for non-stationary time-series. The proposed approach is based on mode
decomposition and a feature analysis of initial retrospective data using the
Hilbert-Huang transform and machine learning algorithms. The random forests and
gradient boosting trees learning techniques were examined. The decision tree
techniques were used to rank the importance of variables employed in the
forecasting models. The Mean Decrease Gini index is employed as an impurity
function. The resulting hybrid forecasting models employ the radial basis
function neural network and support vector regression. Apart from the
introduction and references, the paper is organized as follows. Section 2
presents the background and a review of several approaches for short-term
forecasting of power system parameters. In the third section, a hybrid machine
learning-based algorithm using the Hilbert-Huang transform is developed for
short-term forecasting of power system parameters. The fourth section
describes the decision tree learning algorithms used for ranking variable
importance. Finally, in section six, experimental results are presented for
the following electric power problems: active power flow forecasting,
electricity price forecasting, and wind speed and direction forecasting.
| [
"['Victor Kurbatsky' 'Nikita Tomin' 'Vadim Spiryaev' 'Paul Leahy'\n 'Denis Sidorov' 'Alexei Zhukov']",
"Victor Kurbatsky, Nikita Tomin, Vadim Spiryaev, Paul Leahy, Denis\n Sidorov and Alexei Zhukov"
]
|
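The variable-ranking step described above (impurity-based importance from tree ensembles) looks like this in scikit-learn; note that `feature_importances_` measures mean impurity decrease (the Gini index plays this role for classifiers, variance reduction for regressors), and the power-system-flavored data here is synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 6))                  # e.g., lagged loads, wind speeds
y = 2 * X[:, 0] - X[:, 2] + 0.1 * rng.normal(size=500)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for i in np.argsort(rf.feature_importances_)[::-1]:
    print(f"feature {i}: importance {rf.feature_importances_[i]:.3f}")
```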
cs.DC cs.AI cs.LG stat.ML | null | 1404.2644 | null | null | http://arxiv.org/pdf/1404.2644v3 | 2015-01-12T15:14:19Z | 2014-04-09T22:16:39Z | A Distributed Frank-Wolfe Algorithm for Communication-Efficient Sparse
Learning | Learning sparse combinations is a frequent theme in machine learning. In this
paper, we study its associated optimization problem in the distributed setting
where the elements to be combined are not centrally located but spread over a
network. We address the key challenges of balancing communication costs and
optimization errors. To this end, we propose a distributed Frank-Wolfe (dFW)
algorithm. We obtain theoretical guarantees on the optimization error
$\epsilon$ and communication cost that do not depend on the total number of
combining elements. We further show that the communication cost of dFW is
optimal by deriving a lower-bound on the communication cost required to
construct an $\epsilon$-approximate solution. We validate our theoretical
analysis with empirical studies on synthetic and real-world data, which
demonstrate that dFW outperforms both baselines and competing methods. We also
study the performance of dFW when the conditions of our analysis are relaxed,
and show that dFW is fairly robust.
| [
"['Aurélien Bellet' 'Yingyu Liang' 'Alireza Bagheri Garakani'\n 'Maria-Florina Balcan' 'Fei Sha']",
"Aur\\'elien Bellet, Yingyu Liang, Alireza Bagheri Garakani,\n Maria-Florina Balcan, Fei Sha"
]
|
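A centralized Frank-Wolfe sketch on an l1-constrained least-squares problem, which illustrates why the method is communication-friendly in the distributed setting: the linear minimization step returns a single atom (one index and a sign), so that is all a node would need to broadcast per iteration (the problem instance below is made up):

```python
import numpy as np

def frank_wolfe_lasso(A, b, tau=1.0, iters=200):
    """Frank-Wolfe for min ||Ax - b||^2 subject to ||x||_1 <= tau."""
    x = np.zeros(A.shape[1])
    for t in range(iters):
        grad = 2 * A.T @ (A @ x - b)
        i = np.argmax(np.abs(grad))            # LMO over the l1 ball
        s = np.zeros_like(x)
        s[i] = -tau * np.sign(grad[i])         # single-atom vertex
        gamma = 2.0 / (t + 2)                  # standard step size
        x = (1 - gamma) * x + gamma * s
    return x

rng = np.random.default_rng(7)
A = rng.normal(size=(100, 50))
x_true = np.zeros(50)
x_true[[3, 17]] = [0.6, -0.4]                  # ||x_true||_1 = 1
b = A @ x_true
print(np.round(frank_wolfe_lasso(A, b, tau=1.0)[[3, 17]], 2))
```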
math.OC cs.LG stat.ML | null | 1404.2655 | null | null | http://arxiv.org/pdf/1404.2655v1 | 2014-04-10T00:19:17Z | 2014-04-10T00:19:17Z | Open problem: Tightness of maximum likelihood semidefinite relaxations | We have observed an interesting, yet unexplained, phenomenon: Semidefinite
programming (SDP) based relaxations of maximum likelihood estimators (MLE) tend
to be tight in recovery problems with noisy data, even when MLE cannot exactly
recover the ground truth. Several results establish tightness of SDP based
relaxations in the regime where exact recovery from MLE is possible. However,
to the best of our knowledge, their tightness is not understood beyond this
regime. As an illustrative example, we focus on the generalized Procrustes
problem.
| [
"Afonso S. Bandeira and Yuehaw Khoo and Amit Singer",
"['Afonso S. Bandeira' 'Yuehaw Khoo' 'Amit Singer']"
]
|
cs.DC cs.CR cs.LG | 10.5121/ijdkp.2014.4203 | 1404.2772 | null | null | http://arxiv.org/abs/1404.2772v1 | 2014-04-10T11:22:17Z | 2014-04-10T11:22:17Z | A New Clustering Approach for Anomaly Intrusion Detection | Recent advances in technology have made our work easier compared to earlier
times. Computer networks are growing day by day, but the security of computers
and networks has always been a major concern for organizations varying from
smaller to larger enterprises. It is true that organizations are aware of the
possible threats and attacks, so they always prepare for the safer side, but
due to some loopholes attackers are able to mount attacks. Intrusion detection
is one of the major fields of research, and researchers are trying to find new
algorithms for detecting intrusions. Clustering techniques from data mining
are an interesting area of research for detecting possible intrusions and
attacks. This paper presents a new clustering approach for anomaly intrusion
detection based on the K-medoids method of clustering and certain
modifications of it. The proposed algorithm is able to achieve a high
detection rate and overcomes the disadvantages of the K-means algorithm.
| [
"Ravi Ranjan and G. Sahoo",
"['Ravi Ranjan' 'G. Sahoo']"
]
|
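A plain PAM-style K-medoids baseline, the starting point the paper modifies (the data and the anomaly heuristic at the end are our illustration):

```python
import numpy as np

def k_medoids(X, k, iters=20, seed=0):
    """K-medoids: assign points to the nearest medoid, then move each
    medoid to the cluster point with minimal total intra-cluster distance."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)
        new = []
        for j in range(k):
            members = np.flatnonzero(labels == j)
            new.append(members[D[np.ix_(members, members)].sum(0).argmin()])
        if np.array_equal(np.sort(new), np.sort(medoids)):
            break
        medoids = np.array(new)
    return medoids, labels

rng = np.random.default_rng(8)
X = np.vstack([rng.normal(0, 0.5, (40, 2)), rng.normal(4, 0.5, (40, 2))])
medoids, labels = k_medoids(X, k=2)
# points far from every medoid are candidate anomalies
dist_to_medoid = np.linalg.norm(X - X[medoids][labels], axis=1)
print(X[medoids], round(dist_to_medoid.max(), 3))
```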
stat.AP cs.LG cs.SI | null | 1404.2885 | null | null | http://arxiv.org/pdf/1404.2885v1 | 2014-04-08T22:04:53Z | 2014-04-08T22:04:53Z | A Networks and Machine Learning Approach to Determine the Best College
Coaches of the 20th-21st Centuries | Our objective is to find the five best college sports coaches of the past century
for three different sports. We decided to look at men's basketball, football,
and baseball. We wanted to use an approach that could definitively determine
team skill from the games played, and then use a machine-learning algorithm to
calculate the correct coach skills for each team in a given year. We created a
networks-based model to calculate team skill from historical game data. A
digraph was created for each year in each sport. Nodes represented teams, and
edges represented a game played between two teams. The arrowhead pointed
towards the losing team. We calculated the team skill of each graph using a
right-hand eigenvector centrality measure. This way, teams that beat good teams
will be ranked higher than teams that beat mediocre teams. The eigenvector
centrality rankings for most years were well correlated with tournament
performance and poll-based rankings. We assumed that the relationship between
coach skill $C_s$, player skill $P_s$, and team skill $T_s$ was $C_s \cdot P_s
= T_s$. We then created a function to describe the probability that a given
score difference would occur based on player skill and coach skill. We
multiplied the probabilities of all edges in the network together to find the
probability that the correct network would occur with any given player skill
and coach skill matrix. We were able to determine player skill as a function of
team skill and coach skill, eliminating the need to optimize two unknown
matrices. The top five coaches in each year were noted, and the top coach of
all time was calculated by dividing the number of times that coach ranked in
the yearly top five by the years said coach had been active.
| [
"['Tian-Shun Jiang' 'Zachary Polizzi' 'Christopher Yuan']",
"Tian-Shun Jiang, Zachary Polizzi, Christopher Yuan"
]
|
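The team-skill step above (right-eigenvector centrality on the win digraph) can be sketched with power iteration; the toy season and the damping constant below are our additions:

```python
import numpy as np

def team_skill(games, n_teams, iters=100):
    """Eigenvector centrality on the win digraph: A[winner, loser] means
    credit flows from losers to winners, so beating strong teams raises
    a team's score more than beating weak ones."""
    A = np.zeros((n_teams, n_teams))
    for winner, loser in games:
        A[winner, loser] += 1.0          # edge points toward the losing team
    A += 1e-3                            # damping keeps the matrix irreducible
    v = np.ones(n_teams) / n_teams
    for _ in range(iters):               # power iteration
        v = A @ v
        v /= v.sum()
    return v

# toy season: team 0 beats 1 and 2; team 1 beats 2; team 2 beats 3 twice
games = [(0, 1), (0, 2), (1, 2), (2, 3), (2, 3)]
print(np.round(team_skill(games, n_teams=4), 3))
```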
cs.CV cs.LG cs.NE | null | 1404.2903 | null | null | http://arxiv.org/pdf/1404.2903v1 | 2014-04-02T11:38:35Z | 2014-04-02T11:38:35Z | Thoughts on a Recursive Classifier Graph: a Multiclass Network for Deep
Object Recognition | We propose a general multi-class visual recognition model, termed the
Classifier Graph, which aims to generalize and integrate ideas from many of
today's successful hierarchical recognition approaches. Our graph-based model
has the advantage of enabling rich interactions between classes from different
levels of interpretation and abstraction. The proposed multi-class system is
efficiently learned using step by step updates. The structure consists of
simple logistic linear layers with inputs from features that are automatically
selected from a large pool. Each newly learned classifier becomes a potential
new feature. Thus, our feature pool can consist both of initial manually
designed features as well as learned classifiers from previous steps (graph
nodes), each copied many times at different scales and locations. In this
manner we can learn and grow both a deep, complex graph of classifiers and a
rich pool of features at different levels of abstraction and interpretation.
Our proposed graph of classifiers becomes a multi-class system with a recursive
structure, suitable for deep detection and recognition of several classes
simultaneously.
| [
"['Marius Leordeanu' 'Rahul Sukthankar']",
"Marius Leordeanu and Rahul Sukthankar"
]
|
cs.LG | null | 1404.2948 | null | null | http://arxiv.org/pdf/1404.2948v1 | 2014-04-10T20:49:35Z | 2014-04-10T20:49:35Z | Gradient-based Laplacian Feature Selection | Analysis of high dimensional noisy data is of essence across a variety of
research fields. Feature selection techniques are designed to find the relevant
feature subset that can facilitate classification or pattern detection.
Traditional (supervised) feature selection methods utilize label information to
guide the identification of relevant feature subsets. In this paper, however,
we consider the unsupervised feature selection problem. Without the label
information, it is particularly difficult to identify a small set of relevant
features due to the noisy nature of real-world data which corrupts the
intrinsic structure of the data. Our Gradient-based Laplacian Feature Selection
(GLFS) selects important features by minimizing the variance of the Laplacian
regularized least squares regression model. With $\ell_1$ relaxation, GLFS can
find a sparse subset of features that is relevant to the Laplacian manifolds.
Extensive experiments on simulated, three real-world object recognition and two
computational biology datasets, have illustrated the power and superior
performance of our approach over multiple state-of-the-art unsupervised feature
selection methods. Additionally, we show that GLFS selects a sparser set of
more relevant features in a supervised setting outperforming the popular
elastic net methodology.
| [
"Bo Wang and Anna Goldenberg",
"['Bo Wang' 'Anna Goldenberg']"
]
|
cs.LG stat.ML | null | 1404.2986 | null | null | http://arxiv.org/pdf/1404.2986v1 | 2014-04-11T02:37:11Z | 2014-04-11T02:37:11Z | A Tutorial on Independent Component Analysis | Independent component analysis (ICA) has become a standard data analysis
technique applied to an array of problems in signal processing and machine
learning. This tutorial provides an introduction to ICA based on linear algebra
formulating an intuition for ICA from first principles. The goal of this
tutorial is to provide a solid foundation on this advanced topic so that one
might learn the motivation behind ICA, learn why and when to apply this
technique and in the process gain an introduction to this exciting field of
active research.
| [
"Jonathon Shlens",
"['Jonathon Shlens']"
]
|
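A one-unit FastICA sketch in the spirit of the tutorial: whiten the data, then run tanh fixed-point iterations to find one maximally non-Gaussian direction (the mixing matrix and sources below are synthetic):

```python
import numpy as np

def fastica_one_unit(X, iters=200, seed=0):
    """One-unit FastICA: whitening followed by the fixed-point update
    w+ = E[z g(w'z)] - E[g'(w'z)] w with g = tanh."""
    Xc = X - X.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(Xc.T))          # whitening transform
    Z = Xc @ E / np.sqrt(d)
    rng = np.random.default_rng(seed)
    w = rng.normal(size=Z.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        wz = Z @ w
        w_new = (Z * np.tanh(wz)[:, None]).mean(0) \
                - (1 - np.tanh(wz) ** 2).mean() * w
        w = w_new / np.linalg.norm(w_new)
    return Z @ w                                  # one recovered component

rng = np.random.default_rng(9)
S = np.c_[np.sign(np.sin(np.linspace(0, 30, 1000))),  # square-ish source
          rng.laplace(size=1000)]                     # heavy-tailed source
X = S @ np.array([[1.0, 0.4], [0.6, 1.0]])            # mixed observations
print(np.round(fastica_one_unit(X)[:5], 3))
```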
cs.CV cond-mat.dis-nn cond-mat.stat-mech cs.LG stat.ML | 10.7566/JPSJ.83.124002 | 1404.3012 | null | null | http://arxiv.org/abs/1404.3012v5 | 2014-08-18T04:45:26Z | 2014-04-11T06:31:03Z | Bayesian image segmentations by Potts prior and loopy belief propagation | This paper presents a Bayesian image segmentation model based on Potts prior
and loopy belief propagation. The proposed Bayesian model involves several
terms, including the pairwise interactions of Potts models, and the average
vectors and covariance matrices of Gaussian distributions in color image modeling.
These terms are often referred to as hyperparameters in statistical machine
learning theory. In order to determine these hyperparameters, we propose a new
scheme for hyperparameter estimation based on conditional maximization of
entropy in the Potts prior. The algorithm is given based on loopy belief
propagation. In addition, we compare our conditional maximum entropy framework
with the conventional maximum likelihood framework, and also clarify how the
first order phase transitions in LBP's for Potts models influence our
hyperparameter estimation procedures.
| [
"['Kazuyuki Tanaka' 'Shun Kataoka' 'Muneki Yasuda' 'Yuji Waizumi'\n 'Chiou-Ting Hsu']",
"Kazuyuki Tanaka, Shun Kataoka, Muneki Yasuda, Yuji Waizumi and\n Chiou-Ting Hsu"
]
|
cs.SI cs.CL cs.LG | 10.1145/2567948.2579272 | 1404.3026 | null | null | http://arxiv.org/abs/1404.3026v1 | 2014-04-11T07:55:51Z | 2014-04-11T07:55:51Z | On the Ground Validation of Online Diagnosis with Twitter and Medical
Records | Social media has been considered as a data source for tracking disease.
However, most analyses are based on models that prioritize strong correlation
with population-level disease rates over determining whether or not specific
individual users are actually sick. Taking a different approach, we develop a
novel system for social-media based disease detection at the individual level
using a sample of professionally diagnosed individuals. Specifically, we
develop a system for making an accurate influenza diagnosis based on an
individual's publicly available Twitter data. We find that about half (17/35 =
48.57%) of the users in our sample that were sick explicitly discuss their
disease on Twitter. By developing a meta classifier that combines text
analysis, anomaly detection, and social network analysis, we are able to
diagnose an individual with greater than 99% accuracy even if she does not
discuss her health.
| [
"Todd Bodnar, Victoria C Barclay, Nilam Ram, Conrad S Tucker, Marcel\n Salath\\'e",
"['Todd Bodnar' 'Victoria C Barclay' 'Nilam Ram' 'Conrad S Tucker'\n 'Marcel Salathé']"
]
|
null | null | 1404.3184 | null | null | http://arxiv.org/pdf/1404.3184v1 | 2014-04-11T18:50:34Z | 2014-04-11T18:50:34Z | Decreasing Weighted Sorted $\ell_1$ Regularization | We consider a new family of regularizers, termed {\it weighted sorted $\ell_1$ norms} (WSL1), which generalizes the recently introduced {\it octagonal shrinkage and clustering algorithm for regression} (OSCAR) and also contains the $\ell_1$ and $\ell_{\infty}$ norms as particular instances. We focus on a special case of the WSL1, the {\sl decreasing WSL1} (DWSL1), where the elements of the argument vector are sorted in non-increasing order and the weights are also non-increasing. In this paper, after showing that the DWSL1 is indeed a norm, we derive two key tools for its use as a regularizer: the dual norm and the Moreau proximity operator. | [
"['Xiangrong Zeng' 'Mário A. T. Figueiredo']"
]
|
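The Moreau proximity operator mentioned above has a known sorting-plus-pool-adjacent-violators form; a sketch of that recipe under our reading (weights assumed non-negative and non-increasing; the example vector is made up):

```python
import numpy as np

def prox_dwsl1(v, w):
    """Prox of the decreasing weighted sorted l1 norm: sort |v| in
    non-increasing order, subtract the weights, restore monotonicity
    with pool-adjacent-violators, clip at zero, undo sorting and signs."""
    sign = np.sign(v)
    order = np.argsort(np.abs(v))[::-1]
    z = np.abs(v)[order] - w                 # shrink sorted magnitudes
    # PAV: enforce z[0] >= z[1] >= ... by averaging violating blocks
    vals, sizes = [], []
    for zi in z:
        vals.append(zi)
        sizes.append(1)
        while len(vals) > 1 and vals[-2] < vals[-1]:
            s = sizes[-1] + sizes[-2]
            vals[-2] = (vals[-1] * sizes[-1] + vals[-2] * sizes[-2]) / s
            sizes[-2] = s
            vals.pop(); sizes.pop()
    z = np.repeat(vals, sizes)
    out = np.zeros_like(v)
    out[order] = np.maximum(z, 0.0)          # clip, then undo the sort
    return sign * out

v = np.array([3.0, -1.0, 2.5, 0.2])
w = np.array([1.5, 1.0, 0.5, 0.1])           # non-increasing weights
print(prox_dwsl1(v, w))
```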
cs.LG | 10.1109/TNNLS.2014.2309939 | 1404.3190 | null | null | http://arxiv.org/abs/1404.3190v1 | 2014-04-11T19:15:22Z | 2014-04-11T19:15:22Z | Pareto-Path Multi-Task Multiple Kernel Learning | A traditional and intuitively appealing Multi-Task Multiple Kernel Learning
(MT-MKL) method is to optimize the sum (thus, the average) of objective
functions with (partially) shared kernel function, which allows information
sharing amongst tasks. We point out that the obtained solution corresponds to a
single point on the Pareto Front (PF) of a Multi-Objective Optimization (MOO)
problem, which considers the concurrent optimization of all task objectives
involved in the Multi-Task Learning (MTL) problem. Motivated by this last
observation and arguing that the former approach is heuristic, we propose a
novel Support Vector Machine (SVM) MT-MKL framework, that considers an
implicitly-defined set of conic combinations of task objectives. We show that
solving our framework produces solutions along a path on the aforementioned PF
and that it subsumes the optimization of the average of objective functions as
a special case. Using algorithms we derived, we demonstrate through a series of
experimental results that the framework is capable of achieving better
classification performance, when compared to other similar MTL approaches.
| [
"Cong Li, Michael Georgiopoulos, Georgios C. Anagnostopoulos",
"['Cong Li' 'Michael Georgiopoulos' 'Georgios C. Anagnostopoulos']"
]
|
cs.LG cs.IT math.IT math.ST stat.TH | null | 1404.3203 | null | null | http://arxiv.org/pdf/1404.3203v1 | 2014-04-11T19:49:05Z | 2014-04-11T19:49:05Z | Compressive classification and the rare eclipse problem | This paper addresses the fundamental question of when convex sets remain
disjoint after random projection. We provide an analysis using ideas from
high-dimensional convex geometry. For ellipsoids, we provide a bound in terms
of the distance between these ellipsoids and simple functions of their
polynomial coefficients. As an application, this theorem provides bounds for
compressive classification of convex sets. Rather than assuming that the data
to be classified is sparse, our results show that the data can be acquired via
very few measurements yet will remain linearly separable. We demonstrate the
feasibility of this approach in the context of hyperspectral imaging.
| [
"['Afonso S. Bandeira' 'Dustin G. Mixon' 'Benjamin Recht']",
"Afonso S. Bandeira and Dustin G. Mixon and Benjamin Recht"
]
|
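An empirical sketch of the phenomenon: two separated Gaussian clouds (a toy stand-in for the paper's ellipsoids) projected by a Gaussian matrix tend to stay linearly separable once the target dimension m is large enough. The check along the mean-difference direction is only a sufficient condition, and all constants are made up:

```python
import numpy as np

def separable_after_projection(X0, X1, m, seed=0):
    """Project both classes with a Gaussian matrix (scale is irrelevant
    to separability) and test for a margin along the projected
    mean-difference direction."""
    rng = np.random.default_rng(seed)
    P = rng.normal(size=(X0.shape[1], m))
    Y0, Y1 = X0 @ P, X1 @ P
    d = Y1.mean(axis=0) - Y0.mean(axis=0)
    return (Y0 @ d).max() < (Y1 @ d).min()

rng = np.random.default_rng(10)
X0 = rng.normal(size=(200, 500))                      # class 0 cloud
X1 = rng.normal(size=(200, 500)) + 40 / np.sqrt(500)  # shifted class 1 cloud
for m in (2, 10, 50):
    print(m, separable_after_projection(X0, X1, m))
```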
cs.CV cs.LG | null | 1404.3291 | null | null | http://arxiv.org/pdf/1404.3291v1 | 2014-04-12T14:33:18Z | 2014-04-12T14:33:18Z | Cost-Effective HITs for Relative Similarity Comparisons | Similarity comparisons of the form "Is object a more similar to b than to c?"
are useful for computer vision and machine learning applications.
Unfortunately, an embedding of $n$ points is specified by $n^3$ triplets,
making collecting every triplet an expensive task. In noticing this difficulty,
other researchers have investigated more intelligent triplet sampling
techniques, but they do not study their effectiveness or their potential
drawbacks. Although it is important to reduce the number of collected triplets,
it is also important to understand how best to display a triplet collection
task to a user. In this work we explore an alternative display for collecting
triplets and analyze the monetary cost and speed of the display. We propose
best practices for creating cost effective human intelligence tasks for
collecting triplets. We show that rather than changing the sampling algorithm,
simple changes to the crowdsourcing UI can lead to much higher quality
embeddings. We also provide a dataset as well as the labels collected from
crowd workers.
| [
"['Michael J. Wilber' 'Iljung S. Kwak' 'Serge J. Belongie']",
"Michael J. Wilber and Iljung S. Kwak and Serge J. Belongie"
]
|
cs.LG cs.CC | null | 1404.3368 | null | null | http://arxiv.org/pdf/1404.3368v4 | 2018-03-26T08:54:17Z | 2014-04-13T11:13:02Z | Near-optimal sample compression for nearest neighbors | We present the first sample compression algorithm for nearest neighbors with
non-trivial performance guarantees. We complement these guarantees by
demonstrating almost matching hardness lower bounds, which show that our bound
is nearly optimal. Our result yields new insight into margin-based nearest
neighbor classification in metric spaces and allows us to significantly sharpen
and simplify existing bounds. Some encouraging empirical results are also
presented.
| [
"['Lee-Ad Gottlieb' 'Aryeh Kontorovich' 'Pinhas Nisnevitch']",
"Lee-Ad Gottlieb and Aryeh Kontorovich and Pinhas Nisnevitch"
]
|
cs.LG cs.CC | null | 1404.3378 | null | null | http://arxiv.org/pdf/1404.3378v2 | 2014-11-04T18:28:50Z | 2014-04-13T12:42:10Z | Complexity theoretic limitations on learning DNF's | Using the recently developed framework of [Daniely et al, 2014], we show that
under a natural assumption on the complexity of refuting random K-SAT formulas,
learning DNF formulas is hard. Furthermore, the same assumption implies the
hardness of learning intersections of $\omega(\log(n))$ halfspaces,
agnostically learning conjunctions, as well as virtually all (distribution
free) learning problems that were previously shown hard (under complexity
assumptions).
| [
"Amit Daniely and Shai Shalev-Shwatz",
"['Amit Daniely' 'Shai Shalev-Shwatz']"
]
|
cs.LG stat.ML | null | 1404.3415 | null | null | http://arxiv.org/pdf/1404.3415v2 | 2014-04-15T17:37:11Z | 2014-04-13T18:57:30Z | Generalized version of the support vector machine for binary
classification problems: supporting hyperplane machine | In this paper a generalized version of the SVM for binary
classification problems is proposed for the case of an arbitrary transformation x ->
y. An approach similar to the classic SVM method is used. The problem is
explained at length. Various formulations of the primal and dual problems are
proposed. For one of the most important cases the formulae are derived in
detail. A simple computational example is demonstrated. The algorithm and its
implementation are presented in the Octave language.
| [
"E. G. Abramov, A. B. Komissarov, D. A. Kornyakov",
"['E. G. Abramov' 'A. B. Komissarov' 'D. A. Kornyakov']"
]
|
stat.ML cs.IR cs.LG | null | 1404.3439 | null | null | http://arxiv.org/pdf/1404.3439v1 | 2014-04-13T23:07:20Z | 2014-04-13T23:07:20Z | Anytime Hierarchical Clustering | We propose a new anytime hierarchical clustering method that iteratively
transforms an arbitrary initial hierarchy on the configuration of measurements
along a sequence of trees which, we prove, must terminate for a fixed data set in a
chain of nested partitions that satisfies a natural homogeneity requirement.
Each recursive step re-edits the tree so as to improve a local measure of
cluster homogeneity that is compatible with a number of commonly used (e.g.,
single, average, complete) linkage functions. As an alternative to the standard
batch algorithms, we present numerical evidence to suggest that appropriate
adaptations of this method can yield decentralized, scalable algorithms
suitable for distributed/parallel computation of clustering hierarchies and
online tracking of clustering trees applicable to large, dynamically changing
databases and anomaly detection.
| [
"['Omur Arslan' 'Daniel E. Koditschek']",
"Omur Arslan and Daniel E. Koditschek"
]
|
stat.ML cs.LG | 10.1007/978-3-662-44848-9_39 | 1404.3581 | null | null | http://arxiv.org/abs/1404.3581v4 | 2014-09-29T16:01:50Z | 2014-04-14T13:52:29Z | Random forests with random projections of the output space for high
dimensional multi-label classification | We adapt the idea of random projections applied to the output space, so as to
enhance tree-based ensemble methods in the context of multi-label
classification. We show how learning time complexity can be reduced without
affecting computational complexity and accuracy of predictions. We also show
that random output space projections may be used in order to reach different
bias-variance tradeoffs, over a broad panel of benchmark problems, and that
this may lead to improved accuracy while reducing significantly the
computational burden of the learning stage.
| [
"Arnaud Joly, Pierre Geurts, Louis Wehenkel",
"['Arnaud Joly' 'Pierre Geurts' 'Louis Wehenkel']"
]
|
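A sketch of the output-space projection idea with scikit-learn: train a forest on randomly projected labels, then decode predictions via the projection's pseudo-inverse (one simple decoding choice among several; the synthetic multi-label data is ours):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(11)
n, d, L, m = 400, 10, 50, 8                   # samples, features, labels, proj. dim
X = rng.normal(size=(n, d))
W = (rng.random((d, L)) < 0.2).astype(float)  # sparse feature->label relevance
Y = ((X @ W) > 1.0).astype(float)             # synthetic multi-label targets

P = rng.normal(size=(L, m)) / np.sqrt(m)      # random projection of outputs
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, Y @ P)                           # learn in the projected space
Y_hat = model.predict(X) @ np.linalg.pinv(P)  # decode back to label space
print(((Y_hat > 0.5) == Y).mean())            # per-label accuracy
```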
math.OC cs.LG stat.ML | null | 1404.3591 | null | null | http://arxiv.org/pdf/1404.3591v2 | 2014-04-15T19:29:31Z | 2014-04-14T14:09:43Z | Hybrid Conditional Gradient - Smoothing Algorithms with Applications to
Sparse and Low Rank Regularization | We study a hybrid conditional gradient - smoothing algorithm (HCGS) for
solving composite convex optimization problems which contain several terms over
a bounded set. Examples of these include regularization problems with several
norms as penalties and a norm constraint. HCGS extends conditional gradient
methods to cases with multiple nonsmooth terms, in which standard conditional
gradient methods may be difficult to apply. The HCGS algorithm borrows
techniques from smoothing proximal methods and requires first-order
computations (subgradients and proximity operations). Unlike proximal methods,
HCGS benefits from the advantages of conditional gradient methods, which render
it more efficient on certain large scale optimization problems. We demonstrate
these advantages with simulations on two matrix optimization problems:
regularization of matrices with combined $\ell_1$ and trace norm penalties; and
a convex relaxation of sparse PCA.
| [
"['Andreas Argyriou' 'Marco Signoretto' 'Johan Suykens']",
"Andreas Argyriou and Marco Signoretto and Johan Suykens"
]
|
cs.CV cs.LG cs.NE | 10.1109/TIP.2015.2475625 | 1404.3606 | null | null | http://arxiv.org/abs/1404.3606v2 | 2014-08-28T15:20:44Z | 2014-04-14T15:02:17Z | PCANet: A Simple Deep Learning Baseline for Image Classification? | In this work, we propose a very simple deep learning network for image
classification which comprises only the very basic data processing components:
cascaded principal component analysis (PCA), binary hashing, and block-wise
histograms. In the proposed architecture, PCA is employed to learn multistage
filter banks. It is followed by simple binary hashing and block histograms for
indexing and pooling. This architecture is thus named the PCA network (PCANet)
and can be designed and learned extremely easily and efficiently. For
comparison and better understanding, we also introduce and study two simple
variations to the PCANet, namely the RandNet and LDANet. They share the same
topology of PCANet but their cascaded filters are either selected randomly or
learned from LDA. We have tested these basic networks extensively on many
benchmark visual datasets for different tasks, such as LFW for face
verification, MultiPIE, Extended Yale B, AR, FERET datasets for face
recognition, as well as MNIST for hand-written digit recognition.
Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with
the state of the art features, either prefixed, highly hand-crafted or
carefully learned (by DNNs). Even more surprisingly, it sets new records for
many classification tasks in Extended Yale B, AR, FERET datasets, and MNIST
variations. Additional experiments on other public datasets also demonstrate
the potential of the PCANet serving as a simple but highly competitive baseline
for texture classification and object recognition.
| [
"Tsung-Han Chan, Kui Jia, Shenghua Gao, Jiwen Lu, Zinan Zeng and Yi Ma",
"['Tsung-Han Chan' 'Kui Jia' 'Shenghua Gao' 'Jiwen Lu' 'Zinan Zeng' 'Yi Ma']"
]
|
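As a rough illustration of the paper's core building block, the sketch below learns a single-stage PCA filter bank from image patches (NumPy only). The patch size, filter count, and toy data are illustrative assumptions, not the authors' configuration; the full PCANet cascades two such stages and adds binary hashing and block-wise histograms.

```python
import numpy as np

def pca_filters(images, k=7, n_filters=8):
    """Learn n_filters k-by-k PCA filters from all mean-removed patches."""
    patches = []
    for img in images:
        H, W = img.shape
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                p = img[i:i + k, j:j + k].ravel()
                patches.append(p - p.mean())       # remove the patch mean
    P = np.array(patches)                          # (num_patches, k*k)
    # Leading right singular vectors = leading eigenvectors of patch covariance.
    _, _, vt = np.linalg.svd(P, full_matrices=False)
    return vt[:n_filters].reshape(n_filters, k, k)

rng = np.random.default_rng(0)
imgs = rng.standard_normal((5, 28, 28))            # toy "images"
W = pca_filters(imgs)
print(W.shape)                                     # (8, 7, 7)
```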
cs.LG cs.IR | null | 1404.3656 | null | null | http://arxiv.org/pdf/1404.3656v1 | 2014-04-14T17:09:32Z | 2014-04-14T17:09:32Z | Methods for Ordinal Peer Grading | MOOCs have the potential to revolutionize higher education with their wide
outreach and accessibility, but they require instructors to come up with
scalable alternatives to traditional student evaluation. Peer grading -- having
students assess each other -- is a promising approach to tackling the problem
of evaluation at scale, since the number of "graders" naturally scales with the
number of students. However, students are not trained in grading, which means
that one cannot expect the same level of grading skills as in traditional
settings. Drawing on broad evidence that ordinal feedback is easier to provide
and more reliable than cardinal feedback, it is therefore desirable to allow
peer graders to make ordinal statements (e.g. "project X is better than project
Y") and not require them to make cardinal statements (e.g. "project X is a
B-"). Thus, in this paper we study the problem of automatically inferring
student grades from ordinal peer feedback, as opposed to existing methods that
require cardinal peer feedback. We formulate the ordinal peer grading problem
as a type of rank aggregation problem, and explore several probabilistic models
under which to estimate student grades and grader reliability. We study the
applicability of these methods using peer grading data collected from a real
class -- with instructor and TA grades as a baseline -- and demonstrate the
efficacy of ordinal feedback techniques in comparison to existing cardinal peer
grading methods. Finally, we compare these peer-grading techniques to
traditional evaluation techniques.
| [
"['Karthik Raman' 'Thorsten Joachims']",
"Karthik Raman and Thorsten Joachims"
]
|
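To make the rank-aggregation view concrete, here is a hedged sketch that infers latent project scores from ordinal statements using a basic Bradley-Terry model fit by gradient ascent. This is one of the simplest models in the family the paper explores, and it omits the grader-reliability estimation; the data and names are illustrative.

```python
import numpy as np

def bradley_terry(n_items, comparisons, lr=0.1, iters=500):
    """comparisons: list of (winner, loser) index pairs from ordinal feedback."""
    s = np.zeros(n_items)                       # latent quality scores
    for _ in range(iters):
        grad = np.zeros(n_items)
        for w, l in comparisons:
            # gradient of log sigma(s_w - s_l) w.r.t. s_w is sigma(s_l - s_w)
            p = 1.0 / (1.0 + np.exp(s[w] - s[l]))
            grad[w] += p
            grad[l] -= p
        s += lr * grad
    return s - s.mean()                         # center for identifiability

# "project 0 beats project 1", etc.
scores = bradley_terry(3, [(0, 1), (0, 2), (1, 2), (0, 1)])
print(np.argsort(-scores))                      # ranking: best project first
```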
cs.CV cs.LG stat.ML | null | 1404.3840 | null | null | http://arxiv.org/pdf/1404.3840v3 | 2014-12-20T03:37:36Z | 2014-04-15T07:51:23Z | Surpassing Human-Level Face Verification Performance on LFW with
GaussianFace | Face verification remains a challenging problem in very complex conditions
with large variations such as pose, illumination, expression, and occlusions.
This problem is exacerbated when we rely unrealistically on a single training
data source, which is often insufficient to cover the intrinsically complex
face variations. This paper proposes a principled multi-task learning approach
based on Discriminative Gaussian Process Latent Variable Model, named
GaussianFace, to enrich the diversity of training data. In comparison to
existing methods, our model exploits additional data from multiple
source-domains to improve the generalization performance of face verification
in an unknown target-domain. Importantly, our model can adapt automatically to
complex data distributions, and therefore can well capture complex face
variations inherent in multiple sources. Extensive experiments demonstrate the
effectiveness of the proposed model in learning from diverse data sources and
generalizing to unseen domains. Specifically, our algorithm
achieves an impressive accuracy of 98.52% on the well-known and
challenging Labeled Faces in the Wild (LFW) benchmark. For the first time, the
human-level performance in face verification (97.53%) on LFW is surpassed.
| [
"Chaochao Lu, Xiaoou Tang",
"['Chaochao Lu' 'Xiaoou Tang']"
]
|
stat.ML cs.AI cs.LG | null | 1404.3862 | null | null | http://arxiv.org/pdf/1404.3862v4 | 2014-11-22T14:44:54Z | 2014-04-15T10:32:05Z | Optimizing the CVaR via Sampling | Conditional Value at Risk (CVaR) is a prominent risk measure that is being
used extensively in various domains. We develop a new formula for the gradient
of the CVaR in the form of a conditional expectation. Based on this formula, we
propose a novel sampling-based estimator for the CVaR gradient, in the spirit
of the likelihood-ratio method. We analyze the bias of the estimator, and prove
the convergence of a corresponding stochastic gradient descent algorithm to a
local CVaR optimum. Our method makes it possible to consider CVaR optimization in new
domains. As an example, we consider a reinforcement learning application, and
learn a risk-sensitive controller for the game of Tetris.
| [
"Aviv Tamar, Yonatan Glassner, Shie Mannor",
"['Aviv Tamar' 'Yonatan Glassner' 'Shie Mannor']"
]
|
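For context, the sketch below shows plain Monte Carlo estimation of CVaR_alpha, the tail mean beyond the alpha-quantile, which is the quantity whose gradient the paper estimates. The likelihood-ratio gradient construction itself is the paper's contribution and is not reproduced here; the loss distribution and alpha are illustrative.

```python
import numpy as np

def cvar(samples, alpha=0.95):
    """Estimate VaR and CVaR at level alpha from i.i.d. loss samples."""
    var = np.quantile(samples, alpha)          # Value at Risk: alpha-quantile
    tail = samples[samples >= var]             # the worst (1 - alpha) fraction
    return var, tail.mean()                    # CVaR: mean of the tail losses

rng = np.random.default_rng(0)
losses = rng.standard_normal(100_000) ** 2     # toy loss distribution
v, c = cvar(losses, alpha=0.95)
print(f"VaR={v:.3f}  CVaR={c:.3f}")
```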
stat.ME cs.IT cs.LG math.IT math.ST stat.TH | null | 1404.4032 | null | null | http://arxiv.org/pdf/1404.4032v2 | 2014-07-16T17:57:02Z | 2014-04-15T19:35:15Z | Recovery of Coherent Data via Low-Rank Dictionary Pursuit | The recently established RPCA method provides us a convenient way to restore
low-rank matrices from grossly corrupted observations. While elegant in theory
and powerful in reality, RPCA may not be an ultimate solution to the low-rank
matrix recovery problem. Indeed, its performance may not be perfect even when
data are strictly low-rank. This is because conventional RPCA ignores the
clustering structures of the data which are ubiquitous in modern applications.
As the number of clusters grows, the coherence of data keeps increasing, and
accordingly, the recovery performance of RPCA degrades. We show that the
challenges raised by coherent data (i.e., the data with high coherence) could
be alleviated by Low-Rank Representation (LRR), provided that the dictionary in
LRR is configured appropriately. More precisely, we mathematically prove that
if the dictionary itself is low-rank then LRR is immune to the coherence
parameter which increases with the underlying cluster number. This provides an
elementary principle for dealing with coherent data. Subsequently, we devise a
practical algorithm to obtain proper dictionaries in unsupervised environments.
Our extensive experiments on randomly generated matrices verify our claims.
| [
"['Guangcan Liu' 'Ping Li']",
"Guangcan Liu and Ping Li"
]
|
cs.LG | null | 1404.4038 | null | null | http://arxiv.org/pdf/1404.4038v2 | 2014-04-17T16:05:57Z | 2014-04-15T19:47:15Z | Discovering and Exploiting Entailment Relationships in Multi-Label
Learning | This work presents a sound probabilistic method for enforcing adherence of
the marginal probabilities of a multi-label model to automatically discovered
deterministic relationships among labels. In particular we focus on discovering
two kinds of relationships among the labels. The first one concerns pairwise
positive entailment: pairs of labels, where the presence of one implies the
presence of the other in all instances of a dataset. The second concerns
exclusion: sets of labels that do not coexist in the same instances of the
dataset. These relationships are represented with a Bayesian network. Marginal
probabilities are entered as soft evidence in the network and adjusted through
probabilistic inference. Our approach offers robust improvements in mean
average precision compared to the standard binary relevance approach across all
12 datasets involved in our experiments. The discovery process helps
interesting implicit knowledge to emerge, which could be useful in itself.
| [
"Christina Papagiannopoulou, Grigorios Tsoumakas, Ioannis Tsamardinos",
"['Christina Papagiannopoulou' 'Grigorios Tsoumakas' 'Ioannis Tsamardinos']"
]
|
cs.LG | 10.14445/22312803/IJCTT-V10P107 | 1404.4088 | null | null | http://arxiv.org/abs/1404.4088v1 | 2014-04-15T21:35:48Z | 2014-04-15T21:35:48Z | Ensemble Classifiers and Their Applications: A Review | Ensemble classifier refers to a group of individual classifiers that are
cooperatively trained on a data set in a supervised classification problem. In
this paper we present a review of commonly used ensemble classifiers in the
literature. Some ensemble classifiers have also been developed for specific
applications. We present several of these application-driven ensemble classifiers in
this paper.
| [
"Akhlaqur Rahman, Sumaira Tasnim",
"['Akhlaqur Rahman' 'Sumaira Tasnim']"
]
|
stat.ML cs.LG | null | 1404.4095 | null | null | http://arxiv.org/pdf/1404.4095v3 | 2014-05-19T03:43:42Z | 2014-04-15T22:06:35Z | Multi-borders classification | The number of possible methods of generalizing binary classification to
multi-class classification increases exponentially with the number of class
labels. Often, the best method of doing so will be highly problem dependent.
Here we present classification software in which the partitioning of
multi-class classification problems into binary classification problems is
specified using a recursive control language.
| [
"['Peter Mills']",
"Peter Mills"
]
|
math.OC cs.CV cs.LG | null | 1404.4104 | null | null | http://arxiv.org/pdf/1404.4104v1 | 2014-04-15T22:54:21Z | 2014-04-15T22:54:21Z | Sparse Bilinear Logistic Regression | In this paper, we introduce the concept of sparse bilinear logistic
regression for decision problems involving explanatory variables that are
two-dimensional matrices. Such problems are common in computer vision,
brain-computer interfaces, style/content factorization, and parallel factor
analysis. The underlying optimization problem is bi-convex; we study its
solution and develop an efficient algorithm based on block coordinate descent.
We provide a theoretical guarantee for global convergence and estimate the
asymptotic convergence rate using the Kurdyka-{\L}ojasiewicz inequality. A
range of experiments with simulated and real data demonstrate that sparse
bilinear logistic regression outperforms current techniques in several
important applications.
| [
"['Jianing V. Shi' 'Yangyang Xu' 'Richard G. Baraniuk']",
"Jianing V. Shi, Yangyang Xu, and Richard G. Baraniuk"
]
|
cs.LG cs.AI stat.ML | null | 1404.4105 | null | null | http://arxiv.org/pdf/1404.4105v1 | 2014-04-15T22:55:53Z | 2014-04-15T22:55:53Z | Sparse Compositional Metric Learning | We propose a new approach for metric learning by framing it as learning a
sparse combination of locally discriminative metrics that are inexpensive to
generate from the training data. This flexible framework allows us to naturally
derive formulations for global, multi-task and local metric learning. The
resulting algorithms have several advantages over existing methods in the
literature: a much smaller number of parameters to be estimated and a
principled way to generalize learned metrics to new testing data points. To
analyze the approach theoretically, we derive a generalization bound that
justifies the sparse combination. Empirically, we evaluate our algorithms on
several datasets against state-of-the-art metric learning methods. The results
are consistent with our theoretical findings and demonstrate the superiority of
our approach in terms of classification performance and scalability.
| [
"Yuan Shi and Aur\\'elien Bellet and Fei Sha",
"['Yuan Shi' 'Aurélien Bellet' 'Fei Sha']"
]
|
cs.LG | null | 1404.4108 | null | null | http://arxiv.org/pdf/1404.4108v2 | 2014-07-09T07:17:54Z | 2014-02-24T15:17:39Z | Representation as a Service | Consider a Machine Learning Service Provider (MLSP) designed to rapidly
create highly accurate learners for a never-ending stream of new tasks. The
challenge is to produce task-specific learners that can be trained from few
labeled samples, even if tasks are not uniquely identified, and the number of
tasks and input dimensionality are large. In this paper, we argue that the MLSP
should exploit knowledge from previous tasks to build a good representation of
the environment it is in, and more precisely, that useful representations for
such a service are ones that minimize generalization error for a new hypothesis
trained on a new task. We formalize this intuition with a novel method that
minimizes an empirical proxy of the intra-task small-sample generalization
error. We present several empirical results showing state-of-the-art
performance on single-task transfer, multitask learning, and the full lifelong
learning problem.
| [
"['Ouais Alsharif' 'Philip Bachman' 'Joelle Pineau']",
"Ouais Alsharif, Philip Bachman, Joelle Pineau"
]
|
cs.LG | null | 1404.4114 | null | null | http://arxiv.org/pdf/1404.4114v3 | 2014-11-26T04:14:16Z | 2014-04-16T00:12:03Z | Structured Stochastic Variational Inference | Stochastic variational inference makes it possible to approximate posterior
distributions induced by large datasets quickly using stochastic optimization.
The algorithm relies on the use of fully factorized variational distributions.
However, this "mean-field" independence approximation limits the fidelity of
the posterior approximation, and introduces local optima. We show how to relax
the mean-field approximation to allow arbitrary dependencies between global
parameters and local hidden variables, producing better parameter estimates by
reducing bias, sensitivity to local optima, and sensitivity to hyperparameters.
| [
"Matthew D. Hoffman and David M. Blei",
"['Matthew D. Hoffman' 'David M. Blei']"
]
|
cs.LG | null | 1404.4171 | null | null | http://arxiv.org/pdf/1404.4171v1 | 2014-04-16T08:54:01Z | 2014-04-16T08:54:01Z | Dropout Training for Support Vector Machines | Dropout and other feature noising schemes have shown promising results in
controlling over-fitting by artificially corrupting the training data. Though
extensive theoretical and empirical studies have been performed for generalized
linear models, little work has been done for support vector machines (SVMs),
one of the most successful approaches for supervised learning. This paper
presents dropout training for linear SVMs. To deal with the intractable
expectation of the non-smooth hinge loss under corrupting distributions, we
develop an iteratively reweighted least squares (IRLS) algorithm by exploring
data augmentation techniques. Our algorithm iteratively minimizes the
expectation of a reweighted least squares problem, where the reweights have
closed-form solutions. Similar ideas are applied to develop a new IRLS
algorithm for the expected logistic loss under corrupting distributions. Our
algorithms offer insights on the connection and difference between the hinge
loss and logistic loss in dropout training. Empirical results on several real
datasets demonstrate the effectiveness of dropout training on significantly
boosting the classification accuracy of linear SVMs.
| [
"Ning Chen, Jun Zhu, Jianfei Chen, Bo Zhang",
"['Ning Chen' 'Jun Zhu' 'Jianfei Chen' 'Bo Zhang']"
]
|
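As a hedged illustration of the objective involved, the sketch below minimizes the expected hinge loss of a linear SVM under feature dropout by Monte Carlo corruption and subgradient descent. Note this is a naive stand-in for the paper's method, which avoids sampling entirely via the closed-form IRLS updates described above; the dropout rate, learning rate, and toy data are illustrative.

```python
import numpy as np

def dropout_svm(X, y, p=0.5, lr=0.01, epochs=200, n_mc=10, seed=0):
    """Approximately minimize E[hinge loss] under dropout corruption."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = np.zeros_like(w)
        for _ in range(n_mc):
            mask = rng.random(X.shape) > p           # drop each feature w.p. p
            Xc = X * mask / (1 - p)                  # rescale the survivors
            margin = y * (Xc @ w)
            active = margin < 1                       # hinge is active here
            grad += -(y[active, None] * Xc[active]).sum(axis=0)
        w -= lr * grad / (n_mc * len(y))
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = np.sign(X[:, 0] + 0.1 * rng.standard_normal(200))  # labels in {-1, +1}
print(dropout_svm(X, y).round(2))
```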
stat.ML cs.LG q-bio.NC | null | 1404.4175 | null | null | http://arxiv.org/pdf/1404.4175v1 | 2014-04-16T09:21:26Z | 2014-04-16T09:21:26Z | MEG Decoding Across Subjects | Brain decoding is a data analysis paradigm for neuroimaging experiments that
is based on predicting the stimulus presented to the subject from the
concurrent brain activity. In order to make inference at the group level, a
straightforward but sometimes unsuccessful approach is to train a classifier on
the trials of a group of subjects and then to test it on unseen trials from new
subjects. The extreme difficulty is related to the structural and functional
variability across the subjects. We call this approach "decoding across
subjects". In this work, we address the problem of decoding across subjects for
magnetoencephalographic (MEG) experiments and we provide the following
contributions: first, we formally describe the problem and show that it belongs
to a machine learning sub-field called transductive transfer learning (TTL).
Second, we propose to use a simple TTL technique that accounts for the
differences between train data and test data. Third, we propose the use of
ensemble learning, and specifically of stacked generalization, to address the
variability across subjects within train data, with the aim of producing more
stable classifiers. On a face vs. scramble task MEG dataset of 16 subjects, we
compare the standard approach of not modelling the differences across subjects,
to the proposed one of combining TTL and ensemble learning. We show that the
proposed approach is consistently more accurate than the standard one.
| [
"['Emanuele Olivetti' 'Seyed Mostafa Kia' 'Paolo Avesani']",
"Emanuele Olivetti, Seyed Mostafa Kia, Paolo Avesani"
]
|
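A minimal sketch of the stacked-generalization ingredient described above: one base classifier per training subject, with a combiner fit on the base outputs. The shapes, models, and toy data are illustrative; the paper pairs this with a transductive transfer-learning correction for the train/test distribution shift.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy per-subject data: four subjects, 100 trials each, 20 features.
subjects = [(rng.standard_normal((100, 20)), rng.integers(0, 2, 100))
            for _ in range(4)]

# One base classifier per training subject.
bases = [LogisticRegression(max_iter=1000).fit(X, y) for X, y in subjects]

# Stack: base-classifier probabilities become features for the combiner
# (in practice the combiner is fit on held-out base outputs, not the same fold).
X_new, y_new = rng.standard_normal((100, 20)), rng.integers(0, 2, 100)
Z = np.column_stack([b.predict_proba(X_new)[:, 1] for b in bases])
combiner = LogisticRegression().fit(Z, y_new)
print(combiner.score(Z, y_new))                 # toy evaluation only
```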
cs.CL cs.LG | null | 1404.4326 | null | null | http://arxiv.org/pdf/1404.4326v1 | 2014-04-16T17:57:01Z | 2014-04-16T17:57:01Z | Open Question Answering with Weakly Supervised Embedding Models | Building computers able to answer questions on any subject is a long standing
goal of artificial intelligence. Promising progress has recently been achieved
by methods that learn to map questions to logical forms or database queries.
Such approaches can be effective, but at the cost of either requiring large
amounts of human-labeled data or defining lexicons and grammars tailored by
practitioners. In this paper, we instead take the radical approach of learning
to map questions to vectorial feature representations. By mapping answers into
the same space one can query any knowledge base independent of its schema,
without requiring any grammar or lexicon. Our method is trained with a new
optimization procedure combining stochastic gradient descent followed by a
fine-tuning step using the weak supervision provided by blending automatically
and collaboratively generated resources. We empirically demonstrate that our
model can capture meaningful signals from its noisy supervision leading to
major improvements over paralex, the only existing method able to be trained on
similar weakly labeled data.
| [
"['Antoine Bordes' 'Jason Weston' 'Nicolas Usunier']",
"Antoine Bordes, Jason Weston and Nicolas Usunier"
]
|
cs.LG stat.ML | null | 1404.4351 | null | null | http://arxiv.org/pdf/1404.4351v1 | 2014-04-16T19:12:47Z | 2014-04-16T19:12:47Z | Stable Graphical Models | Stable random variables are motivated by the central limit theorem for
densities with (potentially) unbounded variance and can be thought of as
natural generalizations of the Gaussian distribution to skewed and heavy-tailed
phenomena. In this paper, we introduce stable graphical (SG) models, a class
of multivariate stable densities that can also be represented as Bayesian
networks whose edges encode linear dependencies between random variables. One
major hurdle to the extensive use of stable distributions is the lack of a
closed-form analytical expression for their densities. This makes penalized
maximum-likelihood based learning computationally demanding. We establish
theoretically that the Bayesian information criterion (BIC) can asymptotically
be reduced to the computationally more tractable minimum dispersion criterion
(MDC) and develop StabLe, a structure learning algorithm based on MDC. We use
simulated datasets for five benchmark network topologies to empirically
demonstrate how StabLe improves upon ordinary least squares (OLS) regression.
We also apply StabLe to microarray gene expression data for lymphoblastoid
cells from 727 individuals belonging to eight global population groups. We
establish that StabLe improves test set performance relative to OLS via
ten-fold cross-validation. Finally, we develop SGEX, a method for quantifying
differential expression of genes between different population groups.
| [
"['Navodit Misra' 'Ercan E. Kuruoglu']",
"Navodit Misra and Ercan E. Kuruoglu"
]
|
cs.LG cs.CV stat.ML | 10.1109/TIP.2015.2478396 | 1404.4412 | null | null | http://arxiv.org/abs/1404.4412v2 | 2015-09-16T08:58:14Z | 2014-04-17T01:52:09Z | Efficient Nonnegative Tucker Decompositions: Algorithms and Uniqueness | Nonnegative Tucker decomposition (NTD) is a powerful tool for the extraction
of nonnegative parts-based and physically meaningful latent components from
high-dimensional tensor data while preserving the natural multilinear structure
of data. However, as the data tensor often has multiple modes and is
large-scale, existing NTD algorithms suffer from a very high computational
complexity in terms of both storage and computation time, which has been one
major obstacle for practical applications of NTD. To overcome these
disadvantages, we show how low (multilinear) rank approximation (LRA) of
tensors is able to significantly simplify the computation of the gradients of
the cost function, upon which a family of efficient first-order NTD algorithms
are developed. Besides dramatically reducing the storage complexity and running
time, the new algorithms are quite flexible and robust to noise because any
well-established LRA approaches can be applied. We also show how nonnegativity
incorporating sparsity substantially improves the uniqueness property and
partially alleviates the curse of dimensionality of the Tucker decompositions.
Simulation results on synthetic and real-world data justify the validity and
high efficiency of the proposed NTD algorithms.
| [
"Guoxu Zhou and Andrzej Cichocki and Qibin Zhao and Shengli Xie",
"['Guoxu Zhou' 'Andrzej Cichocki' 'Qibin Zhao' 'Shengli Xie']"
]
|
cs.LG cs.CL cs.IR | null | 1404.4606 | null | null | http://arxiv.org/pdf/1404.4606v3 | 2014-06-19T12:58:13Z | 2014-04-16T12:59:29Z | How Many Topics? Stability Analysis for Topic Models | Topic modeling refers to the task of discovering the underlying thematic
structure in a text corpus, where the output is commonly presented as a report
of the top terms appearing in each topic. Despite the diversity of topic
modeling algorithms that have been proposed, a common challenge in successfully
applying these techniques is the selection of an appropriate number of topics
for a given corpus. Choosing too few topics will produce results that are
overly broad, while choosing too many will result in the "over-clustering" of a
corpus into many small, highly-similar topics. In this paper, we propose a
term-centric stability analysis strategy to address this issue, the idea being
that a model with an appropriate number of topics will be more robust to
perturbations in the data. Using a topic modeling approach based on matrix
factorization, evaluations performed on a range of corpora show that this
strategy can successfully guide the model selection process.
| [
"['Derek Greene' \"Derek O'Callaghan\" 'Pádraig Cunningham']",
"Derek Greene, Derek O'Callaghan, P\\'adraig Cunningham"
]
|
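The sketch below conveys the flavor of term-centric stability analysis: factorize bootstrap resamples of a document-term matrix with NMF and score how stable each candidate k's top-term sets are via a greedy Jaccard match. The stability measure, matching scheme, and synthetic matrix here are illustrative simplifications of the paper's method.

```python
import numpy as np
from sklearn.decomposition import NMF

def top_term_sets(M, k, n_top=10, seed=0):
    """Return the set of top-n_top term indices for each of the k topics."""
    H = NMF(n_components=k, init="nndsvda", random_state=seed).fit(M).components_
    return [set(np.argsort(-h)[:n_top]) for h in H]

def stability(M, k, n_runs=5):
    rng = np.random.default_rng(0)
    ref, scores = top_term_sets(M, k), []
    for r in range(1, n_runs):
        rows = rng.choice(M.shape[0], M.shape[0], replace=True)   # bootstrap
        boot = top_term_sets(M[rows], k, seed=r)
        # Greedy-match each reference topic to its closest bootstrap topic.
        scores += [max(len(a & b) / len(a | b) for b in boot) for a in ref]
    return np.mean(scores)

M = np.abs(np.random.default_rng(1).standard_normal((200, 50)))   # toy corpus
for k in (2, 4, 8):
    print(k, round(stability(M, k), 3))        # pick k with high stability
```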
stat.ME cs.IR cs.LG stat.ML | null | 1404.4644 | null | null | http://arxiv.org/pdf/1404.4644v1 | 2014-04-17T20:39:24Z | 2014-04-17T20:39:24Z | A New Space for Comparing Graphs | Finding a new mathematical representation for graphs, which allows direct
comparison between different graph structures, is an open-ended research
direction. Having such a representation is the first prerequisite for a variety
of machine learning algorithms like classification, clustering, etc., over
graph datasets. In this paper, we propose a symmetric positive semidefinite
matrix with the $(i,j)$-th entry equal to the covariance between normalized
vectors $A^ie$ and $A^je$ ($e$ being vector of all ones) as a representation
for graph with adjacency matrix $A$. We show that the proposed matrix
representation encodes the spectrum of the underlying adjacency matrix and it
also contains information about the counts of small sub-structures present in
the graph such as triangles and small paths. In addition, we show that this
matrix is a \emph{"graph invariant"}. All these properties make the proposed
matrix a suitable object for representing graphs.
The representation, being a covariance matrix in a fixed dimensional metric
space, gives a mathematical embedding for graphs. This naturally leads to a
measure of similarity on graph objects. We define similarity between two given
graphs as a Bhattacharya similarity measure between their corresponding
covariance matrix representations. As shown in our experimental study on the
task of social network classification, such a similarity measure outperforms
other widely used state-of-the-art methodologies. Our proposed method is also
computationally efficient. The computation of both the matrix representation
and the similarity value can be performed in operations linear in the number of
edges. This makes our method scalable in practice.
We believe our theoretical and empirical results provide evidence for
studying truncated power iterations, of the adjacency matrix, to characterize
social networks.
| [
"['Anshumali Shrivastava' 'Ping Li']",
"Anshumali Shrivastava and Ping Li"
]
|
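The proposed representation is simple enough to state in a few lines; below is a sketch computing the covariance matrix of the normalized power-iteration vectors $A^i e$ for a small graph. The truncation order k is an illustrative choice.

```python
import numpy as np

def covariance_representation(A, k=4):
    """Covariance matrix of the normalized vectors A^1 e, ..., A^k e."""
    n = A.shape[0]
    vecs, v = [], np.ones(n)                    # e: the all-ones vector
    for _ in range(k):
        v = A @ v                               # one power-iteration step
        vecs.append(v / np.linalg.norm(v))      # normalized A^i e
    V = np.array(vecs)                          # (k, n): rows are variables
    return np.cov(V)                            # (k, k), symmetric PSD

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)       # toy adjacency matrix
print(covariance_representation(A).round(3))
```

Because the representation lives in a fixed k-by-k space regardless of graph size, two graphs can then be compared directly, e.g. via the Bhattacharyya similarity the paper uses.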
stat.ME cs.IT cs.LG math.IT math.ST stat.TH | null | 1404.4646 | null | null | http://arxiv.org/pdf/1404.4646v2 | 2014-07-16T18:04:35Z | 2014-04-17T20:50:26Z | Advancing Matrix Completion by Modeling Extra Structures beyond
Low-Rankness | A well-known method for completing low-rank matrices based on convex
optimization has been established by Cand{\`e}s and Recht. Although
theoretically complete, the method may not entirely solve the low-rank matrix
completion problem. This is because the method captures only the low-rankness
property which gives merely a rough constraint that the data points locate on
some low-dimensional subspace, but generally ignores the extra structures which
specify in more detail how the data points locate on the subspace. Whenever the
geometric distribution of the data points is not uniform, the coherence
parameters of data might be large and, accordingly, the method might fail even
if the latent matrix we want to recover is fairly low-rank. To better handle
non-uniform data, in this paper we propose a method termed Low-Rank Factor
Decomposition (LRFD), which imposes an additional restriction that the data
points must be represented as linear combinations of the bases in a dictionary
constructed or learnt in advance. We show that LRFD can well handle non-uniform
data, provided that the dictionary is configured properly: We mathematically
prove that if the dictionary itself is low-rank then LRFD is immune to the
coherence parameters which might be large on non-uniform data. This provides an
elementary principle for learning the dictionary in LRFD and, naturally, leads
to a practical algorithm for advancing matrix completion. Extensive experiments
on randomly generated matrices and motion datasets show encouraging results.
| [
"['Guangcan Liu' 'Ping Li']",
"Guangcan Liu and Ping Li"
]
|
cs.LG stat.ML | null | 1404.4655 | null | null | http://arxiv.org/pdf/1404.4655v1 | 2014-04-17T21:16:13Z | 2014-04-17T21:16:13Z | Hierarchical Quasi-Clustering Methods for Asymmetric Networks | This paper introduces hierarchical quasi-clustering methods, a generalization
of hierarchical clustering for asymmetric networks where the output structure
preserves the asymmetry of the input data. We show that this output structure
is equivalent to a finite quasi-ultrametric space and study admissibility with
respect to two desirable properties. We prove that a modified version of single
linkage is the only admissible quasi-clustering method. Moreover, we show
stability of the proposed method and we establish invariance properties
fulfilled by it. Algorithms are further developed and the value of
quasi-clustering analysis is illustrated with a study of internal migration
within the United States.
| [
"['Gunnar Carlsson' 'Facundo Mémoli' 'Alejandro Ribeiro' 'Santiago Segarra']",
"Gunnar Carlsson, Facundo M\\'emoli, Alejandro Ribeiro, Santiago Segarra"
]
|
stat.ML cs.IT cs.LG math.IT | 10.1109/TSP.2015.2417491 | 1404.4667 | null | null | http://arxiv.org/abs/1404.4667v1 | 2014-04-17T22:55:08Z | 2014-04-17T22:55:08Z | Subspace Learning and Imputation for Streaming Big Data Matrices and
Tensors | Extracting latent low-dimensional structure from high-dimensional data is of
paramount importance in timely inference tasks encountered with `Big Data'
analytics. However, increasingly noisy, heterogeneous, and incomplete datasets
as well as the need for {\em real-time} processing of streaming data pose major
challenges to this end. In this context, the present paper permeates benefits
from rank minimization to scalable imputation of missing data, via tracking
low-dimensional subspaces and unraveling latent (possibly multi-way) structure
from \emph{incomplete streaming} data. For low-rank matrix data, a subspace
estimator is proposed based on an exponentially-weighted least-squares
criterion regularized with the nuclear norm. After recasting the non-separable
nuclear norm into a form amenable to online optimization, real-time algorithms
with complementary strengths are developed and their convergence is established
under simplifying technical assumptions. In a stationary setting, the
asymptotic estimates obtained offer the well-documented performance guarantees
of the {\em batch} nuclear-norm regularized estimator. Under the same unifying
framework, a novel online (adaptive) algorithm is developed to obtain multi-way
decompositions of \emph{low-rank tensors} with missing entries, and perform
imputation as a byproduct. Simulated tests with both synthetic as well as real
Internet and cardiac magnetic resonance imagery (MRI) data confirm the efficacy
of the proposed algorithms, and their superior performance relative to
state-of-the-art alternatives.
| [
"Morteza Mardani, Gonzalo Mateos, and Georgios B. Giannakis",
"['Morteza Mardani' 'Gonzalo Mateos' 'Georgios B. Giannakis']"
]
|
cs.LG cs.DS | null | 1404.4702 | null | null | http://arxiv.org/pdf/1404.4702v3 | 2019-06-01T20:01:23Z | 2014-04-18T06:49:49Z | Tight Bounds on $\ell_1$ Approximation and Learning of Self-Bounding
Functions | We study the complexity of learning and approximation of self-bounding
functions over the uniform distribution on the Boolean hypercube $\{0,1\}^n$.
Informally, a function $f:\{0,1\}^n \rightarrow \mathbb{R}$ is self-bounding if
for every $x \in \{0,1\}^n$, $f(x)$ upper bounds the sum of all the $n$ marginal
decreases in the value of the function at $x$. Self-bounding functions include
such well-known classes of functions as submodular and fractionally-subadditive
(XOS) functions. They were introduced by Boucheron et al. (2000) in the context
of concentration of measure inequalities. Our main result is a nearly tight
$\ell_1$-approximation of self-bounding functions by low-degree juntas.
Specifically, all self-bounding functions can be $\epsilon$-approximated in
$\ell_1$ by a polynomial of degree $\tilde{O}(1/\epsilon)$ over
$2^{\tilde{O}(1/\epsilon)}$ variables. We show that both the degree and
junta-size are optimal up to logarithmic terms. Previous techniques considered
stronger $\ell_2$ approximation and proved nearly tight bounds of
$\Theta(1/\epsilon^{2})$ on the degree and $2^{\Theta(1/\epsilon^2)}$ on the
number of variables. Our bounds rely on the analysis of noise stability of
self-bounding functions together with a stronger connection between noise
stability and $\ell_1$ approximation by low-degree polynomials. This technique
can also be used to get tighter bounds on $\ell_1$ approximation by low-degree
polynomials and faster learning algorithm for halfspaces.
These results lead to improved and in several cases almost tight bounds for
PAC and agnostic learning of self-bounding functions relative to the uniform
distribution. In particular, assuming hardness of learning juntas, we show that
PAC and agnostic learning of self-bounding functions have complexity of
$n^{\tilde{\Theta}(1/\epsilon)}$.
| [
"Vitaly Feldman, Pravesh Kothari and Jan Vondr\\'ak",
"['Vitaly Feldman' 'Pravesh Kothari' 'Jan Vondrák']"
]
|
cs.CE astro-ph.IM cs.LG | 10.1088/0004-637X/793/1/23 | 1404.4888 | null | null | http://arxiv.org/abs/1404.4888v3 | 2015-05-27T21:27:11Z | 2014-04-18T21:12:13Z | Supervised detection of anomalous light-curves in massive astronomical
catalogs | The development of synoptic sky surveys has led to a massive amount of data
for which resources needed for analysis are beyond human capabilities. To
process this information and to extract all possible knowledge, machine
learning techniques become necessary. Here we present a new method to
automatically discover unknown variable objects in large astronomical catalogs.
With the aim of taking full advantage of all the information we have about
known objects, our method is based on a supervised algorithm. In particular, we
train a random forest classifier using known variability classes of objects and
obtain votes for each of the objects in the training set. We then model this
voting distribution with a Bayesian network and obtain the joint voting
distribution among the training objects. Consequently, an unknown object is
considered an outlier insofar as it has a low joint probability. Our method is
suitable for exploring massive datasets given that the training process is
performed offline. We tested our algorithm on 20 million light-curves from the
MACHO catalog and generated a list of anomalous candidates. We divided the
candidates into two main classes of outliers: artifacts and intrinsic outliers.
Artifacts were principally due to air mass variation, seasonal variation, bad
calibration or instrumental errors and were consequently removed from our
outlier list and added to the training set. After retraining, we selected about
4000 objects, which we passed to a post-analysis stage by performing a
cross-match with all publicly available catalogs. Within these candidates we
identified certain known but rare objects such as eclipsing Cepheids, blue
variables, cataclysmic variables and X-ray sources. For some outliers there
was no additional information. Among them we identified three unknown
variability types and a few individual outliers that will be followed up for a
deeper analysis.
| [
"['Isadora Nun' 'Karim Pichara' 'Pavlos Protopapas' 'Dae-Won Kim']",
"Isadora Nun, Karim Pichara, Pavlos Protopapas, Dae-Won Kim"
]
|
cs.AI cs.LG cs.MS | null | 1404.4893 | null | null | http://arxiv.org/pdf/1404.4893v1 | 2014-04-18T21:48:34Z | 2014-04-18T21:48:34Z | CTBNCToolkit: Continuous Time Bayesian Network Classifier Toolkit | Continuous time Bayesian network classifiers are designed for temporal
classification of multivariate streaming data when time duration of events
matters and the class does not change over time. This paper introduces the
CTBNCToolkit: an open source Java toolkit which provides a stand-alone
application for temporal classification and a library for continuous time
Bayesian network classifiers. CTBNCToolkit implements the inference algorithm,
the parameter learning algorithm, and the structural learning algorithm for
continuous time Bayesian network classifiers. The structural learning algorithm
is based on scoring functions: the marginal log-likelihood score and the
conditional log-likelihood score are provided. CTBNCToolkit provides also an
implementation of the expectation maximization algorithm for clustering
purposes. The paper introduces continuous time Bayesian network classifiers. How
to use the CTBNCToolkit from the command line is described in a specific
section. Tutorial examples are included to help users understand how the
toolkit should be used. A section dedicated to the Java library is provided to
support further code extensions.
| [
"['Daniele Codecasa' 'Fabio Stella']",
"Daniele Codecasa and Fabio Stella"
]
|
cs.LG | null | 1404.4960 | null | null | http://arxiv.org/pdf/1404.4960v2 | 2014-07-11T06:30:18Z | 2014-04-19T14:57:54Z | Agent Behavior Prediction and Its Generalization Analysis | Machine learning algorithms have been applied to predict agent behaviors in
real-world dynamic systems, such as advertiser behaviors in sponsored search
and worker behaviors in crowdsourcing. The behavior data in these systems are
generated by live agents: once the systems change due to the adoption of the
prediction models learnt from the behavior data, agents will observe and
respond to these changes by changing their own behaviors accordingly. As a
result, the behavior data will evolve and will not be identically and
independently distributed, posing great challenges to the theoretical analysis
on the machine learning algorithms for behavior prediction. To tackle this
challenge, in this paper, we propose to use Markov Chain in Random Environments
(MCRE) to describe the behavior data, and perform generalization analysis of
the machine learning algorithms on its basis. Since the one-step transition
probability matrix of MCRE depends on both previous states and the random
environment, conventional techniques for generalization analysis cannot be
directly applied. To address this issue, we propose a novel technique that
transforms the original MCRE into a higher-dimensional time-homogeneous Markov
chain. The new Markov chain involves more variables but is more regular, and
thus easier to deal with. We prove the convergence of the new Markov chain when
time approaches infinity. Then we prove a generalization bound for the machine
learning algorithms on the behavior data generated by the new Markov chain,
which depends on both the Markovian parameters and the covering number of the
function class compounded by the loss function for behavior prediction and the
behavior prediction model. To the best of our knowledge, this is the first work
that performs the generalization analysis on data generated by complex
processes in real-world dynamic systems.
| [
"Fei Tian, Haifang Li, Wei Chen, Tao Qin, Enhong Chen, Tie-Yan Liu",
"['Fei Tian' 'Haifang Li' 'Wei Chen' 'Tao Qin' 'Enhong Chen' 'Tie-Yan Liu']"
]
|
cs.LG cs.DS stat.ML | null | 1404.4997 | null | null | http://arxiv.org/pdf/1404.4997v3 | 2015-05-17T04:47:58Z | 2014-04-19T23:59:35Z | Tight bounds for learning a mixture of two gaussians | We consider the problem of identifying the parameters of an unknown mixture
of two arbitrary $d$-dimensional gaussians from a sequence of independent
random samples. Our main results are upper and lower bounds giving a
computationally efficient moment-based estimator with an optimal convergence
rate, thus resolving a problem introduced by Pearson (1894). Denoting by
$\sigma^2$ the variance of the unknown mixture, we prove that
$\Theta(\sigma^{12})$ samples are necessary and sufficient to estimate each
parameter up to constant additive error when $d=1.$ Our upper bound extends to
arbitrary dimension $d>1$ up to a (provably necessary) logarithmic loss in $d$
using a novel---yet simple---dimensionality reduction technique. We further
identify several interesting special cases where the sample complexity is
notably smaller than our optimal worst-case bound. For instance, if the means
of the two components are separated by $\Omega(\sigma)$ the sample complexity
reduces to $O(\sigma^2)$ and this is again optimal.
Our results also apply to learning each component of the mixture up to small
error in total variation distance, where our algorithm gives strong
improvements in sample complexity over previous work. We also extend our lower
bound to mixtures of $k$ Gaussians, showing that $\Omega(\sigma^{6k-2})$
samples are necessary to estimate each parameter up to constant additive error.
| [
"['Moritz Hardt' 'Eric Price']",
"Moritz Hardt and Eric Price"
]
|
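A hedged illustration of moment-based estimation in the simplest case the paper generalizes: an equal-weight 1-D mixture of two Gaussians with common variance. Matching the second and fourth central moments gives closed-form estimates; this toy derivation is ours, not the paper's general estimator.

```python
import numpy as np

def fit_symmetric_mixture(x):
    """Equal-weight mixture of N(mu-d, s2) and N(mu+d, s2), by moments."""
    mu = x.mean()
    c2 = np.mean((x - mu) ** 2)        # 2nd central moment (mixture variance)
    c4 = np.mean((x - mu) ** 4)        # 4th central moment
    # For this mixture: c2 = s2 + d^2 and c4 = d^4 + 6 d^2 s2 + 3 s2^2,
    # which gives d^4 = (3 c2^2 - c4) / 2.
    d = max((3 * c2 ** 2 - c4) / 2, 0.0) ** 0.25
    s2 = max(c2 - d ** 2, 0.0)
    return mu - d, mu + d, s2

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-1, 0.5, 50_000), rng.normal(1, 0.5, 50_000)])
print(fit_symmetric_mixture(x))        # roughly (-1.0, 1.0, 0.25)
```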
cs.CV cs.LG cs.NA | null | 1404.5009 | null | null | http://arxiv.org/pdf/1404.5009v4 | 2015-09-09T04:35:30Z | 2014-04-20T04:47:04Z | Efficient Semidefinite Branch-and-Cut for MAP-MRF Inference | We propose a Branch-and-Cut (B&C) method for solving general MAP-MRF
inference problems. The core of our method is a very efficient bounding
procedure, which combines scalable semidefinite programming (SDP) and a
cutting-plane method for seeking violated constraints. In order to further
speed up the computation, several strategies have been exploited, including
model reduction, warm start and removal of inactive constraints.
We analyze the performance of the proposed method under different settings,
and demonstrate that our method either outperforms or performs on par with
state-of-the-art approaches. Especially when the connectivities are dense or
when the relative magnitudes of the unary costs are low, we achieve the best
reported results. Experiments show that the proposed algorithm achieves better
approximation than the state-of-the-art methods within a variety of time
budgets on challenging non-submodular MAP-MRF inference problems.
| [
"['Peng Wang' 'Chunhua Shen' 'Anton van den Hengel' 'Philip Torr']",
"Peng Wang, Chunhua Shen, Anton van den Hengel, Philip Torr"
]
|
cs.LG | 10.1007/978-3-662-44845-8_15 | 1404.5065 | null | null | http://arxiv.org/abs/1404.5065v1 | 2014-04-20T19:17:23Z | 2014-04-20T19:17:23Z | Multi-Target Regression via Random Linear Target Combinations | Multi-target regression is concerned with the simultaneous prediction of
multiple continuous target variables based on the same set of input variables.
It arises in several interesting industrial and environmental application
domains, such as ecological modelling and energy forecasting. This paper
presents an ensemble method for multi-target regression that constructs new
target variables via random linear combinations of existing targets. We discuss
the connection of our approach with multi-label classification algorithms, in
particular RA$k$EL, which originally inspired this work, and a family of recent
multi-label classification algorithms that involve output coding. Experimental
results on 12 multi-target datasets show that it performs significantly better
than a strong baseline that learns a single model for each target using
gradient boosting and compares favourably to the multi-objective random forest
approach, which is a state-of-the-art approach. The experiments further show
that our approach improves more when stronger unconditional dependencies exist
among the targets.
| [
"Grigorios Tsoumakas, Eleftherios Spyromitros-Xioufis, Aikaterini\n Vrekou, Ioannis Vlahavas",
"['Grigorios Tsoumakas' 'Eleftherios Spyromitros-Xioufis'\n 'Aikaterini Vrekou' 'Ioannis Vlahavas']"
]
|
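A minimal sketch of the ensemble construction described above: fit one gradient-boosting model per random linear combination of the targets, then recover per-target predictions by least squares. Gaussian combination weights, the number of combinations, and the toy data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 10))
Y = X[:, :3] @ rng.standard_normal((3, 4)) + 0.1 * rng.standard_normal((300, 4))

k = 12                                            # number of random combinations
C = rng.standard_normal((Y.shape[1], k))          # random combination weights
# One single-output model per combined target Y @ C[:, j].
models = [GradientBoostingRegressor().fit(X, Y @ C[:, j]) for j in range(k)]

X_test = rng.standard_normal((5, 10))
Z = np.column_stack([m.predict(X_test) for m in models])    # (5, k)
# Recover targets: solve Y_hat @ C ~= Z by least squares (per test row).
Y_hat = np.linalg.lstsq(C.T, Z.T, rcond=None)[0].T
print(Y_hat.shape)                                # (5, 4) target predictions
```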
cs.IT cs.LG math.IT stat.ML | 10.1109/TNSRE.2014.2319334 | 1404.5122 | null | null | http://arxiv.org/abs/1404.5122v2 | 2014-11-15T01:53:29Z | 2014-04-21T06:35:57Z | Spatiotemporal Sparse Bayesian Learning with Applications to Compressed
Sensing of Multichannel Physiological Signals | Energy consumption is an important issue in continuous wireless
telemonitoring of physiological signals. Compressed sensing (CS) is a promising
framework to address it, due to its energy-efficient data compression
procedure. However, most CS algorithms have difficulty in data recovery due to
non-sparsity characteristic of many physiological signals. Block sparse
Bayesian learning (BSBL) is an effective approach to recover such signals with
satisfactory recovery quality. However, it is time-consuming in recovering
multichannel signals, since its computational load increases almost linearly
with the number of channels.
This work proposes a spatiotemporal sparse Bayesian learning algorithm to
recover multichannel signals simultaneously. It not only exploits temporal
correlation within each channel signal, but also exploits inter-channel
correlation among different channel signals. Furthermore, its computational
load is not significantly affected by the number of channels. The proposed
algorithm was applied to brain computer interface (BCI) and EEG-based driver's
drowsiness estimation. Results showed that the algorithm had both better
recovery performance and much higher speed than BSBL. Particularly, the
proposed algorithm ensured that the BCI classification and the drowsiness
estimation had little degradation even when data were compressed by 80%, making
it very suitable for continuous wireless telemonitoring of multichannel
signals.
| [
"['Zhilin Zhang' 'Tzyy-Ping Jung' 'Scott Makeig' 'Zhouyue Pi'\n 'Bhaskar D. Rao']",
"Zhilin Zhang, Tzyy-Ping Jung, Scott Makeig, Zhouyue Pi, Bhaskar D. Rao"
]
|
cs.RO cs.LG stat.ML | null | 1404.5165 | null | null | http://arxiv.org/pdf/1404.5165v2 | 2014-04-22T08:03:33Z | 2014-04-21T10:28:00Z | GP-Localize: Persistent Mobile Robot Localization using Online Sparse
Gaussian Process Observation Model | Central to robot exploration and mapping is the task of persistent
localization in environmental fields characterized by spatially correlated
measurements. This paper presents a Gaussian process localization (GP-Localize)
algorithm that, in contrast to existing works, can exploit the spatially
correlated field measurements taken during a robot's exploration (instead of
relying on prior training data) for efficiently and scalably learning the GP
observation model online through our proposed novel online sparse GP. As a
result, GP-Localize is capable of achieving constant time and memory (i.e.,
independent of the size of the data) per filtering step, which demonstrates the
practical feasibility of using GPs for persistent robot localization and
autonomy. Empirical evaluation via simulated experiments with real-world
datasets and a real robot experiment shows that GP-Localize outperforms
existing GP localization algorithms.
| [
"Nuo Xu, Kian Hsiang Low, Jie Chen, Keng Kiat Lim, Etkin Baris Ozgul",
"['Nuo Xu' 'Kian Hsiang Low' 'Jie Chen' 'Keng Kiat Lim' 'Etkin Baris Ozgul']"
]
|
cs.LG cs.AI stat.ML | null | 1404.5214 | null | null | http://arxiv.org/pdf/1404.5214v1 | 2014-04-21T14:56:17Z | 2014-04-21T14:56:17Z | Graph Kernels via Functional Embedding | We propose a representation of graph as a functional object derived from the
power iteration of the underlying adjacency matrix. The proposed functional
representation is a graph invariant, i.e., the functional remains unchanged
under any reordering of the vertices. This property eliminates the difficulty
of handling exponentially many isomorphic forms. Bhattacharyya kernel
constructed between these functionals significantly outperforms the
state-of-the-art graph kernels on 3 out of the 4 standard benchmark graph
classification datasets, demonstrating the superiority of our approach. The
proposed methodology is simple and runs in time linear in the number of edges,
which makes our kernel more efficient and scalable compared to many widely
adopted graph kernels with running time cubic in the number of vertices.
| [
"['Anshumali Shrivastava' 'Ping Li']",
"Anshumali Shrivastava and Ping Li"
]
|
cs.DS cs.CC cs.LG math.OC | null | 1404.5236 | null | null | http://arxiv.org/pdf/1404.5236v2 | 2014-05-27T17:52:52Z | 2014-04-21T16:24:13Z | Sum-of-squares proofs and the quest toward optimal algorithms | In order to obtain the best-known guarantees, algorithms are traditionally
tailored to the particular problem we want to solve. Two recent developments,
the Unique Games Conjecture (UGC) and the Sum-of-Squares (SOS) method,
surprisingly suggest that this tailoring is not necessary and that a single
efficient algorithm could achieve best possible guarantees for a wide range of
different problems.
The Unique Games Conjecture (UGC) is a tantalizing conjecture in
computational complexity, which, if true, will shed light on the complexity of
a great many problems. In particular this conjecture predicts that a single
concrete algorithm provides optimal guarantees among all efficient algorithms
for a large class of computational problems.
The Sum-of-Squares (SOS) method is a general approach for solving systems of
polynomial constraints. This approach is studied in several scientific
disciplines, including real algebraic geometry, proof complexity, control
theory, and mathematical programming, and has found applications in fields as
diverse as quantum information theory, formal verification, game theory and
many others.
We survey some connections that were recently uncovered between the Unique
Games Conjecture and the Sum-of-Squares method. In particular, we discuss new
tools to rigorously bound the running time of the SOS method for obtaining
approximate solutions to hard optimization problems, and how these tools give
the potential for the sum-of-squares method to provide new guarantees for many
problems of interest, and possibly to even refute the UGC.
| [
"Boaz Barak and David Steurer",
"['Boaz Barak' 'David Steurer']"
]
|
cs.LG cs.MA | null | 1404.5421 | null | null | http://arxiv.org/pdf/1404.5421v1 | 2014-04-22T08:30:56Z | 2014-04-22T08:30:56Z | Concurrent bandits and cognitive radio networks | We consider the problem of multiple users targeting the arms of a single
multi-armed stochastic bandit. The motivation for this problem comes from
cognitive radio networks, where selfish users need to coexist without any side
communication between them, implicit cooperation or common control. Even the
number of users may be unknown and can vary as users join or leave the network.
We propose an algorithm that combines an $\epsilon$-greedy learning rule with a
collision avoidance mechanism. We analyze its regret with respect to the
system-wide optimum and show that sub-linear regret can be obtained in this
setting. Experiments show dramatic improvement compared to other algorithms for
this setting.
| [
"Orly Avner and Shie Mannor",
"['Orly Avner' 'Shie Mannor']"
]
|
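To convey the setting, here is a hedged toy simulation in which several epsilon-greedy users share one stochastic bandit, colliding users receive zero reward, and a collision triggers a random re-draw of the preferred arm. This captures only the flavor of the paper's mechanism, not its exact algorithm or regret-optimal parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_arms, n_users, T, eps = 6, 3, 20_000, 0.05
mu = np.sort(rng.random(n_arms))[::-1]            # true (unknown) arm means
counts = np.zeros((n_users, n_arms))
sums = np.zeros((n_users, n_arms))
prefs = rng.integers(0, n_arms, n_users)          # current preferred arm

for t in range(T):
    picks = np.array([rng.integers(n_arms) if rng.random() < eps
                      else prefs[u] for u in range(n_users)])
    for u in range(n_users):
        a = picks[u]
        collided = (picks == a).sum() > 1          # another user on same arm
        r = 0.0 if collided else float(rng.random() < mu[a])
        counts[u, a] += 1
        sums[u, a] += r
        if collided:
            prefs[u] = rng.integers(n_arms)        # avoidance: jump elsewhere
        else:
            prefs[u] = np.argmax(sums[u] / np.maximum(counts[u], 1))

print("arms settled on:", prefs)                   # ideally three distinct top arms
```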
cs.FL cs.DS cs.LG | null | 1404.5475 | null | null | http://arxiv.org/pdf/1404.5475v2 | 2014-11-01T13:29:52Z | 2014-04-22T12:44:42Z | Combining pattern-based CRFs and weighted context-free grammars | We consider two models for the sequence labeling (tagging) problem. The first
one is a {\em Pattern-Based Conditional Random Field }(\PB), in which the
energy of a string (chain labeling) $x=x_1\ldots x_n\in D^n$ is a sum of terms
over intervals $[i,j]$ where each term is non-zero only if the substring
$x_i\ldots x_j$ equals a prespecified word $w\in \Lambda$. The second model is
a {\em Weighted Context-Free Grammar }(\WCFG) frequently used for natural
language processing. \PB and \WCFG encode local and non-local interactions
respectively, and thus can be viewed as complementary.
We propose a {\em Grammatical Pattern-Based CRF model }(\GPB) that combines
the two in a natural way. We argue that it has certain advantages over existing
approaches such as the {\em Hybrid model} of Bened{\'i} and Sanchez that
combines {\em $\mbox{$N$-grams}$} and \WCFGs. The focus of this paper is to
analyze the complexity of inference tasks in a \GPB such as computing MAP. We
present a polynomial-time algorithm for general \GPBs and a faster version for
a special case that we call {\em Interaction Grammars}.
| [
"['Rustem Takhanov' 'Vladimir Kolmogorov']",
"Rustem Takhanov and Vladimir Kolmogorov"
]
|
cs.LG | null | 1404.5511 | null | null | http://arxiv.org/pdf/1404.5511v1 | 2014-04-18T21:17:04Z | 2014-04-18T21:17:04Z | Coactive Learning for Locally Optimal Problem Solving | Coactive learning is an online problem solving setting where the solutions
provided by a solver are interactively improved by a domain expert, which in
turn drives learning. In this paper we extend the study of coactive learning to
problems where obtaining a globally optimal or near-optimal solution may be
intractable or where an expert can only be expected to make small, local
improvements to a candidate solution. The goal of learning in this new setting
is to minimize the cost as measured by the expert effort over time. We first
establish theoretical bounds on the average cost of the existing coactive
Perceptron algorithm. In addition, we consider new online algorithms that use
cost-sensitive and Passive-Aggressive (PA) updates, showing similar or improved
theoretical bounds. We provide an empirical evaluation of the learners in
various domains, which show that the Perceptron based algorithms are quite
effective and that unlike the case for online classification, the PA algorithms
do not yield significant performance gains.
| [
"Robby Goetschalckx, Alan Fern, Prasad Tadepalli",
"['Robby Goetschalckx' 'Alan Fern' 'Prasad Tadepalli']"
]
|
cs.SI cs.CY cs.LG cs.MA | 10.1109/ICADIWT.2014.6814694 | 1404.5521 | null | null | http://arxiv.org/abs/1404.5521v1 | 2014-04-22T15:12:17Z | 2014-04-22T15:12:17Z | Together we stand, Together we fall, Together we win: Dynamic Team
Formation in Massive Open Online Courses | Massive Open Online Courses (MOOCs) offer a new scalable paradigm for
e-learning by providing students with global exposure and opportunities for
connecting and interacting with millions of people all around the world. Very
often, students work as teams to effectively accomplish course-related tasks.
However, due to the lack of face-to-face interaction, it becomes difficult for MOOC
students to collaborate. Additionally, the instructor also faces challenges in
manually organizing students into teams because students flock to these MOOCs
in huge numbers. Thus, the proposed research is aimed at developing a robust
methodology for dynamic team formation in MOOCs, the theoretical framework for
which is grounded at the confluence of organizational team theory, social
network analysis and machine learning. A prerequisite for such an undertaking
is that we understand that each and every informal tie established
among students offers opportunities to influence and be influenced.
Therefore, we aim to extract value from the inherent connectedness of students
in the MOOC. These connections carry with them radical implications for the way
students understand each other in the networked learning community. Our
approach will enable course instructors to automatically group students in
teams that have fairly balanced social connections with their peers, well
defined in terms of appropriately selected qualitative and quantitative network
metrics.
| [
"Tanmay Sinha",
"['Tanmay Sinha']"
]
|
cs.DS cs.LG math.OC stat.ML | 10.1109/TSP.2015.2461515 | 1404.5692 | null | null | http://arxiv.org/abs/1404.5692v2 | 2015-07-23T01:33:16Z | 2014-04-23T03:31:45Z | Forward - Backward Greedy Algorithms for Atomic Norm Regularization | In many signal processing applications, the aim is to reconstruct a signal
that has a simple representation with respect to a certain basis or frame.
Fundamental elements of the basis known as "atoms" allow us to define "atomic
norms" that can be used to formulate convex regularizations for the
reconstruction problem. Efficient algorithms are available to solve these
formulations in certain special cases, but an approach that works well for
general atomic norms, both in terms of speed and reconstruction accuracy,
remains to be found. This paper describes an optimization algorithm called
CoGEnT that produces solutions with succinct atomic representations for
reconstruction problems, generally formulated with atomic-norm constraints.
CoGEnT combines a greedy selection scheme based on the conditional gradient
approach with a backward (or "truncation") step that exploits the quadratic
nature of the objective to reduce the basis size. We establish convergence
properties and validate the algorithm via extensive numerical experiments on a
suite of signal processing applications. Our algorithm and analysis also allow
for inexact forward steps and for occasional enhancements of the current
representation to be performed. CoGEnT can outperform the basic conditional
gradient method, and indeed many methods that are tailored to specific
applications, when the enhancement and truncation steps are defined
appropriately. We also introduce several novel applications that are enabled by
the atomic-norm framework, including tensor completion, moment problems in
signal processing, and graph deconvolution.
| [
"['Nikhil Rao' 'Parikshit Shah' 'Stephen Wright']",
"Nikhil Rao, Parikshit Shah, Stephen Wright"
]
|
cs.IR cs.LG cs.NE | null | 1404.5772 | null | null | http://arxiv.org/pdf/1404.5772v3 | 2014-07-28T13:59:03Z | 2014-04-23T10:14:41Z | Sequential Click Prediction for Sponsored Search with Recurrent Neural
Networks | Click prediction is one of the fundamental problems in sponsored search. Most
existing studies took advantage of machine learning approaches to predict ad
clicks for each ad-view event independently. However, as observed in
real-world sponsored search systems, a user's behavior on ads depends strongly
on how the user behaved in the past, especially in
terms of what queries she submitted, what ads she clicked or ignored, and how
long she spent on the landing pages of clicked ads, etc. Inspired by these
observations, we introduce a novel framework based on Recurrent Neural Networks
(RNN). Compared to traditional methods, this framework directly models the
dependency on the user's sequential behaviors into the click prediction process
through the recurrent structure in RNN. Large scale evaluations on the
click-through logs from a commercial search engine demonstrate that our
approach can significantly improve the click prediction accuracy, compared to
sequence-independent approaches.
| [
"['Yuyu Zhang' 'Hanjun Dai' 'Chang Xu' 'Jun Feng' 'Taifeng Wang'\n 'Jiang Bian' 'Bin Wang' 'Tie-Yan Liu']",
"Yuyu Zhang, Hanjun Dai, Chang Xu, Jun Feng, Taifeng Wang, Jiang Bian,\n Bin Wang and Tie-Yan Liu"
]
|
math.NA cs.LG | null | 1404.5899 | null | null | http://arxiv.org/pdf/1404.5899v1 | 2014-04-22T05:04:00Z | 2014-04-22T05:04:00Z | A Comparison of Clustering and Missing Data Methods for Health Sciences | In this paper, we compare and analyze clustering methods with missing data in
health behavior research. In particular, we propose and analyze the use of
compressive sensing's matrix completion along with spectral clustering to
cluster health related data. The empirical tests and real data results show
that these methods can outperform standard methods like LPA and FIML, in terms
of lower misclassification rates in clustering and better matrix completion
performance in missing data problems. A possible explanation for these
improvements is that spectral clustering takes advantage of the high
dimensionality of the data, while compressive sensing methods exploit the
near-low-rank structure of health data.
| [
"['Ran Zhao' 'Deanna Needell' 'Christopher Johansen' 'Jerry L. Grenard']",
"Ran Zhao, Deanna Needell, Christopher Johansen, Jerry L. Grenard"
]
|
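A rough sketch of the pipeline, assuming a simple SVD-based imputation in place of whichever compressive sensing completion solver the authors used, together with scikit-learn's spectral clustering:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def soft_impute(X, mask, rank=5, iters=50):
    """Fill missing entries (mask == False) by iterating a rank-r SVD fit;
    a simple stand-in for compressive sensing matrix completion."""
    Z = np.where(mask, X, X[mask].mean())
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        Z = np.where(mask, X, low_rank)       # keep observed entries fixed
    return Z

# Toy health-style data: two latent groups, 20% of entries missing.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 10)), rng.normal(3, 1, (30, 10))])
mask = rng.random(X.shape) > 0.2
labels = SpectralClustering(n_clusters=2, random_state=0).fit_predict(
    soft_impute(X, mask))
```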
stat.ML cs.LG | null | 1404.5903 | null | null | http://arxiv.org/pdf/1404.5903v1 | 2014-04-23T17:25:02Z | 2014-04-23T17:25:02Z | Most Correlated Arms Identification | We study the problem of finding the most mutually correlated arms among many
arms. We show that adaptive arms sampling strategies can have significant
advantages over the non-adaptive uniform sampling strategy. Our proposed
algorithms rely on a novel correlation estimator. The use of this accurate
estimator allows us to get improved results for a wide range of problem
instances.
| [
"['Che-Yu Liu' 'Sébastien Bubeck']",
"Che-Yu Liu, S\\'ebastien Bubeck"
]
|
cs.NE cs.DC cs.LG | null | 1404.5997 | null | null | http://arxiv.org/pdf/1404.5997v2 | 2014-04-26T23:10:51Z | 2014-04-23T22:37:56Z | One weird trick for parallelizing convolutional neural networks | I present a new way to parallelize the training of convolutional neural
networks across multiple GPUs. The method scales significantly better than all
alternatives when applied to modern convolutional neural networks.
| [
"['Alex Krizhevsky']",
"Alex Krizhevsky"
]
|
cs.LG stat.ML | null | 1404.6074 | null | null | http://arxiv.org/pdf/1404.6074v1 | 2014-04-24T10:22:33Z | 2014-04-24T10:22:33Z | Classifying pairs with trees for supervised biological network inference | Networks are ubiquitous in biology and computational approaches have been
widely investigated for their inference. In particular, supervised machine
learning methods can be used to complete a partially known network by
integrating various measurements. Two main supervised frameworks have been
proposed: the local approach, which trains a separate model for each network
node, and the global approach, which trains a single model over pairs of nodes.
Here, we systematically investigate, theoretically and empirically, the
exploitation of tree-based ensemble methods in the context of these two
approaches for biological network inference. We first formalize the problem of
network inference as classification of pairs, unifying in the process
homogeneous and bipartite graphs and discussing two main sampling schemes. We
then present the global and the local approaches, extending the latter for the
prediction of interactions between two unseen network nodes, and discuss their
specializations to tree-based ensemble methods, highlighting their
interpretability and drawing links with clustering techniques. Extensive
computational experiments on various biological networks clearly show that
these methods are competitive with existing approaches.
| [
"Marie Schrynemackers, Louis Wehenkel, M. Madan Babu and Pierre Geurts",
"['Marie Schrynemackers' 'Louis Wehenkel' 'M. Madan Babu' 'Pierre Geurts']"
]
|
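A minimal sketch of the global approach with a tree-based ensemble: each training example is a pair of nodes, featurized symmetrically for a homogeneous graph. The feature construction and sampling below are simplistic placeholders; the paper's schemes are richer.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pair_features(fa, fb):
    """Symmetrize a pair of node feature vectors for a homogeneous graph
    (for a bipartite graph one would simply concatenate them)."""
    return np.concatenate([np.minimum(fa, fb), np.maximum(fa, fb)])

# Toy data: node features plus known interacting / non-interacting pairs.
rng = np.random.default_rng(0)
feats = rng.standard_normal((20, 6))
pos = [(0, 1), (2, 3), (4, 5)]            # known edges
neg = [(0, 9), (7, 12), (3, 15)]          # known non-edges
X = np.array([pair_features(feats[i], feats[j]) for i, j in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
# Score a candidate pair of nodes never seen together during training:
p_edge = model.predict_proba(pair_features(feats[6], feats[8])[None])[0, 1]
```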
cs.LG | null | 1404.6163 | null | null | http://arxiv.org/pdf/1404.6163v2 | 2014-04-27T14:59:53Z | 2014-04-24T15:50:02Z | Overlapping Trace Norms in Multi-View Learning | Multi-view learning leverages correlations between different sources of data
to make predictions in one view based on observations in another view. A
popular approach is to assume that both the correlations between the views
and the view-specific covariances have a low-rank structure, leading to
inter-battery factor analysis, a model closely related to canonical correlation
analysis. We propose a convex relaxation of this model using structured norm
regularization. Further, we extend the convex formulation to a robust version
by adding an l1-penalized matrix to our estimator, similarly to convex robust
PCA. We develop and compare scalable algorithms for several convex multi-view
models. We show experimentally that modeling the view-specific correlations
improves data imputation performance, as well as labeling accuracy, in
real-world multi-label prediction tasks.
| [
"['Behrouz Behmardi' 'Cedric Archambeau' 'Guillaume Bouchard']",
"Behrouz Behmardi, Cedric Archambeau, Guillaume Bouchard"
]
|
stat.ML cs.DS cs.LG stat.ME | null | 1404.6216 | null | null | http://arxiv.org/pdf/1404.6216v1 | 2014-04-24T18:35:37Z | 2014-04-24T18:35:37Z | CoRE Kernels | The term "CoRE kernel" stands for correlation-resemblance kernel. In many
applications (e.g., vision), the data are often high-dimensional, sparse, and
non-binary. We propose two types of (nonlinear) CoRE kernels for non-binary
sparse data and demonstrate the effectiveness of the new kernels through a
classification experiment. CoRE kernels are simple with no tuning parameters.
However, training nonlinear kernel SVM can be (very) costly in time and memory
and may not be suitable for truly large-scale industrial applications (e.g.
search). In order to make the proposed CoRE kernels more practical, we develop
basic probabilistic hashing algorithms which transform nonlinear kernels into
linear kernels.
| [
"Ping Li",
"['Ping Li']"
]
|
cs.CV cs.LG | null | 1404.6272 | null | null | http://arxiv.org/pdf/1404.6272v1 | 2014-04-24T21:23:41Z | 2014-04-24T21:23:41Z | Scalable Similarity Learning using Large Margin Neighborhood Embedding | Classifying large-scale image data into object categories is an important
problem that has received increasing research attention. Given the huge amount
of data, non-parametric approaches such as nearest neighbor classifiers have
shown promising results, especially when they are underpinned by a learned
distance or similarity measure. Although metric learning has been well
studied in the past decades, most existing algorithms are impractical for
large-scale data sets. In this paper, we present an image similarity learning
method that can scale well in both the number of images and the dimensionality
of image descriptors. To this end, similarity comparison is restricted to each
sample's local neighbors and a discriminative similarity measure is induced
from large margin neighborhood embedding. We also exploit the ensemble of
projections so that high-dimensional features can be processed in a set of
lower-dimensional subspaces in parallel without much performance compromise.
The similarity function is learned online using a stochastic gradient descent
algorithm in which the triplet sampling strategy is customized for quick
convergence of classification performance. The effectiveness of our proposed
model is validated on several data sets with scales varying from tens of
thousands to one million images. Recognition accuracies competitive with the
state-of-the-art performance are achieved with much higher efficiency and
scalability.
| [
"Zhaowen Wang, Jianchao Yang, Zhe Lin, Jonathan Brandt, Shiyu Chang,\n Thomas Huang",
"['Zhaowen Wang' 'Jianchao Yang' 'Zhe Lin' 'Jonathan Brandt' 'Shiyu Chang'\n 'Thomas Huang']"
]
|
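The online triplet step can be sketched as below. This assumes a bilinear similarity s(a, b) = (La) . (Lb) learned by SGD on a hinge loss, with uniform class-based triplet sampling standing in for the paper's customized neighborhood-restricted sampling, and it omits the ensemble of projections.

```python
import numpy as np

def train_triplet_similarity(X, y, dim_out=10, margin=1.0, lr=0.01,
                             steps=2000, seed=0):
    """SGD on a large-margin triplet loss for s(a, b) = (L a) . (L b)."""
    rng = np.random.default_rng(seed)
    L = rng.standard_normal((dim_out, X.shape[1])) * 0.1
    for _ in range(steps):
        i = rng.integers(len(X))
        pos = rng.choice(np.flatnonzero(y == y[i]))   # same class
        neg = rng.choice(np.flatnonzero(y != y[i]))   # different class
        a, p, n = X[i], X[pos], X[neg]
        s_pos = (L @ a) @ (L @ p)
        s_neg = (L @ a) @ (L @ n)
        if margin - s_pos + s_neg > 0:      # hinge active: take a step
            # d/dL [(La).(Lb)] = L (a b^T + b a^T)
            grad = -L @ (np.outer(a, p) + np.outer(p, a)) \
                   + L @ (np.outer(a, n) + np.outer(n, a))
            L -= lr * grad
    return L
```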
cs.SC cs.LG | 10.1007/978-3-319-08434-3_8 | 1404.6369 | null | null | http://arxiv.org/abs/1404.6369v1 | 2014-04-25T09:43:05Z | 2014-04-25T09:43:05Z | Applying machine learning to the problem of choosing a heuristic to
select the variable ordering for cylindrical algebraic decomposition | Cylindrical algebraic decomposition (CAD) is a key tool in computational
algebraic geometry, particularly for quantifier elimination over real-closed
fields. When using CAD, there is often a choice for the ordering placed on the
variables. This can be important, with some problems infeasible with one
variable ordering but easy with another. Machine learning is the process of
fitting a computer model to a complex function based on properties learned from
measured data. In this paper we use machine learning (specifically a support
vector machine) to select between heuristics for choosing a variable ordering,
outperforming each of the separate heuristics.
| [
"['Zongyan Huang' 'Matthew England' 'David Wilson' 'James H. Davenport'\n 'Lawrence C. Paulson' 'James Bridge']",
"Zongyan Huang, Matthew England, David Wilson, James H. Davenport,\n Lawrence C. Paulson and James Bridge"
]
|
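The machine learning side reduces to a standard classification task: featurize each polynomial system, label it with the heuristic that performed best, and train an SVM. The features and labels below are synthetic placeholders, not the ones used in the paper.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical problem features (e.g. degree statistics, variable
# occurrence counts) and a synthetic "best heuristic" label per problem.
rng = np.random.default_rng(0)
X = rng.random((200, 5))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # stand-in ground truth

clf = SVC(kernel="rbf", C=1.0).fit(X[:150], y[:150])
chosen_heuristic = clf.predict(X[150:])     # pick a heuristic per new problem
print("held-out accuracy:", (chosen_heuristic == y[150:]).mean())
```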
cs.LG | null | 1404.6580 | null | null | http://arxiv.org/pdf/1404.6580v2 | 2016-05-09T14:35:51Z | 2014-04-25T22:59:34Z | Multitask Learning for Sequence Labeling Tasks | In this paper, we present a learning method for sequence labeling tasks in
which each example sequence has multiple label sequences. Our method learns
multiple models, one model for each label sequence. Each model computes the
joint probability of all label sequences given the example sequence. Although
each model considers all label sequences, its primary focus is only one label
sequence, and therefore, each model becomes a task-specific model, for the task
belonging to that primary label. Such multiple models are learned {\it
simultaneously} by facilitating the learning transfer among models through {\it
explicit parameter sharing}. We evaluate the proposed method on two
applications and show that it significantly outperforms the state-of-the-art
method.
| [
"Arvind Agarwal, Saurabh Kataria",
"['Arvind Agarwal' 'Saurabh Kataria']"
]
|
cs.LG | null | 1404.6674 | null | null | http://arxiv.org/pdf/1404.6674v1 | 2014-04-26T19:24:24Z | 2014-04-26T19:24:24Z | A Comparison of First-order Algorithms for Machine Learning | Using an optimization algorithm to solve a machine learning problem is one of
the mainstream approaches in the field. In this work, we present a
comprehensive comparison of some state-of-the-art first-order optimization
algorithms for convex optimization problems in machine learning. We concentrate
on several smooth and non-smooth machine learning problems with a loss function
plus a regularizer. The overall experimental results show the superiority of
primal-dual algorithms in solving machine learning problems in terms of ease of
construction, running time, and accuracy.
| [
"Yu Wei and Pock Thomas",
"['Yu Wei' 'Pock Thomas']"
]
|
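As one example of the loss-plus-regularizer problems such comparisons cover, here is a sketch of ISTA (proximal gradient) for the lasso; the specific algorithms and problem instances benchmarked in the paper may differ.

```python
import numpy as np

def ista(A, y, lam, iters=500):
    """Proximal gradient (ISTA) for the lasso:
    min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz of gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - y)     # gradient step on the loss
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # l1 prox
    return x
```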
stat.ML cs.LG | null | 1404.6702 | null | null | http://arxiv.org/pdf/1404.6702v1 | 2014-04-27T01:46:49Z | 2014-04-27T01:46:49Z | A Constrained Matrix-Variate Gaussian Process for Transposable Data | Transposable data represent interactions between two sets of entities and are
typically stored as a matrix containing the known interaction values.
Additional side information may consist of feature vectors specific to entities
corresponding to the rows and/or columns of such a matrix. Further information
may also be available in the form of interactions or hierarchies among entities
along the same mode (axis). We propose a novel approach for modeling
transposable data with missing interactions given additional side information.
The interactions are modeled as noisy observations from a latent noise-free
matrix generated from a matrix-variate Gaussian process. The construction of
row and column covariances using side information provides a flexible mechanism
for specifying a priori knowledge of the row and column correlations in the
data. Further, the use of such a prior combined with the side information
enables predictions for new rows and columns not observed in the training data.
In this work, we combine the matrix-variate Gaussian process model with low
rank constraints. The constrained Gaussian process approach is applied to the
prediction of hidden associations between genes and diseases using a small set
of observed associations as well as prior covariances induced by gene-gene
interaction networks and disease ontologies. The proposed approach is also
applied to recommender systems data which involves predicting the item ratings
of users using known associations as well as prior covariances induced by
social networks. We present experimental results that highlight the performance
of the constrained matrix-variate Gaussian process as compared to
state-of-the-art approaches in each domain.
| [
"['Oluwasanmi Koyejo' 'Cheng Lee' 'Joydeep Ghosh']",
"Oluwasanmi Koyejo, Cheng Lee, Joydeep Ghosh"
]
|
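A sketch of the core computation under simplifying assumptions: fully observed Y, no low-rank constraint, and i.i.d. Gaussian noise. The Kronecker structure of the prior covariance C ⊗ R lets the posterior mean be computed with two small eigendecompositions instead of one huge solve. The paper's handling of missing interactions and low-rank constraints is omitted here.

```python
import numpy as np

def mv_gp_posterior_mean(Y, R, C, noise_var):
    """Posterior mean of F with prior vec(F) ~ N(0, C kron R) and
    observations Y = F + Gaussian noise. R (rows) and C (columns) encode
    side information such as interaction networks or ontologies."""
    lr, Ur = np.linalg.eigh(R)
    lc, Uc = np.linalg.eigh(C)
    S = Ur.T @ Y @ Uc                        # rotate into the eigenbases
    S = S / (np.outer(lr, lc) + noise_var)   # apply (C kron R + s2 I)^{-1}
    Z = Ur @ S @ Uc.T                        # rotate back
    return R @ Z @ C                         # multiply by prior covariance

rng = np.random.default_rng(0)
R = np.eye(5) + 0.3                          # toy row covariance (PSD)
C = np.eye(4) + 0.2                          # toy column covariance (PSD)
F_hat = mv_gp_posterior_mean(rng.standard_normal((5, 4)), R, C, 0.1)
```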
cs.LG stat.ML | null | 1404.6876 | null | null | http://arxiv.org/pdf/1404.6876v1 | 2014-04-28T06:30:39Z | 2014-04-28T06:30:39Z | Conditional Density Estimation with Dimensionality Reduction via
Squared-Loss Conditional Entropy Minimization | Regression aims at estimating the conditional mean of output given input.
However, regression is not informative enough if the conditional density is
multimodal, heteroscedastic, and asymmetric. In such a case, estimating the
conditional density itself is preferable, but conditional density estimation
(CDE) is challenging in high-dimensional space. A naive approach to coping with
high dimensionality is to first perform dimensionality reduction (DR) and then
execute CDE. However, such a two-step process does not perform well in practice
because the error incurred in the first DR step can be magnified in the second
CDE step. In this paper, we propose a novel single-shot procedure that performs
CDE and DR simultaneously in an integrated way. Our key idea is to formulate DR
as the problem of minimizing a squared-loss variant of conditional entropy, and
this is solved via CDE. Thus, an additional CDE step is not needed after DR. We
demonstrate the usefulness of the proposed method through extensive experiments
on various datasets including humanoid robot transition and computer art.
| [
"['Voot Tangkaratt' 'Ning Xie' 'Masashi Sugiyama']",
"Voot Tangkaratt, Ning Xie, and Masashi Sugiyama"
]
|
cs.LG cs.IT cs.NE math.IT | 10.1117/12.2050759 | 1404.6955 | null | null | http://arxiv.org/abs/1404.6955v1 | 2014-04-23T19:25:48Z | 2014-04-23T19:25:48Z | Probabilistic graphs using coupled random variables | Neural network design has utilized flexible nonlinear processes which can
mimic biological systems, but has suffered from a lack of traceability in the
resulting network. Graphical probabilistic models ground network design in
probabilistic reasoning, but the restrictions reduce the expressive capability
of each node, making network designs complex. The ability to model coupled
random variables using the calculus of nonextensive statistical mechanics
provides a neural node design incorporating nonlinear coupling between input
states while maintaining the rigor of probabilistic reasoning. A generalization
of Bayes rule using the coupled product enables a single node to model
correlation between hundreds of random variables. A coupled Markov random field
is designed for the inferencing and classification of UCI's MLR 'Multiple
Features Data Set' such that thousands of linear correlation parameters can be
replaced with a single coupling parameter at the cost of just a (3%, 4%)
reduction in (classification, inference) performance.
| [
"['Kenric P. Nelson' 'Madalina Barbu' 'Brian J. Scannell']",
"Kenric P. Nelson, Madalina Barbu, Brian J. Scannell"
]
|
cs.SI cs.LG physics.soc-ph stat.ML | 10.1007/s10618-015-0421-2 | 1404.7048 | null | null | http://arxiv.org/abs/1404.7048v2 | 2015-02-06T00:15:42Z | 2014-04-25T13:28:37Z | Multiscale Event Detection in Social Media | Event detection has been one of the most important research topics in social
media analysis. Most of the traditional approaches detect events based on fixed
temporal and spatial resolutions, while in reality events of different scales
usually occur simultaneously, namely, they span different intervals in time and
space. In this paper, we propose a novel approach towards multiscale event
detection using social media data, which takes into account different temporal
and spatial scales of events in the data. Specifically, we explore the
properties of the wavelet transform, which is a well-developed multiscale
transform in signal processing, to enable automatic handling of the interaction
between temporal and spatial scales. We then propose a novel algorithm to
compute a data similarity graph at appropriate scales and detect events of
different scales simultaneously by a single graph-based clustering process.
Furthermore, we present spatiotemporal statistical analysis of the noisy
information present in the data stream, which allows us to define a novel
term-filtering procedure for the proposed event detection algorithm and helps
us study its behavior using simulated noisy data. Experimental results on both
synthetically generated data and real world data collected from Twitter
demonstrate the meaningfulness and effectiveness of the proposed approach. Our
framework further extends to numerous application domains that involve
multiscale and multiresolution data analysis.
| [
"['Xiaowen Dong' 'Dimitrios Mavroeidis' 'Francesco Calabrese'\n 'Pascal Frossard']",
"Xiaowen Dong, Dimitrios Mavroeidis, Francesco Calabrese, Pascal\n Frossard"
]
|
cs.SY cs.LG cs.LO cs.RO | null | 1404.7073 | null | null | http://arxiv.org/pdf/1404.7073v2 | 2014-04-30T17:20:57Z | 2014-04-28T17:57:48Z | Probably Approximately Correct MDP Learning and Control With Temporal
Logic Constraints | We consider synthesis of control policies that maximize the probability of
satisfying given temporal logic specifications in unknown, stochastic
environments. We model the interaction between the system and its environment
as a Markov decision process (MDP) with initially unknown transition
probabilities. The solution we develop builds on the so-called model-based
probably approximately correct Markov decision process (PAC-MDP) methodology.
The algorithm attains an $\varepsilon$-approximately optimal policy with
probability $1-\delta$ using samples (i.e. observations), time and space that
grow polynomially with the size of the MDP, the size of the automaton
expressing the temporal logic specification, $\frac{1}{\varepsilon}$,
$\frac{1}{\delta}$ and a finite time horizon. In this approach, the system
maintains a model of the initially unknown MDP, and constructs a product MDP
based on its learned model and the specification automaton that expresses the
temporal logic constraints. During execution, the policy is iteratively updated
using observation of the transitions taken by the system. The iteration
terminates in finitely many steps. With high probability, the resulting policy
is such that, for any state, the difference between the probability of
satisfying the specification under this policy and the optimal one is within a
predefined bound.
| [
"['Jie Fu' 'Ufuk Topcu']",
"Jie Fu and Ufuk Topcu"
]
|
cs.LG | null | 1404.7195 | null | null | http://arxiv.org/pdf/1404.7195v1 | 2014-04-29T00:08:15Z | 2014-04-29T00:08:15Z | Fast Approximation of Rotations and Hessians matrices | A new method to represent and approximate rotation matrices is introduced.
The method represents approximations of a rotation matrix $Q$ with linearithmic
complexity, i.e. with $\frac{1}{2}n\lg(n)$ rotations over pairs of coordinates,
arranged in an FFT-like fashion. The approximation is "learned" using gradient
descent. It allows symmetric matrices $H$ to be represented as $QDQ^T$, where
$D$ is a diagonal matrix. It can be used to approximate the covariance matrices
of Gaussian models in order to speed up inference, or to estimate and track the
inverse
Hessian of an objective function by relating changes in parameters to changes
in gradient along the trajectory followed by the optimization procedure.
Experiments were conducted to approximate synthetic matrices, covariance
matrices of real data, and Hessian matrices of objective functions involved in
machine learning problems.
| [
"Michael Mathieu and Yann LeCun",
"['Michael Mathieu' 'Yann LeCun']"
]
|
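A sketch of the FFT-like arrangement: lg(n) stages, each applying n/2 Givens rotations over pairs of coordinates, for (1/2) n lg(n) rotations in total. In the paper the angles are learned by gradient descent; here they are simply given, and n is assumed to be a power of two.

```python
import numpy as np

def apply_fft_like_rotations(x, thetas):
    """Apply (n/2)*lg(n) Givens rotations to x in an FFT butterfly pattern.
    thetas has shape (lg n, n/2), one angle per rotated coordinate pair."""
    n = x.size
    y = x.copy()
    stages = int(np.log2(n))
    for s in range(stages):
        stride = 1 << s
        k = 0
        for start in range(0, n, 2 * stride):
            for i in range(start, start + stride):
                j = i + stride             # FFT-style pairing of coordinates
                c, t = np.cos(thetas[s, k]), np.sin(thetas[s, k])
                y[i], y[j] = c * y[i] - t * y[j], t * y[i] + c * y[j]
                k += 1
    return y

rng = np.random.default_rng(0)
z = apply_fft_like_rotations(rng.standard_normal(8), rng.uniform(0, 1, (3, 4)))
```

Applying this operator, then a diagonal scaling, then the rotations in reverse order realizes the $QDQ^T$ approximation of a symmetric matrix.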
cs.LG | 10.1088/1742-6596/574/1/012064 | 1404.7255 | null | null | http://arxiv.org/abs/1404.7255v1 | 2014-04-29T06:43:19Z | 2014-04-29T06:43:19Z | Meteorological time series forecasting based on MLP modelling using
heterogeneous transfer functions | In this paper, we propose to study four meteorological and seasonal time
series coupled with multi-layer perceptron (MLP) modelling. We chose to
combine two transfer functions for the nodes of the hidden layer, and to use a
temporal indicator (time index as input) in order to take into account the
seasonal aspect of the studied time series. The prediction results cover two
years of measurements, while the learning step uses eight independent years. We
show that this methodology can improve the accuracy of meteorological data
estimation compared to classical MLP modelling with a homogeneous transfer
function.
| [
"['Cyril Voyant' 'Marie Laure Nivet' 'Christophe Paoli' 'Marc Muselli'\n 'Gilles Notton']",
"Cyril Voyant (SPE), Marie Laure Nivet (SPE), Christophe Paoli (SPE),\n Marc Muselli (SPE), Gilles Notton (SPE)"
]
|
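A sketch of the forward pass under stated assumptions: the hidden layer splits its units between two transfer functions (the particular pair below, tanh and a Gaussian bump, is our guess, not necessarily the pair used in the paper), and the time index is simply appended to the lagged inputs to capture seasonality.

```python
import numpy as np

def mlp_hetero_forward(x_lags, t_index, W, b, v, c):
    """MLP forward pass with a heterogeneous hidden layer and a time index
    appended to the lagged meteorological inputs."""
    x = np.append(x_lags, t_index)          # time index as an extra input
    a = W @ x + b
    h = np.empty_like(a)
    half = len(a) // 2
    h[:half] = np.tanh(a[:half])            # transfer function 1
    h[half:] = np.exp(-a[half:] ** 2)       # transfer function 2
    return v @ h + c                        # linear output for regression

rng = np.random.default_rng(0)
n_in, n_hid = 9, 12                         # 8 lags + 1 time index
y_hat = mlp_hetero_forward(rng.standard_normal(8), 0.25,
                           rng.standard_normal((n_hid, n_in)) * 0.3,
                           np.zeros(n_hid),
                           rng.standard_normal(n_hid) * 0.3, 0.0)
```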
cs.CV cs.LG stat.ML | 10.1109/CVPR.2014.526 | 1404.7306 | null | null | http://arxiv.org/abs/1404.7306v1 | 2014-04-29T10:45:22Z | 2014-04-29T10:45:22Z | Generalized Nonconvex Nonsmooth Low-Rank Minimization | As surrogate functions of the $L_0$-norm, many nonconvex penalty functions have
been proposed to enhance sparse vector recovery. It is easy to extend these
nonconvex penalty functions to the singular values of a matrix to enhance
low-rank matrix recovery. However, unlike in convex optimization, solving the
nonconvex low-rank minimization problem is much more challenging than the
nonconvex sparse minimization problem. We observe that all the existing
nonconvex penalty functions are concave and monotonically increasing on
$[0,\infty)$. Thus their gradients are decreasing functions. Based on this
property, we propose an Iteratively Reweighted Nuclear Norm (IRNN) algorithm to
solve the nonconvex nonsmooth low-rank minimization problem. IRNN iteratively
solves a Weighted Singular Value Thresholding (WSVT) problem. By setting the
weight vector as the gradient of the concave penalty function, the WSVT problem
has a closed-form solution. In theory, we prove that IRNN decreases the
objective function value monotonically, and any limit point is a stationary
point. Extensive experiments on both synthetic data and real images demonstrate
that IRNN enhances the low-rank matrix recovery compared with state-of-the-art
convex algorithms.
| [
"['Canyi Lu' 'Jinhui Tang' 'Shuicheng Yan' 'Zhouchen Lin']",
"Canyi Lu, Jinhui Tang, Shuicheng Yan, Zhouchen Lin"
]
|
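A sketch of IRNN for matrix completion with one example penalty from the family the paper covers, the concave log penalty g(s) = lam*log(1 + s/gamma), whose decreasing gradient supplies the ascending WSVT weights:

```python
import numpy as np

def irnn_complete(M, mask, lam=1.0, gamma=1.0, mu=1.1, iters=100):
    """IRNN sketch for min 0.5*||P_mask(X - M)||_F^2 + sum_i g(sigma_i(X))
    with g(s) = lam*log(1 + s/gamma). Each iteration is a Weighted Singular
    Value Thresholding step; mu must exceed the gradient's Lipschitz
    constant (here 1)."""
    X = np.zeros_like(M)
    for _ in range(iters):
        sx = np.linalg.svd(X, compute_uv=False)
        w = lam / (gamma + sx)              # w_i = g'(sigma_i(X_k)), ascending
        B = X - mask * (X - M) / mu         # gradient step on the smooth part
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        X = (U * np.maximum(s - w / mu, 0.0)) @ Vt   # closed-form WSVT
    return X
```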
cs.LG cs.SC stat.ML | null | 1404.7456 | null | null | http://arxiv.org/pdf/1404.7456v1 | 2014-04-28T17:19:25Z | 2014-04-28T17:19:25Z | Automatic Differentiation of Algorithms for Machine Learning | Automatic differentiation---the mechanical transformation of numeric computer
programs to calculate derivatives efficiently and accurately---dates to the
origin of the computer age. Reverse mode automatic differentiation both
antedates and generalizes the method of backwards propagation of errors used in
machine learning. Despite this, practitioners in a variety of fields, including
machine learning, have been little influenced by automatic differentiation, and
make scant use of available tools. Here we review the technique of automatic
differentiation, describe its two main modes, and explain how it can benefit
machine learning practitioners. To reach the widest possible audience our
treatment assumes only elementary differential calculus, and does not assume
any knowledge of linear algebra.
| [
"['Atilim Gunes Baydin' 'Barak A. Pearlmutter']",
"Atilim Gunes Baydin, Barak A. Pearlmutter"
]
|
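To illustrate one of the two modes using only elementary calculus (matching the treatment's prerequisites), here is a minimal forward-mode sketch based on dual numbers; reverse mode, which generalizes backpropagation, requires recording a computation graph and is longer. Only addition and multiplication are implemented; other operations follow the same pattern.

```python
class Dual:
    """Forward-mode automatic differentiation via dual numbers: carrying
    (value, derivative) through arithmetic yields exact derivatives, not
    finite-difference approximations."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,               # product rule
                    self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def derivative(f, x):
    return f(Dual(x, 1.0)).dot    # seed dx/dx = 1

# f(x) = 3x^2 + 2x at x = 4: derivative is 6*4 + 2 = 26.
assert derivative(lambda x: 3 * x * x + 2 * x, 4.0) == 26.0
```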