categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | sequence
---|---|---|---|---|---|---|---|---|---|---
cs.IR cs.LG stat.ML | null | 1207.6379 | null | null | http://arxiv.org/pdf/1207.6379v1 | 2012-07-26T19:27:03Z | 2012-07-26T19:27:03Z | Identifying Users From Their Rating Patterns | This paper reports on our analysis of the 2011 CAMRa Challenge dataset (Track
2) for context-aware movie recommendation systems. The training dataset comprises
4,536,891 ratings provided by 171,670 users on 23,974 movies, as well as the
household groupings of a subset of the users. The test dataset comprises 5,450
ratings for which the user label is missing, but the household label is
provided. The challenge was to identify the user labels for the ratings in
the test set. Our main finding is that temporal information (time labels of the
ratings) is significantly more useful for achieving this objective than the
user preferences (the actual ratings). Using a model that leverages this
fact, we are able to identify users within a known household with an accuracy
of approximately 96% (i.e. misclassification rate around 4%).
| [
"Jos\\'e Bento, Nadia Fawaz, Andrea Montanari, Stratis Ioannidis",
"['José Bento' 'Nadia Fawaz' 'Andrea Montanari' 'Stratis Ioannidis']"
] |
stat.ML cs.LG stat.AP | null | 1207.6430 | null | null | http://arxiv.org/pdf/1207.6430v2 | 2014-06-04T08:31:57Z | 2012-07-26T23:14:34Z | Optimal Data Collection For Informative Rankings Expose Well-Connected
Graphs | Given a graph where vertices represent alternatives and arcs represent
pairwise comparison data, the statistical ranking problem is to find a
potential function, defined on the vertices, such that the gradient of the
potential function agrees with the pairwise comparisons. Our goal in this paper
is to develop a method for collecting data for which the least squares
estimator for the ranking problem has maximal Fisher information. Our approach,
based on experimental design, is to view data collection as a bi-level
optimization problem where the inner problem is the ranking problem and the
outer problem is to identify data which maximizes the informativeness of the
ranking. Under certain assumptions, the data collection problem decouples,
reducing to a problem of finding multigraphs with large algebraic connectivity.
This reduction of the data collection problem to graph-theoretic questions is
one of the primary contributions of this work. As an application, we study the
Yahoo! Movie user rating dataset and demonstrate that the addition of a small
number of well-chosen pairwise comparisons can significantly increase the
Fisher informativeness of the ranking. As another application, we study the
2011-12 NCAA football schedule and propose schedules with the same number of
games which are significantly more informative. Using spectral clustering
methods to identify highly-connected communities within the division, we argue
that the NCAA could improve its notoriously poor rankings by simply scheduling
more out-of-conference games.
| [
"['Braxton Osting' 'Christoph Brune' 'Stanley J. Osher']",
"Braxton Osting and Christoph Brune and Stanley J. Osher"
] |
cs.NI cs.LG | null | 1207.6910 | null | null | http://arxiv.org/pdf/1207.6910v2 | 2013-05-08T10:32:56Z | 2012-07-30T12:20:20Z | Gaussian process regression as a predictive model for Quality-of-Service
in Web service systems | In this paper, we present Gaussian process regression as a predictive
model for Quality-of-Service (QoS) attributes in Web service systems. The goal
is to predict performance of the execution system expressed as QoS attributes
given existing execution system, service repository, and inputs, e.g., streams
of requests. In order to evaluate the performance of Gaussian process
regression, a simulation environment was developed. Two quality indexes were
used, namely Mean Absolute Error and Mean Squared Error. The results obtained
within the experiment show that the Gaussian process performed best with a
linear kernel, and statistically significantly better compared to the
Classification and Regression Trees (CART) method.
| [
"['Jakub M. Tomczak' 'Jerzy Swiatek' 'Krzysztof Latawiec']",
"Jakub M. Tomczak, Jerzy Swiatek, Krzysztof Latawiec"
] |
cs.LG | null | 1207.7035 | null | null | http://arxiv.org/pdf/1207.7035v1 | 2012-07-27T08:47:50Z | 2012-07-27T08:47:50Z | Supervised Laplacian Eigenmaps with Applications in Clinical Diagnostics
for Pediatric Cardiology | Electronic health records contain rich textual data which possess critical
predictive information for machine-learning based diagnostic aids. However, many
traditional machine learning methods fail to simultaneously integrate both
vector space data and text. We present a supervised method using Laplacian
eigenmaps to augment existing machine-learning methods with low-dimensional
representations of textual predictors which preserve the local similarities.
The proposed implementation performs alternating optimization using gradient
descent. For the evaluation we applied our method to over 2,000 patient records
from a large single-center pediatric cardiology practice to predict if patients
were diagnosed with cardiac disease. Our method was compared with latent
semantic indexing, latent Dirichlet allocation, and local Fisher discriminant
analysis. The results were assessed using AUC, MCC, specificity, and
sensitivity. Results indicate supervised Laplacian eigenmaps was the highest
performing method in our study, achieving 0.782 and 0.374 for AUC and MCC
respectively. SLE showed an increase of 8.16% in AUC and 20.6% in MCC over the
baseline, which excluded textual data, and a 2.69% and 5.35% increase in AUC and
MCC respectively over unsupervised Laplacian eigenmaps. This method allows many
existing machine learning predictors to effectively and efficiently utilize the
potential of textual predictors.
| [
"['Thomas Perry' 'Hongyuan Zha' 'Patricio Frias' 'Dadan Zeng'\n 'Mark Braunstein']",
"Thomas Perry and Hongyuan Zha and Patricio Frias and Dadan Zeng and\n Mark Braunstein"
] |
cs.LO cs.LG | 10.2168/LMCS-8(3:25)2012 | 1207.7167 | null | null | http://arxiv.org/abs/1207.7167v2 | 2012-09-28T11:03:50Z | 2012-07-31T04:55:45Z | Predicate Generation for Learning-Based Quantifier-Free Loop Invariant
Inference | We address the predicate generation problem in the context of loop invariant
inference. Motivated by the interpolation-based abstraction refinement
technique, we apply the interpolation theorem to synthesize predicates
implicitly implied by program texts. Our technique is able to improve the
effectiveness and efficiency of the learning-based loop invariant inference
algorithm in [14]. We report experimental results on examples from Linux,
SPEC2000, and Tar utility.
| [
"['Wonchan Lee' 'Yungbum Jung' 'Bow-yaw Wang' 'Kwangkuen Yi']",
"Wonchan Lee (Seoul National University), Yungbum Jung (Seoul National\n University), Bow-yaw Wang (Academia Sinica), Kwangkuen Yi (Seoul National\n University)"
] |
q-bio.QM cs.LG q-bio.BM stat.ML | 10.1186/1471-2105-14-82 | 1207.7253 | null | null | http://arxiv.org/abs/1207.7253v1 | 2012-07-31T14:11:31Z | 2012-07-31T14:11:31Z | Learning a peptide-protein binding affinity predictor with kernel ridge
regression | We propose a specialized string kernel for small bio-molecules, peptides and
pseudo-sequences of binding interfaces. The kernel incorporates
physico-chemical properties of amino acids and elegantly generalizes eight
kernels, such as the Oligo, the Weighted Degree, the Blended Spectrum, and the
Radial Basis Function. We provide a low complexity dynamic programming
algorithm for the exact computation of the kernel and a linear time algorithm
for its approximation. Combined with kernel ridge regression and SupCK, a
novel binding pocket kernel, the proposed kernel yields biologically relevant
and good prediction accuracy on the PepX database. For the first time, a
machine learning predictor is capable of accurately predicting the binding
affinity of any peptide to any protein. The method was also applied to both
single-target and pan-specific Major Histocompatibility Complex class II
benchmark datasets and three Quantitative Structure Affinity Model benchmark
datasets.
On all benchmarks, our method significantly (p-value < 0.057) outperforms the
current state-of-the-art methods at predicting peptide-protein binding
affinities. The proposed approach is flexible and can be applied to predict any
quantitative biological activity. The method should be of value to a large
segment of the research community with the potential to accelerate
peptide-based drug and vaccine development.
| [
"S\\'ebastien Gigu\\`ere, Mario Marchand, Fran\\c{c}ois Laviolette,\n Alexandre Drouin and Jacques Corbeil",
"['Sébastien Giguère' 'Mario Marchand' 'François Laviolette'\n 'Alexandre Drouin' 'Jacques Corbeil']"
] |
stat.ML cs.LG | null | 1208.0129 | null | null | http://arxiv.org/pdf/1208.0129v1 | 2012-08-01T07:57:53Z | 2012-08-01T07:57:53Z | Oracle inequalities for computationally adaptive model selection | We analyze general model selection procedures using penalized empirical loss
minimization under computational constraints. While classical model selection
approaches do not consider computational aspects of performing model selection,
we argue that any practical model selection procedure must not only trade off
estimation and approximation error, but also the computational effort required
to compute empirical minimizers for different function classes. We provide a
framework for analyzing such problems, and we give algorithms for model
selection under a computational budget. These algorithms satisfy oracle
inequalities that show that the risk of the selected model is not much worse
than if we had devoted all of our computational budget to the optimal function
class.
| [
"['Alekh Agarwal' 'Peter L. Bartlett' 'John C. Duchi']",
"Alekh Agarwal, Peter L. Bartlett, John C. Duchi"
] |
cs.CV cs.DS cs.LG stat.ML | null | 1208.0378 | null | null | http://arxiv.org/pdf/1208.0378v1 | 2012-08-02T00:54:02Z | 2012-08-02T00:54:02Z | Fast Planar Correlation Clustering for Image Segmentation | We describe a new optimization scheme for finding high-quality correlation
clusterings in planar graphs that uses weighted perfect matching as a
subroutine. Our method provides lower-bounds on the energy of the optimal
correlation clustering that are typically fast to compute and tight in
practice. We demonstrate our algorithm on the problem of image segmentation
where this approach outperforms existing global optimization techniques in
minimizing the objective and is competitive with the state of the art in
producing high-quality segmentations.
| [
"Julian Yarkony, Alexander T. Ihler, Charless C. Fowlkes",
"['Julian Yarkony' 'Alexander T. Ihler' 'Charless C. Fowlkes']"
] |
cs.LG stat.ML | null | 1208.0402 | null | null | http://arxiv.org/pdf/1208.0402v1 | 2012-08-02T05:20:01Z | 2012-08-02T05:20:01Z | Multidimensional Membership Mixture Models | We present the multidimensional membership mixture (M3) models where every
dimension of the membership represents an independent mixture model and each
data point is generated from the selected mixture components jointly. This is
helpful when the data has a certain shared structure. For example, three unique
means and three unique variances can effectively form a Gaussian mixture model
with nine components, while requiring only six parameters to fully describe it.
In this paper, we present three instantiations of M3 models (together with the
learning and inference algorithms): infinite, finite, and hybrid, depending on
whether the number of mixtures is fixed or not. They are built upon Dirichlet
process mixture models, latent Dirichlet allocation, and a combination
respectively. We then consider two applications: topic modeling and learning 3D
object arrangements. Our experiments show that our M3 models achieve better
performance using fewer topics than many classic topic models. We also observe
that topics from the different dimensions of M3 models are meaningful and
orthogonal to each other.
| [
"['Yun Jiang' 'Marcus Lim' 'Ashutosh Saxena']",
"Yun Jiang, Marcus Lim and Ashutosh Saxena"
] |
cs.CV cs.LG stat.ML | 10.1137/130936166 | 1208.0432 | null | null | http://arxiv.org/abs/1208.0432v3 | 2014-03-06T06:12:11Z | 2012-08-02T08:43:45Z | Efficient Point-to-Subspace Query in $\ell^1$ with Application to Robust
Object Instance Recognition | Motivated by vision tasks such as robust face and object recognition, we
consider the following general problem: given a collection of low-dimensional
linear subspaces in a high-dimensional ambient (image) space, and a query point
(image), efficiently determine the nearest subspace to the query in $\ell^1$
distance. In contrast to the naive exhaustive search which entails large-scale
linear programs, we show that the computational burden can be cut down
significantly by a simple two-stage algorithm: (1) projecting the query and
database subspaces into a lower-dimensional space by a random Cauchy matrix, and
solving small-scale distance evaluations (linear programs) in the projection
space to locate the candidate nearest subspaces; (2) with few candidates upon independent
repetition of (1), getting back to the high-dimensional space and performing
exhaustive search. To preserve the identity of the nearest subspace with
nontrivial probability, the projection dimension is typically a low-order
polynomial of the subspace dimension multiplied by the logarithm of the number
of subspaces (Theorem 2.1). The reduced dimensionality and hence complexity
renders the proposed algorithm particularly relevant to vision application such
as robust face and object instance recognition that we investigate empirically.
| [
"Ju Sun and Yuqian Zhang and John Wright",
"['Ju Sun' 'Yuqian Zhang' 'John Wright']"
] |
cs.CR cs.LG | null | 1208.0564 | null | null | http://arxiv.org/pdf/1208.0564v2 | 2012-08-05T09:31:22Z | 2012-07-27T21:39:21Z | Detection of Deviations in Mobile Applications Network Behavior | In this paper a novel system for detecting meaningful deviations in a mobile
application's network behavior is proposed. The main goal of the proposed
system is to protect mobile device users and cellular infrastructure companies
from malicious applications. The new system is capable of: (1) identifying
malicious attacks or masquerading applications installed on a mobile device,
and (2) identifying republishing of popular applications injected with
malicious code. The detection is performed based on the application's network
traffic patterns only. For each application two types of models are learned.
The first model, local, represents the personal traffic pattern for each user
using an application and is learned on the device. The second model,
collaborative, represents traffic patterns of numerous users using an
application and is learned on the system server. Machine-learning methods are
used for learning and detection purposes. This paper focuses on methods
utilized for local (i.e., on mobile device) learning and detection of
deviations from the normal application's behavior. These methods were
implemented and evaluated on Android devices. The evaluation experiments
demonstrate that: (1) various applications have specific network traffic
patterns and certain application categories can be distinguished by their
network patterns, (2) different levels of deviations from normal behavior can
be detected accurately, and (3) local learning is feasible and has a low
performance overhead on mobile devices.
| [
"L. Chekina, D. Mimran, L. Rokach, Y. Elovici, B. Shapira",
"['L. Chekina' 'D. Mimran' 'L. Rokach' 'Y. Elovici' 'B. Shapira']"
] |
cs.LG stat.ML | null | 1208.0645 | null | null | http://arxiv.org/pdf/1208.0645v4 | 2014-07-02T14:46:59Z | 2012-08-03T02:37:44Z | On the Consistency of AUC Pairwise Optimization | AUC (area under ROC curve) is an important evaluation criterion, which has
been popularly used in many learning tasks such as class-imbalance learning,
cost-sensitive learning, learning to rank, etc. Many learning approaches try to
optimize AUC, while owing to the non-convexity and discontinuity of AUC,
almost all approaches work with surrogate loss functions. Thus, the consistency
of AUC is crucial; however, it has been almost untouched before. In this paper,
we provide a sufficient condition for the asymptotic consistency of learning
approaches based on surrogate loss functions. Based on this result, we prove
that exponential loss and logistic loss are consistent with AUC, but hinge loss
is inconsistent. Then, we derive the $q$-norm hinge loss and general hinge loss
that are consistent with AUC. We also derive the consistent bounds for
exponential loss and logistic loss, and obtain the consistent bounds for many
surrogate loss functions under the non-noise setting. Further, we disclose an
equivalence between the exponential surrogate loss of AUC and exponential
surrogate loss of accuracy, and one straightforward consequence of such finding
is that AdaBoost and RankBoost are equivalent.
| [
"['Wei Gao' 'Zhi-Hua Zhou']",
"Wei Gao and Zhi-Hua Zhou"
] |
cs.IR cs.LG cs.SI physics.soc-ph | null | 1208.0782 | null | null | http://arxiv.org/pdf/1208.0782v2 | 2013-05-17T21:55:30Z | 2012-08-03T16:00:35Z | Wisdom of the Crowd: Incorporating Social Influence in Recommendation
Models | Recommendation systems have received considerable attention recently.
However, most research has been focused on improving the performance of
collaborative filtering (CF) techniques. Social networks provide indispensable
extra information on people's preferences, and should be considered
and deployed to improve the quality of recommendations. In this paper, we
propose two recommendation models, for individuals and for groups respectively,
based on social contagion and social influence network theory. In the
recommendation model for individuals, we improve the result of collaborative
filtering prediction with social contagion outcome, which simulates the result
of information cascade in the decision-making process. In the recommendation
model for groups, we apply social influence network theory to take
interpersonal influence into account to form a settled pattern of disagreement,
and then aggregate opinions of group members. By introducing the concept of
susceptibility and interpersonal influence, the settled rating results are
flexible, and inclined to members whose ratings are "essential".
| [
"['Shang Shang' 'Pan Hui' 'Sanjeev R. Kulkarni' 'Paul W. Cuff']",
"Shang Shang, Pan Hui, Sanjeev R. Kulkarni and Paul W. Cuff"
] |
cs.IR cs.LG | null | 1208.0787 | null | null | http://arxiv.org/pdf/1208.0787v2 | 2013-05-17T21:57:26Z | 2012-08-03T16:15:10Z | A Random Walk Based Model Incorporating Social Information for
Recommendations | Collaborative filtering (CF) is one of the most popular approaches to build a
recommendation system. In this paper, we propose a hybrid collaborative
filtering model based on a Markovian random walk to address the data sparsity
and cold start problems in recommendation systems. More precisely, we construct
a directed graph whose nodes consist of items and users, together with item
content, user profile and social network information. We incorporate users'
ratings into edge settings in the graph model. The model provides personalized
recommendations and predictions to individuals and groups. The proposed
algorithms are evaluated on MovieLens and Epinions datasets. Experimental
results show that the proposed methods perform well compared with other
graph-based methods, especially in the cold start case.
| [
"['Shang Shang' 'Sanjeev R. Kulkarni' 'Paul W. Cuff' 'Pan Hui']",
"Shang Shang, Sanjeev R. Kulkarni, Paul W. Cuff and Pan Hui"
] |
stat.ML cs.LG | null | 1208.0806 | null | null | http://arxiv.org/pdf/1208.0806v1 | 2012-08-03T18:01:52Z | 2012-08-03T18:01:52Z | Cross-conformal predictors | This note introduces the method of cross-conformal prediction, which is a
hybrid of the methods of inductive conformal prediction and cross-validation,
and studies its validity and predictive efficiency empirically.
| [
"Vladimir Vovk",
"['Vladimir Vovk']"
] |
cs.LG stat.ML | null | 1208.0848 | null | null | http://arxiv.org/pdf/1208.0848v2 | 2013-02-22T21:09:57Z | 2012-08-03T21:15:19Z | Learning Theory Approach to Minimum Error Entropy Criterion | We consider the minimum error entropy (MEE) criterion and an empirical risk
minimization learning algorithm in a regression setting. A learning theory
approach is presented for this MEE algorithm and explicit error bounds are
provided in terms of the approximation ability and capacity of the involved
hypothesis space when the MEE scaling parameter is large. Novel asymptotic
analysis is conducted for the generalization error associated with Renyi's
entropy and a Parzen window function, to overcome technical difficulties arising
from the essential differences between the classical least squares problems and
the MEE setting. A semi-norm and the involved symmetrized least squares error
are introduced, which are related to some ranking algorithms.
| [
"Ting Hu, Jun Fan, Qiang Wu, Ding-Xuan Zhou",
"['Ting Hu' 'Jun Fan' 'Qiang Wu' 'Ding-Xuan Zhou']"
] |
math.OC cs.LG cs.SY | null | 1208.0864 | null | null | http://arxiv.org/pdf/1208.0864v1 | 2012-08-03T22:56:36Z | 2012-08-03T22:56:36Z | Statistical Results on Filtering and Epi-convergence for Learning-Based
Model Predictive Control | Learning-based model predictive control (LBMPC) is a technique that provides
deterministic guarantees on robustness, while statistical identification tools
are used to identify richer models of the system in order to improve
performance. This technical note provides proofs that elucidate the reasons for
our choice of measurement model, as well as giving proofs concerning the
stochastic convergence of LBMPC. The first part of this note discusses
simultaneous state estimation and statistical identification (or learning) of
unmodeled dynamics, for dynamical systems that can be described by ordinary
differential equations (ODEs). The second part provides proofs concerning the
epi-convergence of different statistical estimators that can be used with the
learning-based model predictive control (LBMPC) technique. In particular, we
prove results on the statistical properties of a nonparametric estimator that
we have designed to have the correct deterministic and stochastic properties
for numerical implementation when used in conjunction with LBMPC.
| [
"['Anil Aswani' 'Humberto Gonzalez' 'S. Shankar Sastry' 'Claire Tomlin']",
"Anil Aswani, Humberto Gonzalez, S. Shankar Sastry, Claire Tomlin"
] |
cs.LG cs.CV stat.ML | null | 1208.0959 | null | null | http://arxiv.org/pdf/1208.0959v2 | 2013-01-06T19:00:48Z | 2012-08-04T21:48:52Z | Recklessly Approximate Sparse Coding | It has recently been observed that certain extremely simple feature encoding
techniques are able to achieve state of the art performance on several standard
image classification benchmarks, matching or outperforming methods including
deep belief networks, convolutional nets, factored RBMs, mcRBMs, convolutional
RBMs, sparse autoencoders and several others. Moreover, these "triangle" or
"soft threshold" encodings are extremely efficient to compute. Several intuitive arguments have been put
forward to explain this remarkable performance, yet no mathematical
justification has been offered.
The main result of this report is to show that these features are realized as
an approximate solution to a non-negative sparse coding problem. Using this
connection we describe several variants of the soft threshold features and
demonstrate their effectiveness on two image classification benchmark tasks.
| [
"Misha Denil and Nando de Freitas",
"['Misha Denil' 'Nando de Freitas']"
] |
cs.LG | null | 1208.0984 | null | null | http://arxiv.org/pdf/1208.0984v1 | 2012-08-05T06:34:44Z | 2012-08-05T06:34:44Z | APRIL: Active Preference-learning based Reinforcement Learning | This paper focuses on reinforcement learning (RL) with limited prior
knowledge. In the domain of swarm robotics for instance, the expert can hardly
design a reward function or demonstrate the target behavior, forbidding the use
of both standard RL and inverse reinforcement learning. Although with a limited
expertise, the human expert is still often able to emit preferences and rank
the agent demonstrations. Earlier work has presented an iterative
preference-based RL framework: expert preferences are exploited to learn an
approximate policy return, thus enabling the agent to achieve direct policy
search. Iteratively, the agent selects a new candidate policy and demonstrates
it; the expert ranks the new demonstration comparatively to the previous best
one; the expert's ranking feedback enables the agent to refine the approximate
policy return, and the process is iterated. In this paper, preference-based
reinforcement learning is combined with active ranking in order to decrease the
number of ranking queries to the expert needed to yield a satisfactory policy.
Experiments on the mountain car and the cancer treatment testbeds show that
a couple of dozen rankings are enough to learn a competent policy.
| [
"['Riad Akrour' 'Marc Schoenauer' 'Michèle Sebag']",
"Riad Akrour (INRIA Saclay - Ile de France, LRI), Marc Schoenauer\n (INRIA Saclay - Ile de France, LRI), Mich\\`ele Sebag (LRI)"
] |
math.ST cs.LG math.PR stat.TH | null | 1208.1056 | null | null | http://arxiv.org/pdf/1208.1056v1 | 2012-08-05T22:02:13Z | 2012-08-05T22:02:13Z | Sequential Estimation Methods from Inclusion Principle | In this paper, we propose new sequential estimation methods based on
the inclusion principle. The main idea is to reformulate the estimation problems as
constructing sequential random intervals and use confidence sequences to
control the associated coverage probabilities. In contrast to existing
asymptotic sequential methods, our estimation procedures rigorously guarantee
the pre-specified levels of confidence.
| [
"['Xinjia Chen']",
"Xinjia Chen"
] |
stat.ML cs.LG math.OC | 10.1109/TPAMI.2013.226 | 1208.1237 | null | null | http://arxiv.org/abs/1208.1237v3 | 2013-10-07T14:15:01Z | 2012-08-06T18:49:07Z | Fast and Robust Recursive Algorithms for Separable Nonnegative Matrix
Factorization | In this paper, we study the nonnegative matrix factorization problem under
the separability assumption (that is, there exists a cone spanned by a small
subset of the columns of the input nonnegative data matrix containing all
columns), which is equivalent to the hyperspectral unmixing problem under the
linear mixing model and the pure-pixel assumption. We present a family of fast
recursive algorithms, and prove they are robust under any small perturbations
of the input data matrix. This family generalizes several existing
hyperspectral unmixing algorithms and hence provides for the first time a
theoretical justification of their better practical performance.
| [
"Nicolas Gillis and Stephen A. Vavasis",
"['Nicolas Gillis' 'Stephen A. Vavasis']"
] |
cs.LG cs.IR cs.IT math.IT stat.CO stat.ML | null | 1208.1259 | null | null | http://arxiv.org/pdf/1208.1259v1 | 2012-08-06T12:28:06Z | 2012-08-06T12:28:06Z | One Permutation Hashing for Efficient Search and Learning | Recently, the method of b-bit minwise hashing has been applied to large-scale
linear learning and sublinear time near-neighbor search. The major drawback of
minwise hashing is the expensive preprocessing cost, as the method requires
applying (e.g.,) k=200 to 500 permutations on the data. The testing time can
also be expensive if a new data point (e.g., a new document or image) has not
been processed, which might be a significant issue in user-facing applications.
We develop a very simple solution based on one permutation hashing.
Conceptually, given a massive binary data matrix, we permute the columns only
once and divide the permuted columns evenly into k bins; and we simply store,
for each data vector, the smallest nonzero location in each bin. The
interesting probability analysis (which is validated by experiments) reveals
that our one permutation scheme should perform very similarly to the original
(k-permutation) minwise hashing. In fact, the one permutation scheme can be
even slightly more accurate, due to the "sample-without-replacement" effect.
Our experiments with training linear SVM and logistic regression on the
webspam dataset demonstrate that this one permutation hashing scheme can
achieve the same (or even slightly better) accuracies compared to the original
k-permutation scheme. To test the robustness of our method, we also experiment
with the small news20 dataset, which is very sparse and has on average merely
500 nonzeros in each data vector. Interestingly, our one permutation scheme
noticeably outperforms the k-permutation scheme when k is not too small on the
news20 dataset. In summary, our method can achieve at least the same accuracy
as the original k-permutation scheme, at merely 1/k of the original
preprocessing cost.
| [
"['Ping Li' 'Art Owen' 'Cun-Hui Zhang']",
"Ping Li and Art Owen and Cun-Hui Zhang"
] |
cs.LG | null | 1208.1315 | null | null | http://arxiv.org/pdf/1208.1315v1 | 2012-08-07T01:31:32Z | 2012-08-07T01:31:32Z | Data Selection for Semi-Supervised Learning | The real challenge in pattern recognition tasks and machine learning
is to train a discriminator using labeled data and use it to distinguish
between future data as accurately as possible. However, most real-world
problems involve large amounts of data, which is cumbersome or even impossible
to label. Semi-supervised learning is one approach to overcoming this problem:
it trains the discriminator using only a small set of labeled data together
with the large remainder of unlabeled data. In semi-supervised learning, it is
essential which data points are labeled, and its effectiveness changes
depending on their position. In this paper, we propose an evolutionary
approach called Artificial Immune System (AIS) to determine which data points
are best labeled to obtain high quality data. The experimental results
demonstrate the effectiveness of this algorithm in finding these data points.
| [
"['Shafigh Parsazad' 'Ehsan Saboori' 'Amin Allahyar']",
"Shafigh Parsazad, Ehsan Saboori and Amin Allahyar"
] |
cs.LG | null | 1208.1544 | null | null | http://arxiv.org/pdf/1208.1544v1 | 2012-08-07T23:21:31Z | 2012-08-07T23:21:31Z | Guess Who Rated This Movie: Identifying Users Through Subspace
Clustering | It is often the case that, within an online recommender system, multiple
users share a common account. Can such shared accounts be identified solely on
the basis of the user- provided ratings? Once a shared account is identified,
can the different users sharing it be identified as well? Whenever such user
identification is feasible, it opens the way to possible improvements in
personalized recommendations, but also raises privacy concerns. We develop a
model for composite accounts based on unions of linear subspaces, and use
subspace clustering for carrying out the identification task. We show that a
significant fraction of such accounts is identifiable in a reliable manner, and
illustrate potential uses for personalized recommendation.
| [
"['Amy Zhang' 'Nadia Fawaz' 'Stratis Ioannidis' 'Andrea Montanari']",
"Amy Zhang, Nadia Fawaz, Stratis Ioannidis and Andrea Montanari"
] |
cs.LG cs.DS | 10.1016/j.neucom.2012.07.011 | 1208.1819 | null | null | http://arxiv.org/abs/1208.1819v1 | 2012-08-09T06:14:19Z | 2012-08-09T06:14:19Z | Self-Organizing Time Map: An Abstraction of Temporal Multivariate
Patterns | This paper adopts and adapts Kohonen's standard Self-Organizing Map (SOM) for
exploratory temporal structure analysis. The Self-Organizing Time Map (SOTM)
applies SOM-type learning to one-dimensional arrays for individual time
units, preserves orientation with short-term memory, and arranges the arrays
in ascending order of time. The two-dimensional representation of the SOTM
thus attempts twofold topology preservation, where the horizontal direction
preserves time topology and the vertical direction data topology. This enables
discovering the occurrence and exploring the properties of temporal structural
changes in data. For representing qualities and properties of SOTMs, we adapt
measures and visualizations from the standard SOM paradigm, as well as
introduce a measure of temporal structural changes. The functioning of the
SOTM, and its visualizations and quality and property measures, are illustrated
on artificial toy data. The usefulness of the SOTM in a real-world setting is
shown on poverty, welfare and development indicators.
| [
"['Peter Sarlin']",
"Peter Sarlin"
] |
cs.LG | null | 1208.1829 | null | null | http://arxiv.org/pdf/1208.1829v1 | 2012-08-09T07:14:37Z | 2012-08-09T07:14:37Z | Metric Learning across Heterogeneous Domains by Respectively Aligning
Both Priors and Posteriors | In this paper, we attempt to learn a single metric across two heterogeneous
domains, where the source domain is fully labeled and has many samples while
the target domain has only a few labeled samples but abundant unlabeled
samples. To the best of our knowledge, this task has seldom been addressed.
The proposed learning model has a simple underlying motivation: all the
samples in both the source and the target domains are mapped into a common
space, where both their priors P(sample) and their posteriors P(label|sample)
are forced to be respectively aligned as much as possible. We show that the
two mappings, from both the source domain and the target domain to the common
space, can be reparameterized into a single positive semi-definite (PSD)
matrix. Then we develop an efficient Bregman projection algorithm to optimize
the PSD matrix, regularized by a LogDet function. Furthermore, we show that
this model can be easily kernelized, and we verify its effectiveness on a
cross-language retrieval task and a cross-domain object recognition task.
| [
"['Qiang Qian' 'Songcan Chen']",
"Qiang Qian and Songcan Chen"
] |
cs.LG | null | 1208.1846 | null | null | http://arxiv.org/pdf/1208.1846v1 | 2012-08-09T08:53:11Z | 2012-08-09T08:53:11Z | Margin Distribution Controlled Boosting | Schapire's margin theory provides a theoretical explanation to the success of
boosting-type methods and manifests that a good margin distribution (MD) of
training samples is essential for generalization. However, the statement that
an MD is good is vague; consequently, many recently developed algorithms try
to generate an MD in their own senses of goodness to boost generalization.
Unlike their indirect control over the MD, in this paper we propose an
alternative boosting algorithm, termed Margin-distribution Controlled Boosting
(MCBoost), which directly controls the MD by introducing and optimizing a key
adjustable margin parameter. MCBoost's optimization implementation adopts the
column generation technique to ensure fast convergence and a small number of
weak classifiers in the final MCBooster. We empirically demonstrate that: 1)
AdaBoost is actually also an MD-controlled algorithm, with its iteration
number acting as a parameter controlling the distribution, and 2) the
generalization performance of MCBoost evaluated on UCI benchmark datasets is
validated to be better than that of AdaBoost, L2Boost, LPBoost, AdaBoost-CG
and MDBoost.
| [
"['Guangxu Guo' 'Songcan Chen']",
"Guangxu Guo and Songcan Chen"
] |
cs.DB cs.LG | null | 1208.1860 | null | null | http://arxiv.org/pdf/1208.1860v1 | 2012-08-09T10:02:35Z | 2012-08-09T10:02:35Z | Scaling Multiple-Source Entity Resolution using Statistically Efficient
Transfer Learning | We consider a serious, previously-unexplored challenge facing almost all
approaches to scaling up entity resolution (ER) to multiple data sources: the
prohibitive cost of labeling training data for supervised learning of
similarity scores for each pair of sources. While there exists a rich
literature describing almost all aspects of pairwise ER, this new challenge is
arising now due to the unprecedented ability to acquire and store data from
online sources, features driven by ER such as enriched search verticals, and
the uniqueness of noisy and missing data characteristics for each source. We
show on real-world and synthetic data that for state-of-the-art techniques, the
reality of heterogeneous sources means that the amount of labeled training
data must scale quadratically in the number of sources, just to maintain constant
precision/recall. We address this challenge with a brand new transfer learning
algorithm which requires far less training data (or equivalently, achieves
superior accuracy with the same data) and is trained using fast convex
optimization. The intuition behind our approach is to adaptively share
structure learned about one scoring problem with all other scoring problems
sharing a data source in common. We demonstrate that our theoretically
motivated approach incurs no runtime cost while it can maintain constant
precision/recall with the cost of labeling increasing only linearly with the
number of sources.
| [
"Sahand Negahban, Benjamin I. P. Rubinstein and Jim Gemmell",
"['Sahand Negahban' 'Benjamin I. P. Rubinstein' 'Jim Gemmell']"
] |
cs.LG math.ST stat.TH | null | 1208.2015 | null | null | http://arxiv.org/pdf/1208.2015v3 | 2013-05-22T11:10:14Z | 2012-08-09T19:31:22Z | Sharp analysis of low-rank kernel matrix approximations | We consider supervised learning problems within the positive-definite kernel
framework, such as kernel ridge regression, kernel logistic regression or the
support vector machine. With kernels leading to infinite-dimensional feature
spaces, a common practical limiting difficulty is the necessity of computing
the kernel matrix, which most frequently leads to algorithms with running time
at least quadratic in the number of observations n, i.e., O(n^2). Low-rank
approximations of the kernel matrix are often considered as they allow the
reduction of running time complexities to O(p^2 n), where p is the rank of the
approximation. The practicality of such methods thus depends on the required
rank p. In this paper, we show that in the context of kernel ridge regression,
for approximations based on a random subset of columns of the original kernel
matrix, the rank p may be chosen to be linear in the degrees of freedom
associated with the problem, a quantity which is classically used in the
statistical analysis of such methods, and is often seen as the implicit number
of parameters of non-parametric estimators. This result enables simple
algorithms that have sub-quadratic running time complexity, but provably
exhibit the same predictive performance as existing algorithms, for any given
problem instance, and not only for worst-case situations.
| [
"Francis Bach (INRIA Paris - Rocquencourt, LIENS)",
"['Francis Bach']"
] |
cs.LG | null | 1208.2112 | null | null | http://arxiv.org/pdf/1208.2112v2 | 2013-01-21T08:12:56Z | 2012-08-10T08:36:49Z | Inverse Reinforcement Learning with Gaussian Process | We present new algorithms for inverse reinforcement learning (IRL, or inverse
optimal control) in convex optimization settings. We argue that finite-space
IRL can be posed as a convex quadratic program under a Bayesian inference
framework with the objective of maximum a posteriori estimation. To deal with
problems in large or even infinite state space, we propose a Gaussian process
model and use preference graphs to represent observations of decision
trajectories. Our method is distinguished from other approaches to IRL in that
it makes no assumptions about the form of the reward function and yet it
retains the promise of computationally manageable implementations for potential
real-world applications. In comparison with an established algorithm on
small-scale numerical problems, our method demonstrated better accuracy in
apprenticeship learning and a more robust dependence on the number of
observations.
| [
"['Qifeng Qiao' 'Peter A. Beling']",
"Qifeng Qiao and Peter A. Beling"
] |
cs.CV cs.LG | null | 1208.2128 | null | null | http://arxiv.org/pdf/1208.2128v1 | 2012-08-10T09:33:37Z | 2012-08-10T09:33:37Z | Brain tumor MRI image classification with feature selection and
extraction using linear discriminant analysis | Feature extraction is a method of capturing the visual content of an image:
the process of representing the raw image in a reduced form to facilitate
decision making such as pattern classification. We have tried to address the
problem of classifying MRI brain images by creating a robust and more
accurate classifier which can act as an expert assistant to medical
practitioners. The objective of this paper is to present a novel method of
feature selection and extraction. This approach combines intensity, texture,
and shape-based features and classifies the tumor as white matter, gray
matter, CSF, abnormal, or normal area. The experiment is performed on 140
tumor-containing brain MR images from the Internet Brain Segmentation
Repository. The proposed technique has been carried out over a larger
database as compared to any previous work and is more robust and effective.
PCA and Linear Discriminant Analysis (LDA) were applied on the training sets.
The Support Vector Machine (SVM) classifier served as a comparison of
nonlinear techniques vs. linear ones. PCA and LDA methods are used to reduce
the number of features used. The feature selection using the proposed
technique is more beneficial as it analyses the data according to the
grouping class variable and gives a reduced feature set with high
classification accuracy.
| [
"['V. P. Gladis Pushpa Rathi' 'S. Palani']",
"V. P. Gladis Pushpa Rathi and S. Palani"
] |
null | null | 1208.2294 | null | null | http://arxiv.org/pdf/1208.2294v1 | 2012-08-10T22:22:14Z | 2012-08-10T22:22:14Z | Learning pseudo-Boolean k-DNF and Submodular Functions | We prove that any submodular function f: {0,1}^n -> {0,1,...,k} can be represented as a pseudo-Boolean 2k-DNF formula. Pseudo-Boolean DNFs are a natural generalization of DNF representation for functions with integer range. Each term in such a formula has an associated integral constant. We show that an analog of Hastad's switching lemma holds for pseudo-Boolean k-DNFs if all constants associated with the terms of the formula are bounded. This allows us to generalize Mansour's PAC-learning algorithm for k-DNFs to pseudo-Boolean k-DNFs, and hence gives a PAC-learning algorithm with membership queries under the uniform distribution for submodular functions of the form f:{0,1}^n -> {0,1,...,k}. Our algorithm runs in time polynomial in n, k^{O(k log k / epsilon)}, 1/epsilon and log(1/delta) and works even in the agnostic setting. The line of previous work on learning submodular functions [Balcan, Harvey (STOC '11), Gupta, Hardt, Roth, Ullman (STOC '11), Cheraghchi, Klivans, Kothari, Lee (SODA '12)] implies only n^{O(k)} query complexity for learning submodular functions in this setting, for fixed epsilon and delta. Our learning algorithm implies a property tester for submodularity of functions f:{0,1}^n -> {0, ..., k} with query complexity polynomial in n for k=O((log n/ loglog n)^{1/2}) and constant proximity parameter epsilon. | [
"['Sofya Raskhodnikova' 'Grigory Yaroslavtsev']"
] |
stat.ML cs.LG | null | 1208.2417 | null | null | http://arxiv.org/pdf/1208.2417v1 | 2012-08-12T10:12:48Z | 2012-08-12T10:12:48Z | How to sample if you must: on optimal functional sampling | We examine a fundamental problem that models various active sampling setups,
such as network tomography. We analyze sampling of a multivariate normal
distribution with an unknown expectation that needs to be estimated: in our
setup it is possible to sample the distribution from a given set of linear
functionals, and the difficulty addressed is how to optimally select the
combinations to achieve low estimation error. Although this problem is at the
heart of the field of optimal design, no efficient solutions for the case with
many functionals exist. We present some bounds and an efficient sub-optimal
solution for this problem for more structured sets such as binary functionals
that are induced by graph walks.
| [
"['Assaf Hallak' 'Shie Mannor']",
"Assaf Hallak and Shie Mannor"
] |
cs.LG stat.ML | null | 1208.2523 | null | null | http://arxiv.org/pdf/1208.2523v1 | 2012-08-13T08:30:14Z | 2012-08-13T08:30:14Z | Path Integral Control by Reproducing Kernel Hilbert Space Embedding | We present an embedding of stochastic optimal control problems, of the so
called path integral form, into reproducing kernel Hilbert spaces. Using
consistent, sample-based estimates of the embedding leads to a model-free,
non-parametric approach for calculating an approximate solution to the
control problem. This formulation admits a decomposition of the problem into
an invariant and a task-dependent component. Consequently, we make much more
efficient use of the sample data compared to previous sample based approaches
in this domain, e.g., by allowing sample re-use across tasks. Numerical
examples on test problems, which illustrate the sample efficiency, are
provided.
| [
"Konrad Rawlik and Marc Toussaint and Sethu Vijayakumar",
"['Konrad Rawlik' 'Marc Toussaint' 'Sethu Vijayakumar']"
] |
stat.ML cs.LG math.OC | null | 1208.2572 | null | null | http://arxiv.org/pdf/1208.2572v1 | 2012-08-13T13:02:33Z | 2012-08-13T13:02:33Z | Nonparametric sparsity and regularization | In this work we are interested in the problems of supervised learning and
variable selection when the input-output dependence is described by a nonlinear
function depending on a few variables. Our goal is to consider a sparse
nonparametric model, hence avoiding linear or additive models. The key idea is
to measure the importance of each variable in the model by making use of
partial derivatives. Based on this intuition we propose a new notion of
nonparametric sparsity and a corresponding least squares regularization scheme.
Using concepts and results from the theory of reproducing kernel Hilbert spaces
and proximal methods, we show that the proposed learning algorithm corresponds
to a minimization problem which can be provably solved by an iterative
procedure. The consistency properties of the obtained estimator are studied
both in terms of prediction and selection performance. An extensive empirical
analysis shows that the proposed method performs favorably with respect to the
state-of-the-art methods.
| [
"['Lorenzo Rosasco' 'Silvia Villa' 'Sofia Mosci' 'Matteo Santoro'\n 'Alessandro verri']",
"Lorenzo Rosasco, Silvia Villa, Sofia Mosci, Matteo Santoro, Alessandro\n verri"
] |
cs.LG cs.IR | 10.5121/ijaia.2012.3409 | 1208.2808 | null | null | http://arxiv.org/abs/1208.2808v1 | 2012-08-14T08:36:49Z | 2012-08-14T08:36:49Z | Analysis of a Statistical Hypothesis Based Learning Mechanism for Faster
crawling | The growth of the world-wide web (WWW) has expanded from an intangible
quantity of web pages to a gigantic hub of web information, which gradually
increases the complexity of the crawling process in a search engine. A search
engine handles many queries from various parts of the world, and its answers
depend solely on the knowledge that it gathers by means of crawling.
Information sharing has become a common habit of society, carried out by
publishing structured, semi-structured and unstructured resources on the web.
This social practice leads to an exponential growth of web resources, and
hence it has become essential to crawl continuously to keep web knowledge up
to date and to track modifications of existing resources. In this paper, a
statistical hypothesis based learning mechanism is incorporated to learn the
behavior of crawling speed in different network environments and to
intelligently control the speed of the crawler. A scaling technique is used
to compare the performance of the proposed method with a standard crawler.
High-speed performance is observed after scaling, and the retrieval of
relevant web resources at such high speed is analyzed.
| [
"Sudarshan Nandy, Partha Pratim Sarkar and Achintya Das",
"['Sudarshan Nandy' 'Partha Pratim Sarkar' 'Achintya Das']"
] |
cs.LG cs.CL cs.IR cs.SI stat.AP stat.ML | null | 1208.2873 | null | null | http://arxiv.org/pdf/1208.2873v1 | 2012-08-13T18:59:54Z | 2012-08-13T18:59:54Z | Detecting Events and Patterns in Large-Scale User Generated Textual
Streams with Statistical Learning Methods | A vast amount of textual web streams is influenced by events or phenomena
emerging in the real world. The social web forms an excellent modern paradigm,
where unstructured user-generated content is published on a regular basis and
in most cases is freely distributed. The present Ph.D. thesis deals with
the problem of inferring information - or patterns in general - about events
emerging in real life based on the contents of this textual stream. We show
that it is possible to extract valuable information about social phenomena,
such as an epidemic or even rainfall rates, by automatic analysis of the
content published in Social Media, and in particular Twitter, using Statistical
Machine Learning methods. An important intermediate task regards the formation
and identification of features which characterise a target event; we select and
use those textual features in several linear, non-linear and hybrid inference
approaches achieving a significantly good performance in terms of the applied
loss function. By examining further this rich data set, we also propose methods
for extracting various types of mood signals revealing how affective norms - at
least within the social web's population - evolve during the day and how
significant events emerging in the real world are influencing them. Lastly, we
present some preliminary findings showing several spatiotemporal
characteristics of this textual information as well as the potential of using
it to tackle tasks such as the prediction of voting intentions.
| [
"Vasileios Lampos",
"['Vasileios Lampos']"
] |
cs.LG cs.DB cs.PL cs.SI physics.soc-ph | null | 1208.2925 | null | null | http://arxiv.org/pdf/1208.2925v1 | 2012-08-14T17:04:19Z | 2012-08-14T17:04:19Z | Using Program Synthesis for Social Recommendations | This paper presents a new approach to select events of interest to a user in
a social media setting where events are generated by the activities of the
user's friends through their mobile devices. We argue that given the unique
requirements of the social media setting, the problem is best viewed as an
inductive learning problem, where the goal is to first generalize from the
users' expressed "likes" and "dislikes" of specific events, then to produce a
program that can be manipulated by the system and distributed to the collection
devices to collect only data of interest. The key contribution of this paper is
a new algorithm that combines existing machine learning techniques with new
program synthesis technology to learn users' preferences. We show that when
compared with the more standard approaches, our new algorithm provides up to
order-of-magnitude reductions in model training time, and significantly higher
prediction accuracies for our target application. The approach also improves on
standard machine learning techniques in that it produces clear programs that
can be manipulated to optimize data collection and filtering.
| [
"['Alvin Cheung' 'Armando Solar-Lezama' 'Samuel Madden']",
"Alvin Cheung, Armando Solar-Lezama, Samuel Madden"
] |
stat.ML cs.LG | null | 1208.3030 | null | null | http://arxiv.org/pdf/1208.3030v2 | 2013-04-22T04:12:22Z | 2012-08-15T05:35:36Z | Asymptotic Generalization Bound of Fisher's Linear Discriminant Analysis | Fisher's linear discriminant analysis (FLDA) is an important dimension
reduction method in statistical pattern recognition. It has been shown that
FLDA is asymptotically Bayes optimal under the homoscedastic Gaussian
assumption. However, this classical result has the following two major
limitations: 1) it holds only for a fixed dimensionality $D$, and thus does not
apply when $D$ and the training sample size $N$ are proportionally large; 2) it
does not provide a quantitative description on how the generalization ability
of FLDA is affected by $D$ and $N$. In this paper, we present an asymptotic
generalization analysis of FLDA based on random matrix theory, in a setting
where both $D$ and $N$ increase and $D/N\longrightarrow\gamma\in[0,1)$. The
obtained lower bound of the generalization discrimination power overcomes both
limitations of the classical result, i.e., it is applicable when $D$ and $N$
are proportionally large and provides a quantitative description of the
generalization ability of FLDA in terms of the ratio $\gamma=D/N$ and the
population discrimination power. Besides, the discrimination power bound also
leads to an upper bound on the generalization error of binary-classification
with FLDA.
| [
"['Wei Bian' 'Dacheng Tao']",
"Wei Bian and Dacheng Tao"
] |
stat.ME cs.LG | null | 1208.3145 | null | null | http://arxiv.org/pdf/1208.3145v1 | 2012-08-14T11:08:53Z | 2012-08-14T11:08:53Z | Metric distances derived from cosine similarity and Pearson and Spearman
correlations | We investigate two classes of transformations of cosine similarity and
Pearson and Spearman correlations into metric distances, utilising the simple
tool of metric-preserving functions. The first class puts anti-correlated
objects maximally far apart. Previously known transforms fall within this
class. The second class collates correlated and anti-correlated objects. An
example of such a transformation that yields a metric distance is the sine
function when applied to centered data.
| [
"['Stijn van Dongen' 'Anton J. Enright']",
"Stijn van Dongen and Anton J. Enright"
] |
stat.ML cs.LG | null | 1208.3279 | null | null | http://arxiv.org/pdf/1208.3279v1 | 2012-08-06T16:20:23Z | 2012-08-06T16:20:23Z | Structured Prediction Cascades | Structured prediction tasks pose a fundamental trade-off between the need for
model complexity to increase predictive power and the limited computational
resources for inference in the exponentially-sized output spaces such models
require. We formulate and develop the Structured Prediction Cascade
architecture: a sequence of increasingly complex models that progressively
filter the space of possible outputs. The key principle of our approach is that
each model in the cascade is optimized to accurately filter and refine the
structured output state space of the next model, speeding up both learning and
inference in the next layer of the cascade. We learn cascades by optimizing a
novel convex loss function that controls the trade-off between the filtering
efficiency and the accuracy of the cascade, and provide generalization bounds
for both accuracy and efficiency. We also extend our approach to intractable
models using tree-decomposition ensembles, and provide algorithms and theory
for this setting. We evaluate our approach on several large-scale problems,
achieving state-of-the-art performance in handwriting recognition and human
pose recognition. We find that structured prediction cascades allow tremendous
speedups and the use of previously intractable features and models in both
settings.
| [
"David Weiss, Benjamin Sapp, Ben Taskar",
"['David Weiss' 'Benjamin Sapp' 'Ben Taskar']"
] |
stat.ML cs.LG | null | 1208.3422 | null | null | http://arxiv.org/pdf/1208.3422v2 | 2013-01-08T20:26:55Z | 2012-08-16T17:16:18Z | Distance Metric Learning for Kernel Machines | Recent work in metric learning has significantly improved the
state-of-the-art in k-nearest neighbor classification. Support vector machines
(SVM), particularly with RBF kernels, are amongst the most popular
classification algorithms that use distance metrics to compare examples. This
paper provides an empirical analysis of the efficacy of three of the most
popular Mahalanobis metric learning algorithms as pre-processing for SVM
training. We show that none of these algorithms generate metrics that lead to
particularly satisfying improvements for SVM-RBF classification. As a remedy we
introduce support vector metric learning (SVML), a novel algorithm that
seamlessly combines the learning of a Mahalanobis metric with the training of
the RBF-SVM parameters. We demonstrate the capabilities of SVML on nine
benchmark data sets of varying sizes and difficulties. In our study, SVML
outperforms all alternative state-of-the-art metric learning algorithms in
terms of accuracy and establishes itself as a serious alternative to the
standard Euclidean metric with model selection by cross validation.
| [
"['Zhixiang Xu' 'Kilian Q. Weinberger' 'Olivier Chapelle']",
"Zhixiang Xu, Kilian Q. Weinberger, Olivier Chapelle"
] |
cs.LG | null | 1208.3561 | null | null | http://arxiv.org/pdf/1208.3561v3 | 2013-05-25T19:23:14Z | 2012-08-17T09:49:31Z | Efficient Active Learning of Halfspaces: an Aggressive Approach | We study pool-based active learning of half-spaces. We revisit the aggressive
approach for active learning in the realizable case, and show that it can be
made efficient and practical, while also having theoretical guarantees under
reasonable assumptions. We further show, both theoretically and experimentally,
that it can be preferable to mellow approaches. Our efficient aggressive active
learner of half-spaces has formal approximation guarantees that hold when the
pool is separable with a margin. While our analysis is focused on the
realizable setting, we show that a simple heuristic allows using the same
algorithm successfully for pools with low error as well. We further compare the
aggressive approach to the mellow approach, and prove that there are cases in
which the aggressive approach results in significantly better label complexity
compared to the mellow approach. We demonstrate experimentally that substantial
improvements in label complexity can be achieved using the aggressive approach,
for both realizable and low-error settings.
| [
"['Alon Gonen' 'Sivan Sabato' 'Shai Shalev-Shwartz']",
"Alon Gonen, Sivan Sabato, Shai Shalev-Shwartz"
] |
cs.LG cs.IT math.IT | null | 1208.3689 | null | null | http://arxiv.org/pdf/1208.3689v1 | 2012-08-17T20:58:20Z | 2012-08-17T20:58:20Z | An improvement direction for filter selection techniques using
information theory measures and quadratic optimization | Filter selection techniques are known for their simplicity and efficiency.
However, these methods do not take inter-feature redundancy into
consideration. Consequently, the unremoved redundant features remain in the
final classification model, giving lower generalization performance. In this
paper, we propose to use a mathematical optimization method that reduces
inter-feature redundancy and maximizes the relevance between each feature and
the target variable.
| [
"['Waad Bouaguel' 'Ghazi Bel Mufti']",
"Waad Bouaguel and Ghazi Bel Mufti"
] |
cs.LG | null | 1208.3719 | null | null | http://arxiv.org/pdf/1208.3719v2 | 2013-03-06T23:27:04Z | 2012-08-18T02:14:47Z | Auto-WEKA: Combined Selection and Hyperparameter Optimization of
Classification Algorithms | Many different machine learning algorithms exist; taking into account each
algorithm's hyperparameters, there is a staggeringly large number of possible
alternatives overall. We consider the problem of simultaneously selecting a
learning algorithm and setting its hyperparameters, going beyond previous work
that addresses these issues in isolation. We show that this problem can be
addressed by a fully automated approach, leveraging recent innovations in
Bayesian optimization. Specifically, we consider a wide range of feature
selection techniques (combining 3 search and 8 evaluator methods) and all
classification approaches implemented in WEKA, spanning 2 ensemble methods, 10
meta-methods, 27 base classifiers, and hyperparameter settings for each
classifier. On each of 21 popular datasets from the UCI repository, the KDD Cup
09, variants of the MNIST dataset and CIFAR-10, we show classification
performance often much better than using standard selection/hyperparameter
optimization methods. We hope that our approach will help non-expert users to
more effectively identify machine learning algorithms and hyperparameter
settings appropriate to their applications, and hence to achieve improved
performance.
| [
"['Chris Thornton' 'Frank Hutter' 'Holger H. Hoos' 'Kevin Leyton-Brown']",
"Chris Thornton and Frank Hutter and Holger H. Hoos and Kevin\n Leyton-Brown"
] |
stat.ML cs.LG | null | 1208.3728 | null | null | http://arxiv.org/pdf/1208.3728v2 | 2014-05-24T10:56:17Z | 2012-08-18T06:27:31Z | Online Learning with Predictable Sequences | We present methods for online linear optimization that take advantage of
benign (as opposed to worst-case) sequences. Specifically if the sequence
encountered by the learner is described well by a known "predictable process",
the algorithms presented enjoy tighter bounds as compared to the typical worst
case bounds. Additionally, the methods achieve the usual worst-case regret
bounds if the sequence is not benign. Our approach can be seen as a way of
adding prior knowledge about the sequence within the paradigm of online
learning. The setting is shown to encompass partial and side information.
Variance and path-length bounds can be seen as particular examples of online
learning with simple predictable sequences.
We further extend our methods and results to include competing with a set of
possible predictable processes (models), that is "learning" the predictable
process itself concurrently with using it to obtain better regret guarantees.
We show that such model selection is possible under various assumptions on the
available feedback. Our results suggest a promising direction of further
research with potential applications to stock market and time series
prediction.
| [
"['Alexander Rakhlin' 'Karthik Sridharan']",
"Alexander Rakhlin and Karthik Sridharan"
] |
cs.LG cs.CE cs.IR q-bio.QM | 10.1186/1471-2105-13-307 | 1208.3779 | null | null | http://arxiv.org/abs/1208.3779v3 | 2013-04-21T06:21:36Z | 2012-08-18T19:32:20Z | Multiple graph regularized protein domain ranking | Background Protein domain ranking is a fundamental task in structural
biology. Most protein domain ranking methods rely on the pairwise comparison of
protein domains while neglecting the global manifold structure of the protein
domain database. Recently, graph regularized ranking that exploits the global
structure of the graph defined by the pairwise similarities has been proposed.
However, the existing graph regularized ranking methods are very sensitive to
the choice of the graph model and parameters, and this remains a difficult
problem for most of the protein domain ranking methods.
Results To tackle this problem, we have developed the Multiple Graph
regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to
regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold
of protein domain distribution by combining multiple initial graphs for the
regularization. Graph weights are learned with ranking scores jointly and
automatically, by alternately minimizing an objective function in an
iterative algorithm. Experimental results on a subset of the ASTRAL SCOP
protein domain database demonstrate that MultiG-Rank achieves a better ranking
performance than single graph regularized ranking methods and pairwise
similarity based ranking methods.
Conclusion The problem of graph model and parameter selection in graph
regularized protein domain ranking can be solved effectively by combining
multiple graphs. This aspect of generalization introduces a new frontier in
applying multiple graphs to solving protein domain ranking applications.
| [
"['Jim Jing-Yan Wang' 'Halima Bensmail' 'Xin Gao']",
"Jim Jing-Yan Wang, Halima Bensmail and Xin Gao"
] |
cs.CV cs.LG stat.ML | null | 1208.3839 | null | null | http://arxiv.org/pdf/1208.3839v2 | 2013-04-03T14:21:40Z | 2012-08-19T14:49:27Z | Discriminative Sparse Coding on Multi-Manifold for Data Representation
and Classification | Sparse coding has been popularly used as an effective data representation
method in various applications, such as computer vision, medical imaging and
bioinformatics. However, conventional sparse coding algorithms and their
manifold regularized variants (graph sparse coding and Laplacian sparse
coding) learn the codebook and codes in an unsupervised manner and neglect the
class information available in the training set. To address this problem, in
this paper we propose a novel discriminative sparse coding method based on
multi-manifold, by learning discriminative class-conditional codebooks and
sparse codes from both data feature space and class labels. First, the entire
training set is partitioned into multiple manifolds according to the class
labels. Then, we formulate the sparse coding as a manifold-manifold matching
problem and learn class-conditional codebooks and codes to maximize the
manifold margins of different classes. Lastly, we present a data point-manifold
matching error based strategy to classify the unlabeled data point.
Experimental results on somatic mutation identification and breast tumor
classification in ultrasonic images demonstrate the efficacy of the
proposed data representation and classification approach.
| [
"Jing-Yan Wang",
"['Jing-Yan Wang']"
] |
cs.LG cs.CV stat.ML | null | 1208.3845 | null | null | http://arxiv.org/pdf/1208.3845v3 | 2013-04-03T14:21:50Z | 2012-08-19T15:21:09Z | Adaptive Graph via Multiple Kernel Learning for Nonnegative Matrix
Factorization | Nonnegative Matrix Factorization (NMF) has been continuously evolving in
several areas like pattern recognition and information retrieval methods. It
factorizes a matrix into a product of two low-rank non-negative matrices that
define a parts-based, linear representation of nonnegative data.
Recently, Graph regularized NMF (GrNMF) was proposed to find a compact
representation, which uncovers the hidden semantics and simultaneously respects
the intrinsic geometric structure. In GrNMF, an affinity graph is constructed
from the original data space to encode the geometrical information. In this
paper, we propose a novel idea which engages a Multiple Kernel Learning
approach into refining the graph structure that reflects the factorization of
the matrix and the new data space. The GrNMF is improved by utilizing the graph
refined by the kernel learning, and then a novel kernel learning method is
introduced under the GrNMF framework. Our approach shows encouraging results of
the proposed algorithm in comparison to the state-of-the-art clustering
algorithms like NMF, GrNMF, SVD etc.
| [
"Jing-Yan Wang and Mustafa AbdulJabbar",
"['Jing-Yan Wang' 'Mustafa AbdulJabbar']"
] |
cs.LG cs.DB cs.PF stat.ML | null | 1208.3943 | null | null | http://arxiv.org/pdf/1208.3943v1 | 2012-08-20T08:48:40Z | 2012-08-20T08:48:40Z | Performance Tuning Of J48 Algorithm For Prediction Of Soil Fertility | Data mining involves the systematic analysis of large data sets, and data
mining in agricultural soil datasets is an exciting and modern research area. The
productive capacity of a soil depends on soil fertility. Achieving and
maintaining appropriate levels of soil fertility is of utmost importance if
agricultural land is to remain capable of nourishing crop production. In this
research, the steps for building a predictive model of soil fertility are
explained.
This paper aims at predicting soil fertility class using decision tree
algorithms in data mining. Further, it focuses on performance tuning of the J48
decision tree algorithm with the help of meta-techniques such as attribute
selection and boosting.
| [
"Jay Gholap",
"['Jay Gholap']"
] |
cs.LG stat.ML | null | 1208.4138 | null | null | http://arxiv.org/pdf/1208.4138v1 | 2012-08-20T23:21:10Z | 2012-08-20T23:21:10Z | Semi-supervised Clustering Ensemble by Voting | Clustering ensemble is one of the most recent advances in unsupervised
learning. It aims to combine the clustering results obtained using different
algorithms or from different runs of the same clustering algorithm for the same
data set. This is accomplished using a consensus function, and the efficiency
and accuracy of this method have been proven in many works in the literature. In the
first part of this paper we make a comparison among current approaches to
clustering ensemble in the literature. All of these approaches consist of two main
steps: the ensemble generation and consensus function. In the second part of
the paper, we suggest engaging supervision in the clustering ensemble procedure
to get more enhancements on the clustering results. Supervision can be applied
in two places: either by using semi-supervised algorithms in the clustering
ensemble generation step or in the form of a feedback used by the consensus
function stage. Also, we introduce a flexible two-parameter weighting
mechanism: the first parameter describes the compatibility between the datasets
under study and the semi-supervised clustering algorithms used to generate the
base partitions, while the second parameter is used to provide user feedback on
these partitions. The two parameters are engaged in a "relabeling and
voting" based consensus function to produce the final clustering.
| [
"['Ashraf Mohammed Iqbal' \"Abidalrahman Moh'd\" 'Zahoor Khan']",
"Ashraf Mohammed Iqbal, Abidalrahman Moh'd, Zahoor Khan"
] |
cs.IR cs.LG cs.SI | null | 1208.4147 | null | null | http://arxiv.org/pdf/1208.4147v3 | 2015-11-27T22:55:51Z | 2012-08-21T00:28:32Z | Generating ordered list of Recommended Items: a Hybrid Recommender
System of Microblog | Precise recommendation of followers helps in improving the user experience
and maintaining the prosperity of twitter and microblog platforms. In this
paper, we design a hybrid recommender system for microblogs as a solution to the KDD
Cup 2012 Track 1 task, which requires predicting which users a user might follow on
Tencent Microblog. We describe the background of the problem and present an
algorithm consisting of keyword analysis, user taxonomy, (potential) interest
extraction and item recommendation. Experimental results show the high
performance of our algorithm. Some possible improvements are discussed, which
lead to further study.
| [
"['Yingzhen Li' 'Ye Zhang']",
"Yingzhen Li and Ye Zhang"
] |
cs.LG cs.NI | 10.1109/TWC.2013.030413.121120 | 1208.4290 | null | null | http://arxiv.org/abs/1208.4290v2 | 2012-12-06T12:22:40Z | 2012-08-21T15:35:31Z | A Learning Theoretic Approach to Energy Harvesting Communication System
Optimization | A point-to-point wireless communication system in which the transmitter is
equipped with an energy harvesting device and a rechargeable battery, is
studied. Both the energy and the data arrivals at the transmitter are modeled
as Markov processes. Delay-limited communication is considered assuming that
the underlying channel is block fading with memory, and the instantaneous
channel state information is available at both the transmitter and the
receiver. The expected total transmitted data during the transmitter's
activation time is maximized under three different sets of assumptions
regarding the information available at the transmitter about the underlying
stochastic processes. A learning theoretic approach is introduced, which does
not assume any a priori information on the Markov processes governing the
communication system. In addition, online and offline optimization problems are
studied for the same setting. Full statistical knowledge and causal information
on the realizations of the underlying stochastic processes are assumed in the
online optimization problem, while the offline optimization problem assumes
non-causal knowledge of the realizations in advance. Comparing the optimal
solutions in all three frameworks, the performance loss due to the lack of the
transmitter's information regarding the behaviors of the underlying Markov
processes is quantified.
| [
"Pol Blasco, Deniz G\\\"und\\\"uz and Mischa Dohler",
"['Pol Blasco' 'Deniz Gündüz' 'Mischa Dohler']"
] |
cs.SY cs.AI cs.LG | null | 1208.4773 | null | null | http://arxiv.org/pdf/1208.4773v1 | 2012-08-23T14:48:52Z | 2012-08-23T14:48:52Z | Optimized Look-Ahead Tree Policies: A Bridge Between Look-Ahead Tree
Policies and Direct Policy Search | Direct policy search (DPS) and look-ahead tree (LT) policies are two widely
used classes of techniques to produce high performance policies for sequential
decision-making problems. To make DPS approaches work well, one crucial issue
is to select an appropriate space of parameterized policies with respect to the
targeted problem. A fundamental issue in LT approaches is that, to take good
decisions, such policies must develop very large look-ahead trees which may
require excessive online computational resources. In this paper, we propose a
new hybrid policy learning scheme that lies at the intersection of DPS and LT,
in which the policy is an algorithm that develops a small look-ahead tree in a
directed way, guided by a node scoring function that is learned through DPS.
The LT-based representation is shown to be a versatile way of representing
policies in a DPS scheme, while at the same time, DPS makes it possible to significantly
reduce the size of the look-ahead trees that are required to take high-quality
decisions.
We experimentally compare our method with two other state-of-the-art DPS
techniques and four common LT policies on four benchmark domains and show that
it combines the advantages of the two techniques from which it originates. In
particular, we show that our method: (1) produces overall better performing
policies than both pure DPS and pure LT policies, (2) requires a substantially
smaller number of policy evaluations than other DPS techniques, (3) is easy to
tune and (4) results in policies that are quite robust with respect to
perturbations of the initial conditions.
| [
"['Tobias Jung' 'Louis Wehenkel' 'Damien Ernst' 'Francis Maes']",
"Tobias Jung, Louis Wehenkel, Damien Ernst, Francis Maes"
] |
cs.LG math.PR | null | 1208.5003 | null | null | http://arxiv.org/pdf/1208.5003v3 | 2014-07-15T15:33:38Z | 2012-08-24T16:29:48Z | Identification of Probabilities of Languages | We consider the problem of inferring the probability distribution associated
with a language, given data consisting of an infinite sequence of elements of
the language. We do this under two assumptions on the algorithms concerned: (i)
like a real-life algorithm it has round-off errors, and (ii) it has no
round-off errors. Assuming (i) we (a) consider a probability mass function of
the elements of the language if the data are drawn independent identically
distributed (i.i.d.), provided the probability mass function is computable and
has a finite expectation. We give an effective procedure to almost surely
identify in the limit the target probability mass function using the Strong Law
of Large Numbers. Second (b) we treat the case of possibly incomputable
probability mass functions in the above setting. In this case we can only
converge pointwise to the target probability mass function almost surely.
Third (c) we consider the case where the data are dependent assuming they are
typical for at least one computable measure and the language is finite. There
is an effective procedure to identify by infinite recurrence a nonempty subset
of the computable measures according to which the data is typical. Here we use
the theory of Kolmogorov complexity. Assuming (ii) we obtain the weaker result
for (a) that the target distribution is identified by infinite recurrence
almost surely; (b) stays the same as under assumption (i). We consider the
associated predictions.
| [
"Paul M. B. Vitanyi (CWI and University of Amsterdam) and Nick Chater\n (Behavioural Science Group, Warwick Business School, University of Warwick)",
"['Paul M. B. Vitanyi' 'Nick Chater']"
] |
stat.ML cs.LG | 10.1109/JSTSP.2012.2234082 | 1208.5062 | null | null | http://arxiv.org/abs/1208.5062v3 | 2012-12-07T20:30:49Z | 2012-08-24T20:36:36Z | Changepoint detection for high-dimensional time series with missing data | This paper describes a novel approach to change-point detection when the
observed high-dimensional data may have missing elements. The performance of
classical methods for change-point detection typically scales poorly with the
dimensionality of the data, so that a large number of observations are
collected after the true change-point before it can be reliably detected.
Furthermore, missing components in the observed data handicap conventional
approaches. The proposed method addresses these challenges by modeling the
dynamic distribution underlying the data as lying close to a time-varying
low-dimensional submanifold embedded within the ambient observation space.
Specifically, streaming data is used to track a submanifold approximation,
measure deviations from this approximation, and calculate a series of
statistics of the deviations for detecting when the underlying manifold has
changed in a sharp or unexpected manner. The approach described in this paper
leverages several recent results in the field of high-dimensional data
analysis, including subspace tracking with missing data, multiscale analysis
techniques for point clouds, online optimization, and change-point detection
performance analysis. Simulations and experiments highlight the robustness and
efficacy of the proposed approach in detecting an abrupt change in an otherwise
slowly varying low-dimensional manifold.
| [
"['Yao Xie' 'Jiaji Huang' 'Rebecca Willett']",
"Yao Xie, Jiaji Huang, Rebecca Willett"
] |
cs.LG | null | 1208.5801 | null | null | http://arxiv.org/pdf/1208.5801v2 | 2012-08-31T18:17:40Z | 2012-08-28T21:51:36Z | Vector Field k-Means: Clustering Trajectories by Fitting Multiple Vector
Fields | Scientists study trajectory data to understand trends in movement patterns,
such as human mobility for traffic analysis and urban planning. There is a
pressing need for scalable and efficient techniques for analyzing this data and
discovering the underlying patterns. In this paper, we introduce a novel
technique which we call vector-field $k$-means.
The central idea of our approach is to use vector fields to induce a
similarity notion between trajectories. Other clustering algorithms seek a
representative trajectory that best describes each cluster, much like $k$-means
identifies a representative "center" for each cluster. Vector-field $k$-means,
on the other hand, recognizes that in all but the simplest examples, no single
trajectory adequately describes a cluster. Our approach is based on the premise
that movement trends in trajectory data can be modeled as flows within multiple
vector fields, and the vector field itself is what defines each of the
clusters. We also show how vector-field $k$-means connects techniques for
scalar field design on meshes and $k$-means clustering.
We present an algorithm that finds a locally optimal clustering of
trajectories into vector fields, and demonstrate how vector-field $k$-means can
be used to mine patterns from trajectory data. We present experimental evidence
of its effectiveness and efficiency using several datasets, including
historical hurricane data, GPS tracks of people and vehicles, and anonymous
call records from a large phone company. We compare our results to previous
trajectory clustering techniques, and find that our algorithm performs faster
in practice than the current state-of-the-art in trajectory clustering, in some
examples by a large margin.
| [
"['Nivan Ferreira' 'James T. Klosowski' 'Carlos Scheidegger'\n 'Claudio Silva']",
"Nivan Ferreira, James T. Klosowski, Carlos Scheidegger, Claudio Silva"
] |
cs.LG | null | 1208.6231 | null | null | http://arxiv.org/pdf/1208.6231v1 | 2012-08-30T16:48:05Z | 2012-08-30T16:48:05Z | Link Prediction via Generalized Coupled Tensor Factorisation | This study deals with the missing link prediction problem: the problem of
predicting the existence of missing connections between entities of interest.
We address link prediction using coupled analysis of relational datasets
represented as heterogeneous data, i.e., datasets in the form of matrices and
higher-order tensors. We propose to use an approach based on probabilistic
interpretation of tensor factorisation models, i.e., Generalised Coupled Tensor
Factorisation, which can simultaneously fit a large class of tensor models to
higher-order tensors/matrices with common latent factors using different loss
functions. Numerical experiments demonstrate that joint analysis of data from
multiple sources via coupled factorisation improves the link prediction
performance, and that the selection of the right loss function and tensor model is
crucial for accurately predicting missing links.
| [
"['Beyza Ermiş' 'Evrim Acar' 'A. Taylan Cemgil']",
"Beyza Ermi\\c{s} and Evrim Acar and A. Taylan Cemgil"
] |
cs.NE cs.LG | null | 1208.6310 | null | null | http://arxiv.org/pdf/1208.6310v1 | 2012-08-16T12:14:46Z | 2012-08-16T12:14:46Z | Automated Marble Plate Classification System Based On Different Neural
Network Input Training Sets and PLC Implementation | The process of sorting marble plates according to their surface texture is an
important task in automated marble plate production. Nowadays, some
inspection systems in the marble industry that automate the classification tasks
are too expensive and are compatible only with specific technological equipment
in the plant. In this paper a new approach to the design of an Automated Marble
Plate Classification System (AMPCS),based on different neural network input
training sets is proposed, aiming at high classification accuracy using simple
processing and application of only standard devices. It is based on training a
classification MLP neural network with three different input training sets:
extracted texture histograms, Discrete Cosine and Wavelet Transform over the
histograms. The algorithm is implemented in a PLC for real-time operation. The
performance of the system is assessed with each one of the input training sets.
The experimental test results regarding classification accuracy and quick
operation are presented and discussed.
| [
"['Irina Topalova']",
"Irina Topalova"
] |
cs.CV cs.AI cs.IR cs.LG cs.MM | 10.5120/8320-1959 | 1208.6335 | null | null | null | null | null | Comparative Study and Optimization of Feature-Extraction Techniques for
Content based Image Retrieval | The aim of a Content-Based Image Retrieval (CBIR) system, also known as Query
by Image Content (QBIC), is to help users to retrieve relevant images based on
their contents. CBIR technologies provide a method to find images in large
databases by using unique descriptors from a trained image. The image
descriptors include texture, color, intensity and shape of the object inside an
image. Several feature-extraction techniques, viz. Average RGB, Color Moments,
Co-occurrence, Local Color Histogram, Global Color Histogram and Geometric
Moment have been critically compared in this paper. However, individually these
techniques result in poor performance. So, combinations of these techniques
have also been evaluated and results for the most efficient combination of
techniques have been presented and optimized for each class of image query. We
also propose an improvement in image retrieval performance by introducing the
idea of Query modification through image cropping. It enables the user to
identify a region of interest and modify the initial query to refine and
personalize the image retrieval results.
| [
"Aman Chadha, Sushmit Mallik and Ravdeep Johar"
] |
cs.LG stat.ML | null | 1208.6338 | null | null | http://arxiv.org/pdf/1208.6338v1 | 2012-08-31T00:31:34Z | 2012-08-31T00:31:34Z | A Widely Applicable Bayesian Information Criterion | A statistical model or a learning machine is called regular if the map taking
a parameter to a probability distribution is one-to-one and if its Fisher
information matrix is always positive definite. If otherwise, it is called
singular. In regular statistical models, the Bayes free energy, which is
defined by the minus logarithm of Bayes marginal likelihood, can be
asymptotically approximated by the Schwarz Bayes information criterion (BIC),
whereas in singular models such an approximation does not hold.
Recently, it was proved that the Bayes free energy of a singular model is
asymptotically given by a generalized formula using a birational invariant, the
real log canonical threshold (RLCT), instead of half the number of parameters
in BIC. Theoretical values of RLCTs in several statistical models are now being
discovered based on algebraic geometrical methodology. However, it has been
difficult to estimate the Bayes free energy using only training samples,
because an RLCT depends on an unknown true distribution.
In the present paper, we define a widely applicable Bayesian information
criterion (WBIC) by the average log likelihood function over the posterior
distribution with the inverse temperature $1/\log n$, where $n$ is the number
of training samples. We mathematically prove that WBIC has the same asymptotic
expansion as the Bayes free energy, even if a statistical model is singular for
and unrealizable by a true distribution. Since WBIC can be numerically
calculated without any information about a true distribution, it is a
generalized version of BIC onto singular statistical models.
| [
"['Sumio Watanabe']",
"Sumio Watanabe"
] |
null | null | 1209.0001 | null | null | http://arxiv.org/pdf/1209.0001v1 | 2012-08-30T20:40:06Z | 2012-08-30T20:40:06Z | An Improved Bound for the Nystrom Method for Large Eigengap | We develop an improved bound for the approximation error of the Nyström method under the assumption that there is a large eigengap in the spectrum of the kernel matrix. This is based on the empirical observation that the eigengap has a significant impact on the approximation error of the Nyström method. Our approach is based on the concentration inequality of integral operators and the theory of matrix perturbation. Our analysis shows that when there is a large eigengap, we can improve the approximation error of the Nyström method from $O(N/m^{1/4})$ to $O(N/m^{1/2})$ when measured in the Frobenius norm, where $N$ is the size of the kernel matrix, and $m$ is the number of sampled columns. | [
"['Mehrdad Mahdavi' 'Tianbao Yang' 'Rong Jin']"
] |
cs.LG stat.ML | null | 1209.0029 | null | null | http://arxiv.org/pdf/1209.0029v3 | 2012-09-05T20:21:29Z | 2012-08-31T22:50:00Z | Statistically adaptive learning for a general class of cost functions
(SA L-BFGS) | We present a system that enables rapid model experimentation for tera-scale
machine learning with trillions of non-zero features, billions of training
examples, and millions of parameters. Our contribution to the literature is a
new method (SA L-BFGS) for changing batch L-BFGS to perform in near real-time
by using statistical tools to balance the contributions of previous weights,
old training examples, and new training examples to achieve fast convergence
with few iterations. The result is, to our knowledge, the most scalable and
flexible linear learning system reported in the literature, beating standard
practice with the current best system (Vowpal Wabbit and AllReduce). Using the
KDD Cup 2012 data set from Tencent, Inc. we provide experimental results to
verify the performance of this method.
| [
"['Stephen Purpura' 'Dustin Hillard' 'Mark Hubenthal' 'Jim Walsh'\n 'Scott Golder' 'Scott Smith']",
"Stephen Purpura, Dustin Hillard, Mark Hubenthal, Jim Walsh, Scott\n Golder, Scott Smith"
] |
cs.AI cs.DS cs.LG cs.LO | null | 1209.0056 | null | null | http://arxiv.org/pdf/1209.0056v1 | 2012-09-01T05:13:00Z | 2012-09-01T05:13:00Z | Learning implicitly in reasoning in PAC-Semantics | We consider the problem of answering queries about formulas of propositional
logic based on background knowledge partially represented explicitly as other
formulas, and partially represented as partially obscured examples
independently drawn from a fixed probability distribution, where the queries
are answered with respect to a weaker semantics than usual -- PAC-Semantics,
introduced by Valiant (2000) -- that is defined using the distribution of
examples. We describe a fairly general, efficient reduction to limited versions
of the decision problem for a proof system (e.g., bounded space treelike
resolution, bounded degree polynomial calculus, etc.) from corresponding
versions of the reasoning problem where some of the background knowledge is not
explicitly given as formulas, only learnable from the examples. Crucially, we
do not generate an explicit representation of the knowledge extracted from the
examples, and so the "learning" of the background knowledge is only done
implicitly. As a consequence, this approach can utilize formulas as background
knowledge that are not perfectly valid over the distribution---essentially the
analogue of agnostic learning here.
| [
"Brendan Juba",
"['Brendan Juba']"
] |
physics.data-an cs.LG physics.soc-ph stat.AP stat.ME | 10.1214/12-AOAS614 | 1209.0089 | null | null | http://arxiv.org/abs/1209.0089v3 | 2014-01-08T08:38:09Z | 2012-09-01T12:58:35Z | Estimating the historical and future probabilities of large terrorist
events | Quantities with right-skewed distributions are ubiquitous in complex social
systems, including political conflict, economics and social networks, and these
systems sometimes produce extremely large events. For instance, the 9/11
terrorist events produced nearly 3000 fatalities, nearly six times more than
the next largest event. But, was this enormous loss of life statistically
unlikely given modern terrorism's historical record? Accurately estimating the
probability of such an event is complicated by the large fluctuations in the
empirical distribution's upper tail. We present a generic statistical algorithm
for making such estimates, which combines semi-parametric models of tail
behavior and a nonparametric bootstrap. Applied to a global database of
terrorist events, we estimate the worldwide historical probability of observing
at least one 9/11-sized or larger event since 1968 to be 11-35%. These results
are robust to conditioning on global variations in economic development,
domestic versus international events, the type of weapon used and a truncated
history that stops at 1998. We then use this procedure to make a data-driven
statistical forecast of at least one similar event over the next decade.
| [
"Aaron Clauset, Ryan Woodard",
"['Aaron Clauset' 'Ryan Woodard']"
] |
cs.DL cs.LG stat.ML | null | 1209.0125 | null | null | http://arxiv.org/pdf/1209.0125v2 | 2013-08-16T23:52:49Z | 2012-09-01T19:27:19Z | A History of Cluster Analysis Using the Classification Society's
Bibliography Over Four Decades | The Classification Literature Automated Search Service, an annual
bibliography based on citation of one or more of a set of around 80 book or
journal publications, ran from 1972 to 2012. We analyze here the years 1994 to
2011. The Classification Society's Service, as it was termed, has been produced
by the Classification Society. In earlier decades it was distributed as a
diskette or CD with the Journal of Classification. Among our findings are the
following: an enormous increase in scholarly production post approximately
2000; a very major increase in quantity, coupled with work in different
disciplines, from approximately 2004; and a major shift also from cluster
analysis in earlier times having mathematics and psychology as disciplines of
the journals published in, and affiliations of authors, contrasted with, in
more recent times, a "centre of gravity" in management and engineering.
| [
"['Fionn Murtagh' 'Michael J. Kurtz']",
"Fionn Murtagh and Michael J. Kurtz"
] |
cs.LG cs.CE cs.NE | null | 1209.0127 | null | null | http://arxiv.org/pdf/1209.0127v2 | 2012-09-24T19:28:24Z | 2012-09-01T19:53:23Z | Autoregressive short-term prediction of turning points using support
vector regression | This work is concerned with autoregressive prediction of turning points in
financial price sequences. Such turning points are critical local extrema
points along a series, which mark the start of new swings. Predicting the
future time of such turning points or even their early or late identification
slightly before or after the fact has useful applications in economics and
finance. Building on a recently proposed neural network model for turning point
prediction, we propose and study a new autoregressive model for predicting
turning points of small swings. Our method relies on a known turning point
indicator, a Fourier enriched representation of price histories, and support
vector regression. We empirically examine the performance of the proposed
method over a long history of the Dow Jones Industrial average. Our study shows
that the proposed method is superior to the previous neural network model, in
terms of trading performance of a simple trading application and also exhibits
a quantifiable advantage over the buy-and-hold benchmark.
| [
"Ran El-Yaniv, Alexandra Faynburd",
"['Ran El-Yaniv' 'Alexandra Faynburd']"
] |
math.OC cs.LG stat.ML | null | 1209.0368 | null | null | http://arxiv.org/pdf/1209.0368v1 | 2012-09-03T14:46:14Z | 2012-09-03T14:46:14Z | Proximal methods for the latent group lasso penalty | We consider a regularized least squares problem, with regularization by
structured sparsity-inducing norms, which extend the usual $\ell_1$ and the
group lasso penalty, by allowing the subsets to overlap. Such regularizations
lead to nonsmooth problems that are difficult to optimize, and we propose in
this paper a suitable version of an accelerated proximal method to solve them.
We prove convergence of a nested procedure, obtained composing an accelerated
proximal method with an inner algorithm for computing the proximity operator.
By exploiting the geometrical properties of the penalty, we devise a new active
set strategy, thanks to which the inner iteration is relatively fast, thus
guaranteeing good computational performances of the overall algorithm. Our
approach allows us to deal with high-dimensional problems without pre-processing
for dimensionality reduction, leading to better computational and prediction
performance with respect to state-of-the-art methods, as shown empirically
both on toy and real data.
| [
"Silvia Villa, Lorenzo Rosasco, Sofia Mosci, Alessandro Verri",
"['Silvia Villa' 'Lorenzo Rosasco' 'Sofia Mosci' 'Alessandro Verri']"
] |
cs.LG math.OC | null | 1209.0430 | null | null | http://arxiv.org/pdf/1209.0430v2 | 2013-04-24T12:19:54Z | 2012-09-03T18:48:37Z | Fixed-rank matrix factorizations and Riemannian low-rank optimization | Motivated by the problem of learning a linear regression model whose
parameter is a large fixed-rank non-symmetric matrix, we consider the
optimization of a smooth cost function defined on the set of fixed-rank
matrices. We adopt the geometric framework of optimization on Riemannian
quotient manifolds. We study the underlying geometries of several well-known
fixed-rank matrix factorizations and then exploit the Riemannian quotient
geometry of the search space in the design of a class of gradient descent and
trust-region algorithms. The proposed algorithms generalize our previous
results on fixed-rank symmetric positive semidefinite matrices, apply to a
broad range of applications, scale to high-dimensional problems and confer a
geometric basis to recent contributions on the learning of fixed-rank
non-symmetric matrices. We make connections with existing algorithms in the
context of low-rank matrix completion and discuss relative usefulness of the
proposed framework. Numerical experiments suggest that the proposed algorithms
compete with the state-of-the-art and that manifold optimization offers an
effective and versatile framework for the design of machine learning algorithms
that learn a fixed-rank matrix.
| [
"B. Mishra, G. Meyer, S. Bonnabel and R. Sepulchre",
"['B. Mishra' 'G. Meyer' 'S. Bonnabel' 'R. Sepulchre']"
] |
cs.LG stat.ML | null | 1209.0521 | null | null | http://arxiv.org/pdf/1209.0521v2 | 2018-01-08T15:50:42Z | 2012-09-04T03:15:53Z | Efficient EM Training of Gaussian Mixtures with Missing Data | In data-mining applications, we are frequently faced with a large fraction of
missing entries in the data matrix, which is problematic for most discriminant
machine learning algorithms. A solution that we explore in this paper is the
use of a generative model (a mixture of Gaussians) to compute the conditional
expectation of the missing variables given the observed variables. Since
training a Gaussian mixture with many different patterns of missing values can
be computationally very expensive, we introduce a spanning-tree based algorithm
that significantly speeds up training in these conditions. We also observe that
good results can be obtained by using the generative model to fill in the
missing values for a separate discriminant learning algorithm.
| [
"['Olivier Delalleau' 'Aaron Courville' 'Yoshua Bengio']",
"Olivier Delalleau and Aaron Courville and Yoshua Bengio"
] |
cs.LG stat.ML | null | 1209.0738 | null | null | http://arxiv.org/pdf/1209.0738v3 | 2014-06-16T15:06:48Z | 2012-09-04T19:06:51Z | Sparse coding for multitask and transfer learning | We investigate the use of sparse coding and dictionary learning in the
context of multitask and transfer learning. The central assumption of our
learning method is that the task parameters are well approximated by sparse
linear combinations of the atoms of a dictionary on a high or infinite
dimensional space. This assumption, together with the large quantity of
available data in the multitask and transfer learning settings, allows a
principled choice of the dictionary. We provide bounds on the generalization
error of this approach, for both settings. Numerical experiments on one
synthetic and two real datasets show the advantage of our method over single
task learning, a previous method based on orthogonal and dense representation
of the tasks and a related method learning task grouping.
| [
"Andreas Maurer, Massimiliano Pontil, Bernardino Romera-Paredes",
"['Andreas Maurer' 'Massimiliano Pontil' 'Bernardino Romera-Paredes']"
] |
cs.LG | 10.1109/ICSTE.2010.5608792 | 1209.0853 | null | null | http://arxiv.org/abs/1209.0853v1 | 2012-09-05T03:02:26Z | 2012-09-05T03:02:26Z | Improving the K-means algorithm using improved downhill simplex search | The k-means algorithm is one of the well-known and most popular clustering
algorithms. K-means seeks an optimal partition of the data by minimizing the
sum of squared error with an iterative optimization procedure, which belongs to
the category of hill-climbing algorithms. Hill-climbing searches are known for
converging to local optima. Since k-means can converge to a local optimum,
different initial points generally lead to different converged centroids,
which makes it important to start with a reasonable initial
partition in order to achieve high quality clustering solutions. However, in
theory, there exist no efficient and universal methods for determining such
initial partitions. In this paper we seek an optimal initial partitioning for
the k-means algorithm. To this end we propose an improved version of downhill
simplex search, use it to find an optimal clustering result, and compare the
resulting algorithm with Genetic Algorithm-based (GA), Genetic K-Means (GKM),
Improved Genetic K-Means (IGKM) and plain k-means algorithms.
| [
"['Ehsan Saboori' 'Shafigh Parsazad' 'Anoosheh Sadeghi']",
"Ehsan Saboori, Shafigh Parsazad, Anoosheh Sadeghi"
] |
cs.LG | null | 1209.0913 | null | null | http://arxiv.org/pdf/1209.0913v1 | 2012-09-05T10:08:02Z | 2012-09-05T10:08:02Z | Structuring Relevant Feature Sets with Multiple Model Learning | Feature selection is one of the most prominent learning tasks, especially in
high-dimensional datasets in which the goal is to understand the mechanisms
that underlie the learning dataset. However, most feature selection methods
deliver just a flat set of relevant features and provide no further information
on what kinds of structures, e.g. feature groupings, might underlie the set of relevant
features. In this paper we propose a new learning paradigm in which our goal is
to uncover the structures that underlie the set of relevant features for a given
learning problem. We uncover two types of feature sets: non-replaceable
features that contain important information about the target variable and
cannot be replaced by other features, and functionally similar feature sets
that can be used interchangeably in learned models, given the presence of the
non-replaceable features, with no change in the predictive performance. To do
so we propose a new learning algorithm that learns a number of disjoint models
using a model disjointness regularization constraint together with a constraint
on the predictive agreement of the disjoint models. We explore the behavior of
our approach on a number of high-dimensional datasets, and show that, as
expected by construction, the learned models satisfy a number of properties:
model disjointness, high predictive agreement, and a similar predictive
performance to models learned on the full set of relevant features. The ability
to structure the set of relevant features in such a manner can become a
valuable tool in different applications of scientific knowledge discovery.
| [
"Jun Wang and Alexandros Kalousis",
"['Jun Wang' 'Alexandros Kalousis']"
] |
cs.IT cs.LG math.IT | null | 1209.1033 | null | null | http://arxiv.org/pdf/1209.1033v4 | 2013-05-01T09:28:29Z | 2012-09-05T16:29:17Z | The Annealing Sparse Bayesian Learning Algorithm | In this paper we propose a two-level hierarchical Bayesian model and an
annealing schedule to re-enable the noise variance learning capability of the
fast marginalized Sparse Bayesian Learning algorithms. Performance measures
such as NMSE and F-measure can be greatly improved by the annealing technique.
The algorithm tends to produce the sparsest solution under moderate SNR
scenarios and can outperform most concurrent SBL algorithms while retaining a
small computational load.
| [
"Benyuan Liu and Hongqi Fan and Zaiqi Lu and Qiang Fu",
"['Benyuan Liu' 'Hongqi Fan' 'Zaiqi Lu' 'Qiang Fu']"
] |
cs.LG stat.ML | null | 1209.1077 | null | null | http://arxiv.org/pdf/1209.1077v1 | 2012-09-05T19:10:09Z | 2012-09-05T19:10:09Z | Learning Probability Measures with respect to Optimal Transport Metrics | We study the problem of estimating, in the sense of optimal transport
metrics, a measure which is assumed supported on a manifold embedded in a
Hilbert space. By establishing a precise connection between optimal transport
metrics, optimal quantization, and learning theory, we derive new probabilistic
bounds for the performance of a classic algorithm in unsupervised learning
(k-means), when used to produce a probability measure derived from the data. In
the course of the analysis, we arrive at new lower bounds, as well as
probabilistic upper bounds on the convergence rate of the empirical law of
large numbers, which, unlike existing bounds, are applicable to a wide class of
measures.
| [
"['Guillermo D. Canas' 'Lorenzo Rosasco']",
"Guillermo D. Canas and Lorenzo Rosasco"
] |
cs.LG cs.AI stat.ML | 10.1016/j.neucom.2014.09.044 | 1209.1086 | null | null | http://arxiv.org/abs/1209.1086v3 | 2014-09-29T09:27:31Z | 2012-09-05T19:48:59Z | Robustness and Generalization for Metric Learning | Metric learning has attracted a lot of interest over the last decade, but the
generalization ability of such methods has not been thoroughly studied. In this
paper, we introduce an adaptation of the notion of algorithmic robustness
(previously introduced by Xu and Mannor) that can be used to derive
generalization bounds for metric learning. We further show that a weak notion
of robustness is in fact a necessary and sufficient condition for a metric
learning algorithm to generalize. To illustrate the applicability of the
proposed framework, we derive generalization results for a large family of
existing metric learning algorithms, including some sparse formulations that
are not covered by previous results.
| [
"['Aurélien Bellet' 'Amaury Habrard']",
"Aur\\'elien Bellet and Amaury Habrard"
] |
cs.LG stat.ML | null | 1209.1121 | null | null | http://arxiv.org/pdf/1209.1121v4 | 2013-02-19T17:53:17Z | 2012-09-05T21:18:03Z | Learning Manifolds with K-Means and K-Flats | We study the problem of estimating a manifold from random samples. In
particular, we consider piecewise constant and piecewise linear estimators
induced by k-means and k-flats, and analyze their performance. We extend
previous results for k-means in two separate directions. First, we provide new
results for k-means reconstruction on manifolds and, second, we prove
reconstruction bounds for higher-order approximation (k-flats), for which no
known results were previously available. While the results for k-means are
novel, some of the technical tools are well-established in the literature. In
the case of k-flats, both the results and the mathematical tools are new.
| [
"Guillermo D. Canas and Tomaso Poggio and Lorenzo Rosasco",
"['Guillermo D. Canas' 'Tomaso Poggio' 'Lorenzo Rosasco']"
] |
stat.ML cs.LG | null | 1209.1360 | null | null | http://arxiv.org/pdf/1209.1360v2 | 2012-09-14T14:14:53Z | 2012-09-06T18:22:25Z | Multiclass Learning with Simplex Coding | In this paper we discuss a novel framework for multiclass learning, defined
by a suitable coding/decoding strategy, namely the simplex coding, that allows
us to generalize to multiple classes a relaxation approach commonly used in binary
classification. In this framework, a relaxation error analysis can be developed
avoiding constraints on the considered hypotheses class. Moreover, we show that
in this setting it is possible to derive the first provably consistent
regularized method with training/tuning complexity which is independent of the
number of classes. Tools from convex analysis are introduced that can be used
beyond the scope of this paper.
| [
"['Youssef Mroueh' 'Tomaso Poggio' 'Lorenzo Rosasco' 'Jean-Jacques Slotine']",
"Youssef Mroueh, Tomaso Poggio, Lorenzo Rosasco, Jean-Jacques Slotine"
] |
stat.ML cs.LG | null | 1209.1450 | null | null | http://arxiv.org/pdf/1209.1450v1 | 2012-09-07T06:28:42Z | 2012-09-07T06:28:42Z | On spatial selectivity and prediction across conditions with fMRI | Researchers in functional neuroimaging mostly use activation coordinates to
formulate their hypotheses. Instead, we propose to use the full statistical
images to define regions of interest (ROIs). This paper presents two machine
learning approaches, transfer learning and selection transfer, that are
compared upon their ability to identify the common patterns between brain
activation maps related to two functional tasks. We provide some preliminary
quantification of these similarities, and show that selection transfer makes it
possible to set a spatial scale yielding ROIs that are more specific to the
context of interest than with transfer learning. In particular, selection
transfer outlines well known regions such as the Visual Word Form Area when
discriminating between different visual tasks.
| [
"['Yannick Schwartz' 'Gaël Varoquaux' 'Bertrand Thirion']",
"Yannick Schwartz (INRIA Saclay - Ile de France, LNAO), Ga\\\"el\n Varoquaux (INRIA Saclay - Ile de France, LNAO), Bertrand Thirion (INRIA\n Saclay - Ile de France, LNAO)"
] |
stat.ML cs.LG math.OC | 10.1109/TIT.2016.2515078 | 1209.1557 | null | null | http://arxiv.org/abs/1209.1557v4 | 2016-01-27T13:14:52Z | 2012-09-07T14:46:49Z | Learning Model-Based Sparsity via Projected Gradient Descent | Several convex formulation methods have been proposed previously for
statistical estimation with structured sparsity as the prior. These methods
often require a carefully tuned regularization parameter, which can be a cumbersome or
heuristic exercise. Furthermore, the estimate that these methods produce might
not belong to the desired sparsity model, albeit accurately approximating the
true parameter. Therefore, greedy-type algorithms could often be more desirable
in estimating structured-sparse parameters. So far, these greedy methods have
mostly focused on linear statistical models. In this paper we study the
projected gradient descent with a non-convex structured-sparse parameter model
as the constraint set. Provided the cost function has a Stable Model-Restricted
Hessian, the algorithm produces an approximation to the desired minimizer. As
an example, we elaborate on the application of the main results to estimation
in Generalized Linear Models.
| [
"['Sohail Bahmani' 'Petros T. Boufounos' 'Bhiksha Raj']",
"Sohail Bahmani, Petros T. Boufounos, and Bhiksha Raj"
] |
cs.LG stat.ML | null | 1209.1688 | null | null | http://arxiv.org/pdf/1209.1688v4 | 2015-11-12T17:51:33Z | 2012-09-08T04:42:18Z | Rank Centrality: Ranking from Pair-wise Comparisons | The question of aggregating pair-wise comparisons to obtain a global ranking
over a collection of objects has been of interest for a very long time: be it
ranking of online gamers (e.g. MSR's TrueSkill system) and chess players,
aggregating social opinions, or deciding which product to sell based on
transactions. In most settings, in addition to obtaining a ranking, finding
`scores' for each object (e.g. player's rating) is of interest for
understanding the intensity of the preferences.
In this paper, we propose Rank Centrality, an iterative rank aggregation
algorithm for discovering scores for objects (or items) from pair-wise
comparisons. The algorithm has a natural random walk interpretation over the
graph of objects with an edge present between a pair of objects if they are
compared; the score, which we call Rank Centrality, of an object turns out to
be its stationary probability under this random walk. To study the efficacy of
the algorithm, we consider the popular Bradley-Terry-Luce (BTL) model
(equivalent to the Multinomial Logit (MNL) for pair-wise comparisons) in which
each object has an associated score which determines the probabilistic outcomes
of pair-wise comparisons between objects. In terms of the pair-wise marginal
probabilities, which are the main subject of this paper, the MNL model and the
BTL model are identical. We bound the finite sample error rates between the
scores assumed by the BTL model and those estimated by our algorithm. In
particular, the number of samples required to learn the score well with high
probability depends on the structure of the comparison graph. When the
Laplacian of the comparison graph has a strictly positive spectral gap, e.g.
each item is compared to a subset of randomly chosen items, this leads to
dependence on the number of samples that is nearly order-optimal.
| [
"Sahand Negahban, Sewoong Oh, Devavrat Shah",
"['Sahand Negahban' 'Sewoong Oh' 'Devavrat Shah']"
] |
stat.ML cs.LG | null | 1209.1727 | null | null | http://arxiv.org/pdf/1209.1727v1 | 2012-09-08T15:22:07Z | 2012-09-08T15:22:07Z | Bandits with heavy tail | The stochastic multi-armed bandit problem is well understood when the reward
distributions are sub-Gaussian. In this paper we examine the bandit problem
under the weaker assumption that the distributions have moments of order
$1+\epsilon$, for some $\epsilon \in (0,1]$. Surprisingly, moments of order 2
(i.e., finite variance) are sufficient to obtain regret bounds of the same
order as under sub-Gaussian reward distributions. In order to achieve such
regret, we define sampling strategies based on refined estimators of the mean
such as the truncated empirical mean, Catoni's M-estimator, and the
median-of-means estimator. We also derive matching lower bounds showing
that the best achievable regret deteriorates when $\epsilon < 1$.
| [
"['Sébastien Bubeck' 'Nicolò Cesa-Bianchi' 'Gábor Lugosi']",
"S\\'ebastien Bubeck, Nicol\\`o Cesa-Bianchi and G\\'abor Lugosi"
] |
cs.LG cs.NI | null | 1209.1739 | null | null | http://arxiv.org/pdf/1209.1739v1 | 2012-09-08T18:34:01Z | 2012-09-08T18:34:01Z | Design of Spectrum Sensing Policy for Multi-user Multi-band Cognitive
Radio Network | Finding an optimal sensing policy for a particular access policy and sensing
scheme is a laborious combinatorial problem that requires the system model
parameters to be known. In practice the parameters or the model itself may not
be completely known, making reinforcement learning methods appealing. In this
paper a non-parametric reinforcement learning-based method is developed for
sensing and accessing multi-band radio spectrum in multi-user cognitive radio
networks. A suboptimal sensing policy search algorithm is proposed for a
particular multi-user multi-band access policy and the randomized
Chair-Varshney rule. The randomized Chair-Varshney rule is used to reduce the
probability of false alarms under a constraint on the probability of detection
that protects the primary user. The simulation results show that the proposed
method achieves a sum profit (e.g. data rate) close to the optimal sensing
policy while achieving the desired probability of detection.
| [
"['Jan Oksanen' 'Jarmo Lundén' 'Visa Koivunen']",
"Jan Oksanen, Jarmo Lund\\'en and Visa Koivunen"
] |
cs.CR cs.LG | null | 1209.1797 | null | null | http://arxiv.org/pdf/1209.1797v3 | 2013-06-05T13:19:42Z | 2012-09-09T13:02:49Z | Securing Your Transactions: Detecting Anomalous Patterns In XML
Documents | XML transactions are used in many information systems to store data and
interact with other systems. Abnormal transactions, the result of either an
on-going cyber attack or the actions of a benign user, can potentially harm the
interacting systems and therefore they are regarded as a threat. In this paper
we address the problem of anomaly detection and localization in XML
transactions using machine learning techniques. We present a new XML anomaly
detection framework, XML-AD. Within this framework, an automatic method for
extracting features from XML transactions was developed as well as a practical
method for transforming XML features into vectors of fixed dimensionality. With
these two methods in place, the XML-AD framework makes it possible to utilize
general learning algorithms for anomaly detection. Central to the functioning
of the framework is a novel multi-univariate anomaly detection algorithm,
ADIFA. The framework was evaluated on four XML transactions datasets, captured
from real information systems, in which it achieved over 89% true positive
detection rate with less than a 0.2% false positive rate.
| [
"Eitan Menahem, Alon Schclar, Lior Rokach, Yuval Elovici",
"['Eitan Menahem' 'Alon Schclar' 'Lior Rokach' 'Yuval Elovici']"
] |
cs.LG | null | 1209.1800 | null | null | http://arxiv.org/pdf/1209.1800v1 | 2012-09-09T14:11:04Z | 2012-09-09T14:11:04Z | An Empirical Study of MAUC in Multi-class Problems with Uncertain Cost
Matrices | Cost-sensitive learning relies on the availability of a known and fixed cost
matrix. However, in some scenarios, the cost matrix is uncertain during
training, and re-training a classifier after the cost matrix is specified may
not be an option. For binary classification, this issue can be successfully
addressed by methods maximizing the Area Under the ROC Curve (AUC) metric.
The AUC measures the performance of base classifiers independently of cost
during training, and a larger AUC is more likely to lead to a smaller total
cost in testing when the threshold-moving method is used. As an extension of AUC to
multi-class problems, MAUC has attracted much attention and has been widely
used. Although MAUC also measures performance of base classifiers independent
of cost, it is unclear whether a larger MAUC of classifiers is more likely to
lead to a smaller total cost. In fact, it is also unclear what kinds of
post-processing methods should be used in multi-class problems to convert base
classifiers into discrete classifiers such that the total cost is as small as
possible. In the paper, we empirically explore the relationship between MAUC
and the total cost of classifiers by applying two categories of post-processing
methods. Our results suggest that a larger MAUC is also beneficial.
Interestingly, simple calibration methods that convert the output matrix into
posterior probabilities perform better than existing sophisticated post
re-optimization methods.
| [
"Rui Wang, Ke Tang",
"['Rui Wang' 'Ke Tang']"
] |
stat.ML cs.LG math.OC | null | 1209.1873 | null | null | http://arxiv.org/pdf/1209.1873v2 | 2013-01-30T15:30:25Z | 2012-09-10T03:25:29Z | Stochastic Dual Coordinate Ascent Methods for Regularized Loss
Minimization | Stochastic Gradient Descent (SGD) has become popular for solving large scale
supervised machine learning optimization problems such as SVM, due to its
strong theoretical guarantees. While the closely related Dual Coordinate Ascent
(DCA) method has been implemented in various software packages, it has so far
lacked good convergence analysis. This paper presents a new analysis of
Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods
enjoys strong theoretical guarantees that are comparable to or better than those of SGD.
This analysis justifies the effectiveness of SDCA for practical applications.
| [
"Shai Shalev-Shwartz and Tong Zhang",
"['Shai Shalev-Shwartz' 'Tong Zhang']"
] |
cs.LG cs.CV | 10.1016/j.eswa.2012.07.021 | 1209.1960 | null | null | http://arxiv.org/abs/1209.1960v1 | 2012-09-10T12:22:06Z | 2012-09-10T12:22:06Z | A Comparative Study of Efficient Initialization Methods for the K-Means
Clustering Algorithm | K-means is undoubtedly the most widely used partitional clustering algorithm.
Unfortunately, due to its gradient descent nature, this algorithm is highly
sensitive to the initial placement of the cluster centers. Numerous
initialization methods have been proposed to address this problem. In this
paper, we first present an overview of these methods with an emphasis on their
computational efficiency. We then compare eight commonly used linear time
complexity initialization methods on a large and diverse collection of data
sets using various performance criteria. Finally, we analyze the experimental
results using non-parametric statistical tests and provide recommendations for
practitioners. We demonstrate that popular initialization methods often perform
poorly and that there are in fact strong alternatives to these methods.
| [
"['M. Emre Celebi' 'Hassan A. Kingravi' 'Patricio A. Vela']",
"M. Emre Celebi, Hassan A. Kingravi, Patricio A. Vela"
] |
cs.LG stat.ML | null | 1209.2139 | null | null | http://arxiv.org/pdf/1209.2139v2 | 2013-12-31T03:19:45Z | 2012-09-10T20:13:42Z | Fused Multiple Graphical Lasso | In this paper, we consider the problem of estimating multiple graphical
models simultaneously using the fused lasso penalty, which encourages adjacent
graphs to share similar structures. A motivating example is the analysis of
brain networks of Alzheimer's disease using neuroimaging data. Specifically, we
may wish to estimate a brain network for the normal controls (NC), a brain
network for the patients with mild cognitive impairment (MCI), and a brain
network for Alzheimer's patients (AD). We expect the two brain networks for NC
and MCI to share common structures but not to be identical to each other;
similarly for the two brain networks for MCI and AD. The proposed formulation
can be solved using a second-order method. Our key technical contribution is to
establish the necessary and sufficient condition for the graphs to be
decomposable. Based on this key property, a simple screening rule is presented,
which decomposes the large graphs into small subgraphs and allows an efficient
estimation of multiple independent (small) subgraphs, dramatically reducing the
computational cost. We perform experiments on both synthetic and real data; our
results demonstrate the effectiveness and efficiency of the proposed approach.
| [
"['Sen Yang' 'Zhaosong Lu' 'Xiaotong Shen' 'Peter Wonka' 'Jieping Ye']",
"Sen Yang, Zhaosong Lu, Xiaotong Shen, Peter Wonka, Jieping Ye"
] |
math.OC cs.LG cs.MA cs.SY | null | 1209.2194 | null | null | http://arxiv.org/pdf/1209.2194v5 | 2014-12-15T21:07:19Z | 2012-09-11T01:33:58Z | Cooperative learning in multi-agent systems from intermittent
measurements | Motivated by the problem of tracking a direction in a decentralized way, we
consider the general problem of cooperative learning in multi-agent systems
with time-varying connectivity and intermittent measurements. We propose a
distributed learning protocol capable of learning an unknown vector $\mu$ from
noisy measurements made independently by autonomous nodes. Our protocol is
completely distributed and able to cope with the time-varying, unpredictable,
and noisy nature of inter-agent communication, and intermittent noisy
measurements of $\mu$. Our main result bounds the learning speed of our
protocol in terms of the size and combinatorial features of the (time-varying)
networks connecting the nodes.
| [
"Naomi Ehrich Leonard, Alex Olshevsky",
"['Naomi Ehrich Leonard' 'Alex Olshevsky']"
] |
cs.LG cs.AI cs.IR math.ST stat.TH | null | 1209.2355 | null | null | http://arxiv.org/pdf/1209.2355v5 | 2013-07-27T18:02:46Z | 2012-09-11T15:47:43Z | Counterfactual Reasoning and Learning Systems | This work shows how to leverage causal inference to understand the behavior
of complex learning systems interacting with their environment and predict the
consequences of changes to the system. Such predictions allow both humans and
algorithms to select changes that improve both the short-term and long-term
performance of such systems. This work is illustrated by experiments carried
out on the ad placement system associated with the Bing search engine.
| [
"L\\'eon Bottou, Jonas Peters, Joaquin Qui\\~nonero-Candela, Denis X.\n Charles, D. Max Chickering, Elon Portugaly, Dipankar Ray, Patrice Simard, Ed\n Snelson",
"['Léon Bottou' 'Jonas Peters' 'Joaquin Quiñonero-Candela'\n 'Denis X. Charles' 'D. Max Chickering' 'Elon Portugaly' 'Dipankar Ray'\n 'Patrice Simard' 'Ed Snelson']"
] |
cs.LG math.OC stat.ML | null | 1209.2388 | null | null | http://arxiv.org/pdf/1209.2388v3 | 2013-04-29T18:48:17Z | 2012-09-11T18:16:56Z | On the Complexity of Bandit and Derivative-Free Stochastic Convex
Optimization | The problem of stochastic convex optimization with bandit feedback (in the
learning community) or without knowledge of gradients (in the optimization
community) has received much attention in recent years, in the form of
algorithms and performance upper bounds. However, much less is known about the
inherent complexity of these problems, and there are few lower bounds in the
literature, especially for nonlinear functions. In this paper, we investigate
the attainable error/regret in the bandit and derivative-free settings, as a
function of the dimension d and the available number of queries T. We provide a
precise characterization of the attainable performance for strongly-convex and
smooth functions, which also imply a non-trivial lower bound for more general
problems. Moreover, we prove that in both the bandit and derivative-free
setting, the required number of queries must scale at least quadratically with
the dimension. Finally, we show that on the natural class of quadratic
functions, it is possible to obtain a "fast" O(1/T) error rate in terms of T,
under mild assumptions, even without having access to gradients. To the best of
our knowledge, this is the first such rate in a derivative-free stochastic
setting, and holds despite previous results which seem to imply the contrary.
| [
"['Ohad Shamir']",
"Ohad Shamir"
] |
stat.ML cs.LG | null | 1209.2434 | null | null | http://arxiv.org/pdf/1209.2434v1 | 2012-09-11T20:37:02Z | 2012-09-11T20:37:02Z | Query Complexity of Derivative-Free Optimization | This paper provides lower bounds on the convergence rate of Derivative Free
Optimization (DFO) with noisy function evaluations, exposing a fundamental and
unavoidable gap between the performance of algorithms with access to gradients
and those with access to only function evaluations. However, there are
situations in which DFO is unavoidable, and for such situations we propose a
new DFO algorithm that is proved to be near optimal for the class of strongly
convex objective functions. A distinctive feature of the algorithm is that it
uses only Boolean-valued function comparisons, rather than function
evaluations. This makes the algorithm useful in an even wider range of
applications, such as optimization based on paired comparisons from human
subjects, for example. We also show that regardless of whether DFO is based on
noisy function evaluations or Boolean-valued function comparisons, the
convergence rate is the same.
| [
"Kevin G. Jamieson, Robert D. Nowak, Benjamin Recht",
"['Kevin G. Jamieson' 'Robert D. Nowak' 'Benjamin Recht']"
] |
cs.LG | null | 1209.2501 | null | null | http://arxiv.org/pdf/1209.2501v1 | 2012-09-12T05:28:32Z | 2012-09-12T05:28:32Z | Performance Evaluation of Predictive Classifiers For Knowledge Discovery
From Engineering Materials Data Sets | In this paper, naive Bayesian and C4.5 Decision Tree Classifiers(DTC) are
successively applied on materials informatics to classify the engineering
materials into different classes for the selection of materials that suit the
input design specifications. Here, the classifiers are analyzed individually
and their performance evaluation is analyzed with confusion matrix predictive
parameters and standard measures, the classification results are analyzed on
different class of materials. Comparison of classifiers has found that naive
Bayesian classifier is more accurate and better than the C4.5 DTC. The
knowledge discovered by the naive bayesian classifier can be employed for
decision making in materials selection in manufacturing industries.
| [
"Hemanth K. S Doreswamy",
"['Hemanth K. S Doreswamy']"
] |
cs.LO cs.AI cs.LG math.LO math.PR | null | 1209.2620 | null | null | http://arxiv.org/pdf/1209.2620v1 | 2012-09-12T14:17:09Z | 2012-09-12T14:17:09Z | Probabilities on Sentences in an Expressive Logic | Automated reasoning about uncertain knowledge has many applications. One
difficulty when developing such systems is the lack of a completely
satisfactory integration of logic and probability. We address this problem
directly. Expressive languages like higher-order logic are ideally suited for
representing and reasoning about structured knowledge. Uncertain knowledge can
be modeled by using graded probabilities rather than binary truth-values. The
main technical problem studied in this paper is the following: Given a set of
sentences, each having some probability of being true, what probability should
be ascribed to other (query) sentences? A natural wish-list, among others, is
that the probability distribution (i) is consistent with the knowledge base,
(ii) allows for a consistent inference procedure and in particular (iii)
reduces to deductive logic in the limit of probabilities being 0 and 1, (iv)
allows (Bayesian) inductive reasoning and (v) learning in the limit and in
particular (vi) allows confirmation of universally quantified
hypotheses/sentences. We translate this wish-list into technical requirements
for a prior probability and show that probabilities satisfying all our criteria
exist. We also give explicit constructions and several general
characterizations of probabilities that satisfy some or all of the criteria and
various (counter) examples. We also derive necessary and sufficient conditions
for extending beliefs about finitely many sentences to suitable probabilities
over all sentences, and in particular least dogmatic or least biased ones. We
conclude with a brief outlook on how the developed theory might be used and
approximated in autonomous reasoning agents. Our theory is a step towards a
globally consistent and empirically satisfactory unification of probability and
logic.
| [
"Marcus Hutter and John W. Lloyd and Kee Siong Ng and William T. B.\n Uther",
"['Marcus Hutter' 'John W. Lloyd' 'Kee Siong Ng' 'William T. B. Uther']"
] |
cs.LG | null | 1209.2673 | null | null | http://arxiv.org/pdf/1209.2673v2 | 2012-09-24T18:28:44Z | 2012-09-12T17:39:37Z | Conditional validity of inductive conformal predictors | Conformal predictors are set predictors that are automatically valid in the
sense of having coverage probability equal to or exceeding a given confidence
level. Inductive conformal predictors are a computationally efficient version
of conformal predictors satisfying the same property of validity. However,
inductive conformal predictors have previously been known only to control unconditional
coverage probability. This paper explores various versions of conditional
validity and various ways to achieve them using inductive conformal predictors
and their modifications.
| [
"Vladimir Vovk",
"['Vladimir Vovk']"
] |
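A minimal sketch of the split (inductive) conformal idea the abstract above refers to, on made-up toy regression data: fit on a proper training set, compute nonconformity scores (absolute residuals) on a held-out calibration set, and use their empirical quantile to form prediction intervals. The data-generating process, score choice, and miscoverage level are illustrative assumptions, not taken from the paper.

```python
import math
import random

random.seed(0)

# Toy regression data: y = 2x + Gaussian noise (illustrative only).
xs = [random.uniform(0, 10) for _ in range(200)]
data = [(x, 2 * x + random.gauss(0, 1)) for x in xs]
train, calib = data[:100], data[100:]  # proper training set / calibration set

# Fit a simple model on the proper training set (least-squares slope through the origin).
slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

# Nonconformity score: absolute residual, computed on the calibration set only.
scores = sorted(abs(y - slope * x) for x, y in calib)

# For miscoverage level alpha, take the ceil((n+1)(1-alpha))-th smallest score.
alpha = 0.1
n = len(scores)
q = scores[min(math.ceil((n + 1) * (1 - alpha)) - 1, n - 1)]

def predict_interval(x_new):
    """Prediction interval with unconditional coverage >= 1 - alpha."""
    center = slope * x_new
    return (center - q, center + q)
```

On fresh draws from the same distribution these intervals cover the true response at least 90% of the time on average; the abstract's point is precisely that this guarantee is unconditional, and stronger conditional versions need the modifications the paper studies.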
cs.LG math.OC stat.ML | null | 1209.2693 | null | null | http://arxiv.org/pdf/1209.2693v1 | 2012-09-12T19:14:21Z | 2012-09-12T19:14:21Z | Regret Bounds for Restless Markov Bandits | We consider the restless Markov bandit problem, in which the state of each
arm evolves according to a Markov process independently of the learner's
actions. We suggest an algorithm that after $T$ steps achieves
$\tilde{O}(\sqrt{T})$ regret with respect to the best policy that knows the
distributions of all arms. No assumptions on the Markov chains are made except
that they are irreducible. In addition, we show that index-based policies are
necessarily suboptimal for the considered problem.
| [
"Ronald Ortner, Daniil Ryabko, Peter Auer, R\\'emi Munos",
"['Ronald Ortner' 'Daniil Ryabko' 'Peter Auer' 'Rémi Munos']"
] |
cs.LG cs.DS stat.AP | null | 1209.2759 | null | null | http://arxiv.org/pdf/1209.2759v1 | 2012-09-13T01:44:12Z | 2012-09-13T01:44:12Z | Multi-track Map Matching | We study algorithms for matching user tracks, consisting of time-ordered
location points, to paths in the road network. Previous work has focused on the
scenario where the location data is linearly ordered and consists of fairly
dense and regular samples. In this work, we consider the \emph{multi-track map
matching} problem, where the location data comes from different trips on the same
route, each with very sparse samples. This captures the realistic scenario
where users repeatedly travel on regular routes and samples are sparsely
collected, either due to energy consumption constraints or because samples are
only collected when the user actively uses a service. In the multi-track
problem, the total set of combined locations is only partially ordered, rather
than globally ordered as required by previous map-matching algorithms. We
propose two methods, the iterative projection scheme and the graph Laplacian
scheme, to solve the multi-track problem by using a single-track map-matching
subroutine. We also propose a boosting technique which may be applied to either
approach to improve the accuracy of the estimated paths. In addition, in order
to deal with variable sampling rates in single-track map matching, we propose a
method based on a particular regularized cost function that can be adapted for
different sampling rates and measurement errors. We evaluate the effectiveness
of our techniques for reconstructing tracks under several different
configurations of sampling error and sampling rate.
| [
"Adel Javanmard, Maya Haridasan and Li Zhang",
"['Adel Javanmard' 'Maya Haridasan' 'Li Zhang']"
] |
cs.LG stat.ML | null | 1209.2784 | null | null | http://arxiv.org/pdf/1209.2784v1 | 2012-09-13T06:14:31Z | 2012-09-13T06:14:31Z | Minimax Multi-Task Learning and a Generalized Loss-Compositional
Paradigm for MTL | Since its inception, the modus operandi of multi-task learning (MTL) has been
to minimize the task-wise mean of the empirical risks. We introduce a
generalized loss-compositional paradigm for MTL that includes a spectrum of
formulations as a subfamily. One endpoint of this spectrum is minimax MTL: a
new MTL formulation that minimizes the maximum of the tasks' empirical risks.
Via a certain relaxation of minimax MTL, we obtain a continuum of MTL
formulations spanning minimax MTL and classical MTL. The full paradigm itself
is loss-compositional, operating on the vector of empirical risks. It
incorporates minimax MTL, its relaxations, and many new MTL formulations as
special cases. We show theoretically that minimax MTL tends to avoid worst case
outcomes on newly drawn test tasks in the learning to learn (LTL) test setting.
The results of several MTL formulations on synthetic and real problems in the
MTL and LTL test settings are encouraging.
| [
"Nishant A. Mehta, Dongryeol Lee, Alexander G. Gray",
"['Nishant A. Mehta' 'Dongryeol Lee' 'Alexander G. Gray']"
] |
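The spectrum endpoint described in the abstract above, minimizing the maximum rather than the mean of the tasks' empirical risks, can be illustrated on a toy problem with one shared scalar parameter. The two quadratic task risks below are hypothetical values chosen for illustration, not from the paper.

```python
# Two tasks sharing one scalar parameter w; quadratic empirical risks (hypothetical).
risks = [lambda w: (w - 0.0) ** 2,
         lambda w: 4.0 * (w - 2.0) ** 2]

grid = [i / 1000.0 for i in range(-1000, 3001)]  # crude grid search over w

# Classical MTL: minimize the task-wise mean of the empirical risks.
w_mean = min(grid, key=lambda w: sum(r(w) for r in risks) / len(risks))

# Minimax MTL: minimize the maximum of the tasks' empirical risks.
w_minimax = min(grid, key=lambda w: max(r(w) for r in risks))

worst_mean = max(r(w_mean) for r in risks)        # worst-case risk of mean solution
worst_minimax = max(r(w_minimax) for r in risks)  # worst-case risk of minimax solution
```

Here the mean-risk minimizer sits at w = 1.6 while the minimax solution sits near w = 4/3, where the two task risks are equal; the minimax choice accepts a higher average risk in exchange for a strictly better worst case, which is the behaviour the paper analyzes in the learning-to-learn setting.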
cs.LG | null | 1209.2790 | null | null | http://arxiv.org/pdf/1209.2790v1 | 2012-09-13T06:47:26Z | 2012-09-13T06:47:26Z | Improving Energy Efficiency in Femtocell Networks: A Hierarchical
Reinforcement Learning Framework | This paper investigates energy efficiency for two-tier femtocell networks
through combining game theory and stochastic learning. With the Stackelberg
game formulation, a hierarchical reinforcement learning framework is applied to
study the joint average utility maximization of macrocells and femtocells
subject to the minimum signal-to-interference-plus-noise-ratio requirements.
The macrocells behave as the leaders and the femtocells are followers during
the learning procedure. At each time step, the leaders commit to dynamic
strategies based on the best responses of the followers, while the followers
compete against each other with no further information but the leaders'
strategy information. In this paper, we propose two learning algorithms to
schedule each cell's stochastic power levels, led by the macrocells.
Numerical experiments are presented to validate the proposed studies and show
that the two learning algorithms substantially improve the energy efficiency of
the femtocell networks.
| [
"Xianfu Chen, Honggang Zhang, Tao Chen, and Mika Lasanen",
"['Xianfu Chen' 'Honggang Zhang' 'Tao Chen' 'Mika Lasanen']"
] |
cs.SI cs.LG math.PR physics.soc-ph | null | 1209.2910 | null | null | http://arxiv.org/pdf/1209.2910v1 | 2012-09-13T14:35:58Z | 2012-09-13T14:35:58Z | Community Detection in the Labelled Stochastic Block Model | We consider the problem of community detection from observed interactions
between individuals, in the context where multiple types of interaction are
possible. We use labelled stochastic block models to represent the observed
data, where labels correspond to interaction types. Focusing on a two-community
scenario, we conjecture a threshold for the problem of reconstructing the
hidden communities in a way that is correlated with the true partition. To
substantiate the conjecture, we prove that the given threshold correctly
identifies a transition on the behaviour of belief propagation from insensitive
to sensitive. We further prove that the same threshold corresponds to the
transition in a related inference problem on a tree model from infeasible to
feasible. Finally, numerical results using belief propagation for community
detection give further support to the conjecture.
| [
"Simon Heimlicher, Marc Lelarge, Laurent Massouli\\'e",
"['Simon Heimlicher' 'Marc Lelarge' 'Laurent Massoulié']"
] |
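A minimal sketch of sampling from the two-community labelled stochastic block model described above: edges appear with different probabilities within and across communities, and each observed edge carries a label drawn from a community-pair-dependent distribution. The edge probabilities, label alphabet, and label distributions are made-up illustration values, not the paper's threshold regime.

```python
import random

random.seed(1)

n = 200
community = [i % 2 for i in range(n)]  # two equal-sized hidden communities

# Interaction probabilities and label distributions -- hypothetical values.
p_in, p_out = 0.10, 0.02            # edge probability within / across communities
p_plus_in, p_plus_out = 0.8, 0.3    # probability of label '+' on an edge

edges = []  # (i, j, label) triples: the observed labelled interactions
for i in range(n):
    for j in range(i + 1, n):
        same = community[i] == community[j]
        if random.random() < (p_in if same else p_out):
            p_plus = p_plus_in if same else p_plus_out
            label = '+' if random.random() < p_plus else '-'
            edges.append((i, j, label))
```

An inference algorithm (such as the belief propagation studied in the paper) observes only `edges`, never `community`; reconstruction correlated with the true partition is possible only above the conjectured threshold.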