categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---|
stat.ML cs.AI cs.LG | null | 1309.3699 | null | null | http://arxiv.org/pdf/1309.3699v1 | 2013-09-14T21:06:22Z | 2013-09-14T21:06:22Z | Local Support Vector Machines: Formulation and Analysis | We provide a formulation for Local Support Vector Machines (LSVMs) that
generalizes previous formulations, and brings out the explicit connections to
local polynomial learning used in nonparametric estimation literature. We
investigate the simplest type of LSVMs called Local Linear Support Vector
Machines (LLSVMs). For the first time we establish conditions under which
LLSVMs make Bayes consistent predictions at each test point $x_0$. We also
establish rates at which the local risk of LLSVMs converges to the minimum
value of expected local risk at each point $x_0$. Using stability arguments we
establish generalization error bounds for LLSVMs.
| [
"Ravi Ganti and Alexander Gray",
"['Ravi Ganti' 'Alexander Gray']"
] |
cs.CV cs.LG stat.ML | null | 1309.3809 | null | null | http://arxiv.org/pdf/1309.3809v1 | 2013-09-16T00:22:01Z | 2013-09-16T00:22:01Z | Visual-Semantic Scene Understanding by Sharing Labels in a Context
Network | We consider the problem of naming objects in complex, natural scenes
containing widely varying object appearance and subtly different names.
Informed by cognitive research, we propose an approach based on sharing context
based object hypotheses between visual and lexical spaces. To this end, we
present the Visual Semantic Integration Model (VSIM) that represents object
labels as entities shared between semantic and visual contexts and infers a new
image by updating labels through context switching. At the core of VSIM is a
semantic Pachinko Allocation Model and a visual nearest neighbor Latent
Dirichlet Allocation Model. For inference, we derive an iterative Data
Augmentation algorithm that pools the label probabilities and maximizes the
joint label posterior of an image. Our model surpasses the performance of
state-of-the-art methods in several visual tasks on the challenging SUN09 dataset.
| [
"Ishani Chakraborty and Ahmed Elgammal",
"['Ishani Chakraborty' 'Ahmed Elgammal']"
] |
cs.LG | null | 1309.3877 | null | null | http://arxiv.org/pdf/1309.3877v1 | 2013-09-16T09:39:25Z | 2013-09-16T09:39:25Z | A Metric-learning based framework for Support Vector Machines and
Multiple Kernel Learning | Most metric learning algorithms, as well as Fisher's Discriminant Analysis
(FDA), optimize some cost function of different measures of within- and
between-class distances. On the other hand, Support Vector Machines (SVMs) and
several Multiple Kernel Learning (MKL) algorithms are based on the SVM large
margin theory. Recently, SVMs have been analyzed from a metric learning
perspective, motivating new algorithms that build on the strengths of each. Inspired by
the metric learning interpretation of SVM, we develop here a new
metric-learning based SVM framework in which we incorporate metric learning
concepts within SVM. We extend the optimization problem of SVM to include some
measure of the within-class distance and along the way we develop a new
within-class distance measure which is appropriate for SVM. In addition, we
adopt the same approach for MKL and show that it can also be formulated as a
Mahalanobis metric learning problem. Our end result is a number of SVM/MKL
algorithms that incorporate metric learning concepts. We experiment with them
on a set of benchmark datasets and observe important predictive performance
improvements.
| [
"['Huyen Do' 'Alexandros Kalousis']",
"Huyen Do and Alexandros Kalousis"
] |
cs.IR cs.CL cs.LG | null | 1309.3949 | null | null | http://arxiv.org/pdf/1309.3949v1 | 2013-09-16T13:27:04Z | 2013-09-16T13:27:04Z | Performance Investigation of Feature Selection Methods | Sentiment analysis or opinion mining has become an open research domain after
the proliferation of the Internet and Web 2.0 social media. People express
their attitudes and opinions on social media, including blogs, discussion
forums, and tweets, and sentiment analysis is concerned with detecting and extracting
sentiment or opinion from online text. Sentiment based text classification is
different from topical text classification since it involves discrimination
based on expressed opinion on a topic. Feature selection is significant for
sentiment analysis as the opinionated text may have high dimensions, which can
adversely affect the performance of the sentiment analysis classifier. This paper
explores the applicability of feature selection methods for sentiment analysis and
investigates their performance for classification in terms of recall, precision,
and accuracy. Five feature selection methods (Document Frequency, Information
Gain, Gain Ratio, Chi Squared, and Relief-F) and three popular sentiment
feature lexicons (HM, GI and Opinion Lexicon) are investigated on a movie reviews
corpus of 2000 documents. The experimental results show that
Information Gain gave consistent results and Gain Ratio performed best overall
for sentiment feature selection, while the sentiment lexicons gave poor
performance. Furthermore, we found that the performance of the classifier
depends on the selection of an appropriate number of representative features
from the text.
| [
"Anuj sharma, Shubhamoy Dey",
"['Anuj sharma' 'Shubhamoy Dey']"
] |
cs.CV astro-ph.EP astro-ph.IM cs.LG | 10.1017/S1473550413000372 | 1309.4024 | null | null | http://arxiv.org/abs/1309.4024v1 | 2013-09-16T16:32:35Z | 2013-09-16T16:32:35Z | The Cyborg Astrobiologist: Matching of Prior Textures by Image
Compression for Geological Mapping and Novelty Detection | (abridged) We describe an image-comparison technique of Heidemann and Ritter
that uses image compression and is capable of (i) detecting novel textures in
a series of images, as well as (ii) alerting the user to the similarity of
a new image to a previously-observed texture. This image-comparison technique
has been implemented and tested using our Astrobiology Phone-cam system, which
employs Bluetooth communication to send images to a local laptop server in the
field for the image-compression analysis. We tested the system in a field site
displaying a heterogeneous suite of sandstones, limestones, mudstones and
coalbeds. Some of the rocks are partly covered with lichen. The image-matching
procedure of this system performed very well with data obtained through our
field test, grouping all images of yellow lichens together and grouping all
images of a coal bed together, and giving a 91% accuracy for similarity
detection. Such similarity detection could be employed to make maps of
different geological units. The novelty-detection performance of our system was
also rather good (a 64% accuracy). Such novelty detection may become valuable
in searching for new geological units, which could be of astrobiological
interest. The image-comparison technique is an unsupervised technique that is
not capable of directly classifying an image as containing a particular
geological feature; labeling of such geological features is done post facto by
human geologists associated with this study, for the purpose of analyzing the
system's performance. By providing more advanced capabilities for similarity
detection and novelty detection, this image-compression technique could be
useful in giving more scientific autonomy to robotic planetary rovers, and in
assisting human astronauts in their geological exploration and assessment.
| [
"['P. C. McGuire' 'A. Bonnici' 'K. R. Bruner' 'C. Gross' 'J. Ormö'\n 'R. A. Smosna' 'S. Walter' 'L. Wendt']",
"P.C. McGuire, A. Bonnici, K.R. Bruner, C. Gross, J. Orm\\\"o, R.A.\n Smosna, S. Walter, L. Wendt"
] |
cs.CL cs.AI cs.LG | 10.1613/jair.3640 | 1309.4035 | null | null | http://arxiv.org/abs/1309.4035v1 | 2013-09-16T16:51:02Z | 2013-09-16T16:51:02Z | Domain and Function: A Dual-Space Model of Semantic Relations and
Compositions | Given appropriate representations of the semantic relations between carpenter
and wood and between mason and stone (for example, vectors in a vector space
model), a suitable algorithm should be able to recognize that these relations
are highly similar (carpenter is to wood as mason is to stone; the relations
are analogous). Likewise, with representations of dog, house, and kennel, an
algorithm should be able to recognize that the semantic composition of dog and
house, dog house, is highly similar to kennel (dog house and kennel are
synonymous). It seems that these two tasks, recognizing relations and
compositions, are closely connected. However, up to now, the best models for
relations are significantly different from the best models for compositions. In
this paper, we introduce a dual-space model that unifies these two tasks. This
model matches the performance of the best previous models for relations and
compositions. The dual-space model consists of a space for measuring domain
similarity and a space for measuring function similarity. Carpenter and wood
share the same domain, the domain of carpentry. Mason and stone share the same
domain, the domain of masonry. Carpenter and mason share the same function, the
function of artisans. Wood and stone share the same function, the function of
materials. In the composition dog house, kennel has some domain overlap with
both dog and house (the domains of pets and buildings). The function of kennel
is similar to the function of house (the function of shelters). By combining
domain and function similarities in various ways, we can model relations,
compositions, and other aspects of semantics.
| [
"Peter D. Turney",
"['Peter D. Turney']"
] |
cs.LG cs.CV | null | 1309.4061 | null | null | http://arxiv.org/pdf/1309.4061v1 | 2013-09-16T18:30:41Z | 2013-09-16T18:30:41Z | Learning a Loopy Model For Semantic Segmentation Exactly | Learning structured models using maximum margin techniques has become an
indispensable tool for computer vision researchers, as many computer vision
applications can be cast naturally as an image labeling problem. Pixel-based or
superpixel-based conditional random fields are particularly popular examples.
Typically, neighborhood graphs, which contain a large number of cycles, are
used. As exact inference in loopy graphs is NP-hard in general, learning these
models without approximations is usually deemed infeasible. In this work we
show that, despite the theoretical hardness, it is possible to learn loopy
models exactly in practical applications. To this end, we analyze the use of
multiple approximate inference techniques together with cutting plane training
of structural SVMs. We show that our proposed method yields exact solutions
with optimality guarantees in a computer vision application, for little
additional computational cost. We also propose a dynamic caching scheme to
accelerate training further, yielding runtimes that are comparable with
approximate methods. We hope that this insight can lead to a reconsideration of
the tractability of loopy models in computer vision.
| [
"['Andreas Christian Mueller' 'Sven Behnke']",
"Andreas Christian Mueller, Sven Behnke"
] |
stat.ML cs.LG math.ST stat.TH | null | 1309.4111 | null | null | http://arxiv.org/pdf/1309.4111v1 | 2013-09-16T20:47:51Z | 2013-09-16T20:47:51Z | Regularized Spectral Clustering under the Degree-Corrected Stochastic
Blockmodel | Spectral clustering is a fast and popular algorithm for finding clusters in
networks. Recently, Chaudhuri et al. (2012) and Amini et al. (2012) proposed
inspired variations on the algorithm that artificially inflate the node degrees
for improved statistical performance. The current paper extends the previous
statistical estimation results to the more canonical spectral clustering
algorithm in a way that removes any assumption on the minimum degree and
provides guidance on the choice of the tuning parameter. Moreover, our results
show how the "star shape" in the eigenvectors--a common feature of empirical
networks--can be explained by the Degree-Corrected Stochastic Blockmodel and
the Extended Planted Partition model, two statistical models that allow for
highly heterogeneous degrees. Throughout, the paper characterizes and justifies
several of the variations of the spectral clustering algorithm in terms of
these models.
| [
"Tai Qin, Karl Rohe",
"['Tai Qin' 'Karl Rohe']"
] |
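
The regularization idea in the abstract above is concrete enough to sketch: inflate the node degrees by a tuning parameter tau before forming the normalized adjacency, take the leading eigenvectors, and run k-means on the (row-normalized) embedding. Below is a minimal illustrative sketch assuming a dense symmetric adjacency matrix; the function name, the default tau (the mean degree), and the row-normalization step are our assumptions, not the authors' reference code.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def regularized_spectral_clustering(A, k, tau=None):
    """Cluster nodes of the (dense, symmetric) adjacency matrix A into k groups."""
    n = A.shape[0]
    degrees = A.sum(axis=1)
    if tau is None:
        tau = degrees.mean()  # assumed default; choosing tau is what the paper studies
    # Regularized normalized adjacency: D_tau^{-1/2} A D_tau^{-1/2}, with D_tau = D + tau*I
    d_inv_sqrt = 1.0 / np.sqrt(degrees + tau)
    L = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # Leading k eigenvectors, row-normalized (mitigates the "star shape" effect)
    _, vecs = eigh(L, subset_by_index=[n - k, n - 1])
    rows = vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-12)
    return KMeans(n_clusters=k, n_init=10).fit_predict(rows)
```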
cs.LG q-bio.PE | null | 1309.4132 | null | null | http://arxiv.org/pdf/1309.4132v2 | 2014-04-03T23:07:16Z | 2013-09-16T22:31:58Z | Attribute-Efficient Evolvability of Linear Functions | In a seminal paper, Valiant (2006) introduced a computational model for
evolution to address the question of complexity that can arise through
Darwinian mechanisms. Valiant views evolution as a restricted form of
computational learning, where the goal is to evolve a hypothesis that is close
to the ideal function. Feldman (2008) showed that (correlational) statistical
query learning algorithms could be framed as evolutionary mechanisms in
Valiant's model. P. Valiant (2012) considered evolvability of real-valued
functions and also showed that weak-optimization algorithms that use
weak-evaluation oracles could be converted to evolutionary mechanisms.
In this work, we focus on the complexity of representations of evolutionary
mechanisms. In general, the reductions of Feldman and P. Valiant may result in
intermediate representations that are arbitrarily complex (polynomial-sized
circuits). We argue that biological constraints often dictate that the
representations have low complexity, such as constant depth and fan-in
circuits. We give mechanisms for evolving sparse linear functions under a large
class of smooth distributions. These evolutionary algorithms are
attribute-efficient in the sense that the size of the representations and the
number of generations required depend only on the sparsity of the target
function and the accuracy parameter, but have no dependence on the total number
of attributes.
| [
"Elaine Angelino, Varun Kanade",
"['Elaine Angelino' 'Varun Kanade']"
] |
cs.AI cs.LG cs.RO | null | 1309.4714 | null | null | http://arxiv.org/pdf/1309.4714v1 | 2013-09-18T17:29:03Z | 2013-09-18T17:29:03Z | Temporal-Difference Learning to Assist Human Decision Making during the
Control of an Artificial Limb | In this work we explore the use of reinforcement learning (RL) to help with
human decision making, combining state-of-the-art RL algorithms with an
application to prosthetics. Managing human-machine interaction is a problem of
considerable scope, and the simplification of human-robot interfaces is
especially important in the domains of biomedical technology and rehabilitation
medicine. For example, amputees who control artificial limbs are often required
to quickly switch between a number of control actions or modes of operation in
order to operate their devices. We suggest that by learning to anticipate
(predict) a user's behaviour, artificial limbs could take on an active role in
a human's control decisions so as to reduce the burden on their users.
Recently, we showed that RL in the form of general value functions (GVFs) could
be used to accurately detect a user's control intent prior to their explicit
control choices. In the present work, we explore the use of temporal-difference
learning and GVFs to predict when users will switch their control influence
between the different motor functions of a robot arm. Experiments were
performed using a multi-function robot arm that was controlled by muscle
signals from a user's body (similar to conventional artificial limb control).
Our approach was able to acquire and maintain forecasts about a user's
switching decisions in real time. It also provides an intuitive and reward-free
way for users to correct or reinforce the decisions made by the machine
learning system. We expect that when a system is certain enough about its
predictions, it can begin to take over switching decisions from the user to
streamline control and potentially decrease the time and effort needed to
complete tasks. This preliminary study therefore suggests a way to naturally
integrate human- and machine-based decision making systems.
| [
"Ann L. Edwards, Alexandra Kearney, Michael Rory Dawson, Richard S.\n Sutton, Patrick M. Pilarski",
"['Ann L. Edwards' 'Alexandra Kearney' 'Michael Rory Dawson'\n 'Richard S. Sutton' 'Patrick M. Pilarski']"
] |
stat.ML cs.LG cs.NI | null | 1309.4844 | null | null | http://arxiv.org/pdf/1309.4844v1 | 2013-09-19T03:09:33Z | 2013-09-19T03:09:33Z | Network Anomaly Detection: A Survey and Comparative Analysis of
Stochastic and Deterministic Methods | We present five methods for the problem of network anomaly detection. These
methods cover most of the common techniques in the anomaly detection field,
including Statistical Hypothesis Tests (SHT), Support Vector Machines (SVM) and
clustering analysis. We evaluate all methods in a simulated network that
consists of nominal data, three flow-level anomalies and one packet-level
attack. Through analyzing the results, we point out the advantages and
disadvantages of each method and conclude that combining the results of the
individual methods can yield improved anomaly detection results.
| [
"['Jing Wang' 'Daniel Rossell' 'Christos G. Cassandras'\n 'Ioannis Ch. Paschalidis']",
"Jing Wang, Daniel Rossell, Christos G. Cassandras, and Ioannis Ch.\n Paschalidis"
] |
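
The closing claim of the abstract above, that combining the individual detectors improves results, admits a simple hedged illustration: majority voting over per-method anomaly flags. The function name and the quantile threshold below are illustrative assumptions, not the paper's actual combination rule.

```python
import numpy as np

def combined_anomaly_flags(scores_per_method, quantile=0.99, min_votes=None):
    """scores_per_method: list of 1-D arrays of per-sample anomaly scores."""
    votes = np.zeros(len(scores_per_method[0]), dtype=int)
    for scores in scores_per_method:
        # Each method flags its most extreme scores as anomalous
        votes += (scores > np.quantile(scores, quantile)).astype(int)
    if min_votes is None:
        min_votes = len(scores_per_method) // 2 + 1  # simple majority
    return votes >= min_votes
```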
cs.AI cs.DL cs.LG cs.LO cs.MS | null | 1309.4962 | null | null | http://arxiv.org/pdf/1309.4962v1 | 2013-09-19T13:22:31Z | 2013-09-19T13:22:31Z | HOL(y)Hammer: Online ATP Service for HOL Light | HOL(y)Hammer is an online AI/ATP service for formal (computer-understandable)
mathematics encoded in the HOL Light system. The service allows its users to
upload and automatically process an arbitrary formal development (project)
based on HOL Light, and to attack arbitrary conjectures that use the concepts
defined in some of the uploaded projects. For that, the service uses several
automated reasoning systems combined with several premise selection methods
trained on all the project proofs. The projects that are readily available on
the server for such query answering include the recent versions of the
Flyspeck, Multivariate Analysis and Complex Analysis libraries. The service
runs on a 48-CPU server, currently employing in parallel for each task 7 AI/ATP
combinations and 4 decision procedures that contribute to its overall
performance. The system is also available for local installation by interested
users, who can customize it for their own proof development. An Emacs interface
allowing parallel asynchronous queries to the service is also provided. The
overall structure of the service is outlined, problems that arise and their
solutions are discussed, and an initial account of using the system is given.
| [
"['Cezary Kaliszyk' 'Josef Urban']",
"Cezary Kaliszyk and Josef Urban"
] |
cs.LG stat.AP | null | 1309.4999 | null | null | http://arxiv.org/pdf/1309.4999v1 | 2013-09-18T06:44:33Z | 2013-09-18T06:44:33Z | Bayesian rules and stochastic models for high accuracy prediction of
solar radiation | Finding accurate solar prediction methods is essential for the
large-scale integration of renewable energies into the electrical distribution
grid. The goal of this study is to find the methodology that predicts hourly
global radiation with the highest accuracy. The knowledge of this quantity is essential for the
grid manager or the private PV producer in order to anticipate fluctuations
related to cloud occurrences and to stabilize the injected PV power. In this
paper, we test both methodologies: single and hybrid predictors. In the first
class, we include the multi-layer perceptron (MLP), auto-regressive and moving
average (ARMA), and persistence models. In the second class, we mix these
predictors with Bayesian rules to obtain ad-hoc models selections, and Bayesian
averages of outputs related to single models. While MLP and ARMA are equivalent
(nRMSE close to 40.5% for both), this hybridization yields an nRMSE gain of
more than 14 percentage points compared to the persistence estimate
(nRMSE = 37% versus 51%).
| [
"Cyril Voyant (SPE), C. Darras (SPE), Marc Muselli (SPE), Christophe\n Paoli (SPE), Marie Laure Nivet (SPE), Philippe Poggi (SPE)",
"['Cyril Voyant' 'C. Darras' 'Marc Muselli' 'Christophe Paoli'\n 'Marie Laure Nivet' 'Philippe Poggi']"
] |
cs.LG q-bio.GN stat.ML | null | 1309.5047 | null | null | http://arxiv.org/pdf/1309.5047v1 | 2013-09-19T16:45:18Z | 2013-09-19T16:45:18Z | A Comparative Analysis of Ensemble Classifiers: Case Studies in Genomics | The combination of multiple classifiers using ensemble methods is
increasingly important for making progress in a variety of difficult prediction
problems. We present a comparative analysis of several ensemble methods through
two case studies in genomics, namely the prediction of genetic interactions and
protein functions, to demonstrate their efficacy on real-world datasets and
draw useful conclusions about their behavior. These methods include simple
aggregation, meta-learning, cluster-based meta-learning, and ensemble selection
using heterogeneous classifiers trained on resampled data to improve the
diversity of their predictions. We present a detailed analysis of these methods
across four genomics datasets and find that the best of these methods offer
statistically significant improvements over the state of the art in their
respective domains. In addition, we establish a novel connection between
ensemble selection and meta-learning, demonstrating how both of these disparate
methods establish a balance between ensemble diversity and performance.
| [
"['Sean Whalen' 'Gaurav Pandey']",
"Sean Whalen and Gaurav Pandey"
] |
cs.CL cs.LG q-bio.NC | 10.1007/s00422-013-0557-3 | 1309.5319 | null | null | http://arxiv.org/abs/1309.5319v1 | 2013-09-20T16:47:48Z | 2013-09-20T16:47:48Z | Recognizing Speech in a Novel Accent: The Motor Theory of Speech
Perception Reframed | The motor theory of speech perception holds that we perceive the speech of
another in terms of a motor representation of that speech. However, when we
have learned to recognize a foreign accent, it seems plausible that recognition
of a word rarely involves reconstruction of the speech gestures of the speaker
rather than the listener. To better assess the motor theory and this
observation, we proceed in three stages. Part 1 places the motor theory of
speech perception in a larger framework based on our earlier models of the
adaptive formation of mirror neurons for grasping, and for viewing extensions
of that mirror system as part of a larger system for neuro-linguistic
processing, augmented by the present consideration of recognizing speech in a
novel accent. Part 2 then offers a novel computational model of how a listener
comes to understand the speech of someone speaking the listener's native
language with a foreign accent. The core tenet of the model is that the
listener uses hypotheses about the word the speaker is currently uttering to
update probabilities linking the sound produced by the speaker to phonemes in
the native language repertoire of the listener. This, on average, improves the
recognition of later words. This model is neutral regarding the nature of the
representations it uses (motor vs. auditory). It serves as a reference point for
the discussion in Part 3, which proposes a dual-stream neuro-linguistic
architecture to revisit claims for and against the motor theory of speech
perception and the relevance of mirror neurons, and extracts some implications
for the reframing of the motor theory.
| [
"Cl\\'ement Moulin-Frier (INRIA Bordeaux - Sud-Ouest, GIPSA-lab), M. A.\n Arbib (USC)",
"['Clément Moulin-Frier' 'M. A. Arbib']"
] |
cs.LG cs.CV stat.ML | null | 1309.5427 | null | null | http://arxiv.org/pdf/1309.5427v1 | 2013-09-21T03:42:04Z | 2013-09-21T03:42:04Z | Latent Fisher Discriminant Analysis | Linear Discriminant Analysis (LDA) is a well-known method for dimensionality
reduction and classification. Previous studies have also extended the
binary-class case to the multi-class setting. However, many applications, such as
object detection and keyframe extraction, cannot provide consistent
instance-label pairs, while LDA requires labels at the instance level for training.
Thus it cannot be directly applied to semi-supervised classification problems.
In this paper, we overcome this limitation and propose a latent variable Fisher
discriminant analysis model. We relax the instance-level labeling to the bag
level, which is a kind of semi-supervision (video-level labels of event type are
required for semantic frame extraction), and incorporate a data-driven prior
over the latent variables. Hence, our method combines the latent variable
inference and dimension reduction in a unified Bayesian framework. We test our
method on MUSK and Corel data sets and yield competitive results compared to
the baseline approach. We also demonstrate its capacity on the challenging
TRECVID MED11 dataset for semantic keyframe extraction and conduct a
human-factors ranking-based experimental evaluation, which clearly demonstrates
that our proposed method consistently extracts more semantically meaningful
keyframes than challenging baselines.
| [
"['Gang Chen']",
"Gang Chen"
] |
cs.LG | null | 1309.5605 | null | null | http://arxiv.org/pdf/1309.5605v1 | 2013-09-22T14:46:15Z | 2013-09-22T14:46:15Z | Stochastic Bound Majorization | Recently a majorization method for optimizing partition functions of
log-linear models was proposed alongside a novel quadratic variational
upper-bound. In the batch setting, it outperformed state-of-the-art first- and
second-order optimization methods on various learning tasks. We propose a
stochastic version of this bound majorization method as well as a low-rank
modification for high-dimensional data-sets. The resulting stochastic
second-order method outperforms stochastic gradient descent (across variations
and various tunings) both in terms of the number of iterations and computation
time till convergence while finding a better quality parameter setting. The
proposed method bridges first- and second-order stochastic optimization methods
by maintaining a computational complexity that is linear in the data dimension
while exploiting second-order information about the pseudo-global curvature
of the objective function (as opposed to the local curvature in the Hessian).
| [
"Anna Choromanska and Tony Jebara",
"['Anna Choromanska' 'Tony Jebara']"
] |
stat.ML cs.LG | 10.1016/j.patcog.2014.07.022 | 1309.5643 | null | null | http://arxiv.org/abs/1309.5643v3 | 2014-08-12T09:04:32Z | 2013-09-22T20:24:50Z | Multiple Instance Learning with Bag Dissimilarities | Multiple instance learning (MIL) is concerned with learning from sets (bags)
of objects (instances), where the individual instance labels are ambiguous. In
this setting, supervised learning cannot be applied directly. Often,
specialized MIL methods learn by making additional assumptions about the
relationship of the bag labels and instance labels. Such assumptions may fit a
particular dataset, but do not generalize to the whole range of MIL problems.
Other MIL methods shift the focus of assumptions from the labels to the overall
(dis)similarity of bags, and therefore learn from bags directly. We propose to
represent each bag by a vector of its dissimilarities to other bags in the
training set, and treat these dissimilarities as a feature representation. We
show several alternatives to define a dissimilarity between bags and discuss
which definitions are more suitable for particular MIL problems. The
experimental results show that the proposed approach is computationally
inexpensive, yet very competitive with state-of-the-art algorithms on a wide
range of MIL datasets.
| [
"['Veronika Cheplygina' 'David M. J. Tax' 'Marco Loog']",
"Veronika Cheplygina, David M. J. Tax, and Marco Loog"
] |
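
The bag-dissimilarity representation described in the abstract above can be sketched directly: each bag becomes a vector of its dissimilarities to the training bags, and any standard classifier then operates on those vectors. The mean-of-minimum instance distance below is one plausible definition among the several the paper compares; the function names are ours.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.svm import SVC

def bag_dissimilarity(bag_a, bag_b):
    """Mean over instances of bag_a of the distance to the nearest
    instance of bag_b; bags are (n_instances, n_features) arrays."""
    return cdist(bag_a, bag_b).min(axis=1).mean()

def dissimilarity_features(bags, prototypes):
    # One feature per prototype bag: the bag-to-prototype dissimilarity
    return np.array([[bag_dissimilarity(b, p) for p in prototypes]
                     for b in bags])

# Usage sketch: treat the dissimilarities as ordinary features.
# X_train = dissimilarity_features(train_bags, train_bags)
# clf = SVC().fit(X_train, train_bag_labels)
# X_test = dissimilarity_features(test_bags, train_bags)
```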
cs.AI cs.CV cs.LG | 10.1109/TPAMI.2014.2363465 | 1309.5655 | null | null | http://arxiv.org/abs/1309.5655v3 | 2017-01-19T17:45:24Z | 2013-09-22T21:19:36Z | A new look at reweighted message passing | We propose a new family of message passing techniques for MAP estimation in
graphical models which we call {\em Sequential Reweighted Message Passing}
(SRMP). Special cases include well-known techniques such as {\em Min-Sum
Diffusion} (MSD) and a faster {\em Sequential Tree-Reweighted Message Passing}
(TRW-S). Importantly, our derivation is simpler than the original derivation of
TRW-S, and does not involve a decomposition into trees. This allows easy
generalizations. We present such a generalization for the case of higher-order
graphical models, and test it on several real-world problems with promising
results.
| [
"['Vladimir Kolmogorov']",
"Vladimir Kolmogorov"
] |
cs.LG cs.DC cs.SY math.OC | null | 1309.5803 | null | null | http://arxiv.org/pdf/1309.5803v1 | 2013-09-20T14:38:01Z | 2013-09-20T14:38:01Z | Scalable Anomaly Detection in Large Homogenous Populations | Anomaly detection in large populations is a challenging but highly relevant
problem. The problem is essentially a multi-hypothesis problem, with a
hypothesis for every division of the systems into normal and anomalous systems.
The number of hypotheses grows rapidly with the number of systems, and
approximate solutions become a necessity for any problem of practical
interest. In the current paper we take an optimization approach to this
multi-hypothesis problem. We first observe that the problem is equivalent to a
non-convex combinatorial optimization problem. We then relax the problem to a
convex problem that can be solved distributively on the systems and that stays
computationally tractable as the number of systems increases. An interesting
property of the proposed method is that it can under certain conditions be
shown to give exactly the same result as the combinatorial multi-hypothesis
problem, and the relaxation is hence tight.
| [
"Henrik Ohlsson, Tianshi Chen, Sina Khoshfetrat Pakazad, Lennart Ljung\n and S. Shankar Sastry",
"['Henrik Ohlsson' 'Tianshi Chen' 'Sina Khoshfetrat Pakazad'\n 'Lennart Ljung' 'S. Shankar Sastry']"
] |
cs.LG | null | 1309.5823 | null | null | http://arxiv.org/pdf/1309.5823v1 | 2013-09-23T14:39:48Z | 2013-09-23T14:39:48Z | A Kernel Classification Framework for Metric Learning | Learning a distance metric from the given training samples plays a crucial
role in many machine learning tasks, and various models and optimization
algorithms have been proposed in the past decade. In this paper, we generalize
several state-of-the-art metric learning methods, such as large margin nearest
neighbor (LMNN) and information theoretic metric learning (ITML), into a kernel
classification framework. First, doublets and triplets are constructed from the
training samples, and a family of degree-2 polynomial kernel functions is
proposed for pairs of doublets or triplets. Then, a kernel classification
framework is established, which can not only generalize many popular metric
learning methods such as LMNN and ITML, but also suggest new metric learning
methods, which, interestingly, can be efficiently implemented using
standard support vector machine (SVM) solvers. Two novel metric learning
methods, namely doublet-SVM and triplet-SVM, are then developed under the
proposed framework. Experimental results show that doublet-SVM and triplet-SVM
achieve competitive classification accuracies with state-of-the-art metric
learning methods such as ITML and LMNN but with significantly less training
time.
| [
"Faqiang Wang, Wangmeng Zuo, Lei Zhang, Deyu Meng and David Zhang",
"['Faqiang Wang' 'Wangmeng Zuo' 'Lei Zhang' 'Deyu Meng' 'David Zhang']"
] |
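
As a hedged illustration of the doublet idea in the abstract above: encoding each pair (x_i, x_j) by the flattened outer product of its difference vector turns Mahalanobis metric learning into an ordinary linear SVM problem, since w^T vec((x_i - x_j)(x_i - x_j)^T) = (x_i - x_j)^T M (x_i - x_j). This is the textbook reduction, not the paper's specific degree-2 polynomial kernels or its doublet-SVM algorithm; all names below are ours.

```python
import numpy as np
from sklearn.svm import LinearSVC

def doublet_features(X, pairs):
    # vec((x_i - x_j)(x_i - x_j)^T) for each doublet (i, j)
    feats = []
    for i, j in pairs:
        d = X[i] - X[j]
        feats.append(np.outer(d, d).ravel())
    return np.array(feats)

def learn_metric(X, y, n_pairs=500, seed=0):
    """Learn a symmetric matrix M so that (x_i - x_j)^T M (x_i - x_j) is
    small for same-class doublets and large for different-class ones."""
    y = np.asarray(y)
    rng = np.random.default_rng(seed)
    pairs = rng.choice(len(X), size=(n_pairs, 2))  # random doublets
    labels = np.where(y[pairs[:, 0]] == y[pairs[:, 1]], -1, 1)
    svm = LinearSVC(C=1.0).fit(doublet_features(X, pairs), labels)
    M = svm.coef_.reshape(X.shape[1], X.shape[1])
    return 0.5 * (M + M.T)  # symmetrize; project to PSD for a true metric
```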
cs.OH cs.IT cs.LG math.IT | null | 1309.5854 | null | null | http://arxiv.org/pdf/1309.5854v1 | 2013-09-01T22:14:52Z | 2013-09-01T22:14:52Z | Demodulation of Sparse PPM Signals with Low Samples Using Trained RIP
Matrix | Compressed sensing (CS) theory considers the restricted isometry property
(RIP) as a sufficient condition on the measurement matrix which guarantees the
recovery of any sparse signal from its compressed measurements. The RIP
condition also preserves enough information for classification of sparse
symbols, even with fewer measurements. In this work, we utilize the RIP bound as
the cost function for training a simple neural network in order to exploit the
near optimal measurements or equivalently near optimal features for
classification of a known set of sparse symbols. As an example, we consider
demodulation of pulse position modulation (PPM) signals. The results indicate
that the proposed method has much better performance than the random
measurements and requires fewer samples than the optimum matched filter
demodulator, at the expense of some performance loss. Further, the proposed
approach does not need an equalizer for multipath channels, in contrast to the
conventional receiver.
| [
"Seyed Hossein Hosseini, Mahrokh G. Shayesteh, Mehdi Chehel Amirani",
"['Seyed Hossein Hosseini' 'Mahrokh G. Shayesteh' 'Mehdi Chehel Amirani']"
] |
cs.LG | null | 1309.5904 | null | null | http://arxiv.org/pdf/1309.5904v1 | 2013-09-23T18:14:02Z | 2013-09-23T18:14:02Z | Fenchel Duals for Drifting Adversaries | We describe a primal-dual framework for the design and analysis of online
convex optimization algorithms for {\em drifting regret}. Existing literature
shows (nearly) optimal drifting regret bounds only for the $\ell_2$ and the
$\ell_1$-norms. Our work provides a connection between these algorithms and the
Online Mirror Descent ($\omd$) updates; one key insight that results from our
work is that in order for these algorithms to succeed, it suffices for the
gradient of the regularizer to be bounded (in an appropriate norm). For
situations (like for the $\ell_1$ norm) where the vanilla regularizer does not
have this property, we have to {\em shift} the regularizer to ensure this.
Thus, this helps explain the various updates presented in \cite{bansal10,
buchbinder12}. We also consider the online variant of the problem with
1-lookahead, and with movement costs in the $\ell_2$-norm. Our primal-dual
approach yields nearly optimal competitive ratios for this problem.
| [
"['Suman K Bera' 'Anamitra R Choudhury' 'Syamantak Das' 'Sambuddha Roy'\n 'Jayram S. Thatchachar']",
"Suman K Bera, Anamitra R Choudhury, Syamantak Das, Sambuddha Roy and\n Jayram S. Thatchachar"
] |
cs.CL cs.LG cs.SD | null | 1309.6176 | null | null | http://arxiv.org/pdf/1309.6176v1 | 2013-09-23T13:51:28Z | 2013-09-23T13:51:28Z | Feature Learning with Gaussian Restricted Boltzmann Machine for Robust
Speech Recognition | In this paper, we first present a new variant of the Gaussian restricted
Boltzmann machine (GRBM) called multivariate Gaussian restricted Boltzmann
machine (MGRBM), with its definition and learning algorithm. Then we propose
using a learned GRBM or MGRBM to extract better features for robust speech
recognition. Our experiments on Aurora2 show that both GRBM-extracted and
MGRBM-extracted features perform much better than Mel-frequency cepstral
coefficients (MFCC) with either an HMM-GMM or a hybrid HMM-deep neural network
(DNN) acoustic model, and the MGRBM-extracted features are slightly better.
| [
"['Xin Zheng' 'Zhiyong Wu' 'Helen Meng' 'Weifeng Li' 'Lianhong Cai']",
"Xin Zheng, Zhiyong Wu, Helen Meng, Weifeng Li, Lianhong Cai"
] |
cs.CV cs.LG stat.ML | null | 1309.6301 | null | null | http://arxiv.org/pdf/1309.6301v2 | 2013-09-27T19:36:41Z | 2013-09-24T19:48:56Z | Solving OSCAR regularization problems by proximal splitting algorithms | The OSCAR (octagonal selection and clustering algorithm for regression)
regularizer consists of an L_1 norm plus a pair-wise L_inf norm (responsible for
its grouping behavior) and was proposed to encourage group sparsity in
scenarios where the groups are a priori unknown. The OSCAR regularizer has a
non-trivial proximity operator, which limits its applicability. We reformulate
this regularizer as a weighted sorted L_1 norm, and propose its grouping
proximity operator (GPO) and approximate proximity operator (APO), thus making
state-of-the-art proximal splitting algorithms (PSAs) available to solve
inverse problems with OSCAR regularization. The GPO is in fact the APO followed
by additional grouping and averaging operations, which are costly in time and
storage, explaining why algorithms with APO are much faster than
those with GPO. Convergence of PSAs with GPO is guaranteed since GPO is an
exact proximity operator. Although convergence of PSAs with APO may not be
guaranteed, we have experimentally found that APO behaves similarly to GPO when
the regularization parameter of the pair-wise L_inf norm is set to an
appropriately small value. Experiments on recovery of group-sparse signals
(with unknown groups) show that PSAs with APO are very fast and accurate.
| [
"Xiangrong Zeng and M\\'ario A. T. Figueiredo",
"['Xiangrong Zeng' 'Mário A. T. Figueiredo']"
] |
cs.LG cs.CV stat.ML | 10.1109/TNNLS.2015.2490080 | 1309.6487 | null | null | http://arxiv.org/abs/1309.6487v2 | 2015-10-30T14:43:50Z | 2013-09-25T12:53:13Z | A Unified Framework for Representation-based Subspace Clustering of
Out-of-sample and Large-scale Data | Under the framework of spectral clustering, the key to subspace clustering is
building a similarity graph which describes the neighborhood relations among
data points. Some recent works build the graph using sparse, low-rank, and
$\ell_2$-norm-based representation, and have achieved state-of-the-art
performance. However, these methods have suffered from the following two
limitations. First, the time complexities of these methods are at least
proportional to the cube of the data size, which makes those methods inefficient
for solving large-scale problems. Second, they cannot cope with out-of-sample
data that are not used to construct the similarity graph. To cluster each
out-of-sample datum, the methods have to recalculate the similarity graph and
the cluster membership of the whole data set. In this paper, we propose a
unified framework which makes representation-based subspace clustering
algorithms feasible to cluster both out-of-sample and large-scale data. Under
our framework, the large-scale problem is tackled by converting it into an
out-of-sample problem in the manner of "sampling, clustering, coding, and
classifying". Furthermore, we give an estimation for the error bounds by
treating each subspace as a point in a hyperspace. Extensive experimental
results on various benchmark data sets show that our methods outperform several
recently-proposed scalable methods in clustering large-scale data sets.
| [
"['Xi Peng' 'Huajin Tang' 'Lei Zhang' 'Zhang Yi' 'Shijie Xiao']",
"Xi Peng, Huajin Tang, Lei Zhang, Zhang Yi, and Shijie Xiao"
] |
cs.NE cs.LG q-bio.NC | null | 1309.6584 | null | null | http://arxiv.org/pdf/1309.6584v3 | 2019-07-09T19:38:38Z | 2013-09-25T17:32:24Z | Should I Stay or Should I Go: Coordinating Biological Needs with
Continuously-updated Assessments of the Environment | This paper presents Wanderer, a model of how autonomous adaptive systems
coordinate internal biological needs with moment-by-moment assessments of the
probabilities of events in the external world. The extent to which Wanderer
moves about or explores its environment reflects the relative activations of
two competing motivational sub-systems: one represents the need to acquire
energy and it excites exploration, and the other represents the need to avoid
predators and it inhibits exploration. The environment contains food,
predators, and neutral stimuli. Wanderer responds to these events in a way that
is adaptive in the short term, and reassesses the probabilities of these events
so that it can modify its long-term behaviour appropriately. When food appears,
Wanderer becomes satiated and exploration temporarily decreases. When a
predator appears, Wanderer both decreases exploration in the short term, and
becomes more "cautious" about exploring in the future. Wanderer also forms
associations between neutral features and salient ones (food and predators)
when they are present at the same time, and uses these associations to guide
its behaviour.
| [
"Liane Gabora",
"['Liane Gabora']"
] |
cs.SI cs.LG stat.ML | 10.1109/JSTSP.2014.2299517 | 1309.6707 | null | null | http://arxiv.org/abs/1309.6707v2 | 2014-01-22T02:42:52Z | 2013-09-26T02:01:44Z | Distributed Online Learning in Social Recommender Systems | In this paper, we consider decentralized sequential decision making in
distributed online recommender systems, where items are recommended to users
based on their search query as well as their specific background, including
their history of bought items, gender, and age, all of which comprise the context
information of the user. In contrast to centralized recommender systems, in
which there is a single centralized seller who has access to the complete
inventory of items as well as the complete record of sales and user
information, in decentralized recommender systems each seller/learner only has
access to the inventory of items and user information for its own products and
not the products and user information of other sellers, but can get commission
if it sells an item of another seller. Therefore the sellers must determine, in
a distributed fashion, which items to recommend to an incoming user (from the set of own
items or items of another seller), in order to maximize the revenue from own
sales and commissions. We formulate this problem as a cooperative contextual
bandit problem, analytically bound the performance of the sellers compared to
the best recommendation strategy given the complete realization of user
arrivals and the inventory of items, as well as the context-dependent purchase
probabilities of each item, and verify our results via numerical examples on a
distributed data set adapted based on Amazon data. We evaluate the dependence
of the performance of a seller on the inventory of items the seller has, the
number of connections it has with the other sellers, and the commissions which
the seller gets by selling items of other sellers to its users.
| [
"Cem Tekin, Simpson Zhang, Mihaela van der Schaar",
"['Cem Tekin' 'Simpson Zhang' 'Mihaela van der Schaar']"
] |
stat.ML cs.LG | null | 1309.6786 | null | null | http://arxiv.org/pdf/1309.6786v4 | 2014-09-24T09:25:09Z | 2013-09-26T10:32:43Z | One-class Collaborative Filtering with Random Graphs: Annotated Version | The bane of one-class collaborative filtering is interpreting and modelling
the latent signal from the missing class. In this paper we present a novel
Bayesian generative model for implicit collaborative filtering. It forms a core
component of the Xbox Live architecture, and unlike previous approaches,
delineates the odds of a user disliking an item from simply not considering it.
The latent signal is treated as an unobserved random graph connecting users
with items they might have encountered. We demonstrate how large-scale
distributed learning can be achieved through a combination of stochastic
gradient descent and mean field variational inference over random graph
samples. A fine-grained comparison is done against a state-of-the-art baseline
on real world data.
| [
"['Ulrich Paquet' 'Noam Koenigstein']",
"Ulrich Paquet, Noam Koenigstein"
] |
cs.LG stat.ML | null | 1309.6811 | null | null | http://arxiv.org/pdf/1309.6811v1 | 2013-09-26T12:26:53Z | 2013-09-26T12:26:53Z | Generative Multiple-Instance Learning Models For Quantitative
Electromyography | We present a comprehensive study of the use of generative modeling approaches
for Multiple-Instance Learning (MIL) problems. In MIL a learner receives
training instances grouped together into bags with labels for the bags only
(which might not be correct for the constituent instances). Our work was
motivated by the task of facilitating the diagnosis of neuromuscular disorders
using sets of motor unit potential trains (MUPTs) detected within a muscle,
a task which can be cast as a MIL problem. Our approach leads to a state-of-the-art
solution to the problem of muscle classification. By introducing and analyzing
generative models for MIL in a general framework and examining a variety of
model structures and components, our work also serves as a methodological guide
to modelling MIL tasks. We evaluate our proposed methods both on MUPT datasets
and on the MUSK1 dataset, one of the most widely used benchmarks for MIL.
| [
"['Tameem Adel' 'Benn Smith' 'Ruth Urner' 'Daniel Stashuk'\n 'Daniel J. Lizotte']",
"Tameem Adel, Benn Smith, Ruth Urner, Daniel Stashuk, Daniel J. Lizotte"
] |
cs.LG stat.ML | null | 1309.6812 | null | null | http://arxiv.org/pdf/1309.6812v1 | 2013-09-26T12:28:35Z | 2013-09-26T12:28:35Z | The Bregman Variational Dual-Tree Framework | Graph-based methods provide a powerful tool set for many non-parametric
frameworks in Machine Learning. In general, the memory and computational
complexity of these methods is quadratic in the number of examples in the data,
which quickly makes them infeasible for moderate- to large-scale datasets. A
significant effort to find more efficient solutions to the problem has been
made in the literature. One of the state-of-the-art methods that has been
recently introduced is the Variational Dual-Tree (VDT) framework. Despite some
of its unique features, VDT is currently restricted only to Euclidean spaces
where the Euclidean distance quantifies the similarity. In this paper, we
extend the VDT framework beyond the Euclidean distance to more general Bregman
divergences that include the Euclidean distance as a special case. By
exploiting the properties of the general Bregman divergence, we show how the
new framework can maintain all the pivotal features of the VDT framework and
yet significantly improve its performance in non-Euclidean domains. We apply
the proposed framework to different text categorization problems and
demonstrate its benefits over the original VDT.
| [
"Saeed Amizadeh, Bo Thiesson, Milos Hauskrecht",
"['Saeed Amizadeh' 'Bo Thiesson' 'Milos Hauskrecht']"
] |
cs.LG stat.ML | null | 1309.6813 | null | null | http://arxiv.org/pdf/1309.6813v1 | 2013-09-26T12:28:52Z | 2013-09-26T12:28:52Z | Hinge-loss Markov Random Fields: Convex Inference for Structured
Prediction | Graphical models for structured domains are powerful tools, but the
computational complexities of combinatorial prediction spaces can force
restrictions on models, or require approximate inference in order to be
tractable. Instead of working in a combinatorial space, we use hinge-loss
Markov random fields (HL-MRFs), an expressive class of graphical models with
log-concave density functions over continuous variables, which can represent
confidences in discrete predictions. This paper demonstrates that HL-MRFs are
general tools for fast and accurate structured prediction. We introduce the
first inference algorithm that is both scalable and applicable to the full
class of HL-MRFs, and show how to train HL-MRFs with several learning
algorithms. Our experiments show that HL-MRFs match or surpass the predictive
performance of state-of-the-art methods, including discrete models, in four
application domains.
| [
"Stephen Bach, Bert Huang, Ben London, Lise Getoor",
"['Stephen Bach' 'Bert Huang' 'Ben London' 'Lise Getoor']"
] |
cs.LG stat.ML | null | 1309.6814 | null | null | http://arxiv.org/pdf/1309.6814v1 | 2013-09-26T12:29:19Z | 2013-09-26T12:29:19Z | High-dimensional Joint Sparsity Random Effects Model for Multi-task
Learning | Joint sparsity regularization in multi-task learning has attracted much
attention in recent years. The traditional convex formulation employs the group
Lasso relaxation to achieve joint sparsity across tasks. Although this approach
leads to a simple convex formulation, it suffers from several issues due to the
looseness of the relaxation. To remedy this problem, we view jointly sparse
multi-task learning as a specialized random effects model, and derive a convex
relaxation approach that involves two steps. The first step learns the
covariance matrix of the coefficients using a convex formulation which we refer
to as sparse covariance coding; the second step solves a ridge regression
problem with a sparse quadratic regularizer based on the covariance matrix
obtained in the first step. It is shown that this approach produces an
asymptotically optimal quadratic regularizer in the multitask learning setting
when the number of tasks approaches infinity. Experimental results demonstrate
that the convex formulation obtained via the proposed model significantly
outperforms group Lasso (and related multi-stage formulations).
| [
"['Krishnakumar Balasubramanian' 'Kai Yu' 'Tong Zhang']",
"Krishnakumar Balasubramanian, Kai Yu, Tong Zhang"
] |
cs.LG stat.ML | null | 1309.6818 | null | null | http://arxiv.org/pdf/1309.6818v1 | 2013-09-26T12:35:03Z | 2013-09-26T12:35:03Z | Boosting in the presence of label noise | Boosting is known to be sensitive to label noise. We studied two approaches
to improve AdaBoost's robustness against labelling errors. One is to employ a
label-noise robust classifier as a base learner, while the other is to modify
the AdaBoost algorithm to be more robust. Empirical evaluation shows that a
committee of robust classifiers, although it converges faster than
non-label-noise-aware AdaBoost, is still susceptible to label noise. However, pairing it with
the new robust Boosting algorithm we propose here results in a more resilient
algorithm under mislabelling.
| [
"Jakramate Bootkrajang, Ata Kaban",
"['Jakramate Bootkrajang' 'Ata Kaban']"
] |
cs.LG stat.ML | null | 1309.6819 | null | null | http://arxiv.org/pdf/1309.6819v1 | 2013-09-26T12:35:19Z | 2013-09-26T12:35:19Z | Hilbert Space Embeddings of Predictive State Representations | Predictive State Representations (PSRs) are an expressive class of models for
controlled stochastic processes. PSRs represent state as a set of predictions
of future observable events. Because PSRs are defined entirely in terms of
observable data, statistically consistent estimates of PSR parameters can be
learned efficiently by manipulating moments of observed training data. Most
learning algorithms for PSRs have assumed that actions and observations are
finite with low cardinality. In this paper, we generalize PSRs to infinite sets
of observations and actions, using the recent concept of Hilbert space
embeddings of distributions. The essence is to represent the state as a
nonparametric conditional embedding operator in a Reproducing Kernel Hilbert
Space (RKHS) and leverage recent work in kernel methods to estimate, predict,
and update the representation. We show that these Hilbert space embeddings of
PSRs are able to gracefully handle continuous actions and observations, and
that our learned models outperform competing system identification algorithms
on several prediction benchmarks.
| [
"Byron Boots, Geoffrey Gordon, Arthur Gretton",
"['Byron Boots' 'Geoffrey Gordon' 'Arthur Gretton']"
] |
cs.LG cs.AI stat.ML | null | 1309.6820 | null | null | http://arxiv.org/pdf/1309.6820v1 | 2013-09-26T12:35:41Z | 2013-09-26T12:35:41Z | SparsityBoost: A New Scoring Function for Learning Bayesian Network
Structure | We give a new consistent scoring function for structure learning of Bayesian
networks. In contrast to traditional approaches to score-based structure
learning, such as BDeu or MDL, the complexity penalty that we propose is
data-dependent and is given by the probability that a conditional independence
test correctly shows that an edge cannot exist. What really distinguishes this
new scoring function from earlier work is that it has the property of becoming
computationally easier to maximize as the amount of data increases. We prove a
polynomial sample complexity result, showing that maximizing this score is
guaranteed to correctly learn a structure with no false edges and a
distribution close to the generating distribution, whenever there exists a
Bayesian network which is a perfect map for the data generating distribution.
Although the new score can be used with any search algorithm, we give empirical
results showing that it is particularly effective when used together with a
linear programming relaxation approach to Bayesian network structure learning.
| [
"['Eliot Brenner' 'David Sontag']",
"Eliot Brenner, David Sontag"
] |
cs.LG stat.ML | null | 1309.6821 | null | null | http://arxiv.org/pdf/1309.6821v1 | 2013-09-26T12:36:00Z | 2013-09-26T12:36:00Z | Sample Complexity of Multi-task Reinforcement Learning | Transferring knowledge across a sequence of reinforcement-learning tasks is
challenging, and has a number of important applications. Though there is
encouraging empirical evidence that transfer can improve performance in
subsequent reinforcement-learning tasks, there has been very little theoretical
analysis. In this paper, we introduce a new multi-task algorithm for a sequence
of reinforcement-learning tasks when each task is sampled independently from
(an unknown) distribution over a finite set of Markov decision processes whose
parameters are initially unknown. For this setting, we prove under certain
assumptions that the per-task sample complexity of exploration is reduced
significantly due to transfer compared to standard single-task algorithms. Our
multi-task algorithm also has the desired characteristic that it is guaranteed
not to exhibit negative transfer: in the worst case its per-task sample
complexity is comparable to the corresponding single-task algorithm.
| [
"Emma Brunskill, Lihong Li",
"['Emma Brunskill' 'Lihong Li']"
] |
cs.LG stat.ML | null | 1309.6823 | null | null | http://arxiv.org/pdf/1309.6823v1 | 2013-09-26T12:36:30Z | 2013-09-26T12:36:30Z | Convex Relaxations of Bregman Divergence Clustering | Although many convex relaxations of clustering have been proposed in the past
decade, current formulations remain restricted to spherical Gaussian or
discriminative models and are susceptible to imbalanced clusters. To address
these shortcomings, we propose a new class of convex relaxations that can be
flexibly applied to more general forms of Bregman divergence clustering. By
basing these new formulations on normalized equivalence relations we retain
additional control over relaxation quality, which allows improvement in
clustering quality. We furthermore develop optimization methods that improve
scalability by exploiting recent implicit matrix norm methods. In practice, we
find that the new formulations are able to efficiently produce tighter
clusterings that improve the accuracy of state-of-the-art methods.
| [
"Hao Cheng, Xinhua Zhang, Dale Schuurmans",
"['Hao Cheng' 'Xinhua Zhang' 'Dale Schuurmans']"
] |
cs.AI cs.LG stat.ML | null | 1309.6829 | null | null | http://arxiv.org/pdf/1309.6829v1 | 2013-09-26T12:38:09Z | 2013-09-26T12:38:09Z | Bethe-ADMM for Tree Decomposition based Parallel MAP Inference | We consider the problem of maximum a posteriori (MAP) inference in discrete
graphical models. We present a parallel MAP inference algorithm called
Bethe-ADMM based on two ideas: tree-decomposition of the graph and the
alternating direction method of multipliers (ADMM). However, unlike the
standard ADMM, we use an inexact ADMM augmented with a Bethe-divergence based
proximal function, which makes each subproblem in ADMM easy to solve in
parallel using the sum-product algorithm. We rigorously prove global
convergence of Bethe-ADMM. The proposed algorithm is extensively evaluated on
both synthetic and real datasets to illustrate its effectiveness. Further, the
parallel Bethe-ADMM is shown to scale almost linearly with an increasing number of
cores.
| [
"Qiang Fu, Huahua Wang, Arindam Banerjee",
"['Qiang Fu' 'Huahua Wang' 'Arindam Banerjee']"
] |
cs.LG stat.ML | null | 1309.6830 | null | null | http://arxiv.org/pdf/1309.6830v1 | 2013-09-26T12:39:01Z | 2013-09-26T12:39:01Z | Building Bridges: Viewing Active Learning from the Multi-Armed Bandit
Lens | In this paper we propose a multi-armed bandit inspired, pool based active
learning algorithm for the problem of binary classification. By carefully
constructing an analogy between active learning and multi-armed bandits, we
utilize ideas such as lower confidence bounds and self-concordant
regularization from the multi-armed bandit literature to design our proposed
algorithm. Our algorithm is a sequential algorithm, which in each round assigns
a sampling distribution on the pool, samples one point from this distribution,
and queries the oracle for the label of this sampled point. The design of this
sampling distribution is also inspired by the analogy between active learning
and multi-armed bandits. We show how to derive lower confidence bounds required
by our algorithm. Experimental comparisons to previously proposed active
learning algorithms show superior performance on some standard UCI datasets.
| [
"['Ravi Ganti' 'Alexander G. Gray']",
"Ravi Ganti, Alexander G. Gray"
] |
cs.LG stat.ML | null | 1309.6831 | null | null | http://arxiv.org/pdf/1309.6831v1 | 2013-09-26T12:39:19Z | 2013-09-26T12:39:19Z | Batch-iFDD for Representation Expansion in Large MDPs | Matching pursuit (MP) methods are a promising class of feature construction
algorithms for value function approximation. Yet existing MP methods require
creating a pool of potential features, mandating expert knowledge or
enumeration of a large feature pool, both of which hinder scalability. This
paper introduces batch incremental feature dependency discovery (Batch-iFDD) as
an MP method that inherits a provable convergence property. Additionally,
Batch-iFDD does not require a large pool of features, leading to lower
computational complexity. Empirical policy evaluation results across three
domains with up to one million states highlight the scalability of Batch-iFDD
over the previous state-of-the-art MP algorithm.
| [
"Alborz Geramifard, Thomas J. Walsh, Nicholas Roy, Jonathan How",
"['Alborz Geramifard' 'Thomas J. Walsh' 'Nicholas Roy' 'Jonathan How']"
] |
cs.LG stat.ML | null | 1309.6833 | null | null | http://arxiv.org/pdf/1309.6833v1 | 2013-09-26T12:40:19Z | 2013-09-26T12:40:19Z | Multiple Instance Learning by Discriminative Training of Markov Networks | We introduce a graphical framework for multiple instance learning (MIL) based
on Markov networks. This framework can be used to model the traditional MIL
definition as well as more general MIL definitions. Different levels of
ambiguity -- the proportion of positive instances in a bag -- can be explored in
weakly supervised data. To train these models, we propose a discriminative
max-margin learning algorithm leveraging efficient inference for
cardinality-based cliques. The efficacy of the proposed framework is evaluated
on a variety of data sets. Experimental results verify that encoding or
learning the degree of ambiguity can improve classification performance.
| [
"Hossein Hajimirsadeghi, Jinling Li, Greg Mori, Mohammad Zaki, Tarek\n Sayed",
"['Hossein Hajimirsadeghi' 'Jinling Li' 'Greg Mori' 'Mohammad Zaki'\n 'Tarek Sayed']"
] |
cs.LG stat.ML | null | 1309.6834 | null | null | http://arxiv.org/pdf/1309.6834v1 | 2013-09-26T12:40:36Z | 2013-09-26T12:40:36Z | Unsupervised Learning of Noisy-Or Bayesian Networks | This paper considers the problem of learning the parameters in Bayesian
networks of discrete variables with known structure and hidden variables.
Previous approaches in these settings typically use expectation maximization;
when the network has high treewidth, the required expectations might be
approximated using Monte Carlo or variational methods. We show how to avoid
inference altogether during learning by giving a polynomial-time algorithm
based on the method-of-moments, building upon recent work on learning
discrete-valued mixture models. In particular, we show how to learn the
parameters for a family of bipartite noisy-or Bayesian networks. In our
experimental results, we demonstrate an application of our algorithm to
learning QMR-DT, a large Bayesian network used for medical diagnosis. We show
that it is possible to fully learn the parameters of QMR-DT even when only the
findings are observed in the training data (ground truth diseases unknown).
| [
"Yonatan Halpern, David Sontag",
"['Yonatan Halpern' 'David Sontag']"
] |
cs.LG stat.ML | null | 1309.6835 | null | null | http://arxiv.org/pdf/1309.6835v1 | 2013-09-26T12:41:06Z | 2013-09-26T12:41:06Z | Gaussian Processes for Big Data | We introduce stochastic variational inference for Gaussian process models.
This enables the application of Gaussian process (GP) models to data sets
containing millions of data points. We show how GPs can be variationally
decomposed to depend on a set of globally relevant inducing variables which
factorize the model in the necessary manner to perform variational inference.
Our approach is readily extended to models with non-Gaussian likelihoods and
latent variable models based around Gaussian processes. We demonstrate the
approach on a simple toy problem and two real world data sets.
| [
"['James Hensman' 'Nicolo Fusi' 'Neil D. Lawrence']",
"James Hensman, Nicolo Fusi, Neil D. Lawrence"
] |
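The inducing-variable decomposition described above makes test-time prediction cheap once the variational posterior over the inducing variables has been fitted. A minimal sketch of the predictive mean only, with our own variable names (the stochastic variational updates that are the paper's contribution are not reproduced here):

```python
import numpy as np

def inducing_point_mean(Kmm, Ksm, m, jitter=1e-6):
    # Predictive mean at test points for a sparse GP whose inducing
    # variables have variational posterior q(u) = N(m, S):
    #   E[f*] = Ksm @ Kmm^{-1} @ m.
    # Kmm: (M, M) kernel among inducing inputs; Ksm: (n, M) kernel
    # between test inputs and inducing inputs; m: (M,) variational mean.
    L = np.linalg.cholesky(Kmm + jitter * np.eye(Kmm.shape[0]))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, m))
    return Ksm @ alpha
```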
cs.LG stat.ML | null | 1309.6838 | null | null | http://arxiv.org/pdf/1309.6838v1 | 2013-09-26T12:41:38Z | 2013-09-26T12:41:38Z | Inverse Covariance Estimation for High-Dimensional Data in Linear Time
and Space: Spectral Methods for Riccati and Sparse Models | We propose maximum likelihood estimation for learning Gaussian graphical
models with a Gaussian ($\ell_2^2$) prior on the parameters. This is in contrast
to the commonly used Laplace ($\ell_1$) prior for encouraging sparseness. We show
that our optimization problem leads to a Riccati matrix equation, which has a
closed form solution. We propose an efficient algorithm that performs a
singular value decomposition of the training data. Our algorithm is
$O(NT^2)$-time and $O(NT)$-space for $N$ variables and $T$ samples. Our method is
tailored to high-dimensional problems ($N \gg T$), in which sparseness-promoting
methods become intractable. Furthermore, instead of obtaining a single solution
for a specific regularization parameter, our algorithm finds the whole solution
path. We show that the method has logarithmic sample complexity under the
spiked covariance model. We also propose sparsification of the dense solution
with provable performance guarantees. We provide techniques for using our
learnt models, such as removing unimportant variables, computing likelihoods
and conditional distributions. Finally, we show promising results on several
gene expression datasets.
| [
"['Jean Honorio' 'Tommi S. Jaakkola']",
"Jean Honorio, Tommi S. Jaakkola"
] |
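The closed form referred to above can be sketched directly from the stated objective: stationarity of the $\ell_2^2$-regularized Gaussian log-likelihood yields a Riccati equation that is solved per eigenvalue of the sample covariance. A naive $O(N^3)$ illustration with our own variable names (the paper instead works with the SVD of the data matrix to reach the quoted $O(NT^2)$ time):

```python
import numpy as np

def riccati_precision(X, lam):
    # Closed-form stationary point of
    #   -logdet(P) + tr(S @ P) + (lam / 2) * ||P||_F^2,
    # whose gradient condition gives the Riccati equation
    #   lam * P @ P + S @ P = I,
    # solved per eigenvalue of the sample covariance S.
    S = np.cov(X, rowvar=False)
    d, U = np.linalg.eigh(S)
    p = (-d + np.sqrt(d ** 2 + 4.0 * lam)) / (2.0 * lam)
    return (U * p) @ U.T
```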
cs.LG stat.ML | null | 1309.6840 | null | null | http://arxiv.org/pdf/1309.6840v1 | 2013-09-26T12:42:25Z | 2013-09-26T12:42:25Z | Constrained Bayesian Inference for Low Rank Multitask Learning | We present a novel approach for constrained Bayesian inference. Unlike
current methods, our approach does not require convexity of the constraint set.
We reduce the constrained variational inference to a parametric optimization
over the feasible set of densities and propose a general recipe for such
problems. We apply the proposed constrained Bayesian inference approach to
multitask learning subject to rank constraints on the weight matrix. Further,
constrained parameter estimation is applied to recover the sparse conditional
independence structure encoded by prior precision matrices. Our approach is
motivated by reverse inference for high dimensional functional neuroimaging, a
domain where the high dimensionality and small number of examples require the
use of constraints to ensure meaningful and effective models. For this
application, we propose a model that jointly learns a weight matrix and the
prior inverse covariance structure between different tasks. We present
experimental validation showing that the proposed approach outperforms strong
baseline models in terms of predictive performance and structure recovery.
| [
"Oluwasanmi Koyejo, Joydeep Ghosh",
"['Oluwasanmi Koyejo' 'Joydeep Ghosh']"
] |
cs.LG stat.ML | null | 1309.6847 | null | null | http://arxiv.org/pdf/1309.6847v1 | 2013-09-26T12:45:00Z | 2013-09-26T12:45:00Z | Learning Max-Margin Tree Predictors | Structured prediction is a powerful framework for coping with joint
prediction of interacting outputs. A central difficulty in using this framework
is that often the correct label dependence structure is unknown. At the same
time, we would like to avoid an overly complex structure that will lead to
intractable prediction. In this work we address the challenge of learning tree
structured predictive models that achieve high accuracy while at the same time
facilitating efficient (linear time) inference. We start by proving that this
task is in general NP-hard, and then suggest an approximate alternative.
Briefly, our CRANK approach relies on a novel Circuit-RANK regularizer that
penalizes non-tree structures and that can be optimized using a CCCP procedure.
We demonstrate the effectiveness of our approach on several domains and show
that, despite the relative simplicity of the structure, prediction accuracy is
competitive with a fully connected model that is computationally costly at
prediction time.
| [
"Ofer Meshi, Elad Eban, Gal Elidan, Amir Globerson",
"['Ofer Meshi' 'Elad Eban' 'Gal Elidan' 'Amir Globerson']"
] |
cs.LG cs.AI stat.ML | null | 1309.6849 | null | null | http://arxiv.org/pdf/1309.6849v1 | 2013-09-26T12:45:43Z | 2013-09-26T12:45:43Z | Cyclic Causal Discovery from Continuous Equilibrium Data | We propose a method for learning cyclic causal models from a combination of
observational and interventional equilibrium data. Novel aspects of the
proposed method are its ability to work with continuous data (without assuming
linearity) and to deal with feedback loops. Within the context of biochemical
reactions, we also propose a novel way of modeling interventions that modify
the activity of compounds instead of their abundance. For computational
reasons, we approximate the nonlinear causal mechanisms by (coupled) local
linearizations, one for each experimental condition. We apply the method to
reconstruct a cellular signaling network from the flow cytometry data measured
by Sachs et al. (2005). We show that our method finds evidence in the data for
feedback loops and that it gives a more accurate quantitative description of
the data at comparable model complexity.
| [
"Joris Mooij, Tom Heskes",
"['Joris Mooij' 'Tom Heskes']"
] |
cs.LG cs.DS stat.ML | null | 1309.6850 | null | null | http://arxiv.org/pdf/1309.6850v1 | 2013-09-26T12:45:59Z | 2013-09-26T12:45:59Z | Structured Convex Optimization under Submodular Constraints | A number of discrete and continuous optimization problems in machine learning
are related to convex minimization problems under submodular constraints. In
this paper, we deal with a submodular function with a directed graph structure,
and we show that a wide range of convex optimization problems under submodular
constraints can be solved much more efficiently than general submodular
optimization methods by a reduction to a maximum flow problem. Furthermore, we
give some applications, including sparse optimization methods, in which the
proposed methods are effective. Additionally, we evaluate the performance of
the proposed method through computational experiments.
| [
"Kiyohito Nagano, Yoshinobu Kawahara",
"['Kiyohito Nagano' 'Yoshinobu Kawahara']"
] |
cs.DS cs.AI cs.LG | null | 1309.6851 | null | null | http://arxiv.org/pdf/1309.6851v1 | 2013-09-26T12:46:13Z | 2013-09-26T12:46:13Z | Treedy: A Heuristic for Counting and Sampling Subsets | Consider a collection of weighted subsets of a ground set N. Given a query
subset Q of N, how fast can one (1) find the weighted sum over all subsets of
Q, and (2) sample a subset of Q proportionally to the weights? We present a
tree-based greedy heuristic, Treedy, that for a given positive tolerance d
answers such counting and sampling queries to within a guaranteed relative
error d and total variation distance d, respectively. Experimental results on
artificial instances and in application to Bayesian structure discovery in
Bayesian networks show that approximations yield dramatic savings in running
time compared to exact computation, and that Treedy typically outperforms a
previously proposed sorting-based heuristic.
| [
"Teppo Niinimaki, Mikko Koivisto",
"['Teppo Niinimaki' 'Mikko Koivisto']"
] |
cs.LG cs.IR stat.ML | null | 1309.6852 | null | null | http://arxiv.org/pdf/1309.6852v1 | 2013-09-26T12:46:39Z | 2013-09-26T12:46:39Z | Stochastic Rank Aggregation | This paper addresses the problem of rank aggregation, which aims to find a
consensus ranking among multiple ranking inputs. Traditional rank aggregation
methods are deterministic, and can be categorized into explicit and implicit
methods depending on whether rank information is explicitly or implicitly
utilized. Surprisingly, experimental results on real data sets show that
explicit rank aggregation methods do not work as well as implicit methods,
although rank information is critical for the task. Our analysis indicates that
the major reason might be the unreliable rank information from incomplete
ranking inputs. To solve this problem, we propose to incorporate uncertainty
into rank aggregation and tackle the problem in both unsupervised and
supervised scenarios. We call this novel framework stochastic rank aggregation
(St.Agg for short). Specifically, we introduce a prior distribution on ranks,
and transform the ranking functions or objectives in traditional explicit
methods to their expectations over this distribution. Our experiments on
benchmark data sets show that the proposed St.Agg outperforms the baselines in
both unsupervised and supervised scenarios.
| [
"['Shuzi Niu' 'Yanyan Lan' 'Jiafeng Guo' 'Xueqi Cheng']",
"Shuzi Niu, Yanyan Lan, Jiafeng Guo, Xueqi Cheng"
] |
cs.LG stat.ML | null | 1309.6858 | null | null | http://arxiv.org/pdf/1309.6858v1 | 2013-09-26T12:49:02Z | 2013-09-26T12:49:02Z | The Supervised IBP: Neighbourhood Preserving Infinite Latent Feature
Models | We propose a probabilistic model to infer supervised latent variables in the
Hamming space from observed data. Our model allows simultaneous inference of
the number of binary latent variables, and their values. The latent variables
preserve neighbourhood structure of the data in the sense that objects in the
same semantic concept have similar latent values, and objects in different
concepts have dissimilar latent values. We formulate the supervised infinite
latent variable problem based on an intuitive principle of pulling objects
together if they are of the same type, and pushing them apart if they are not.
We then combine this principle with a flexible Indian Buffet Process prior on
the latent variables. We show that the inferred supervised latent variables can
be directly used to perform a nearest neighbour search for the purpose of
retrieval. We introduce a new application of dynamically extending hash codes,
and show how to effectively couple the structure of the hash codes with
continuously growing structure of the neighbourhood preserving infinite latent
feature space.
| [
"['Novi Quadrianto' 'Viktoriia Sharmanska' 'David A. Knowles'\n 'Zoubin Ghahramani']",
"Novi Quadrianto, Viktoriia Sharmanska, David A. Knowles, Zoubin\n Ghahramani"
] |
cs.LG cs.AI stat.ML | null | 1309.6860 | null | null | http://arxiv.org/pdf/1309.6860v1 | 2013-09-26T12:49:46Z | 2013-09-26T12:49:46Z | Identifying Finite Mixtures of Nonparametric Product Distributions and
Causal Inference of Confounders | We propose a kernel method to identify finite mixtures of nonparametric
product distributions. It is based on a Hilbert space embedding of the joint
distribution. The rank of the constructed tensor is equal to the number of
mixture components. We present an algorithm to recover the components by
partitioning the data points into clusters such that the variables are jointly
conditionally independent given the cluster. This method can be used to
identify finite confounders.
| [
"Eleni Sgouritsa, Dominik Janzing, Jonas Peters, Bernhard Schoelkopf",
"['Eleni Sgouritsa' 'Dominik Janzing' 'Jonas Peters' 'Bernhard Schoelkopf']"
] |
cs.LG stat.ML | null | 1309.6862 | null | null | http://arxiv.org/pdf/1309.6862v1 | 2013-09-26T12:50:04Z | 2013-09-26T12:50:04Z | Determinantal Clustering Processes - A Nonparametric Bayesian Approach
to Kernel Based Semi-Supervised Clustering | Semi-supervised clustering is the task of clustering data points into
clusters where only a fraction of the points are labelled. The true number of
clusters in the data is often unknown and most models require this parameter as
an input. Dirichlet process mixture models are appealing as they can infer the
number of clusters from the data. However, these models do not deal with high
dimensional data well and can encounter difficulties in inference. We present a
novel nonparametric Bayesian kernel-based method to cluster data points
without the need to prespecify the number of clusters or to model complicated
densities from which data points are assumed to be generated. The key
insight is to use determinants of submatrices of a kernel matrix as a measure
of how close together a set of points is. We explore some theoretical
properties of the model and derive a natural Gibbs based algorithm with MCMC
hyperparameter learning. The model is implemented on a variety of synthetic and
real world data sets.
| [
"['Amar Shah' 'Zoubin Ghahramani']",
"Amar Shah, Zoubin Ghahramani"
] |
cs.LG cs.AI stat.ML | null | 1309.6863 | null | null | http://arxiv.org/pdf/1309.6863v1 | 2013-09-26T12:50:19Z | 2013-09-26T12:50:19Z | Sparse Nested Markov models with Log-linear Parameters | Hidden variables are ubiquitous in practical data analysis, and therefore
modeling marginal densities and doing inference with the resulting models is an
important problem in statistics, machine learning, and causal inference.
Recently, a new type of graphical model, called the nested Markov model, was
developed which captures equality constraints found in marginals of directed
acyclic graph (DAG) models. Some of these constraints, such as the so called
'Verma constraint', strictly generalize conditional independence. To make
modeling and inference with nested Markov models practical, it is necessary to
limit the number of parameters in the model, while still correctly capturing
the constraints in the marginal of a DAG model. Placing such limits is similar
in spirit to sparsity methods for undirected graphical models, and regression
models. In this paper, we give a log-linear parameterization which allows
sparse modeling with nested Markov models. We illustrate the advantages of this
parameterization with a simulation study.
| [
"['Ilya Shpitser' 'Robin J. Evans' 'Thomas S. Richardson' 'James M. Robins']",
"Ilya Shpitser, Robin J. Evans, Thomas S. Richardson, James M. Robins"
] |
cs.LG cs.IR stat.ML | null | 1309.6865 | null | null | http://arxiv.org/pdf/1309.6865v1 | 2013-09-26T12:50:54Z | 2013-09-26T12:50:54Z | Modeling Documents with Deep Boltzmann Machines | We introduce a Deep Boltzmann Machine model suitable for modeling and
extracting latent semantic representations from a large unstructured collection
of documents. We overcome the apparent difficulty of training a DBM with
judicious parameter tying. This parameter tying enables an efficient
pretraining algorithm and a state initialization scheme that aids inference.
The model can be trained just as efficiently as a standard Restricted Boltzmann
Machine. Our experiments show that the model assigns better log probability to
unseen data than the Replicated Softmax model. Features extracted from our
model outperform LDA, Replicated Softmax, and DocNADE models on document
retrieval and document classification tasks.
| [
"Nitish Srivastava, Ruslan R Salakhutdinov, Geoffrey E. Hinton",
"['Nitish Srivastava' 'Ruslan R Salakhutdinov' 'Geoffrey E. Hinton']"
] |
cs.LG stat.ME | null | 1309.6867 | null | null | http://arxiv.org/pdf/1309.6867v1 | 2013-09-26T12:51:22Z | 2013-09-26T12:51:22Z | Speedy Model Selection (SMS) for Copula Models | We tackle the challenge of efficiently learning the structure of expressive
multivariate real-valued densities of copula graphical models. We start by
theoretically substantiating the conjecture that for many copula families the
magnitude of Spearman's rank correlation coefficient is monotone in the
expected contribution of an edge in the network, namely the negative copula
entropy. We then build on this theory and suggest a novel Bayesian approach
that makes use of a prior over values of Spearman's rho for learning
copula-based models that involve a mix of copula families. We demonstrate the
generalization effectiveness of our highly efficient approach on sizable and
varied real-life datasets.
| [
"Yaniv Tenzer, Gal Elidan",
"['Yaniv Tenzer' 'Gal Elidan']"
] |
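The scoring step at the heart of this approach is inexpensive to sketch: rank candidate edges by the magnitude of Spearman's rho, which the paper argues tracks the expected (negative copula entropy) contribution of an edge. The edge-ranking helper below is our own illustration, not the paper's Bayesian procedure:

```python
import numpy as np
from scipy.stats import spearmanr

def rank_candidate_edges(X):
    # X: (n_samples, d) data matrix with d > 2, so spearmanr returns a
    # (d, d) rank-correlation matrix. Score every candidate edge by
    # |Spearman's rho| and sort from strongest to weakest.
    rho, _ = spearmanr(X)
    d = X.shape[1]
    edges = [(abs(rho[i, j]), i, j)
             for i in range(d) for j in range(i + 1, d)]
    return sorted(edges, reverse=True)
```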
cs.LG stat.ML | null | 1309.6868 | null | null | http://arxiv.org/pdf/1309.6868v1 | 2013-09-26T12:51:47Z | 2013-09-26T12:51:47Z | Approximate Kalman Filter Q-Learning for Continuous State-Space MDPs | We seek to learn an effective policy for a Markov Decision Process (MDP) with
continuous states via Q-Learning. Given a set of basis functions over state
action pairs we search for a corresponding set of linear weights that minimizes
the mean Bellman residual. Our algorithm uses a Kalman filter model to estimate
those weights and we have developed a simpler approximate Kalman filter model
that outperforms the current state-of-the-art projected TD-Learning methods on
several standard benchmark problems.
| [
"['Charles Tripp' 'Ross D. Shachter']",
"Charles Tripp, Ross D. Shachter"
] |
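A single weight update in this scheme has the familiar recursive-least-squares form. The sketch below is a generic Kalman-style update of linear value-function weights for one observed Bellman target; the paper's approximate filter differs in its details:

```python
import numpy as np

def kalman_weight_update(w, P, phi, target, obs_var):
    # w: (d,) current weights; P: (d, d) weight covariance;
    # phi: (d,) basis features for the observed state-action pair;
    # target: scalar Bellman target; obs_var: observation noise variance.
    S = phi @ P @ phi + obs_var          # innovation variance
    K = P @ phi / S                      # Kalman gain
    w = w + K * (target - phi @ w)       # correct the weights
    P = P - np.outer(K, phi @ P)         # shrink the covariance
    return w, P
```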
cs.LG stat.ML | null | 1309.6869 | null | null | http://arxiv.org/pdf/1309.6869v1 | 2013-09-26T12:52:20Z | 2013-09-26T12:52:20Z | Finite-Time Analysis of Kernelised Contextual Bandits | We tackle the problem of online reward maximisation over a large finite set
of actions described by their contexts. We focus on the case when the number of
actions is too big to sample all of them even once. However we assume that we
have access to the similarities between actions' contexts and that the expected
reward is an arbitrary linear function of the contexts' images in the related
reproducing kernel Hilbert space (RKHS). We propose KernelUCB, a kernelised UCB
algorithm, and give a cumulative regret bound through a frequentist analysis.
For contextual bandits, the related algorithm GP-UCB turns out to be a special
case of our algorithm, and our finite-time analysis improves the regret bound
of GP-UCB for the agnostic case, both in terms of the kernel-dependent
quantity and the RKHS norm of the reward function. Moreover, for the linear
kernel, our regret bound matches the lower bound for contextual linear bandits.
| [
"['Michal Valko' 'Nathaniel Korda' 'Remi Munos' 'Ilias Flaounas'\n 'Nelo Cristianini']",
"Michal Valko, Nathaniel Korda, Remi Munos, Ilias Flaounas, Nelo\n Cristianini"
] |
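The action scores in such an algorithm come from kernel ridge regression on past (context, reward) pairs plus an exploration bonus proportional to the predictive standard deviation. A schematic version of the scoring rule (the exact confidence width beta used in the paper's analysis differs):

```python
import numpy as np

def kernel_ucb_scores(K, k_star, k_ss, rewards, eta, beta):
    # K: (t, t) kernel matrix of past contexts; k_star: (t, n) kernels
    # between past and candidate contexts; k_ss: (n,) candidate
    # self-kernels; rewards: (t,) observed rewards; eta: ridge parameter.
    A = np.linalg.inv(K + eta * np.eye(K.shape[0]))
    mean = k_star.T @ A @ rewards
    # Predictive variance: diag(k_star^T A k_star) subtracted from k_ss.
    var = k_ss - np.sum((k_star.T @ A) * k_star.T, axis=1)
    return mean + beta * np.sqrt(np.maximum(var, 0.0))
```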
cs.LG cs.CL cs.IR stat.ML | null | 1309.6874 | null | null | http://arxiv.org/pdf/1309.6874v1 | 2013-09-26T12:54:02Z | 2013-09-26T12:54:02Z | Integrating Document Clustering and Topic Modeling | Document clustering and topic modeling are two closely related tasks which
can mutually benefit each other. Topic modeling can project documents into a
topic space which facilitates effective document clustering. Cluster labels
discovered by document clustering can be incorporated into topic models to
extract local topics specific to each cluster and global topics shared by all
clusters. In this paper, we propose a multi-grain clustering topic model
(MGCTM) which integrates document clustering and topic modeling into a unified
framework and jointly performs the two tasks to achieve the overall best
performance. Our model tightly couples two components: a mixture component used
for discovering latent groups in document collection and a topic model
component used for mining multi-grain topics including local topics specific to
each cluster and global topics shared across clusters. We employ variational
inference to approximate the posterior of hidden variables and learn model
parameters. Experiments on two datasets demonstrate the effectiveness of our
model.
| [
"Pengtao Xie, Eric P. Xing",
"['Pengtao Xie' 'Eric P. Xing']"
] |
cs.LG stat.ML | null | 1309.6875 | null | null | http://arxiv.org/pdf/1309.6875v1 | 2013-09-26T12:54:31Z | 2013-09-26T12:54:31Z | Active Learning with Expert Advice | Conventional learning with expert advice methods assumes a learner is always
receiving the outcome (e.g., class labels) of every incoming training instance
at the end of each trial. In real applications, acquiring the outcome from the
oracle can be costly or time-consuming. In this paper, we address a new problem
of active learning with expert advice, where the outcome of an instance is
disclosed only when it is requested by the online learner. Our goal is to learn
an accurate prediction model by asking the oracle as few questions as
possible. To address this challenge, we propose a framework of active
forecasters for online active learning with expert advice, which attempts to
extend two regular forecasters, i.e., Exponentially Weighted Average Forecaster
and Greedy Forecaster, to tackle the task of active learning with expert
advice. We prove that the proposed algorithms satisfy the Hannan consistency
under some proper assumptions, and validate the efficacy of our technique by an
extensive set of experiments.
| [
"['Peilin Zhao' 'Steven Hoi' 'Jinfeng Zhuang']",
"Peilin Zhao, Steven Hoi, Jinfeng Zhuang"
] |
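The first of the two regular forecasters being extended is standard and compact enough to state. A textbook exponentially weighted average forecaster (the paper's active variant additionally decides when to query a label, which is not shown here):

```python
import numpy as np

class EWAForecaster:
    # Exponentially weighted average forecaster over d experts with
    # learning rate eta.
    def __init__(self, d, eta):
        self.w = np.ones(d)
        self.eta = eta

    def predict(self, expert_preds):
        # Weighted average of the experts' predictions.
        return self.w @ expert_preds / self.w.sum()

    def update(self, expert_losses):
        # Exponentially downweight experts by their incurred losses.
        self.w *= np.exp(-self.eta * expert_losses)
```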
stat.ML cs.LG | null | 1309.6876 | null | null | http://arxiv.org/pdf/1309.6876v1 | 2013-09-26T12:54:57Z | 2013-09-26T12:54:57Z | Bennett-type Generalization Bounds: Large-deviation Case and Faster Rate
of Convergence | In this paper, we present the Bennett-type generalization bounds of the
learning process for i.i.d. samples, and then show that the generalization
bounds have a faster rate of convergence than the traditional results. In
particular, we first develop two types of Bennett-type deviation inequality for
the i.i.d. learning process: one provides the generalization bounds based on
the uniform entropy number; the other leads to the bounds based on the
Rademacher complexity. We then adopt a new method to obtain the alternative
expressions of the Bennett-type generalization bounds, which imply that the
bounds have a faster rate $o(N^{-1/2})$ of convergence than the traditional
results $O(N^{-1/2})$. Additionally, we find that the rate of the bounds will
become faster in the large-deviation case, which refers to a situation where
the empirical risk is far away from (at least not close to) the expected risk.
Finally, we analyze the asymptotic convergence of the learning process and
compare our analysis with the existing results.
| [
"['Chao Zhang']",
"Chao Zhang"
] |
math.ST cs.LG stat.ML stat.TH | null | 1309.6933 | null | null | http://arxiv.org/pdf/1309.6933v1 | 2013-09-26T15:18:22Z | 2013-09-26T15:18:22Z | Estimating Undirected Graphs Under Weak Assumptions | We consider the problem of providing nonparametric confidence guarantees for
undirected graphs under weak assumptions. In particular, we do not assume
sparsity, incoherence or Normality. We allow the dimension $D$ to increase with
the sample size $n$. First, we prove lower bounds that show that if we want
accurate inferences with low assumptions then there are limitations on the
dimension as a function of sample size. When the dimension increases slowly
with sample size, we show that methods based on Normal approximations and on
the bootstrap lead to valid inferences and we provide Berry-Esseen bounds on
the accuracy of the Normal approximation. When the dimension is large relative
to sample size, accurate inferences for graphs under low assumptions are not
possible. Instead we propose to estimate something less demanding than the
entire partial correlation graph. In particular, we consider: cluster graphs,
restricted partial correlation graphs and correlation graphs.
| [
"Larry Wasserman, Mladen Kolar and Alessandro Rinaldo",
"['Larry Wasserman' 'Mladen Kolar' 'Alessandro Rinaldo']"
] |
cs.CE cs.LG q-fin.ST | 10.1504/IJBIDM.2014.065091 | 1309.7119 | null | null | http://arxiv.org/abs/1309.7119v3 | 2017-01-07T00:01:32Z | 2013-09-27T05:35:50Z | Stock price direction prediction by directly using prices data: an
empirical study on the KOSPI and HSI | The prediction of a stock market direction may serve as an early
recommendation system for short-term investors and as an early financial
distress warning system for long-term shareholders. Many stock prediction
studies focus on using macroeconomic indicators, such as CPI and GDP, to train
the prediction model. However, daily data of the macroeconomic indicators are
almost impossible to obtain. Thus, those methods are difficult to employ
in practice. In this paper, we propose a method that directly uses prices data
to predict market index direction and stock price direction. An extensive
empirical study of the proposed method is presented on the Korean Composite
Stock Price Index (KOSPI) and Hang Seng Index (HSI), as well as the individual
constituents included in the indices. The experimental results show notably
high hit ratios in predicting the movements of the individual constituents in
the KOSPI and HSI.
| [
"Yanshan Wang",
"['Yanshan Wang']"
] |
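A minimal version of "directly using prices data" can be sketched as supervised classification on lagged returns. The 10-lag window, the logistic model, and the synthetic series below are our assumptions for illustration, not the paper's setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def direction_dataset(prices, lags=10):
    # Lagged log-return features and next-step up/down labels.
    r = np.diff(np.log(prices))
    X = np.column_stack([r[i:len(r) - lags + i] for i in range(lags)])
    y = (r[lags:] > 0).astype(int)
    return X, y

# Usage sketch on a synthetic price series: fit on the first half,
# report the hit ratio on the second half.
rng = np.random.default_rng(0)
prices = 100.0 * np.cumprod(1.0 + 0.01 * rng.standard_normal(500))
X, y = direction_dataset(prices)
split = len(X) // 2
clf = LogisticRegression(max_iter=1000).fit(X[:split], y[:split])
hit_ratio = clf.score(X[split:], y[split:])
```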
cs.CY cs.LG | null | 1309.7261 | null | null | http://arxiv.org/pdf/1309.7261v1 | 2013-09-27T15:04:05Z | 2013-09-27T15:04:05Z | Detecting Fake Escrow Websites using Rich Fraud Cues and Kernel Based
Methods | The ability to automatically detect fraudulent escrow websites is important
in order to alleviate online auction fraud. Despite research on related topics,
fake escrow website categorization has received little attention. In this study
we evaluated the effectiveness of various features and techniques for detecting
fake escrow websites. Our analysis included a rich set of features extracted
from web page text, image, and link information. We also proposed a composite
kernel tailored to represent the properties of fake websites, including content
duplication and structural attributes. Experiments were conducted to assess the
proposed features, techniques, and kernels on a test bed encompassing nearly
90,000 web pages derived from 410 legitimate and fake escrow sites. The
combination of an extended feature set and the composite kernel attained over
98% accuracy when differentiating fake sites from real ones, using the support
vector machines algorithm. The results suggest that automated web-based
information systems for detecting fake escrow sites could be feasible and may
be utilized as authentication mechanisms.
| [
"Ahmed Abbasi and Hsinchun Chen",
"['Ahmed Abbasi' 'Hsinchun Chen']"
] |
cs.CY cs.LG | null | 1309.7266 | null | null | http://arxiv.org/pdf/1309.7266v1 | 2013-09-27T15:09:24Z | 2013-09-27T15:09:24Z | Evaluating Link-Based Techniques for Detecting Fake Pharmacy Websites | Fake online pharmacies have become increasingly pervasive, constituting over
90% of online pharmacy websites. There is a need for fake website detection
techniques capable of identifying fake online pharmacy websites with a high
degree of accuracy. In this study, we compared several well-known link-based
detection techniques on a large-scale test bed with the hyperlink graph
encompassing over 80 million links between 15.5 million web pages, including
1.2 million known legitimate and fake pharmacy pages. We found that the QoC and
QoL class propagation algorithms achieved an accuracy of over 90% on our
dataset. The results revealed that algorithms that incorporate dual class
propagation as well as inlink and outlink information, on page-level or
site-level graphs, are better suited for detecting fake pharmacy websites. In
addition, site-level analysis yielded significantly better results than
page-level analysis for most algorithms evaluated.
| [
"['Ahmed Abbasi' 'Siddharth Kaza' 'F. Mariam Zahedi']",
"Ahmed Abbasi, Siddharth Kaza and F. Mariam Zahedi"
] |
stat.ML cs.LG | 10.1017/S0956796814000057 | 1309.7311 | null | null | http://arxiv.org/abs/1309.7311v1 | 2013-09-27T17:53:57Z | 2013-09-27T17:53:57Z | Bayesian Inference in Sparse Gaussian Graphical Models | One of the fundamental tasks of science is to find explainable relationships
between observed phenomena. One approach to this task that has received
attention in recent years is based on probabilistic graphical modelling with
sparsity constraints on model structures. In this paper, we describe two new
approaches to Bayesian inference of sparse structures of Gaussian graphical
models (GGMs). One is based on a simple modification of the cutting-edge block
Gibbs sampler for sparse GGMs, which results in significant computational gains
in high dimensions. The other method is based on a specific construction of the
Hamiltonian Monte Carlo sampler, which results in further significant
improvements. We compare our fully Bayesian approaches with the popular
regularisation-based graphical LASSO, and demonstrate significant advantages of
the Bayesian treatment under the same computing costs. We apply the methods to
a broad range of simulated data sets, and a real-life financial data set.
| [
"['Peter Orchard' 'Felix Agakov' 'Amos Storkey']",
"Peter Orchard, Felix Agakov, Amos Storkey"
] |
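For orientation, the frequentist baseline the paper compares against is available off the shelf. A usage sketch of scikit-learn's graphical LASSO on placeholder data (the paper's block Gibbs and Hamiltonian Monte Carlo samplers are not reproduced here):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))     # placeholder data matrix

model = GraphicalLasso(alpha=0.1).fit(X)
precision = model.precision_           # sparse inverse-covariance estimate
support = np.abs(precision) > 1e-8     # recovered graph structure
```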
cs.NI cs.LG math.OC | null | 1309.7367 | null | null | http://arxiv.org/pdf/1309.7367v5 | 2017-01-18T10:47:41Z | 2013-09-27T20:56:41Z | Stochastic Online Shortest Path Routing: The Value of Feedback | This paper studies online shortest path routing over multi-hop networks. Link
costs or delays are time-varying and modeled by independent and identically
distributed random processes, whose parameters are initially unknown. The
parameters, and hence the optimal path, can only be estimated by routing
packets through the network and observing the realized delays. Our aim is to
find a routing policy that minimizes the regret (the cumulative difference of
expected delay) between the path chosen by the policy and the unknown optimal
path. We formulate the problem as a combinatorial bandit optimization problem
and consider several scenarios that differ in where routing decisions are made
and in the information available when making the decisions. For each scenario,
we derive a tight asymptotic lower bound on the regret that has to be satisfied
by any online routing policy. These bounds help us to understand the
performance improvements we can expect when (i) taking routing decisions at
each hop rather than at the source only, and (ii) observing per-link delays
rather than end-to-end path delays. In particular, we show that (i) is of no
use while (ii) can have a spectacular impact. Three algorithms, with a
trade-off between computational complexity and performance, are proposed. The
regret upper bounds of these algorithms improve over those of the existing
algorithms, and they significantly outperform state-of-the-art algorithms in
numerical experiments.
| [
"M. Sadegh Talebi, Zhenhua Zou, Richard Combes, Alexandre Proutiere,\n Mikael Johansson",
"['M. Sadegh Talebi' 'Zhenhua Zou' 'Richard Combes' 'Alexandre Proutiere'\n 'Mikael Johansson']"
] |
cs.NI cs.LG | null | 1309.7439 | null | null | http://arxiv.org/pdf/1309.7439v1 | 2013-09-28T07:44:11Z | 2013-09-28T07:44:11Z | Optimal Hybrid Channel Allocation:Based On Machine Learning Algorithms | Recent advances in cellular communication systems resulted in a huge increase
in spectrum demand. To meet the requirements of the ever-growing need for
spectrum, efficient utilization of the existing resources is of utmost
importance. Channel allocation has thus become an inevitable research topic in
wireless communications. In this paper, we propose an optimal channel
allocation scheme, Optimal Hybrid Channel Allocation (OHCA), for effective
allocation of channels. We improve upon the existing Fixed Channel Allocation
(FCA) technique by imparting intelligence to the existing system via the
multilayer perceptron technique.
| [
"['K Viswanadh' 'Dr. G Rama Murthy']",
"K Viswanadh and Dr.G Rama Murthy"
] |
cs.CV cs.LG stat.ML | null | 1309.7512 | null | null | http://arxiv.org/pdf/1309.7512v2 | 2013-10-01T02:45:20Z | 2013-09-28T23:55:01Z | Structured learning of sum-of-submodular higher order energy functions | Submodular functions can be exactly minimized in polynomial time, and the
special case that graph cuts solve with max flow \cite{KZ:PAMI04} has had
significant impact in computer vision
\cite{BVZ:PAMI01,Kwatra:SIGGRAPH03,Rother:GrabCut04}. In this paper we address
the important class of sum-of-submodular (SoS) functions
\cite{Arora:ECCV12,Kolmogorov:DAM12}, which can be efficiently minimized via a
variant of max flow called submodular flow \cite{Edmonds:ADM77}. SoS functions
can naturally express higher order priors involving, e.g., local image patches;
however, it is difficult to fully exploit their expressive power because they
have so many parameters. Rather than trying to formulate existing higher order
priors as an SoS function, we take a discriminative learning approach,
effectively searching the space of SoS functions for a higher order prior that
performs well on our training set. We adopt a structural SVM approach
\cite{Joachims/etal/09a,Tsochantaridis/etal/04} and formulate the training
problem in terms of quadratic programming; as a result we can efficiently
search the space of SoS priors via an extended cutting-plane algorithm. We also
show how the state-of-the-art max flow method for vision problems
\cite{Goldberg:ESA11} can be modified to efficiently solve the submodular flow
problem. Experimental comparisons are made against the OpenCV implementation of
the GrabCut interactive segmentation technique \cite{Rother:GrabCut04}, which
uses hand-tuned parameters instead of machine learning. On a standard dataset
\cite{Gulshan:CVPR10} our method learns higher order priors with hundreds of
parameter values, and produces significantly better segmentations. While our
focus is on binary labeling problems, we show that our techniques can be
naturally generalized to handle more than two labels.
| [
"['Alexander Fix' 'Thorsten Joachims' 'Sam Park' 'Ramin Zabih']",
"Alexander Fix and Thorsten Joachims and Sam Park and Ramin Zabih"
] |
cs.LG | null | 1309.7598 | null | null | http://arxiv.org/pdf/1309.7598v1 | 2013-09-29T13:48:52Z | 2013-09-29T13:48:52Z | On Sampling from the Gibbs Distribution with Random Maximum A-Posteriori
Perturbations | In this paper we describe how MAP inference can be used to sample efficiently
from Gibbs distributions. Specifically, we provide means for drawing either
approximate or unbiased samples from Gibbs distributions by introducing low
dimensional perturbations and solving the corresponding MAP assignments. Our
approach also leads to new ways to derive lower bounds on partition functions.
We demonstrate empirically that our method excels in the typical "high signal -
high coupling" regime. The setting results in ragged energy landscapes that are
challenging for alternative approaches to sampling and/or lower bounds.
| [
"['Tamir Hazan' 'Subhransu Maji' 'Tommi Jaakkola']",
"Tamir Hazan, Subhransu Maji and Tommi Jaakkola"
] |
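The mechanism is easiest to see in the exactly solvable case: perturbing every configuration's log-potential with independent Gumbel noise and taking the MAP yields an exact Gibbs sample. The sketch below shows this full perturbation on a small discrete space; the paper's point is that low-dimensional perturbations already give approximate or, with care, unbiased samples:

```python
import numpy as np

def gumbel_max_sample(log_potentials, rng=None):
    # Exact sample from p(x) proportional to exp(theta_x): the argmax
    # of theta + Gumbel(0, 1) noise is distributed as softmax(theta).
    rng = rng if rng is not None else np.random.default_rng()
    noise = rng.gumbel(size=log_potentials.shape)
    return int(np.argmax(log_potentials + noise))
```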
cs.LG cs.IR | null | 1309.7611 | null | null | http://arxiv.org/pdf/1309.7611v1 | 2013-09-29T15:50:45Z | 2013-09-29T15:50:45Z | Context-aware recommendations from implicit data via scalable tensor
factorization | Although the implicit feedback based recommendation problem - when only the
user history is available but there are no ratings - is the most typical
setting in real-world applications, it is much less researched than the
explicit feedback case. State-of-the-art algorithms that are efficient on the
explicit case cannot be automatically transformed to the implicit case if
scalability should be maintained. There are few implicit feedback benchmark
data sets, therefore new ideas are usually experimented on explicit benchmarks.
In this paper, we propose a generic context-aware implicit feedback recommender
algorithm, coined iTALS. iTALS applies a fast, ALS-based tensor factorization
learning method that scales linearly with the number of non-zero elements in
the tensor. We also present two approximate and faster variants of iTALS using
coordinate descent and conjugate gradient methods at learning. The method also
allows us to incorporate various contextual information into the model while
maintaining its computational efficiency. We present two context-aware variants
of iTALS incorporating seasonality and item purchase sequentiality into the
model to distinguish user behavior at different time intervals, and product
types with different repetitiveness. Experiments run on six data sets show
that iTALS clearly outperforms context-unaware models and context-aware
baselines, while it is on par with factorization machines (winning in 7 out of
12 cases) both in terms of recall and MAP.
| [
"['Balázs Hidasi' 'Domonkos Tikk']",
"Bal\\'azs Hidasi, Domonkos Tikk"
] |
cs.LG stat.ML | null | 1309.7676 | null | null | http://arxiv.org/pdf/1309.7676v1 | 2013-09-29T23:45:59Z | 2013-09-29T23:45:59Z | An upper bound on prototype set size for condensed nearest neighbor | The condensed nearest neighbor (CNN) algorithm is a heuristic for reducing
the number of prototypical points stored by a nearest neighbor classifier,
while keeping the classification rule given by the reduced prototypical set
consistent with the full set. I present an upper bound on the number of
prototypical points accumulated by CNN. The bound originates in a bound on the
number of times the decision rule is updated during training in the multiclass
perceptron algorithm, and thus is independent of training set size.
| [
"['Eric Christiansen']",
"Eric Christiansen"
] |
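For reference, the heuristic whose prototype count is being bounded is Hart's condensed nearest neighbor rule: keep a training point only if the current prototype set misclassifies it under 1-NN. A minimal sketch (pass ordering and tie-breaking follow no particular convention from the paper):

```python
import numpy as np

def condensed_nn(X, y):
    # Hart's CNN: grow the prototype set with every training point the
    # current prototypes misclassify, until a full pass adds nothing.
    proto = [0]
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            P = X[proto]
            nearest = proto[np.argmin(((P - X[i]) ** 2).sum(axis=1))]
            if y[nearest] != y[i]:
                proto.append(i)
                changed = True
    return np.asarray(sorted(set(proto)))
```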
cs.LG | null | 1309.7750 | null | null | http://arxiv.org/pdf/1309.7750v2 | 2014-02-11T22:46:36Z | 2013-09-30T08:24:14Z | An Extensive Experimental Study on the Cluster-based Reference Set
Reduction for speeding-up the k-NN Classifier | The k-Nearest Neighbor (k-NN) classification algorithm is one of the most
widely-used lazy classifiers because of its simplicity and ease of
implementation. It is considered to be an effective classifier and has many
applications. However, its major drawback is that when sequential search is
used to find the neighbors, it involves high computational cost. Speeding-up
k-NN search is still an active research field. Hwang and Cho have recently
proposed an adaptive cluster-based method for fast Nearest Neighbor searching.
The effectiveness of this method is based on the adjustment of three
parameters. However, the authors evaluated their method by setting specific
parameter values and using only one dataset. In this paper, an extensive
experimental study of this method is presented. The results, which are based on
five real life datasets, illustrate that if the parameters of the method are
carefully defined, one can achieve even better classification performance.
| [
"Stefanos Ougiaroglou, Georgios Evangelidis, Dimitris A. Dervos",
"['Stefanos Ougiaroglou' 'Georgios Evangelidis' 'Dimitris A. Dervos']"
] |
stat.ML cs.LG math.ST stat.TH | 10.3150/12-BEJSP17 | 1309.7804 | null | null | http://arxiv.org/abs/1309.7804v1 | 2013-09-30T11:51:23Z | 2013-09-30T11:51:23Z | On statistics, computation and scalability | How should statistical procedures be designed so as to be scalable
computationally to the massive datasets that are increasingly the norm? When
coupled with the requirement that an answer to an inferential question be
delivered within a certain time budget, this question has significant
repercussions for the field of statistics. With the goal of identifying
"time-data tradeoffs," we investigate some of the statistical consequences of
computational perspectives on scalability, in particular divide-and-conquer
methodology and hierarchies of convex relaxations.
| [
"Michael I. Jordan",
"['Michael I. Jordan']"
] |
cs.GT cs.LG math.ST stat.TH | null | 1309.7824 | null | null | http://arxiv.org/pdf/1309.7824v3 | 2019-12-12T23:47:00Z | 2013-09-30T12:48:35Z | Linear Regression from Strategic Data Sources | Linear regression is a fundamental building block of statistical data
analysis. It amounts to estimating the parameters of a linear model that maps
input features to corresponding outputs. In the classical setting where the
precision of each data point is fixed, the famous Aitken/Gauss-Markov theorem
in statistics states that generalized least squares (GLS) is a so-called "Best
Linear Unbiased Estimator" (BLUE). In modern data science, however, one often
faces strategic data sources, namely, individuals who incur a cost for
providing high-precision data.
In this paper, we study a setting in which features are public but
individuals choose the precision of the outputs they reveal to an analyst. We
assume that the analyst performs linear regression on this dataset, and
individuals benefit from the outcome of this estimation. We model this scenario
as a game where individuals minimize a cost comprising two components: (a) an
(agent-specific) disclosure cost for providing high-precision data; and (b) a
(global) estimation cost representing the inaccuracy in the linear model
estimate. In this game, the linear model estimate is a public good that
benefits all individuals. We establish that this game has a unique non-trivial
Nash equilibrium. We study the efficiency of this equilibrium and we prove
tight bounds on the price of stability for a large class of disclosure and
estimation costs. Finally, we study the estimator accuracy achieved at
equilibrium. We show that, in general, Aitken's theorem does not hold under
strategic data sources, though it does hold if individuals have identical
disclosure costs (up to a multiplicative factor). When individuals have
non-identical costs, we derive a bound on the improvement of the equilibrium
estimation cost that can be achieved by deviating from GLS, under mild
assumptions on the disclosure cost functions.
| [
"['Nicolas Gast' 'Stratis Ioannidis' 'Patrick Loiseau'\n 'Benjamin Roussillon']",
"Nicolas Gast, Stratis Ioannidis, Patrick Loiseau, and Benjamin\n Roussillon"
] |
cs.CY cs.LG | null | 1309.7958 | null | null | http://arxiv.org/pdf/1309.7958v1 | 2013-09-27T15:05:21Z | 2013-09-27T15:05:21Z | A Statistical Learning Based System for Fake Website Detection | Existing fake website detection systems are unable to effectively detect fake
websites. In this study, we advocate the development of fake website detection
systems that employ classification methods grounded in statistical learning
theory (SLT). Experimental results reveal that a prototype system developed
using SLT-based methods outperforms seven existing fake website detection
systems on a test bed encompassing 900 real and fake websites.
| [
"Ahmed Abbasi, Zhu Zhang and Hsinchun Chen",
"['Ahmed Abbasi' 'Zhu Zhang' 'Hsinchun Chen']"
] |
cs.LG cs.CV math.DS | null | 1309.7959 | null | null | http://arxiv.org/pdf/1309.7959v1 | 2013-09-19T07:10:53Z | 2013-09-19T07:10:53Z | Exploration and Exploitation in Visuomotor Prediction of Autonomous
Agents | This paper discusses various techniques to let an agent learn how to predict
the effects of its own actions on its sensor data autonomously, and their
usefulness when applied to visual sensors. An Extreme Learning Machine is used
for visuomotor prediction, while various autonomous control techniques that can
aid the prediction process by balancing exploration and exploitation are
discussed and tested in a simple system: a camera moving over a 2D greyscale
image.
| [
"Laurens Bliek",
"['Laurens Bliek']"
] |
cs.LG | null | 1309.7982 | null | null | http://arxiv.org/pdf/1309.7982v1 | 2013-09-26T14:44:10Z | 2013-09-26T14:44:10Z | On the Feature Discovery for App Usage Prediction in Smartphones | With the increasing number of mobile Apps developed, they are now closely
integrated into daily life. In this paper, we develop a framework to predict
mobile Apps that are most likely to be used regarding the current device status
of a smartphone. Such an Apps usage prediction framework is a crucial
prerequisite for fast App launching, intelligent user experience, and power
management of smartphones. By analyzing real App usage log data, we discover
two kinds of features: The Explicit Feature (EF) from sensing readings of
built-in sensors, and the Implicit Feature (IF) from App usage relations. The
IF feature is derived by constructing the proposed App Usage Graph (abbreviated
as AUG) that models App usage transitions. In light of AUG, we are able to
discover usage relations among Apps. Since users may have different usage
behaviors on their smartphones, we further propose one personalized feature
selection algorithm. We explore minimum description length (MDL) from the
training data and select those features which need less length to describe the
training data. The personalized feature selection can successfully reduce the
log size and the prediction time. Finally, we adopt the kNN classification
model to predict App usage. Note that we only need to keep the features
selected by the proposed personalized feature selection algorithm, which in
turn reduces the prediction time and avoids the curse of
dimensionality when using the kNN classifier. We conduct a comprehensive
experimental study based on a real mobile App usage dataset. The results
demonstrate the effectiveness of the proposed framework and show the predictive
capability for App usage prediction.
| [
"['Zhung-Xun Liao' 'Shou-Chung Li' 'Wen-Chih Peng' 'Philip S Yu']",
"Zhung-Xun Liao, Shou-Chung Li, Wen-Chih Peng, Philip S Yu"
] |
cs.IT cs.LG math.IT | null | 1310.0110 | null | null | http://arxiv.org/pdf/1310.0110v1 | 2013-10-01T00:52:42Z | 2013-10-01T00:52:42Z | An information measure for comparing top $k$ lists | Comparing the top $k$ elements between two or more ranked results is a common
task in many contexts and settings. A few measures have been proposed to
compare top $k$ lists with attractive mathematical properties, but they face a
number of pitfalls and shortcomings in practice. This work introduces a new
measure to compare any two top $k$ lists based on measuring the information these
lists convey. Our method investigates the compressibility of the lists, and the
length of the message to losslessly encode them gives a natural and robust
measure of their variability. This information-theoretic measure objectively
reconciles all the main considerations that arise when measuring
(dis-)similarity between lists: the extent of their non-overlapping elements in
each of the lists; the amount of disarray among overlapping elements between
the lists; the measurement of displacement of actual ranks of their overlapping
elements.
| [
"['Arun Konagurthu' 'James Collier']",
"Arun Konagurthu and James Collier"
] |
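The flavour of such a measure can be conveyed with a general-purpose compressor, although the paper builds a bespoke lossless code rather than using one. A crude compression-based stand-in, for illustration only:

```python
import zlib

def compression_dissimilarity(list_a, list_b):
    # Normalized compression distance between two top-k lists: the
    # harder it is to encode the concatenation relative to its parts,
    # the more the lists differ.
    a = ','.join(map(str, list_a)).encode()
    b = ','.join(map(str, list_b)).encode()
    ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
    cab = len(zlib.compress(a + b',' + b))
    return (cab - min(ca, cb)) / max(ca, cb)
```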
cs.IT cs.LG math.IT stat.ML | 10.1109/TIT.2015.2415195 | 1310.0154 | null | null | http://arxiv.org/abs/1310.0154v4 | 2015-02-13T11:18:26Z | 2013-10-01T06:37:18Z | Incoherence-Optimal Matrix Completion | This paper considers the matrix completion problem. We show that it is not
necessary to assume joint incoherence, which is a standard but unintuitive and
restrictive condition that is imposed by previous studies. This leads to a
sample complexity bound that is order-wise optimal with respect to the
incoherence parameter (as well as to the rank $r$ and the matrix dimension $n$
up to a log factor). As a consequence, we improve the sample complexity of
recovering a semidefinite matrix from $O(nr^{2}\log^{2}n)$ to $O(nr\log^{2}n)$,
and the highest allowable rank from $\Theta(\sqrt{n}/\log n)$ to
$\Theta(n/\log^{2}n)$. The key step in the proof is to obtain new bounds on the
$\ell_{\infty,2}$-norm, defined as the maximum of the row and column norms of a
matrix. To illustrate the applicability of our techniques, we discuss
extensions to SVD projection, structured matrix completion and semi-supervised
clustering, for which we provide order-wise improvements over existing results.
Finally, we turn to the closely-related problem of low-rank-plus-sparse matrix
decomposition. We show that the joint incoherence condition is unavoidable here
for polynomial-time algorithms conditioned on the Planted Clique conjecture.
This means it is intractable in general to separate a rank-$\omega(\sqrt{n})$
positive semidefinite matrix and a sparse matrix. Interestingly, our results
show that the standard and joint incoherence conditions are associated
respectively with the information (statistical) and computational aspects of
the matrix decomposition problem.
| [
"['Yudong Chen']",
"Yudong Chen"
] |
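The recovery guarantees above concern the usual convex program; a standard solver for that program is proximal gradient descent with singular value soft-thresholding. A generic iteration sketch (the paper's contribution is the analysis, not an algorithm):

```python
import numpy as np

def nuclear_prox_step(M, mask, X, tau, step):
    # One proximal-gradient iteration for
    #   min_X 0.5 * ||mask * (X - M)||_F^2 + tau * ||X||_*
    # Gradient step on the observed entries, then soft-threshold the
    # singular values (the prox operator of the nuclear norm).
    G = X - step * mask * (X - M)
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return U @ (np.maximum(s - step * tau, 0.0)[:, None] * Vt)
```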
cs.CV cs.LG | null | 1310.0354 | null | null | http://arxiv.org/pdf/1310.0354v3 | 2013-12-06T17:00:03Z | 2013-10-01T15:42:54Z | Deep and Wide Multiscale Recursive Networks for Robust Image Labeling | Feedforward multilayer networks trained by supervised learning have recently
demonstrated state of the art performance on image labeling problems such as
boundary prediction and scene parsing. As even very low error rates can limit
practical usage of such systems, methods that perform closer to human accuracy
remain desirable. In this work, we propose a new type of network with the
following properties that address what we hypothesize to be limiting aspects of
existing methods: (1) a `wide' structure with thousands of features, (2) a
large field of view, (3) recursive iterations that exploit statistical
dependencies in label space, and (4) a parallelizable architecture that can be
trained in a fraction of the time compared to benchmark multilayer
convolutional networks. For the specific image labeling problem of boundary
prediction, we also introduce a novel example weighting algorithm that improves
segmentation accuracy. Experiments in the challenging domain of connectomic
reconstruction of neural circuitry from 3d electron microscopy data show that
these "Deep And Wide Multiscale Recursive" (DAWMR) networks lead to new levels
of image labeling performance. The highest performing architecture has twelve
layers, interwoven supervised and unsupervised stages, and uses an input field
of view of 157,464 voxels ($54^3$) to make a prediction at each image location.
We present an associated open source software package that enables the simple
and flexible creation of DAWMR networks.
| [
"Gary B. Huang and Viren Jain",
"['Gary B. Huang' 'Viren Jain']"
] |
math.OC cs.LG cs.SI stat.ML | null | 1310.0432 | null | null | http://arxiv.org/pdf/1310.0432v1 | 2013-10-01T19:08:04Z | 2013-10-01T19:08:04Z | Online Learning of Dynamic Parameters in Social Networks | This paper addresses the problem of online learning in a dynamic setting. We
consider a social network in which each individual observes a private signal
about the underlying state of the world and communicates with her neighbors at
each time period. Unlike many existing approaches, the underlying state is
dynamic, and evolves according to a geometric random walk. We view the scenario
as an optimization problem where agents aim to learn the true state while
suffering the smallest possible loss. Based on the decomposition of the global
loss function, we introduce two update mechanisms, each of which generates an
estimate of the true state. We establish a tight bound on the rate of change of
the underlying state, under which individuals can track the parameter with a
bounded variance. Then, we characterize explicit expressions for the steady
state mean-square deviation (MSD) of the estimates from the truth, per
individual. We observe that only one of the estimators recovers the optimal
MSD, which underscores the impact of the objective function decomposition on
the learning quality. Finally, we provide an upper bound on the regret of the
proposed methods, measured as an average of errors in estimating the parameter
in a finite time.
| [
"Shahin Shahrampour, Alexander Rakhlin, Ali Jadbabaie",
"['Shahin Shahrampour' 'Alexander Rakhlin' 'Ali Jadbabaie']"
] |
cs.LG stat.ML | null | 1310.0509 | null | null | http://arxiv.org/pdf/1310.0509v4 | 2013-11-25T08:43:59Z | 2013-10-01T22:34:18Z | Summary Statistics for Partitionings and Feature Allocations | Infinite mixture models are commonly used for clustering. One can sample from
the posterior of mixture assignments by Monte Carlo methods or find its maximum
a posteriori solution by optimization. However, in some problems the posterior
is diffuse and it is hard to interpret the sampled partitionings. In this
paper, we introduce novel statistics based on block sizes for representing
sample sets of partitionings and feature allocations. We develop an
element-based definition of entropy to quantify segmentation among their
elements. Then we propose a simple algorithm called entropy agglomeration (EA)
to summarize and visualize this information. Experiments on various infinite
mixture posteriors as well as a feature allocation dataset demonstrate that the
proposed statistics are useful in practice.
| [
"['Işık Barış Fidaner' 'Ali Taylan Cemgil']",
"I\\c{s}{\\i}k Bar{\\i}\\c{s} Fidaner and Ali Taylan Cemgil"
] |
cs.LG cs.AI cs.LO math.LO | null | 1310.0576 | null | null | http://arxiv.org/pdf/1310.0576v1 | 2013-10-02T06:06:02Z | 2013-10-02T06:06:02Z | Learning Lambek grammars from proof frames | In addition to their limpid interface with semantics, categorial grammars
enjoy another important property: learnability. This was first noticed by
Buszkowski and Penn and further studied by Kanazawa for Bar-Hillel categorial
grammars.
What about Lambek categorial grammars? In a previous paper we showed that
product-free Lambek grammars were learnable from structured sentences, the
structures being incomplete natural deductions. These grammars were shown to be
unlearnable from strings by Foret and Le Nir. In the present paper we show that
Lambek grammars, possibly with product, are learnable from proof frames that
are incomplete proof nets.
After a short reminder on grammatical inference à la Gold, we provide an
algorithm that learns Lambek grammars with product from proof frames and we
prove its convergence. We do so for 1-valued (also known as rigid) Lambek
grammars with product, since standard techniques can extend our result to
$k$-valued grammars. Because of the correspondence between cut-free proof nets
and normal natural deductions, our initial result on product-free Lambek
grammars can be recovered.
We are sad to dedicate the present paper to Philippe Darondeau, with whom we
started to study such questions in Rennes at the beginning of the millennium,
and who passed away prematurely.
We are glad to dedicate the present paper to Jim Lambek for his 90th birthday:
he is the living proof that research is an eternal learning process.
| [
"['Roberto Bonato' 'Christian Retoré']",
"Roberto Bonato and Christian Retor\\'e"
] |
stat.ML cs.LG stat.ME | null | 1310.0740 | null | null | http://arxiv.org/pdf/1310.0740v4 | 2014-04-07T09:42:58Z | 2013-10-02T15:29:28Z | Pseudo-Marginal Bayesian Inference for Gaussian Processes | The main challenges that arise when adopting Gaussian Process priors in
probabilistic modeling are how to carry out exact Bayesian inference and how to
account for uncertainty on model parameters when making model-based predictions
on out-of-sample data. Using probit regression as an illustrative working
example, this paper presents a general and effective methodology based on the
pseudo-marginal approach to Markov chain Monte Carlo that efficiently addresses
both of these issues. The results presented in this paper show improvements
over existing sampling methods to simulate from the posterior distribution over
the parameters defining the covariance function of the Gaussian Process prior.
This is particularly important as it offers a powerful tool to carry out full
Bayesian inference of Gaussian Process based hierarchical statistical models in
general. The results also demonstrate that Monte Carlo based integration of all
model parameters is actually feasible in this class of models providing a
superior quantification of uncertainty in predictions. Extensive comparisons
with respect to state-of-the-art probabilistic classifiers confirm this
assertion.
| [
"Maurizio Filippone and Mark Girolami",
"['Maurizio Filippone' 'Mark Girolami']"
] |
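The core of the pseudo-marginal approach is easy to state: run Metropolis-Hastings on the covariance hyperparameters, but accept or reject using an unbiased Monte Carlo estimate of the marginal likelihood in place of the intractable exact value. The sketch below is a minimal, self-contained illustration on a toy GP probit model, using a crude importance-sampling estimate from the prior and a flat prior on the log-lengthscale; it is not the paper's (more efficient) estimator.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Toy 1-d GP probit data: labels in {-1, +1}.
X = np.linspace(-2.0, 2.0, 30)[:, None]
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=30))

def gp_cov(ell):
    d2 = (X - X.T) ** 2                           # pairwise squared distances
    return np.exp(-0.5 * d2 / ell ** 2) + 1e-8 * np.eye(len(X))

def marg_lik_estimate(ell, n_draws=200):
    """Unbiased IS estimate of p(y | ell); latent functions drawn from the GP prior."""
    L = np.linalg.cholesky(gp_cov(ell))
    f = L @ rng.normal(size=(len(X), n_draws))    # prior draws of the latents
    return norm.cdf(y[:, None] * f).prod(axis=0).mean()

# Pseudo-marginal Metropolis-Hastings on the log-lengthscale (flat prior assumed).
log_ell, p_hat = 0.0, marg_lik_estimate(1.0)
chain = []
for _ in range(500):
    prop = log_ell + 0.3 * rng.normal()
    p_prop = marg_lik_estimate(np.exp(prop))
    if rng.uniform() < p_prop / p_hat:            # estimate is recycled, not refreshed
        log_ell, p_hat = prop, p_prop
    chain.append(np.exp(log_ell))
print("posterior mean lengthscale (toy):", round(float(np.mean(chain[100:])), 3))
```

Recycling the current state's likelihood estimate (rather than re-estimating it each iteration) is what keeps the chain's stationary distribution exactly the true posterior.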
cs.IT cs.LG math.IT math.NA math.ST stat.ML stat.TH | null | 1310.0807 | null | null | http://arxiv.org/pdf/1310.0807v5 | 2015-03-19T21:23:21Z | 2013-10-02T19:52:57Z | Exact and Stable Covariance Estimation from Quadratic Sampling via
Convex Programming | Statistical inference and information processing of high-dimensional data
often require efficient and accurate estimation of their second-order
statistics. With rapidly changing data, limited processing power and storage at
the acquisition devices, it is desirable to extract the covariance structure
from a single pass over the data and a small number of stored measurements. In
this paper, we explore a quadratic (or rank-one) measurement model which
imposes minimal memory requirements and low computational complexity during the
sampling process, and is shown to be optimal in preserving various
low-dimensional covariance structures. Specifically, four popular structural
assumptions of covariance matrices, namely low rank, Toeplitz low rank,
sparsity, and jointly rank-one and sparse structure, are investigated, while
recovery is achieved via convex relaxation paradigms for the respective
structure.
The proposed quadratic sampling framework has a variety of potential
applications including streaming data processing, high-frequency wireless
communication, phase space tomography and phase retrieval in optics, and
non-coherent subspace detection. Our method admits universally accurate
covariance estimation in the absence of noise, as soon as the number of
measurements exceeds the information theoretic limits. We also demonstrate the
robustness of this approach against noise and imperfect structural assumptions.
Our analysis is established upon a novel notion called the mixed-norm
restricted isometry property (RIP-$\ell_{2}/\ell_{1}$), as well as the
conventional RIP-$\ell_{2}/\ell_{2}$ for near-isotropic and bounded
measurements. In addition, our results improve upon the best-known phase
retrieval (including both dense and sparse signals) guarantees using PhaseLift
with a significantly simpler approach.
| [
"['Yuxin Chen' 'Yuejie Chi' 'Andrea Goldsmith']",
"Yuxin Chen and Yuejie Chi and Andrea Goldsmith"
] |
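To make the quadratic (rank-one) measurement model concrete: each measurement is $y_i = a_i^T \Sigma a_i = \langle a_i a_i^T, \Sigma\rangle$, i.e. linear in $\Sigma$, so a low-rank $\Sigma$ can be recovered by trace minimization over the PSD cone. The sketch below covers only the noiseless low-rank case via cvxpy; the paper treats the other structures (Toeplitz, sparse, jointly rank-one and sparse) with analogous convex programs.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)

n, r, m = 8, 1, 40                       # dimension, rank, number of measurements
U = rng.normal(size=(n, r))
Sigma_true = U @ U.T                     # ground-truth low-rank covariance

A = rng.normal(size=(m, n))              # sampling vectors a_i as rows
y = np.einsum("mi,ij,mj->m", A, Sigma_true, A)   # y_i = a_i^T Sigma a_i

Sigma = cp.Variable((n, n), PSD=True)
constraints = [cp.sum(cp.multiply(np.outer(a, a), Sigma)) == yi
               for a, yi in zip(A, y)]
prob = cp.Problem(cp.Minimize(cp.trace(Sigma)), constraints)
prob.solve()

err = (np.linalg.norm(Sigma.value - Sigma_true, "fro")
       / np.linalg.norm(Sigma_true, "fro"))
print(f"relative recovery error: {err:.2e}")
```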
stat.ML cs.LG cs.SY | 10.1109/JSTSP.2014.2336611 | 1310.0865 | null | null | http://arxiv.org/abs/1310.0865v2 | 2014-03-05T17:33:35Z | 2013-10-02T23:51:38Z | Electricity Market Forecasting via Low-Rank Multi-Kernel Learning | The smart grid vision entails advanced information technology and data
analytics to enhance the efficiency, sustainability, and economics of the power
grid infrastructure. Aligned to this end, modern statistical learning tools are
leveraged here for electricity market inference. Day-ahead price forecasting is
cast as a low-rank kernel learning problem. Uniquely exploiting the market
clearing process, congestion patterns are modeled as rank-one components in the
matrix of spatio-temporally varying prices. Through a novel nuclear norm-based
regularization, kernels across pricing nodes and hours can be systematically
selected. Even though market-wide forecasting is beneficial from a learning
perspective, it involves processing high-dimensional market data. The latter
becomes possible after devising a block-coordinate descent algorithm for
solving the non-convex optimization problem involved. The algorithm utilizes
results from block-sparse vector recovery and is guaranteed to converge to a
stationary point. Numerical tests on real data from the Midwest ISO (MISO)
market corroborate the prediction accuracy, computational efficiency, and the
interpretative merits of the developed approach over existing alternatives.
| [
"Vassilis Kekatos and Yu Zhang and Georgios B. Giannakis",
"['Vassilis Kekatos' 'Yu Zhang' 'Georgios B. Giannakis']"
] |
cs.LG cs.CE | null | 1310.0890 | null | null | http://arxiv.org/pdf/1310.0890v1 | 2013-10-03T03:53:22Z | 2013-10-03T03:53:22Z | Multiple Kernel Learning in the Primal for Multi-modal Alzheimer's
Disease Classification | To achieve effective and efficient detection of Alzheimer's disease (AD),
many machine learning methods have been introduced into this realm. However,
the general case of limited training samples, together with heterogeneous
feature representations, typically makes this problem challenging. In this work, we
propose a novel multiple kernel learning framework to combine multi-modal
features for AD classification, which is scalable and easy to implement.
Contrary to the usual way of solving the problem in the dual space, we look at
the optimization from a new perspective. By conducting Fourier transform on the
Gaussian kernel, we explicitly compute the mapping function, which leads to a
more straightforward solution of the problem in the primal space. Furthermore,
we impose the mixed $L_{21}$ norm constraint on the kernel weights, known as
the group lasso regularization, to enforce group sparsity among different
feature modalities. This effectively performs feature modality selection,
while at the same time exploiting complementary information among different
kernels. Therefore it is able to extract the most discriminative features for
classification. Experiments on the ADNI data set demonstrate the effectiveness
of the proposed method.
| [
"Fayao Liu, Luping Zhou, Chunhua Shen, Jianping Yin",
"['Fayao Liu' 'Luping Zhou' 'Chunhua Shen' 'Jianping Yin']"
] |
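The "explicit mapping" obtained by Fourier-transforming the Gaussian kernel is, in spirit, the random Fourier feature construction: $k(x, z) \approx \phi(x)^T \phi(z)$ with $\phi$ built from random projections. The sketch below shows this primal-space approximation for one modality; in the multi-modal setting one would concatenate per-modality feature blocks and apply the $L_{21}$ (group lasso) penalty across blocks, as the abstract describes. Data and parameters are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(3)

class RFFMap:
    """Explicit feature map with E[phi(x)^T phi(z)] = exp(-gamma * ||x - z||^2)."""
    def __init__(self, d, n_features=2000, gamma=1.0):
        self.W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, n_features))
        self.b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
        self.scale = np.sqrt(2.0 / n_features)

    def __call__(self, X):
        return self.scale * np.cos(X @ self.W + self.b)

# Check the kernel approximation on random points.
gamma = 0.5
X = rng.normal(size=(100, 5))
phi = RFFMap(d=5, n_features=2000, gamma=gamma)
K_approx = phi(X) @ phi(X).T
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-gamma * d2)
print("max abs kernel error:", float(np.abs(K_approx - K_exact).max()))
```

With the map made explicit, any linear (primal) solver replaces the dual kernel machine, which is what makes the approach scalable.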
cs.CV cs.LG | null | 1310.0900 | null | null | http://arxiv.org/pdf/1310.0900v1 | 2013-10-03T05:50:40Z | 2013-10-03T05:50:40Z | Efficient pedestrian detection by directly optimizing the partial area
under the ROC curve | Many typical applications of object detection operate within a prescribed
false-positive range. In this situation the performance of a detector should be
assessed on the basis of the area under the ROC curve over that range, rather
than over the full curve, as the performance outside the range is irrelevant.
This measure is known as the partial area under the ROC curve (pAUC).
Effective cascade-based classification, for example, depends on training node
classifiers that achieve the maximal detection rate at a moderate false
positive rate, e.g., around 40% to 50%. We propose a novel ensemble learning
method which achieves a maximal detection rate at a user-defined range of false
positive rates by directly optimizing the partial AUC using structured
learning. By optimizing for different ranges of false positive rates, the
proposed method can be used to train either a single strong classifier or a
node classifier forming part of a cascade classifier. Experimental results on
both synthetic and real-world data sets demonstrate the effectiveness of our
approach, and we show that it is possible to train state-of-the-art pedestrian
detectors using the proposed structured ensemble learning method.
| [
"['Sakrapee Paisitkriangkrai' 'Chunhua Shen' 'Anton van den Hengel']",
"Sakrapee Paisitkriangkrai, Chunhua Shen, Anton van den Hengel"
] |
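For reference, the pAUC metric itself is straightforward to evaluate: restrict the ROC curve to a false-positive range $[\alpha, \beta]$ and integrate the true-positive rate over it. A small sketch follows; the paper's contribution is the structured ensemble learning that *optimizes* this quantity, which is not shown here.

```python
import numpy as np
from sklearn.metrics import roc_curve

def partial_auc(y_true, scores, fpr_low=0.0, fpr_high=0.5):
    """Normalized area under the ROC curve restricted to FPR in [fpr_low, fpr_high]."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    grid = np.linspace(fpr_low, fpr_high, 1000)
    # Average interpolated TPR over the FPR range = normalized pAUC.
    return float(np.interp(grid, fpr, tpr).mean())

rng = np.random.default_rng(4)
y = np.r_[np.ones(500), np.zeros(500)]
scores = np.r_[rng.normal(1.0, 1.0, 500), rng.normal(0.0, 1.0, 500)]
print(f"pAUC (normalized) on FPR in [0, 0.5]: {partial_auc(y, scores):.3f}")
```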
stat.ME cs.DS cs.IT cs.LG math.IT | null | 1310.1076 | null | null | http://arxiv.org/pdf/1310.1076v1 | 2013-10-03T19:48:44Z | 2013-10-03T19:48:44Z | Compressed Counting Meets Compressed Sensing | Compressed sensing (sparse signal recovery) has been a popular and important
research topic in recent years. By observing that natural signals are often
nonnegative, we propose a new framework for nonnegative signal recovery using
Compressed Counting (CC). CC is a technique built on maximally-skewed p-stable
random projections originally developed for data stream computations. Our
recovery procedure is computationally very efficient in that it requires only
one linear scan of the coordinates. Our analysis demonstrates that, when
$0 < p \leq 0.5$, it suffices to use $M = O\left(C/\epsilon^{p} \log N\right)$ measurements so that all
coordinates will be recovered within $\epsilon$ additive precision, in one scan of the
coordinates. The constant $C = 1$ when $p \to 0$ and $C = \pi/2$ when $p = 0.5$. In particular,
when $p \to 0$ the required number of measurements is essentially $M = K \log N$, where $K$
is the number of nonzero coordinates of the signal.
| [
"Ping Li, Cun-Hui Zhang, Tong Zhang",
"['Ping Li' 'Cun-Hui Zhang' 'Tong Zhang']"
] |
cs.LG | null | 1310.1177 | null | null | http://arxiv.org/pdf/1310.1177v2 | 2016-05-06T23:35:13Z | 2013-10-04T06:18:59Z | Clustering on Multiple Incomplete Datasets via Collective Kernel
Learning | Multiple datasets containing different types of features may be available for
a given task. For instance, users' profiles can be used to group users for
recommendation systems. In addition, a model can also use users' historical
behaviors and credit history to group users. Each dataset contains different
information and suffices for learning. A number of clustering algorithms for
multiple datasets have been proposed during the past few years. These algorithms
assume that at least one dataset is complete. To the best of our knowledge, none
of the previous methods is applicable when no complete dataset is available.
However, in reality, there are many situations where no dataset is complete. For
instance, when building a recommendation system, some new users may not have a
profile or historical behaviors, while others may not have a credit history.
Hence, no available dataset is complete. In order to solve this problem, we
propose an approach called Collective Kernel Learning to infer hidden sample
similarity from multiple incomplete datasets. The idea is to collectively
complete the kernel matrices of the incomplete datasets by optimizing the
alignment of the shared instances of the datasets. Furthermore, a clustering
algorithm is proposed based on the kernel matrix. The experiments on both
synthetic and real datasets demonstrate the effectiveness of the proposed
approach. The proposed clustering algorithm outperforms the comparison
algorithms by as much as two times in normalized mutual information.
| [
"['Weixiang Shao' 'Xiaoxiao Shi' 'Philip S. Yu']",
"Weixiang Shao (1), Xiaoxiao Shi (1) and Philip S. Yu (1) ((1)\n University of Illinois at Chicago)"
] |
stat.ML cs.AI cs.LG | 10.1007/s10618-014-0355-0 | 1310.1187 | null | null | http://arxiv.org/abs/1310.1187v1 | 2013-10-04T07:29:08Z | 2013-10-04T07:29:08Z | Labeled Directed Acyclic Graphs: a generalization of context-specific
independence in directed graphical models | We introduce a novel class of labeled directed acyclic graph (LDAG) models
for finite sets of discrete variables. LDAGs generalize earlier proposals for
allowing local structures in the conditional probability distribution of a
node, such that unrestricted label sets determine which edges can be deleted
from the underlying directed acyclic graph (DAG) for a given context. Several
properties of these models are derived, including a generalization of the
concept of Markov equivalence classes. Efficient Bayesian learning of LDAGs is
enabled by introducing an LDAG-based factorization of the Dirichlet prior for
the model parameters, such that the marginal likelihood can be calculated
analytically. In addition, we develop a novel prior distribution for the model
structures that can appropriately penalize a model for its labeling complexity.
A non-reversible Markov chain Monte Carlo algorithm combined with a greedy hill
climbing approach is used for illustrating the useful properties of LDAG models
for both real and synthetic data sets.
| [
"['Johan Pensar' 'Henrik Nyman' 'Timo Koski' 'Jukka Corander']",
"Johan Pensar, Henrik Nyman, Timo Koski and Jukka Corander"
] |
cs.NE cs.LG physics.data-an | 10.1007/s00500-017-2525-7 | 1310.1250 | null | null | http://arxiv.org/abs/1310.1250v1 | 2013-08-15T10:16:49Z | 2013-08-15T10:16:49Z | Learning ambiguous functions by neural networks | It is not, in general, possible to have access to all variables that
determine the behavior of a system. Having identified a number of variables
whose values can be accessed, there may still be hidden variables which
influence the dynamics of the system. The result is model ambiguity in the
sense that, for the same (or very similar) input values, different target
outputs may be observed. In addition, the degree of ambiguity may
vary widely across the whole range of input values. Thus, to evaluate the
accuracy of a model it is of utmost importance to create a method to obtain the
degree of reliability of each output result. In this paper we present such a
scheme composed of two coupled artificial neural networks: the first one being
responsible for outputting the predicted value, whereas the other evaluates the
reliability of the output, which is learned from the error values of the first
one. As an illustration, the scheme is applied to a model for tracking slopes
in a straw chamber and to a credit scoring model.
| [
"Rui Ligeiro and R. Vilela Mendes",
"['Rui Ligeiro' 'R. Vilela Mendes']"
] |
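The two-network scheme lends itself to a compact sketch: fit a first regressor to the target, then fit a second one to the first's absolute errors, so that at test time each prediction ships with a learned reliability estimate. The synthetic data and MLP sizes below are illustrative, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

# Ambiguous data: a hidden variable h makes y multi-valued for the same x.
X = rng.uniform(-1, 1, size=(2000, 1))
h = rng.integers(0, 2, size=2000)                # hidden, unobserved
y = np.sin(3 * X[:, 0]) + h * (np.abs(X[:, 0]) > 0.5) + 0.05 * rng.normal(size=2000)

predictor = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
predictor.fit(X, y)

# Second network learns the magnitude of the first one's errors.
abs_err = np.abs(y - predictor.predict(X))
reliability = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
reliability.fit(X, abs_err)

x_test = np.array([[0.1], [0.8]])
print("predictions:     ", predictor.predict(x_test))
print("expected |error|:", reliability.predict(x_test))  # larger in the ambiguous region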
stat.ML cs.LG | 10.1214/15-AOAS812 | 1310.1363 | null | null | http://arxiv.org/abs/1310.1363v3 | 2015-09-15T07:57:08Z | 2013-10-04T18:34:54Z | Weakly supervised clustering: Learning fine-grained signals from coarse
labels | Consider a classification problem where we do not have access to labels for
individual training examples, but only have average labels over subpopulations.
We give practical examples of this setup and show how such a classification
task can usefully be analyzed as a weakly supervised clustering problem. We
propose three approaches to solving the weakly supervised clustering problem,
including a latent variables model that performs well in our experiments. We
illustrate our methods on an analysis of aggregated elections data and an
industry data set that was the original motivation for this research.
| [
"Stefan Wager, Alexander Blocker, Niall Cardin",
"['Stefan Wager' 'Alexander Blocker' 'Niall Cardin']"
] |
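A minimal version of learning from average labels over subpopulations: fit a logistic model so that the mean predicted probability within each bag matches the bag's observed average label. This gradient-descent sketch is a simple stand-in, not the latent-variable model the paper proposes; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# Individuals with hidden binary labels; we only observe per-bag average labels.
n_bags, bag_size, d = 50, 40, 3
w_true = np.array([2.0, -1.0, 0.5])
X = rng.normal(size=(n_bags, bag_size, d))
p_ind = 1 / (1 + np.exp(-(X @ w_true)))
bag_means = (rng.uniform(size=p_ind.shape) < p_ind).mean(axis=1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

w, lr = np.zeros(d), 0.5
for _ in range(3000):
    p = sigmoid(X @ w)                               # (n_bags, bag_size)
    resid = p.mean(axis=1) - bag_means               # per-bag residuals
    # Gradient of sum_b (mean_i sigmoid(x_bi . w) - m_b)^2.
    grad = 2.0 / bag_size * np.einsum("b,bi,bid->d", resid, p * (1 - p), X)
    w -= lr * grad

print("true w:     ", w_true)
print("recovered w:", np.round(w, 2))   # approximate, limited by bag-level noise
```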
stat.ML cs.LG stat.ME | null | 1310.1404 | null | null | http://arxiv.org/pdf/1310.1404v1 | 2013-10-04T20:19:56Z | 2013-10-04T20:19:56Z | Sequential Monte Carlo Bandits | In this paper we propose a flexible and efficient framework for handling
multi-armed bandits, combining sequential Monte Carlo algorithms with
hierarchical Bayesian modeling techniques. The framework naturally encompasses
restless bandits, contextual bandits, and other bandit variants under a single
inferential model. Despite the model's generality, we propose efficient Monte
Carlo algorithms to make inference scalable, based on recent developments in
sequential Monte Carlo methods. Through two simulation studies, the framework
is shown to outperform other empirical methods, while also naturally scaling to
more complex problems with which existing approaches cannot cope. Additionally,
we successfully apply our framework to online video-based advertising
recommendation, and show its increased efficacy compared to current
state-of-the-art bandit algorithms.
| [
"['Michael Cherkassky' 'Luke Bornn']",
"Michael Cherkassky and Luke Bornn"
] |
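One way to see the combination the abstract describes: maintain a particle approximation of each arm's posterior over a drifting reward parameter, propagate the particles through the assumed dynamics each round, and act by Thompson sampling. The sketch below assumes Gaussian random-walk reward means, which is far simpler than the paper's hierarchical Bayesian model.

```python
import numpy as np

rng = np.random.default_rng(7)

n_arms, n_particles, T = 3, 500, 1000
drift, obs_sd = 0.03, 0.5

mu = rng.normal(0, 1, n_arms)                         # true (drifting) arm means
particles = rng.normal(0, 1, (n_arms, n_particles))   # posterior particles per arm
regret = []

for t in range(T):
    # All posteriors drift each round (restless arms).
    particles += rng.normal(0, drift, particles.shape)
    # Thompson sampling: draw one particle per arm, pull the best.
    draws = particles[np.arange(n_arms), rng.integers(0, n_particles, n_arms)]
    arm = int(np.argmax(draws))
    r = mu[arm] + obs_sd * rng.normal()
    # Particle-filter update for the pulled arm: weight by likelihood, resample.
    w = np.exp(-0.5 * ((r - particles[arm]) / obs_sd) ** 2)
    w /= w.sum()
    particles[arm] = particles[arm][rng.choice(n_particles, n_particles, p=w)]
    regret.append(mu.max() - mu[arm])
    mu += rng.normal(0, drift, n_arms)                # the world moves on

print(f"average per-round regret: {np.mean(regret):.3f}")
```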
stat.ML cs.LG | null | 1310.1415 | null | null | http://arxiv.org/pdf/1310.1415v1 | 2013-10-04T22:33:35Z | 2013-10-04T22:33:35Z | Narrowing the Gap: Random Forests In Theory and In Practice | Despite widespread interest and practical use, the theoretical properties of
random forests are still not well understood. In this paper we contribute to
this understanding in two ways. We present a new theoretically tractable
variant of random regression forests and prove that our algorithm is
consistent. We also provide an empirical evaluation, comparing our algorithm
and other theoretically tractable random forest models to the random forest
algorithm used in practice. Our experiments provide insight into the relative
importance of different simplifications that theoreticians have made to obtain
tractable models for analysis.
| [
"Misha Denil, David Matheson, Nando de Freitas",
"['Misha Denil' 'David Matheson' 'Nando de Freitas']"
] |
math.NA cs.LG stat.ML | null | 1310.1502 | null | null | http://arxiv.org/pdf/1310.1502v3 | 2014-05-15T16:32:14Z | 2013-10-05T18:09:50Z | Randomized Approximation of the Gram Matrix: Exact Computation and
Probabilistic Bounds | Given a real matrix A with n columns, the problem is to approximate the Gram
product AA^T by c << n weighted outer products of columns of A. Necessary and
sufficient conditions for the exact computation of AA^T (in exact arithmetic)
from c >= rank(A) columns depend on the right singular vector matrix of A. For
a Monte-Carlo matrix multiplication algorithm by Drineas et al. that samples
outer products, we present probabilistic bounds for the 2-norm relative error
due to randomization. The bounds depend on the stable rank or the rank of A,
but not on the matrix dimensions. Numerical experiments illustrate that the
bounds are informative, even for stringent success probabilities and matrices
of small dimension. We also derive bounds for the smallest singular value and
the condition number of matrices obtained by sampling rows from orthonormal
matrices.
| [
"John T. Holodnak, Ilse C. F. Ipsen",
"['John T. Holodnak' 'Ilse C. F. Ipsen']"
] |
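The sampling scheme analyzed here is easy to reproduce: draw $c$ columns of $A$ with probabilities proportional to their squared norms and rescale, so the sum of weighted outer products is an unbiased estimator of $AA^T$ (the Drineas et al. scheme the abstract refers to). A minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(8)

m, n, c = 50, 2000, 200
A = rng.normal(size=(m, n)) * rng.uniform(0.1, 2.0, size=n)  # uneven column norms

# Sampling probabilities proportional to squared column norms.
col_norms2 = (A ** 2).sum(axis=0)
p = col_norms2 / col_norms2.sum()

idx = rng.choice(n, size=c, p=p)
S = A[:, idx] / np.sqrt(c * p[idx])   # scaled sampled columns
approx = S @ S.T                      # sum of weighted outer products

exact = A @ A.T
rel_err = np.linalg.norm(approx - exact, 2) / np.linalg.norm(exact, 2)
print(f"2-norm relative error with c={c} of n={n} columns: {rel_err:.3f}")
```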
cs.LG stat.ML | null | 1310.1518 | null | null | http://arxiv.org/pdf/1310.1518v1 | 2013-10-05T20:49:37Z | 2013-10-05T20:49:37Z | Contraction Principle based Robust Iterative Algorithms for Machine
Learning | Iterative algorithms are ubiquitous in the field of data mining. Widely known
examples of such algorithms are the least mean square (LMS) algorithm and the
backpropagation algorithm for neural networks. Our contribution in this paper is
an improvement upon these iterative algorithms in terms of their respective
performance metrics and robustness. This improvement is achieved by a new
scaling factor that multiplies the error term. Our analysis shows that,
in essence, we are minimizing the corresponding LASSO cost function, which is
the reason for its increased robustness. We also give closed-form expressions
for the number of iterations for convergence and the MSE floor of the original
cost function for a minimum targeted value of the L1 norm. As a concluding
theme based on the stochastic subgradient algorithm, we give a comparison
between the well-known Dantzig selector and our algorithm based on the contraction
principle. By these simulations we attempt to show the optimality of our
approach for any widely used parent iterative optimization problem.
| [
"['Rangeet Mitra' 'Amit Kumar Mishra']",
"Rangeet Mitra, Amit Kumar Mishra"
] |
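The abstract's specific scaling factor is defined only in the paper; as a stand-in, the sketch below contrasts plain LMS with an l1-regularized variant whose update follows the subgradient of the corresponding LASSO cost, which is the kind of robustified iteration being described. Problem sizes and the heavy-tailed noise are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

n, d = 5000, 20
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 0.8]                           # sparse system
X = rng.normal(size=(n, d))
y = X @ w_true + 0.3 * rng.standard_t(df=2, size=n)     # heavy-tailed noise

def lms(lam=0.0, mu=0.01):
    """LMS; for lam > 0 the update is a subgradient step on the LASSO cost."""
    w = np.zeros(d)
    for x_t, y_t in zip(X, y):
        e = y_t - x_t @ w
        w += mu * e * x_t - mu * lam * np.sign(w)
    return w

for lam in (0.0, 0.05):
    w_hat = lms(lam)
    print(f"lambda={lam}: parameter error {np.linalg.norm(w_hat - w_true):.3f}")
```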
stat.ME cs.LG stat.ML | 10.1214/14-AOS1260 | 1310.1533 | null | null | http://arxiv.org/abs/1310.1533v2 | 2014-12-01T12:31:45Z | 2013-10-06T03:12:34Z | CAM: Causal additive models, high-dimensional order search and penalized
regression | We develop estimation for potentially high-dimensional additive structural
equation models. A key component of our approach is to decouple order search
among the variables from feature or edge selection in a directed acyclic graph
encoding the causal structure. We show that the former can be done with
nonregularized (restricted) maximum likelihood estimation while the latter can
be efficiently addressed using sparse regression techniques. Thus, we
substantially simplify the problem of structure search and estimation for an
important class of causal models. We establish consistency of the (restricted)
maximum likelihood estimator for low- and high-dimensional scenarios, and we
also allow for misspecification of the error distribution. Furthermore, we
develop an efficient computational algorithm which can deal with many
variables, and the new method's accuracy and performance is illustrated on
simulated and real data.
| [
"['Peter Bühlmann' 'Jonas Peters' 'Jan Ernest']",
"Peter B\\\"uhlmann, Jonas Peters, Jan Ernest"
] |
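The decoupling in CAM can be illustrated in a much simplified form: first score variable orderings by the out-of-sample fit of regressing each variable on its predecessors, then select edges along the best order with sparse regression. The paper uses nonparametric additive fits and greedy search; below, brute-force order search with random-forest regressions stands in, on a tiny 3-variable nonlinear chain (linear Gaussian data would not identify the order, hence the nonlinearity).

```python
import numpy as np
from itertools import permutations
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(10)

# Nonlinear additive ground-truth chain: x0 -> x1 -> x2.
n = 600
x0 = rng.normal(size=n)
x1 = np.sin(2.0 * x0) + 0.3 * rng.normal(size=n)
x2 = np.sin(2.0 * x1) + 0.3 * rng.normal(size=n)
X = np.c_[x0, x1, x2]

def order_score(order):
    """Sum of log out-of-fold residual variances along a candidate order."""
    score = np.log(X[:, order[0]].var())
    for k in range(1, len(order)):
        target = X[:, order[k]]
        pred = cross_val_predict(
            RandomForestRegressor(n_estimators=100, random_state=0),
            X[:, order[:k]], target, cv=3)
        score += np.log((target - pred).var())
    return score

best = min(permutations(range(3)), key=order_score)
print("estimated causal order:", best)   # expect (0, 1, 2)
```

Edge selection along the recovered order would then proceed by sparse (additive) regression of each variable on its predecessors, the second stage described in the abstract.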