categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | list
---|---|---|---|---|---|---|---|---|---|---|
cs.IT cs.LG math.IT | 10.1016/j.jmp.2015.01.001 | 1406.7424 | null | null | http://arxiv.org/abs/1406.7424v3 | 2015-01-23T16:28:08Z | 2014-06-28T17:58:59Z | Complexity Measures and Concept Learning | The nature of concept learning is a core question in cognitive science.
Theories must account for the relative difficulty of acquiring different
concepts by supervised learners. For a canonical set of six category types, two
distinct orderings of classification difficulty have been found. One ordering,
which we call paradigm-specific, occurs when adult human learners classify
objects with easily distinguishable characteristics such as size, shape, and
shading. The general order occurs in all other known cases: when adult humans
classify objects with characteristics that are not readily distinguished (e.g.,
brightness, saturation, hue); for children and monkeys; and when categorization
difficulty is extrapolated from errors in identification learning. The
paradigm-specific order was found to be predictable mathematically by measuring
the logical complexity of tasks, i.e., how concisely the solution can be
represented by logical rules.
However, logical complexity explains only the paradigm-specific order but not
the general order. Here we propose a new difficulty measurement, information
complexity, that calculates the amount of uncertainty remaining when a subset
of the dimensions is specified. This measurement is based on Shannon entropy.
We show that, when the metric extracts minimal uncertainties, this new
measurement predicts the paradigm-specific order for the canonical six category
types, and when the metric extracts average uncertainties, this new measurement
predicts the general order. Moreover, for learning category types beyond the
canonical six, we find that the minimal-uncertainty formulation correctly
predicts the paradigm-specific order as well as or better than existing metrics
(Boolean complexity and GIST) in most cases.
| [
"Andreas D. Pape, Kenneth J. Kurtz, Hiroki Sayama",
"['Andreas D. Pape' 'Kenneth J. Kurtz' 'Hiroki Sayama']"
]
|
cs.LG | null | 1406.7429 | null | null | http://arxiv.org/pdf/1406.7429v1 | 2014-06-28T18:59:44Z | 2014-06-28T18:59:44Z | Comparison of SVM Optimization Techniques in the Primal | This paper examines the efficacy of different optimization techniques in a
primal formulation of a support vector machine (SVM). Three main techniques are
compared. The dataset used to compare all three techniques was the Sentiment
Analysis on Movie Reviews dataset, from kaggle.com.
| [
"['Jonathan Katzman' 'Diane Duros']",
"Jonathan Katzman and Diane Duros"
]
|
cs.LG cs.AI stat.ML | null | 1406.7443 | null | null | http://arxiv.org/pdf/1406.7443v4 | 2017-01-31T05:32:13Z | 2014-06-28T21:50:56Z | Efficient Learning in Large-Scale Combinatorial Semi-Bandits | A stochastic combinatorial semi-bandit is an online learning problem where at
each step a learning agent chooses a subset of ground items subject to
combinatorial constraints, and then observes stochastic weights of these items
and receives their sum as a payoff. In this paper, we consider efficient
learning in large-scale combinatorial semi-bandits with linear generalization,
and as a solution, propose two learning algorithms called Combinatorial Linear
Thompson Sampling (CombLinTS) and Combinatorial Linear UCB (CombLinUCB). Both
algorithms are computationally efficient as long as the offline version of the
combinatorial problem can be solved efficiently. We establish that CombLinTS
and CombLinUCB are also provably statistically efficient under reasonable
assumptions, by developing regret bounds that are independent of the problem
scale (number of items) and sublinear in time. We also evaluate CombLinTS on a
variety of problems with thousands of items. Our experiment results demonstrate
that CombLinTS is scalable, robust to the choice of algorithm parameters, and
significantly outperforms the best of our baselines.
| [
"Zheng Wen, Branislav Kveton, and Azin Ashkan",
"['Zheng Wen' 'Branislav Kveton' 'Azin Ashkan']"
]
|
cs.CV cs.LG | null | 1406.7444 | null | null | http://arxiv.org/pdf/1406.7444v1 | 2014-06-28T21:56:31Z | 2014-06-28T21:56:31Z | Learning to Deblur | We describe a learning-based approach to blind image deconvolution. It uses a
deep layered architecture, parts of which are borrowed from recent work on
neural network learning, and parts of which incorporate computations that are
specific to image deconvolution. The system is trained end-to-end on a set of
artificially generated training examples, enabling competitive performance in
blind deconvolution, both with respect to quality and runtime.
| [
"Christian J. Schuler, Michael Hirsch, Stefan Harmeling, Bernhard\n Sch\\\"olkopf",
"['Christian J. Schuler' 'Michael Hirsch' 'Stefan Harmeling'\n 'Bernhard Schölkopf']"
]
|
cs.LG | null | 1406.7445 | null | null | http://arxiv.org/pdf/1406.7445v1 | 2014-06-28T22:13:52Z | 2014-06-28T22:13:52Z | Contrastive Feature Induction for Efficient Structure Learning of
Conditional Random Fields | Structure learning of Conditional Random Fields (CRFs) can be cast into an
L1-regularized optimization problem. To avoid optimizing over a fully linked
model, gain-based or gradient-based feature selection methods start from an
empty model and incrementally add top ranked features to it. However, for
high-dimensional problems like statistical relational learning, training time
of these incremental methods can be dominated by the cost of evaluating the
gain or gradient of a large collection of candidate features. In this study we
propose a fast feature evaluation algorithm called Contrastive Feature
Induction (CFI), which only evaluates a subset of features that involve both
variables with high signals (deviation from mean) and variables with high
errors (residue). We prove that the gradient of candidate features can be
represented solely as a function of signals and errors, and that CFI is an
efficient approximation of gradient-based evaluation methods. Experiments on
synthetic and real data sets show competitive learning speed and accuracy of
CFI on pairwise CRFs, compared to state-of-the-art structure learning methods
such as full optimization over all features, and Grafting.
| [
"Ni Lao, Jun Zhu",
"['Ni Lao' 'Jun Zhu']"
]
|
cs.LG | null | 1406.7447 | null | null | http://arxiv.org/pdf/1406.7447v2 | 2015-03-06T13:24:33Z | 2014-06-28T23:45:30Z | Unimodal Bandits without Smoothness | We consider stochastic bandit problems with a continuous set of arms and
where the expected reward is a continuous and unimodal function of the arm. No
further assumption is made regarding the smoothness and the structure of the
expected reward function. For these problems, we propose the Stochastic
Pentachotomy (SP) algorithm, and derive finite-time upper bounds on its regret
and optimization error. In particular, we show that, for any expected reward
function $\mu$ that behaves as $\mu(x)=\mu(x^\star)-C|x-x^\star|^\xi$ locally
around its maximizer $x^\star$ for some $\xi, C>0$, the SP algorithm is
order-optimal. Namely its regret and optimization error scale as
$O(\sqrt{T\log(T)})$ and $O(\sqrt{\log(T)/T})$, respectively, when the time
horizon $T$ grows large. These scalings are achieved without the knowledge of
$\xi$ and $C$. Our algorithm is based on asymptotically optimal sequential
statistical tests used to successively trim an interval that contains the best
arm with high probability. To our knowledge, the SP algorithm constitutes the
first sequential arm selection rule that achieves a regret and optimization
error scaling as $O(\sqrt{T})$ and $O(1/\sqrt{T})$, respectively, up to a
logarithmic factor for non-smooth expected reward functions, as well as for
smooth functions with unknown smoothness.
| [
"['Richard Combes' 'Alexandre Proutiere']",
"Richard Combes and Alexandre Proutiere"
]
|
stat.ML cs.LG | null | 1406.7498 | null | null | http://arxiv.org/pdf/1406.7498v3 | 2015-03-31T07:37:46Z | 2014-06-29T12:34:45Z | Thompson Sampling for Learning Parameterized Markov Decision Processes | We consider reinforcement learning in parameterized Markov Decision Processes
(MDPs), where the parameterization may induce correlation across transition
probabilities or rewards. Consequently, observing a particular state transition
might yield useful information about other, unobserved, parts of the MDP. We
present a version of Thompson sampling for parameterized reinforcement learning
problems, and derive a frequentist regret bound for priors over general
parameter spaces. The result shows that the number of instants where suboptimal
actions are chosen scales logarithmically with time, with high probability. It
holds for prior distributions that put significant probability near the true
model, without any additional, specific closed-form structure such as conjugate
or product-form priors. The constant factor in the logarithmic scaling encodes
the information complexity of learning the MDP in terms of the Kullback-Leibler
geometry of the parameter space.
| [
"['Aditya Gopalan' 'Shie Mannor']",
"Aditya Gopalan, Shie Mannor"
]
|
stat.ML cs.LG | null | 1406.7758 | null | null | http://arxiv.org/pdf/1406.7758v1 | 2014-06-30T14:35:58Z | 2014-06-30T14:35:58Z | Theoretical Analysis of Bayesian Optimisation with Unknown Gaussian
Process Hyper-Parameters | Bayesian optimisation has gained great popularity as a tool for optimising
the parameters of machine learning algorithms and models. Somewhat ironically,
setting up the hyper-parameters of Bayesian optimisation methods is notoriously
hard. While reasonable practical solutions have been advanced, they can often
fail to find the best optima. Surprisingly, there is little theoretical
analysis of this crucial problem in the literature. To address this, we derive
a cumulative regret bound for Bayesian optimisation with Gaussian processes and
unknown kernel hyper-parameters in the stochastic setting. The bound, which
applies to the expected improvement acquisition function and sub-Gaussian
observation noise, provides us with guidelines on how to design hyper-parameter
estimation methods. A simple simulation demonstrates the importance of
following these guidelines.
| [
"['Ziyu Wang' 'Nando de Freitas']",
"Ziyu Wang, Nando de Freitas"
]
|
cs.CL cs.LG cs.NE stat.ML | null | 1406.7806 | null | null | http://arxiv.org/pdf/1406.7806v2 | 2015-01-20T07:44:15Z | 2014-06-30T16:42:25Z | Building DNN Acoustic Models for Large Vocabulary Speech Recognition | Deep neural networks (DNNs) are now a central component of nearly all
state-of-the-art speech recognition systems. Building neural network acoustic
models requires several design decisions including network architecture, size,
and training loss function. This paper offers an empirical investigation on
which aspects of DNN acoustic model design are most important for speech
recognition system performance. We report DNN classifier performance and final
speech recognizer word error rates, and compare DNNs using several metrics to
quantify factors influencing differences in task performance. Our first set of
experiments uses the standard Switchboard benchmark corpus, which contains
approximately 300 hours of conversational telephone speech. We compare standard
DNNs to convolutional networks, and present the first experiments using
locally-connected, untied neural networks for acoustic modeling. We
additionally build systems on a corpus of 2,100 hours of training data by
combining the Switchboard and Fisher corpora. This larger corpus allows us to
more thoroughly examine performance of large DNN models -- with up to ten times
more parameters than those typically used in speech recognition systems. Our
results suggest that a relatively simple DNN architecture and optimization
technique produces strong results. These findings, along with previous work,
help establish a set of best practices for building DNN hybrid speech
recognition systems with maximum likelihood training. Our experiments in DNN
optimization additionally serve as a case study for training DNNs with
discriminative loss functions for speech tasks, as well as DNN classifiers more
generally.
| [
"['Andrew L. Maas' 'Peng Qi' 'Ziang Xie' 'Awni Y. Hannun'\n 'Christopher T. Lengerich' 'Daniel Jurafsky' 'Andrew Y. Ng']",
"Andrew L. Maas, Peng Qi, Ziang Xie, Awni Y. Hannun, Christopher T.\n Lengerich, Daniel Jurafsky and Andrew Y. Ng"
]
|
cs.LG cs.SI stat.ML | null | 1406.7842 | null | null | http://arxiv.org/pdf/1406.7842v3 | 2016-02-19T22:12:47Z | 2014-06-30T18:33:59Z | Learning Laplacian Matrix in Smooth Graph Signal Representations | The construction of a meaningful graph plays a crucial role in the success of
many graph-based representations and algorithms for handling structured data,
especially in the emerging field of graph signal processing. However, a
meaningful graph is not always readily available from the data, nor easy to
define depending on the application domain. In particular, it is often
desirable in graph signal processing applications that a graph is chosen such
that the data admit certain regularity or smoothness on the graph. In this
paper, we address the problem of learning graph Laplacians, which is equivalent
to learning graph topologies, such that the input data form graph signals with
smooth variations on the resulting topology. To this end, we adopt a factor
analysis model for the graph signals and impose a Gaussian probabilistic prior
on the latent variables that control these signals. We show that the Gaussian
prior leads to an efficient representation that favors the smoothness property
of the graph signals. We then propose an algorithm for learning graphs that
enforces this property and is based on minimizing the variations of the signals
on the learned graph. Experiments on both synthetic and real world data
demonstrate that the proposed graph learning framework can efficiently infer
meaningful graph topologies from signal observations under the smoothness
prior.
| [
"Xiaowen Dong, Dorina Thanou, Pascal Frossard, Pierre Vandergheynst",
"['Xiaowen Dong' 'Dorina Thanou' 'Pascal Frossard' 'Pierre Vandergheynst']"
]
|
stat.ML cs.CE cs.LG | 10.1007/978-3-319-53070-3_2 | 1406.7865 | null | null | http://arxiv.org/abs/1406.7865v4 | 2014-11-18T14:18:42Z | 2014-06-30T19:34:23Z | Simple connectome inference from partial correlation statistics in
calcium imaging | In this work, we propose a simple yet effective solution to the problem of
connectome inference in calcium imaging data. The proposed algorithm consists
of two steps. First, processing the raw signals to detect neural peak
activities. Second, inferring the degree of association between neurons from
partial correlation statistics. This paper summarises the methodology that led
us to win the Connectomics Challenge, proposes a simplified version of our
method, and finally compares our results with those of other inference
methods.
| [
"Antonio Sutera, Arnaud Joly, Vincent Fran\\c{c}ois-Lavet, Zixiao Aaron\n Qiu, Gilles Louppe, Damien Ernst and Pierre Geurts",
"['Antonio Sutera' 'Arnaud Joly' 'Vincent François-Lavet'\n 'Zixiao Aaron Qiu' 'Gilles Louppe' 'Damien Ernst' 'Pierre Geurts']"
]
|
cs.NA cs.LG math.ST stat.TH | null | 1407.0013 | null | null | http://arxiv.org/pdf/1407.0013v1 | 2014-06-30T12:19:17Z | 2014-06-30T12:19:17Z | Relevance Singular Vector Machine for low-rank matrix sensing | In this paper we develop a new Bayesian inference method for low rank matrix
reconstruction. We call the new method the Relevance Singular Vector Machine
(RSVM) where appropriate priors are defined on the singular vectors of the
underlying matrix to promote low rank. To accelerate computations, a
numerically efficient approximation is developed. The proposed algorithms are
applied to matrix completion and matrix reconstruction problems and their
performance is studied numerically.
| [
"['Martin Sundin' 'Saikat Chatterjee' 'Magnus Jansson' 'Cristian R. Rojas']",
"Martin Sundin, Saikat Chatterjee, Magnus Jansson and Cristian R. Rojas"
]
|
cs.LG math.ST stat.ML stat.TH | null | 1407.0067 | null | null | http://arxiv.org/pdf/1407.0067v2 | 2014-07-02T00:44:29Z | 2014-06-30T22:00:57Z | Rates of Convergence for Nearest Neighbor Classification | Nearest neighbor methods are a popular class of nonparametric estimators with
several desirable properties, such as adaptivity to different distance scales
in different regions of space. Prior work on convergence rates for nearest
neighbor classification has not fully reflected these subtle properties. We
analyze the behavior of these estimators in metric spaces and provide
finite-sample, distribution-dependent rates of convergence under minimal
assumptions. As a by-product, we are able to establish the universal
consistency of nearest neighbor in a broader range of data spaces than was
previously known. We illustrate our upper and lower bounds by introducing
smoothness classes that are customized for nearest neighbor classification.
| [
"['Kamalika Chaudhuri' 'Sanjoy Dasgupta']",
"Kamalika Chaudhuri and Sanjoy Dasgupta"
]
|
cs.LG | null | 1407.0107 | null | null | http://arxiv.org/pdf/1407.0107v3 | 2014-07-26T19:16:39Z | 2014-07-01T05:57:43Z | Randomized Block Coordinate Descent for Online and Stochastic
Optimization | Two types of low cost-per-iteration gradient descent methods have been
extensively studied in parallel. One is online or stochastic gradient descent
(OGD/SGD), and the other is randomized block coordinate descent (RBCD). In this
paper, we combine the two types of methods together and propose online
randomized block coordinate descent (ORBCD). At each iteration, ORBCD only
computes the partial gradient of one block coordinate of one mini-batch
of samples. ORBCD is well suited for the composite minimization problem where one
function is the average of the losses of a large number of samples and the
other is a simple regularizer defined on high dimensional variables. We show
that the iteration complexity of ORBCD has the same order as OGD or SGD. For
strongly convex functions, by reducing the variance of stochastic gradients, we
show that ORBCD can converge at a geometric rate in expectation, matching the
convergence rate of SGD with variance reduction and RBCD.
| [
"['Huahua Wang' 'Arindam Banerjee']",
"Huahua Wang and Arindam Banerjee"
]
|
stat.ML cs.LG | null | 1407.0179 | null | null | http://arxiv.org/pdf/1407.0179v1 | 2014-07-01T10:44:49Z | 2014-07-01T10:44:49Z | Mind the Nuisance: Gaussian Process Classification using Privileged
Noise | The learning with privileged information setting has recently attracted a lot
of attention within the machine learning community, as it allows the
integration of additional knowledge into the training process of a classifier,
even when this comes in the form of a data modality that is not available at
test time. Here, we show that privileged information can naturally be treated
as noise in the latent function of a Gaussian Process classifier (GPC). That
is, in contrast to the standard GPC setting, the latent function is not just a
nuisance but a feature: it becomes a natural measure of confidence about the
training data by modulating the slope of the GPC sigmoid likelihood function.
Extensive experiments on public datasets show that the proposed GPC method
using privileged noise, called GPC+, improves over a standard GPC without
privileged knowledge, and also over the current state-of-the-art SVM-based
method, SVM+. Moreover, we show that advanced neural networks and deep learning
methods can be compressed as privileged information.
| [
"Daniel Hern\\'andez-Lobato, Viktoriia Sharmanska, Kristian Kersting,\n Christoph H. Lampert, Novi Quadrianto",
"['Daniel Hernández-Lobato' 'Viktoriia Sharmanska' 'Kristian Kersting'\n 'Christoph H. Lampert' 'Novi Quadrianto']"
]
|
cs.LG math.OC stat.ML | null | 1407.0202 | null | null | http://arxiv.org/pdf/1407.0202v3 | 2014-12-16T08:44:27Z | 2014-07-01T11:47:56Z | SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly
Convex Composite Objectives | In this work we introduce a new optimisation method called SAGA in the spirit
of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient
algorithms with fast linear convergence rates. SAGA improves on the theory
behind SAG and SVRG, with better theoretical convergence rates, and has support
for composite objectives where a proximal operator is used on the regulariser.
Unlike SDCA, SAGA supports non-strongly convex problems directly, and is
adaptive to any inherent strong convexity of the problem. We give experimental
results showing the effectiveness of our method.
| [
"Aaron Defazio, Francis Bach (INRIA Paris - Rocquencourt, LIENS, MSR -\n INRIA), Simon Lacoste-Julien (INRIA Paris - Rocquencourt, LIENS, MSR - INRIA)",
"['Aaron Defazio' 'Francis Bach' 'Simon Lacoste-Julien']"
]
|
cs.LG stat.ML | null | 1407.0208 | null | null | http://arxiv.org/pdf/1407.0208v4 | 2018-08-17T09:09:37Z | 2014-07-01T12:08:10Z | A Bayes consistent 1-NN classifier | We show that a simple modification of the 1-nearest neighbor classifier
yields a strongly Bayes consistent learner. Prior to this work, the only
strongly Bayes consistent proximity-based method was the k-nearest neighbor
classifier, for k growing appropriately with sample size. We will argue that a
margin-regularized 1-NN enjoys considerable statistical and algorithmic
advantages over the k-NN classifier. These include user-friendly finite-sample
error bounds, as well as time- and memory-efficient learning and test-point
evaluation algorithms with a principled speed-accuracy tradeoff. Encouraging
empirical results are reported.
| [
"Aryeh Kontorovich and Roi Weiss",
"['Aryeh Kontorovich' 'Roi Weiss']"
]
|
cs.NA cs.LG stat.ML | null | 1407.0286 | null | null | http://arxiv.org/pdf/1407.0286v2 | 2014-07-02T08:28:33Z | 2014-07-01T15:45:05Z | DC approximation approaches for sparse optimization | Sparse optimization refers to an optimization problem involving the zero-norm
in objective or constraints. In this paper, nonconvex approximation approaches
for sparse optimization have been studied with a unifying point of view in DC
(Difference of Convex functions) programming framework. Considering a common DC
approximation of the zero-norm including all standard sparse inducing penalty
functions, we studied the consistency between global minimums (resp. local
minimums) of approximate and original problems. We showed that, in several
cases, some global minimizers (resp. local minimizers) of the approximate
problem are also those of the original problem. Using exact penalty techniques
in DC programming, we proved stronger results for some particular
approximations, namely, the approximate problem, with suitable parameters, is
equivalent to the original problem. The efficiency of several sparse inducing
penalty functions has been fully analyzed. Four DCA (DC Algorithm) schemes
were developed that cover all standard algorithms in nonconvex sparse
approximation approaches as special versions. They can be viewed as an
$\ell_{1}$-perturbed algorithm / reweighted-$\ell_{1}$ algorithm /
reweighted-$\ell_{1}$ algorithm. We offer a unifying nonconvex approximation approach, with
solid theoretical tools as well as efficient algorithms based on DC programming
and DCA, to tackle the zero-norm and sparse optimization. As an application, we
implemented our methods for the feature selection in SVM (Support Vector
Machine) problem and performed empirical comparative numerical experiments on
the proposed algorithms with various approximation functions.
| [
"Hoai An Le Thi, Tao Pham Dinh, Hoai Minh Le, Xuan Thanh Vo",
"['Hoai An Le Thi' 'Tao Pham Dinh' 'Hoai Minh Le' 'Xuan Thanh Vo']"
]
|
cs.IT cs.LG math.IT stat.ML | 10.1109/TSP.2015.2401536 | 1407.0312 | null | null | http://arxiv.org/abs/1407.0312v3 | 2014-11-19T02:08:49Z | 2014-07-01T16:37:22Z | Identifying Outliers in Large Matrices via Randomized Adaptive
Compressive Sampling | This paper examines the problem of locating outlier columns in a large,
otherwise low-rank, matrix. We propose a simple two-step adaptive sensing and
inference approach and establish theoretical guarantees for its performance;
our results show that accurate outlier identification is achievable using very
few linear summaries of the original data matrix -- as few as the squared rank
of the low-rank component plus the number of outliers, times constant and
logarithmic factors. We demonstrate the performance of our approach
experimentally in two stylized applications, one motivated by robust
collaborative filtering tasks, and the other by saliency map estimation tasks
arising in computer vision and automated surveillance, and also investigate
extensions to settings where the data are noisy, or possibly incomplete.
| [
"Xingguo Li and Jarvis Haupt",
"['Xingguo Li' 'Jarvis Haupt']"
]
|
stat.ME cs.LG stat.ML | null | 1407.0316 | null | null | http://arxiv.org/pdf/1407.0316v3 | 2015-01-30T16:11:17Z | 2014-07-01T16:53:51Z | Significant Subgraph Mining with Multiple Testing Correction | The problem of finding itemsets that are statistically significantly enriched
in a class of transactions is complicated by the need to correct for multiple
hypothesis testing. Pruning untestable hypotheses was recently proposed as a
strategy for this task of significant itemset mining. It was shown to lead to
greater statistical power, the discovery of more truly significant itemsets,
than the standard Bonferroni correction on real-world datasets. An open
question, however, is whether this strategy of excluding untestable hypotheses
also leads to greater statistical power in subgraph mining, in which the number
of hypotheses is much larger than in itemset mining. Here we answer this
question by an empirical investigation on eight popular graph benchmark
datasets. We propose a new efficient search strategy, which always returns the
same solution as the state-of-the-art approach and is approximately two orders
of magnitude faster. Moreover, we exploit the dependence between subgraphs by
considering the effective number of tests and thereby further increase the
statistical power.
| [
"['Mahito Sugiyama' 'Felipe Llinares López' 'Niklas Kasenburg'\n 'Karsten M. Borgwardt']",
"Mahito Sugiyama, Felipe Llinares L\\'opez, Niklas Kasenburg, Karsten M.\n Borgwardt"
]
|
cs.SD cs.LG | null | 1407.0380 | null | null | http://arxiv.org/pdf/1407.0380v1 | 2014-06-27T20:34:05Z | 2014-06-27T20:34:05Z | A Multi Level Data Fusion Approach for Speaker Identification on
Telephone Speech | Several speaker identification systems give good performance with clean
speech but are affected by the degradations introduced by noisy audio
conditions. To deal with this problem, we investigate the use of complementary
information at different levels for computing a combined match score for the
unknown speaker. In this work, we observe the effect of two supervised machine
learning approaches including support vector machines (SVM) and na\"ive Bayes
(NB). We define two feature vector sets based on mel frequency cepstral
coefficients (MFCC) and relative spectral perceptual linear predictive
coefficients (RASTA-PLP). Each feature is modeled using the Gaussian Mixture
Model (GMM). Several ways of combining these information sources give
significant improvements in a text-independent speaker identification task
using a very large telephone degraded NTIMIT database.
| [
"Imen Trabelsi and Dorra Ben Ayed",
"['Imen Trabelsi' 'Dorra Ben Ayed']"
]
|
cs.LG cs.CV | null | 1407.0439 | null | null | http://arxiv.org/pdf/1407.0439v3 | 2015-01-13T07:20:12Z | 2014-07-02T01:55:37Z | Geometric Tight Frame based Stylometry for Art Authentication of van
Gogh Paintings | This paper is about authenticating genuine van Gogh paintings from forgeries.
The authentication process depends on two key steps: feature extraction and
outlier detection. In this paper, a geometric tight frame and some simple
statistics of the tight frame coefficients are used to extract features from
the paintings. Then a forward stage-wise rank boosting is used to select a
small set of features for more accurate classification so that van Gogh
paintings are highly concentrated towards some center point while forgeries are
spread out as outliers. Numerical results show that our method can achieve
86.08% classification accuracy under the leave-one-out cross-validation
procedure. Our method also identifies five features that are much more
predominant than other features. Using just these five features for
classification, our method can give 88.61% classification accuracy which is the
highest so far reported in literature. Evaluation of the five features is also
performed on two hundred datasets generated by bootstrap sampling with
replacement. The median and the mean are 88.61% and 87.77% respectively. Our
results show that a small set of statistics of the tight frame coefficients
along certain orientations can serve as discriminative features for van Gogh
paintings. It is more important to look at the tail distributions of such
directional coefficients than mean values and standard deviations. It reflects
a highly consistent style in van Gogh's brushstroke movements, where many
forgeries demonstrate a more diverse spread in these features.
| [
"Haixia Liu, Raymond H. Chan, and Yuan Yao",
"['Haixia Liu' 'Raymond H. Chan' 'Yuan Yao']"
]
|
cs.LG cs.SY math.OC stat.ML | null | 1407.0449 | null | null | http://arxiv.org/pdf/1407.0449v1 | 2014-07-02T03:19:43Z | 2014-07-02T03:19:43Z | Classification-based Approximate Policy Iteration: Experiments and
Extended Discussions | Tackling large approximate dynamic programming or reinforcement learning
problems requires methods that can exploit regularities, or intrinsic
structure, of the problem in hand. Most current methods are geared towards
exploiting the regularities of either the value function or the policy. We
introduce a general classification-based approximate policy iteration (CAPI)
framework, which encompasses a large class of algorithms that can exploit
regularities of both the value function and the policy space, depending on what
is advantageous. This framework has two main components: a generic value
function estimator and a classifier that learns a policy based on the estimated
value function. We establish theoretical guarantees for the sample complexity
of CAPI-style algorithms, which allow the policy evaluation step to be
performed by a wide variety of algorithms (including temporal-difference-style
methods), and can handle nonparametric representations of policies. Our bounds
on the estimation error of the performance loss are tighter than existing
results. We also illustrate this approach empirically on several problems,
including a large HIV control task.
| [
"['Amir-massoud Farahmand' 'Doina Precup' 'André M. S. Barreto'\n 'Mohammad Ghavamzadeh']",
"Amir-massoud Farahmand, Doina Precup, Andr\\'e M.S. Barreto, Mohammad\n Ghavamzadeh"
]
|
stat.ML cs.LG cs.NE | 10.1007/978-3-319-07695-9_1 | 1407.0611 | null | null | http://arxiv.org/abs/1407.0611v1 | 2014-07-02T15:31:20Z | 2014-07-02T15:31:20Z | How Many Dissimilarity/Kernel Self Organizing Map Variants Do We Need? | In numerous applicative contexts, data are too rich and too complex to be
represented by numerical vectors. A general approach to extend machine learning
and data mining techniques to such data is to rely on a dissimilarity or on a
kernel that measures how different or similar two objects are. This approach
has been used to define several variants of the Self Organizing Map (SOM). This
paper reviews those variants using a common set of notations in order to
outline differences and similarities between them. It discusses the advantages
and drawbacks of the variants, as well as the actual relevance of the
dissimilarity/kernel SOM for practical applications.
| [
"['Fabrice Rossi']",
"Fabrice Rossi (SAMM)"
]
|
stat.ML cs.LG | 10.1007/978-3-319-02999-3_2 | 1407.0612 | null | null | http://arxiv.org/abs/1407.0612v1 | 2014-07-02T15:32:10Z | 2014-07-02T15:32:10Z | Nonparametric Hierarchical Clustering of Functional Data | In this paper, we deal with the problem of curves clustering. We propose a
nonparametric method which partitions the curves into clusters and discretizes
the dimensions of the curve points into intervals. The cross-product of these
partitions forms a data-grid which is obtained using a Bayesian model selection
approach while making no assumptions regarding the curves. Finally, a
post-processing technique, aiming at reducing the number of clusters in order
to improve the interpretability of the clustering, is proposed. It consists in
optimally merging the clusters step by step, which corresponds to an
agglomerative hierarchical classification whose dissimilarity measure is the
variation of the criterion. Interestingly this measure is none other than the
sum of the Kullback-Leibler divergences between clusters distributions before
and after the merges. The practical interest of the approach for functional
data exploratory analysis is presented and compared with an alternative
approach on an artificial and a real world data set.
| [
"Marc Boull\\'e, Romain Guigour\\`es (SAMM), Fabrice Rossi (SAMM)",
"['Marc Boullé' 'Romain Guigourès' 'Fabrice Rossi']"
]
|
stat.ML cs.LG math.ST stat.TH | null | 1407.0726 | null | null | http://arxiv.org/pdf/1407.0726v2 | 2014-12-19T20:11:13Z | 2014-07-02T21:27:23Z | Fast Algorithm for Low-rank matrix recovery in Poisson noise | This paper describes a fast algorithm for recovering low-rank matrices from
their linear measurements contaminated with Poisson noise: the Poisson noise
Maximum Likelihood Singular Value thresholding (PMLSV) algorithm. We propose a
convex optimization formulation with a cost function consisting of the sum of a
likelihood function and a regularization function which is the nuclear norm of the
matrix. Instead of solving the optimization problem directly by semi-definite
program (SDP), we derive an iterative singular value thresholding algorithm by
expanding the likelihood function. We demonstrate the good performance of the
proposed algorithm on recovery of solar flare images with Poisson noise: the
algorithm is more efficient than solving SDP using the interior-point algorithm
and it generates a good approximate solution compared to that solved from SDP.
| [
"Yang Cao and Yao Xie",
"['Yang Cao' 'Yao Xie']"
]
|
cs.LG stat.ML | null | 1407.0749 | null | null | http://arxiv.org/pdf/1407.0749v2 | 2014-10-08T06:30:20Z | 2014-07-03T00:19:08Z | Projecting Ising Model Parameters for Fast Mixing | Inference in general Ising models is difficult, due to high treewidth making
tree-based algorithms intractable. Moreover, when interactions are strong,
Gibbs sampling may take exponential time to converge to the stationary
distribution. We present an algorithm to project Ising model parameters onto a
parameter set that is guaranteed to be fast mixing, under several divergences.
We find that Gibbs sampling using the projected parameters is more accurate
than with the original parameters when interaction strengths are strong and
when limited time is available for sampling.
| [
"['Justin Domke' 'Xianghang Liu']",
"Justin Domke and Xianghang Liu"
]
|
math.OC cs.LG math.NA stat.ML | 10.1137/140998135 | 1407.0753 | null | null | http://arxiv.org/abs/1407.0753v6 | 2015-11-04T02:27:46Z | 2014-07-03T00:29:25Z | Global convergence of splitting methods for nonconvex composite
optimization | We consider the problem of minimizing the sum of a smooth function $h$ with a
bounded Hessian, and a nonsmooth function. We assume that the latter function
is a composition of a proper closed function $P$ and a surjective linear map
$\cal M$, with the proximal mappings of $\tau P$, $\tau > 0$, simple to
compute. This problem is nonconvex in general and encompasses many important
applications in engineering and machine learning. In this paper, we examined
two types of splitting methods for solving this nonconvex optimization problem:
alternating direction method of multipliers and proximal gradient algorithm.
For the direct adaptation of the alternating direction method of multipliers,
we show that, if the penalty parameter is chosen sufficiently large and the
sequence generated has a cluster point, then it gives a stationary point of the
nonconvex problem. We also establish convergence of the whole sequence under an
additional assumption that the functions $h$ and $P$ are semi-algebraic.
Furthermore, we give simple sufficient conditions to guarantee boundedness of
the sequence generated. These conditions can be satisfied for a wide range of
applications including the least squares problem with the $\ell_{1/2}$
regularization. Finally, when $\cal M$ is the identity so that the proximal
gradient algorithm can be efficiently applied, we show that any cluster point
is stationary under a slightly more flexible constant step-size rule than what
is known in the literature for a nonconvex $h$.
| [
"Guoyin Li, Ting Kei Pong",
"['Guoyin Li' 'Ting Kei Pong']"
]
|
cs.LG stat.ML | null | 1407.0754 | null | null | http://arxiv.org/pdf/1407.0754v1 | 2014-07-03T00:48:34Z | 2014-07-03T00:48:34Z | Structured Learning via Logistic Regression | A successful approach to structured learning is to write the learning
objective as a joint function of linear parameters and inference messages, and
iterate between updates to each. This paper observes that if the inference
problem is "smoothed" through the addition of entropy terms, for fixed
messages, the learning objective reduces to a traditional (non-structured)
logistic regression problem with respect to parameters. In these logistic
regression problems, each training example has a bias term determined by the
current set of messages. Based on this insight, the structured energy function
can be extended from linear factors to any function class where an "oracle"
exists to minimize a logistic loss.
| [
"Justin Domke",
"['Justin Domke']"
]
|
cs.IR cs.LG stat.ML | null | 1407.0822 | null | null | http://arxiv.org/pdf/1407.0822v1 | 2014-07-03T09:05:33Z | 2014-07-03T09:05:33Z | Reducing Offline Evaluation Bias in Recommendation Systems | Recommendation systems have been integrated into the majority of large online
systems. They tailor those systems to individual users by filtering and ranking
information according to user profiles. This adaptation process influences the
way users interact with the system and, as a consequence, increases the
difficulty of evaluating a recommendation algorithm with historical data (via
offline evaluation). This paper analyses this evaluation bias and proposes a
simple item weighting solution that reduces its impact. The efficiency of the
proposed solution is evaluated on real world data extracted from the Viadeo
professional social network.
| [
"Arnaud De Myttenaere (SAMM), B\\'en\\'edicte Le Grand (CRI), Boris\n Golden (Viadeo), Fabrice Rossi (SAMM)",
"['Arnaud De Myttenaere' 'Bénédicte Le Grand' 'Boris Golden'\n 'Fabrice Rossi']"
]
|
stat.ML cs.LG | null | 1407.0880 | null | null | http://arxiv.org/pdf/1407.0880v2 | 2014-09-16T19:43:54Z | 2014-07-03T12:16:50Z | Anomaly Detection Based on Aggregation of Indicators | Automatic anomaly detection is a major issue in various areas. Beyond mere
detection, the identification of the origin of the problem that produced the
anomaly is also essential. This paper introduces a general methodology that can
assist human operators who aim at classifying monitoring signals. The main idea
is to leverage expert knowledge by generating a very large number of
indicators. A feature selection method is used to keep only the most
discriminant indicators which are used as inputs of a Naive Bayes classifier.
The parameters of the classifier have been optimized indirectly by the
selection process. The approach is evaluated on simulated data designed to
reproduce some of the anomaly types observed in real world engines.
| [
"Tsirizo Rabenoro (SAMM), J\\'er\\^ome Lacaille, Marie Cottrell (SAMM),\n Fabrice Rossi (SAMM)",
"['Tsirizo Rabenoro' 'Jérôme Lacaille' 'Marie Cottrell' 'Fabrice Rossi']"
]
|
cs.LG | null | 1407.1082 | null | null | http://arxiv.org/pdf/1407.1082v1 | 2014-07-03T23:06:10Z | 2014-07-03T23:06:10Z | Online Submodular Maximization under a Matroid Constraint with
Application to Learning Assignments | Which ads should we display in sponsored search in order to maximize our
revenue? How should we dynamically rank information sources to maximize the
value of the ranking? These applications exhibit strong diminishing returns:
Redundancy decreases the marginal utility of each ad or information source. We
show that these and other problems can be formalized as repeatedly selecting an
assignment of items to positions to maximize a sequence of monotone submodular
functions that arrive one by one. We present an efficient algorithm for this
general problem and analyze it in the no-regret model. Our algorithm possesses
strong theoretical guarantees, such as a performance ratio that converges to
the optimal constant of 1 - 1/e. We empirically evaluate our algorithm on two
real-world online optimization problems on the web: ad allocation with
submodular utilities, and dynamically ranking blogs to detect information
cascades. Finally, we present a second algorithm that handles the more general
case in which the feasible sets are given by a matroid constraint, while still
maintaining a 1 - 1/e asymptotic performance ratio.
| [
"['Daniel Golovin' 'Andreas Krause' 'Matthew Streeter']",
"Daniel Golovin, Andreas Krause, Matthew Streeter"
]
|
math.OC cs.LG stat.ML | null | 1407.1097 | null | null | http://arxiv.org/pdf/1407.1097v1 | 2014-07-04T00:39:00Z | 2014-07-04T00:39:00Z | Robust Optimization using Machine Learning for Uncertainty Sets | Our goal is to build robust optimization problems for making decisions based
on complex data from the past. In robust optimization (RO) generally, the goal
is to create a policy for decision-making that is robust to our uncertainty
about the future. In particular, we want our policy to best handle the
worst possible situation that could arise, out of an uncertainty set of
possible situations. Classically, the uncertainty set is simply chosen by the
user, or it might be estimated in overly simplistic ways with strong
assumptions; whereas in this work, we learn the uncertainty set from data
collected in the past. The past data are drawn randomly from an (unknown)
possibly complicated high-dimensional distribution. We propose a new
uncertainty set design and show how tools from statistical learning theory can
be employed to provide probabilistic guarantees on the robustness of the
policy.
| [
"['Theja Tulabandhula' 'Cynthia Rudin']",
"Theja Tulabandhula, Cynthia Rudin"
]
|
cs.CV cs.LG stat.ML | null | 1407.1123 | null | null | http://arxiv.org/pdf/1407.1123v1 | 2014-07-04T05:34:38Z | 2014-07-04T05:34:38Z | Expanding the Family of Grassmannian Kernels: An Embedding Perspective | Modeling videos and image-sets as linear subspaces has proven beneficial for
many visual recognition tasks. However, it also incurs challenges arising from
the fact that linear subspaces do not obey Euclidean geometry, but lie on a
special type of Riemannian manifolds known as Grassmannian. To leverage the
techniques developed for Euclidean spaces (e.g., support vector machines) with
subspaces, several recent studies have proposed to embed the Grassmannian into
a Hilbert space by making use of a positive definite kernel. Unfortunately,
only two Grassmannian kernels are known, none of which, as we will show, is
universal, which limits their ability to approximate a target function
arbitrarily well. Here, we introduce several positive definite Grassmannian
kernels, including universal ones, and demonstrate their superiority over
previously-known kernels in various tasks, such as classification, clustering,
sparse coding and hashing.
| [
"Mehrtash T. Harandi and Mathieu Salzmann and Sadeep Jayasumana and\n Richard Hartley and Hongdong Li",
"['Mehrtash T. Harandi' 'Mathieu Salzmann' 'Sadeep Jayasumana'\n 'Richard Hartley' 'Hongdong Li']"
]
|
cs.LG cs.CV | null | 1407.1151 | null | null | http://arxiv.org/pdf/1407.1151v1 | 2014-07-04T08:18:45Z | 2014-07-04T08:18:45Z | Optimizing Ranking Measures for Compact Binary Code Learning | Hashing has proven a valuable tool for large-scale information retrieval.
Despite much success, existing hashing methods optimize over simple objectives
such as the reconstruction error or graph Laplacian related loss functions,
instead of the performance evaluation criteria of interest---multivariate
performance measures such as the AUC and NDCG. Here we present a general
framework (termed StructHash) that allows one to directly optimize multivariate
performance measures. The resulting optimization problem can involve
exponentially or infinitely many variables and constraints, which is more
challenging than standard structured output learning. To solve the StructHash
optimization problem, we use a combination of column generation and
cutting-plane techniques. We demonstrate the generality of StructHash by
applying it to ranking prediction and image retrieval, and show that it
outperforms a few state-of-the-art hashing methods.
| [
"['Guosheng Lin' 'Chunhua Shen' 'Jianxin Wu']",
"Guosheng Lin, Chunhua Shen, Jianxin Wu"
]
|
stat.ML cs.LG | null | 1407.1176 | null | null | http://arxiv.org/pdf/1407.1176v1 | 2014-07-04T10:17:43Z | 2014-07-04T10:17:43Z | Identifying Higher-order Combinations of Binary Features | Finding statistically significant interactions between binary variables is
computationally and statistically challenging in high-dimensional settings, due
to the combinatorial explosion in the number of hypotheses. Terada et al.
recently showed how to elegantly address this multiple testing problem by
excluding non-testable hypotheses. Still, it remains unclear how their approach
scales to large datasets.
Here we propose strategies to speed up the approach by Terada et al. and
evaluate them thoroughly in 11 real-world benchmark datasets. We observe that
one approach, incremental search with early stopping, is orders of magnitude
faster than the current state-of-the-art approach.
| [
"['Felipe Llinares' 'Mahito Sugiyama' 'Karsten M. Borgwardt']",
"Felipe Llinares, Mahito Sugiyama, Karsten M. Borgwardt"
]
|
cs.LG cs.NE | 10.1007/978-3-642-29347-4_20 | 1407.1201 | null | null | http://arxiv.org/abs/1407.1201v1 | 2014-07-04T12:14:48Z | 2014-07-04T12:14:48Z | Improving Performance of Self-Organising Maps with Distance Metric
Learning Method | Self-Organising Maps (SOM) are Artificial Neural Networks used in Pattern
Recognition tasks. Their major advantage over other architectures is human
readability of a model. However, they often achieve poorer accuracy. The most
commonly used metric in SOM is the Euclidean distance, which is not the best
approach to some
problems. In this paper, we study an impact of the metric change on the SOM's
performance in classification problems. In order to change the metric of the
SOM we applied a distance metric learning method, so-called 'Large Margin
Nearest Neighbour'. It computes the Mahalanobis matrix, which assures a small
distance between nearest neighbour points from the same class and separation of
points belonging to different classes by a large margin. Results are presented on
several real data sets, containing for example recognition of written digits,
spoken letters or faces.
| [
"Piotr P{\\l}o\\'nski, Krzysztof Zaremba",
"['Piotr Płoński' 'Krzysztof Zaremba']"
]
|
cs.CV cs.LG | null | 1407.1208 | null | null | http://arxiv.org/pdf/1407.1208v1 | 2014-07-04T12:53:15Z | 2014-07-04T12:53:15Z | Weakly Supervised Action Labeling in Videos Under Ordering Constraints | We are given a set of video clips, each one annotated with an {\em ordered}
list of actions, such as "walk" then "sit" then "answer phone" extracted from,
for example, the associated text script. We seek to temporally localize the
individual actions in each clip as well as to learn a discriminative classifier
for each action. We formulate the problem as a weakly supervised temporal
assignment with ordering constraints. Each video clip is divided into small
time intervals and each time interval of each video clip is assigned one action
label, while respecting the order in which the action labels appear in the
given annotations. We show that the action label assignment can be determined
together with learning a classifier for each action in a discriminative manner.
We evaluate the proposed model on a new and challenging dataset of 937 video
clips with a total of 787720 frames containing sequences of 16 different
actions from 69 Hollywood movies.
| [
"['Piotr Bojanowski' 'Rémi Lajugie' 'Francis Bach' 'Ivan Laptev'\n 'Jean Ponce' 'Cordelia Schmid' 'Josef Sivic']",
"Piotr Bojanowski, R\\'emi Lajugie, Francis Bach, Ivan Laptev, Jean\n Ponce, Cordelia Schmid, Josef Sivic"
]
|
cs.CE cs.LG math.OC stat.AP | null | 1407.1291 | null | null | http://arxiv.org/pdf/1407.1291v2 | 2014-09-17T12:59:13Z | 2014-07-04T18:37:33Z | Reinforcement Learning Based Algorithm for the Maximization of EV
Charging Station Revenue | This paper presents an online reinforcement learning based application which
increases the revenue of one particular electric vehicle (EV) charging station,
connected to a renewable source of energy. Moreover, the proposed application
adapts to changes in the trends of the station's average number of customers
and their types. Most of the parameters in the model are simulated
stochastically and the algorithm used is a Q-learning algorithm. A computer
simulation was implemented which demonstrates and confirms the utility of the
model.
| [
"['Stoyan Dimitrov' 'Redouane Lguensat']",
"Stoyan Dimitrov, Redouane Lguensat"
]
|
cs.NA cs.LG | null | 1407.1399 | null | null | http://arxiv.org/pdf/1407.1399v1 | 2014-07-05T11:58:30Z | 2014-07-05T11:58:30Z | Generalized Higher-Order Tensor Decomposition via Parallel ADMM | Higher-order tensors are becoming prevalent in many scientific areas such as
computer vision, social network analysis, data mining and neuroscience.
Traditional tensor decomposition approaches face three major challenges: model
selection, gross corruptions and computational efficiency. To address these
problems, we first propose a parallel trace norm regularized tensor
decomposition method, and formulate it as a convex optimization problem. This
method does not require the rank of each mode to be specified beforehand, and
can automatically determine the number of factors in each mode through our
optimization scheme. By considering the low-rank structure of the observed
tensor, we analyze the equivalent relationship of the trace norm between a
low-rank tensor and its core tensor. Then, we cast a non-convex tensor
decomposition model into a weighted combination of multiple much smaller-scale
matrix trace norm minimization. Finally, we develop two parallel alternating
direction methods of multipliers (ADMM) to solve our problems. Experimental
results verify that our regularized formulation is effective, and our methods
are robust to noise or outliers.
| [
"['Fanhua Shang' 'Yuanyuan Liu' 'James Cheng']",
"Fanhua Shang and Yuanyuan Liu and James Cheng"
]
|
cs.DS cs.LG cs.NA math.OC stat.ML | null | 1407.1537 | null | null | http://arxiv.org/pdf/1407.1537v5 | 2016-11-07T19:30:37Z | 2014-07-06T20:11:48Z | Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent | First-order methods play a central role in large-scale machine learning. Even
though many variations exist, each suited to a particular problem, almost all
such methods fundamentally rely on two types of algorithmic steps: gradient
descent, which yields primal progress, and mirror descent, which yields dual
progress.
We observe that the performances of gradient and mirror descent are
complementary, so that faster algorithms can be designed by LINEARLY COUPLING
the two. We show how to reconstruct Nesterov's accelerated gradient methods
using linear coupling, which gives a cleaner interpretation than Nesterov's
original proofs. We also discuss the power of linear coupling by extending it
to many other settings that Nesterov's methods cannot apply to.
| [
"Zeyuan Allen-Zhu, Lorenzo Orecchia",
"['Zeyuan Allen-Zhu' 'Lorenzo Orecchia']"
]
|
cs.LG | null | 1407.1538 | null | null | http://arxiv.org/pdf/1407.1538v1 | 2014-07-06T20:13:48Z | 2014-07-06T20:13:48Z | Large-Scale Multi-Label Learning with Incomplete Label Assignments | Multi-label learning deals with the classification problems where each
instance can be assigned with multiple labels simultaneously. Conventional
multi-label learning approaches mainly focus on exploiting label correlations.
It is usually assumed, explicitly or implicitly, that the label sets for
training instances are fully labeled without any missing labels. However, in
many real-world multi-label datasets, the label assignments for training
instances can be incomplete. Some ground-truth labels can be missed by the
labeler from the label set. This problem is especially typical when the number
of instances is very large, and the labeling cost is very high, which makes it
almost impossible to get a fully labeled training set. In this paper, we study
the problem of large-scale multi-label learning with incomplete label
assignments. We propose an approach, called MPU, based upon positive and
unlabeled stochastic gradient descent and stacked models. Unlike prior works,
our method can effectively and efficiently consider missing labels and label
correlations simultaneously, and is very scalable, with linear time
complexity in the size of the data. Extensive experiments on two real-world
multi-label datasets show that our MPU model consistently outperforms other
commonly-used baselines.
| [
"Xiangnan Kong and Zhaoming Wu and Li-Jia Li and Ruofei Zhang and\n Philip S. Yu and Hang Wu and Wei Fan",
"['Xiangnan Kong' 'Zhaoming Wu' 'Li-Jia Li' 'Ruofei Zhang' 'Philip S. Yu'\n 'Hang Wu' 'Wei Fan']"
]
|
cs.DS cs.LG stat.ML | null | 1407.1543 | null | null | http://arxiv.org/pdf/1407.1543v2 | 2014-11-07T21:32:44Z | 2014-07-06T20:42:05Z | Dictionary Learning and Tensor Decomposition via the Sum-of-Squares
Method | We give a new approach to the dictionary learning (also known as "sparse
coding") problem of recovering an unknown $n\times m$ matrix $A$ (for $m \geq
n$) from examples of the form \[ y = Ax + e, \] where $x$ is a random vector in
$\mathbb R^m$ with at most $\tau m$ nonzero coordinates, and $e$ is a random
noise vector in $\mathbb R^n$ with bounded magnitude. For the case $m=O(n)$,
our algorithm recovers every column of $A$ within arbitrarily good constant
accuracy in time $m^{O(\log m/\log(\tau^{-1}))}$, in particular achieving
polynomial time if $\tau = m^{-\delta}$ for any $\delta>0$, and time $m^{O(\log
m)}$ if $\tau$ is (a sufficiently small) constant. Prior algorithms with
comparable assumptions on the distribution required the vector $x$ to be much
sparser---at most $\sqrt{n}$ nonzero coordinates---and there were intrinsic
barriers preventing these algorithms from applying for denser $x$.
We achieve this by designing an algorithm for noisy tensor decomposition that
can recover, under quite general conditions, an approximate rank-one
decomposition of a tensor $T$, given access to a tensor $T'$ that is
$\tau$-close to $T$ in the spectral norm (when considered as a matrix). To our
knowledge, this is the first algorithm for tensor decomposition that works in
the constant spectral-norm noise regime, where there is no guarantee that the
local optima of $T$ and $T'$ have similar structures.
Our algorithm is based on a novel approach to using and analyzing the Sum of
Squares semidefinite programming hierarchy (Parrilo 2000, Lasserre 2001), and
it can be viewed as an indication of the utility of this very general and
powerful tool for unsupervised learning problems.
| [
"Boaz Barak, Jonathan A. Kelner, David Steurer",
"['Boaz Barak' 'Jonathan A. Kelner' 'David Steurer']"
]
|
cs.CL cs.LG | null | 1407.1640 | null | null | http://arxiv.org/pdf/1407.1640v1 | 2014-07-07T09:31:21Z | 2014-07-07T09:31:21Z | WordRep: A Benchmark for Research on Learning Word Representations | WordRep is a benchmark collection for the research on learning distributed
word representations (or word embeddings), released by Microsoft Research. In
this paper, we describe the details of the WordRep collection and show how to
use it in different types of machine learning research related to word
embedding. Specifically, we describe how the evaluation tasks in WordRep are
selected, how the data are sampled, and how the evaluation tool is built. We
then compare several state-of-the-art word representations on WordRep, report
their evaluation performance, and discuss the results. After that, we discuss
new potential research topics that can be supported by WordRep, in addition
to algorithm comparison. We hope that this paper helps readers gain a deeper
understanding of WordRep and enables more interesting research on learning
distributed word representations and related topics.
| [
"Bin Gao, Jiang Bian, and Tie-Yan Liu",
"['Bin Gao' 'Jiang Bian' 'Tie-Yan Liu']"
]
|
cs.CL cs.LG | null | 1407.1687 | null | null | http://arxiv.org/pdf/1407.1687v3 | 2014-09-05T15:58:35Z | 2014-07-07T12:45:10Z | KNET: A General Framework for Learning Word Embedding using
Morphological Knowledge | Neural network techniques are widely applied to obtain high-quality
distributed representations of words, i.e., word embeddings, to address text
mining, information retrieval, and natural language processing tasks. Recently,
efficient methods have been proposed to learn word embeddings from context,
capturing both semantic and syntactic relationships between words. However, it
is challenging to handle unseen or rare words with insufficient context. In
this paper, inspired by studies of the word recognition process in cognitive
psychology, we propose to take advantage of seemingly less obvious but
essentially important morphological knowledge to address these challenges. In
particular, we introduce a novel neural network architecture called KNET that
leverages both contextual information and morphological word similarity built
based on morphological knowledge to learn word embeddings. Meanwhile, the
learning architecture is also able to refine the pre-defined morphological
knowledge and obtain more accurate word similarity. Experiments on an
analogical reasoning task and a word similarity task both demonstrate that the
proposed KNET framework can greatly enhance the effectiveness of word
embeddings.
| [
"['Qing Cui' 'Bin Gao' 'Jiang Bian' 'Siyu Qiu' 'Tie-Yan Liu']",
"Qing Cui, Bin Gao, Jiang Bian, Siyu Qiu, and Tie-Yan Liu"
]
|
cs.LG stat.ML | null | 1407.1890 | null | null | http://arxiv.org/pdf/1407.1890v1 | 2014-07-07T21:23:42Z | 2014-07-07T21:23:42Z | Recommending Learning Algorithms and Their Associated Hyperparameters | The success of machine learning on a given task dependson, among other
things, which learning algorithm is selected and its associated
hyperparameters. Selecting an appropriate learning algorithm and setting its
hyperparameters for a given data set can be a challenging task, especially for
users who are not experts in machine learning. Previous work has examined using
meta-features to predict which learning algorithm and hyperparameters should be
used. However, choosing a set of meta-features that are predictive of algorithm
performance is difficult. Here, we propose to apply collaborative filtering
techniques to learning algorithm and hyperparameter selection, and find that
doing so avoids determining which meta-features to use and outperforms
traditional meta-learning approaches in many cases.
| [
"['Michael R. Smith' 'Logan Mitchell' 'Christophe Giraud-Carrier'\n 'Tony Martinez']",
"Michael R. Smith, Logan Mitchell, Christophe Giraud-Carrier, Tony\n Martinez"
]
|
cs.IR cs.LG stat.ML | 10.1109/TASLP.2015.2416655 | 1407.2433 | null | null | http://arxiv.org/abs/1407.2433v3 | 2015-05-17T15:53:43Z | 2014-07-09T11:04:15Z | Identifying Cover Songs Using Information-Theoretic Measures of
Similarity | This paper investigates methods for quantifying similarity between audio
signals, specifically for the task of cover song detection. We consider an
information-theoretic approach, where we compute pairwise measures of
predictability between time series. We compare discrete-valued approaches
operating on quantised audio features, to continuous-valued approaches. In the
discrete case, we propose a method for computing the normalised compression
distance, where we account for correlation between time series. In the
continuous case, we propose to compute information-based measures of similarity
as statistics of the prediction error between time series. We evaluate our
methods on two cover song identification tasks using a data set comprised of
300 Jazz standards and using the Million Song Dataset. For both datasets, we
observe that continuous-valued approaches outperform discrete-valued
approaches. We consider approaches to estimating the normalised compression
distance (NCD) based on string compression and prediction, where we observe
that our proposed normalised compression distance with alignment (NCDA)
improves average performance over NCD, for sequential compression algorithms.
Finally, we demonstrate that continuous-valued distances may be combined to
improve performance with respect to baseline approaches. Using a large-scale
filter-and-refine approach, we demonstrate state-of-the-art performance for
cover song identification using the Million Song Dataset.
| [
"['Peter Foster' 'Simon Dixon' 'Anssi Klapuri']",
"Peter Foster, Simon Dixon, Anssi Klapuri"
]
|
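The cover-song record above (1407.2433) relies on the normalised compression distance (NCD) between quantised audio feature sequences. The snippet below is a minimal sketch of the standard NCD using zlib as the compressor; the paper's alignment-based variant (NCDA) and its audio feature extraction/quantisation pipeline are not reproduced, and the toy byte strings stand in for quantised feature sequences.

```python
# Minimal sketch of the standard normalised compression distance (NCD),
# NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
# using zlib as the compressor C.
import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = compressed_size(x), compressed_size(y), compressed_size(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy example on quantised "feature sequences" encoded as byte strings.
a = bytes([1, 2, 3, 4] * 100)
b = bytes([1, 2, 3, 5] * 100)
c = bytes(range(256)) * 2
print(ncd(a, b), ncd(a, c))  # similar sequences should score lower than dissimilar ones
```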
stat.ML cs.AI cs.LG | null | 1407.2483 | null | null | http://arxiv.org/pdf/1407.2483v2 | 2014-07-12T17:23:57Z | 2014-07-09T14:02:01Z | Counting Markov Blanket Structures | Learning Markov blanket (MB) structures has proven useful in performing
feature selection, learning Bayesian networks (BNs), and discovering causal
relationships. We present a formula for efficiently determining the number of
MB structures given a target variable and a set of other variables. As
expected, the number of MB structures grows exponentially. However, we show
quantitatively that there are many fewer MB structures that contain the target
variable than there are BN structures that contain it. In particular, the ratio
of BN structures to MB structures appears to increase exponentially in the
number of variables.
| [
"Shyam Visweswaran and Gregory F. Cooper",
"['Shyam Visweswaran' 'Gregory F. Cooper']"
]
|
cs.SI cs.IR cs.LG physics.soc-ph | null | 1407.2515 | null | null | http://arxiv.org/pdf/1407.2515v4 | 2019-04-11T14:23:37Z | 2014-07-09T15:05:56Z | RankMerging: A supervised learning-to-rank framework to predict links in
large social networks | Uncovering unknown or missing links in social networks is a difficult task
because of their sparsity and because links may represent different types of
relationships, characterized by different structural patterns. In this paper,
we define a simple yet efficient supervised learning-to-rank framework, called
RankMerging, which aims at combining information provided by various
unsupervised rankings. We illustrate our method on three different kinds of
social networks and show that it substantially improves the performance of
unsupervised ranking metrics. We also compare it to other combination
strategies based on standard methods. Finally, we explore various aspects of
RankMerging, such as feature selection and parameter estimation, and discuss its
area of relevance: the prediction of an adjustable number of links on large
networks.
| [
"Lionel Tabourier, Daniel Faria Bernardes, Anne-Sophie Libert, Renaud\n Lambiotte",
"['Lionel Tabourier' 'Daniel Faria Bernardes' 'Anne-Sophie Libert'\n 'Renaud Lambiotte']"
]
|
cs.LG | null | 1407.2538 | null | null | http://arxiv.org/pdf/1407.2538v3 | 2015-04-27T21:11:32Z | 2014-07-09T15:54:27Z | Learning Deep Structured Models | Many problems in real-world applications involve predicting several random
variables which are statistically related. Markov random fields (MRFs) are a
great mathematical tool to encode such relationships. The goal of this paper is
to combine MRFs with deep learning algorithms to estimate complex
representations while taking into account the dependencies between the output
random variables. Towards this goal, we propose a training algorithm that is
able to learn structured models jointly with deep features that form the MRF
potentials. Our approach is efficient as it blends learning and inference and
makes use of GPU acceleration. We demonstrate the effectiveness of our
algorithm in the tasks of predicting words from noisy images, as well as
multi-class classification of Flickr photographs. We show that joint learning
of the deep features and the MRF parameters results in significant performance
gains.
| [
"Liang-Chieh Chen and Alexander G. Schwing and Alan L. Yuille and\n Raquel Urtasun",
"['Liang-Chieh Chen' 'Alexander G. Schwing' 'Alan L. Yuille'\n 'Raquel Urtasun']"
]
|
cs.AI cs.LG stat.ML | null | 1407.2646 | null | null | http://arxiv.org/pdf/1407.2646v1 | 2014-07-09T22:06:18Z | 2014-07-09T22:06:18Z | Learning Probabilistic Programs | We develop a technique for generalising from data in which models are
samplers represented as program text. We establish encouraging empirical
results that suggest that Markov chain Monte Carlo probabilistic programming
inference techniques coupled with higher-order probabilistic programming
languages are now sufficiently powerful to enable successful inference of this
kind in nontrivial domains. We also introduce a new notion of probabilistic
program compilation and show how the same machinery might be used in the future
to compile probabilistic programs for efficient reusable predictive inference.
| [
"Yura N. Perov, Frank D. Wood",
"['Yura N. Perov' 'Frank D. Wood']"
]
|
cs.LG stat.ML | null | 1407.2657 | null | null | http://arxiv.org/pdf/1407.2657v2 | 2014-07-11T23:35:49Z | 2014-07-10T00:34:16Z | Beyond Disagreement-based Agnostic Active Learning | We study agnostic active learning, where the goal is to learn a classifier in
a pre-specified hypothesis class interactively with as few label queries as
possible, while making no assumptions on the true function generating the
labels. The main algorithms for this problem are {\em{disagreement-based active
learning}}, which has a high label requirement, and {\em{margin-based active
learning}}, which only applies to fairly restricted settings. A major challenge
is to find an algorithm which achieves better label complexity, is consistent
in an agnostic setting, and applies to general classification problems.
In this paper, we provide such an algorithm. Our solution is based on two
novel contributions -- a reduction from consistent active learning to
confidence-rated prediction with guaranteed error, and a novel confidence-rated
predictor.
| [
"['Chicheng Zhang' 'Kamalika Chaudhuri']",
"Chicheng Zhang and Kamalika Chaudhuri"
]
|
cs.LG cs.CR | null | 1407.2662 | null | null | http://arxiv.org/pdf/1407.2662v3 | 2015-07-01T20:28:50Z | 2014-07-10T00:55:39Z | Learning Privately with Labeled and Unlabeled Examples | A private learner is an algorithm that given a sample of labeled individual
examples outputs a generalizing hypothesis while preserving the privacy of each
individual. Kasiviswanathan et al. (FOCS 2008) gave a generic construction of
private learners, in which the sample complexity is (generally)
higher than what is needed for non-private learners. This gap in the sample
complexity was then further studied in several followup papers, showing that
(at least in some cases) this gap is unavoidable. Moreover, those papers
considered ways to overcome the gap, by relaxing either the privacy or the
learning guarantees of the learner.
We suggest an alternative approach, inspired by the (non-private) models of
semi-supervised learning and active-learning, where the focus is on the sample
complexity of labeled examples whereas unlabeled examples are of a
significantly lower cost. We consider private semi-supervised learners that
operate on a random sample, where only a (hopefully small) portion of this
sample is labeled. The learners have no control over which of the sample
elements are labeled. Our main result is that the labeled sample complexity of
private learners is characterized by the VC dimension.
We present two generic constructions of private semi-supervised learners. The
first construction is of learners where the labeled sample complexity is
proportional to the VC dimension of the concept class; however, the unlabeled
sample complexity of the algorithm is as large as the representation length of
domain elements. Our second construction presents a new technique for
decreasing the labeled sample complexity of a given private learner, while
roughly maintaining its unlabeled sample complexity. In addition, we show that
in some settings the labeled sample complexity does not depend on the privacy
parameters of the learner.
| [
"Amos Beimel, Kobbi Nissim, Uri Stemmer",
"['Amos Beimel' 'Kobbi Nissim' 'Uri Stemmer']"
]
|
cs.LG cs.CR stat.ML | null | 1407.2674 | null | null | http://arxiv.org/pdf/1407.2674v1 | 2014-07-10T01:42:44Z | 2014-07-10T01:42:44Z | Private Learning and Sanitization: Pure vs. Approximate Differential
Privacy | We compare the sample complexity of private learning [Kasiviswanathan et al.
2008] and sanitization [Blum et al. 2008] under pure $\epsilon$-differential
privacy [Dwork et al. TCC 2006] and approximate
$(\epsilon,\delta)$-differential privacy [Dwork et al. Eurocrypt 2006]. We show
that the sample complexity of these tasks under approximate differential
privacy can be significantly lower than that under pure differential privacy.
We define a family of optimization problems, which we call Quasi-Concave
Promise Problems, that generalizes some of our considered tasks. We observe
that a quasi-concave promise problem can be privately approximated using a
solution to a smaller instance of a quasi-concave promise problem. This allows
us to construct an efficient recursive algorithm solving such problems
privately. Specifically, we construct private learners for point functions,
threshold functions, and axis-aligned rectangles in high dimension. Similarly,
we construct sanitizers for point functions and threshold functions.
We also examine the sample complexity of label-private learners, a relaxation
of private learning where the learner is required to only protect the privacy
of the labels in the sample. We show that the VC dimension completely
characterizes the sample complexity of such learners, that is, the sample
complexity of learning with label privacy is equal (up to constants) to
learning without privacy.
| [
"Amos Beimel, Kobbi Nissim, Uri Stemmer",
"['Amos Beimel' 'Kobbi Nissim' 'Uri Stemmer']"
]
|
math.OC cs.AI cs.LG cs.SY stat.ML | null | 1407.2676 | null | null | http://arxiv.org/pdf/1407.2676v2 | 2014-07-14T00:24:14Z | 2014-07-10T02:34:15Z | A New Optimal Stepsize For Approximate Dynamic Programming | Approximate dynamic programming (ADP) has proven itself in a wide range of
applications spanning large-scale transportation problems, health care, revenue
management, and energy systems. The design of effective ADP algorithms has many
dimensions, but one crucial factor is the stepsize rule used to update a value
function approximation. Many operations research applications are
computationally intensive, and it is important to obtain good results quickly.
Furthermore, the most popular stepsize formulas use tunable parameters and can
produce very poor results if tuned improperly. We derive a new stepsize rule
that optimizes the prediction error in order to improve the short-term
performance of an ADP algorithm. With only one, relatively insensitive tunable
parameter, the new rule adapts to the level of noise in the problem and
produces faster convergence in numerical experiments.
| [
"['Ilya O. Ryzhov' 'Peter I. Frazier' 'Warren B. Powell']",
"Ilya O. Ryzhov and Peter I. Frazier and Warren B. Powell"
]
|
cs.LG stat.ML | null | 1407.2697 | null | null | http://arxiv.org/pdf/1407.2697v1 | 2014-07-10T05:45:17Z | 2014-07-10T05:45:17Z | A Convex Formulation for Learning Scale-Free Networks via Submodular
Relaxation | A key problem in statistics and machine learning is the determination of
network structure from data. We consider the case where the structure of the
graph to be reconstructed is known to be scale-free. We show that in such cases
it is natural to formulate structured sparsity inducing priors using submodular
functions, and we use their Lov\'asz extension to obtain a convex relaxation.
For tractable classes such as Gaussian graphical models, this leads to a convex
optimization problem that can be efficiently solved. We show that our method
results in an improvement in the accuracy of reconstructed networks for
synthetic data. We also show how our prior encourages scale-free
reconstructions on a bioinformatics dataset.
| [
"['Aaron J. Defazio' 'Tiberio S. Caetano']",
"Aaron J. Defazio and Tiberio S. Caetano"
]
|
cs.LG stat.ML | null | 1407.2710 | null | null | http://arxiv.org/pdf/1407.2710v1 | 2014-07-10T07:01:31Z | 2014-07-10T07:01:31Z | Finito: A Faster, Permutable Incremental Gradient Method for Big Data
Problems | Recent advances in optimization theory have shown that smooth strongly convex
finite sums can be minimized faster than by treating them as a black box
"batch" problem. In this work we introduce a new method in this class with a
theoretical convergence rate four times faster than existing methods, for sums
with sufficiently many terms. This method is also amenable to a sampling
without replacement scheme that in practice gives further speed-ups. We give
empirical results showing state-of-the-art performance.
| [
"['Aaron J. Defazio' 'Tibério S. Caetano' 'Justin Domke']",
"Aaron J. Defazio and Tib\\'erio S. Caetano and Justin Domke"
]
|
cs.LG | null | 1407.2736 | null | null | http://arxiv.org/pdf/1407.2736v1 | 2014-07-10T09:39:24Z | 2014-07-10T09:39:24Z | A multi-instance learning algorithm based on a stacked ensemble of lazy
learners | This document describes a novel learning algorithm that classifies "bags" of
instances rather than individual instances. A bag is labeled positive if it
contains at least one positive instance (which may or may not be specifically
identified), and negative otherwise. This class of problems is known as
multi-instance learning problems, and is useful in situations where the class
label at an instance level may be unavailable or imprecise or difficult to
obtain, or in situations where the problem is naturally posed as one of
classifying instance groups. The algorithm described here is an ensemble-based
method, wherein the members of the ensemble are lazy learning classifiers
learnt using the Citation Nearest Neighbour method. Diversity among the
ensemble members is achieved by optimizing their parameters using a
multi-objective optimization method, with the objectives being to maximize
Class 1 accuracy and minimize false positive rate. The method has been found to
be effective on the Musk1 benchmark dataset.
| [
"Ramasubramanian Sundararajan, Hima Patel, Manisha Srivastava",
"['Ramasubramanian Sundararajan' 'Hima Patel' 'Manisha Srivastava']"
]
|
cs.CV cs.AI cs.LG q-bio.NC | null | 1407.2776 | null | null | http://arxiv.org/pdf/1407.2776v1 | 2014-07-10T13:15:18Z | 2014-07-10T13:15:18Z | What you need to know about the state-of-the-art computational models of
object-vision: A tour through the models | Models of object vision have been of great interest in computer vision and
visual neuroscience. During the last decades, several models have been
developed to extract visual features from images for object recognition tasks.
Some of these were inspired by the hierarchical structure of the primate visual
system, while others were engineered models. The models vary in
several aspects: models that are trained by supervision, models trained without
supervision, and models (e.g. feature extractors) that are fully hard-wired and
do not need training. Some of the models come with a deep hierarchical
structure consisting of several layers, and some others are shallow and come
with only one or two layers of processing. More recently, new models have been
developed that are not hand-tuned but trained using millions of images, through
which they learn how to extract informative task-related features. Here I will
survey all these different models and provide the reader with an intuitive, as
well as a more detailed, understanding of the underlying computations in each
of the models.
| [
"['Seyed-Mahdi Khaligh-Razavi']",
"Seyed-Mahdi Khaligh-Razavi"
]
|
cs.LG cs.IR stat.ML | null | 1407.2806 | null | null | http://arxiv.org/pdf/1407.2806v1 | 2014-07-10T14:32:37Z | 2014-07-10T14:32:37Z | Bandits Warm-up Cold Recommender Systems | We address the cold start problem in recommendation systems, assuming that no
contextual information is available about either users or items. We consider
the case in which we only have access to a set of ratings of items by users.
Most of the existing works consider a batch setting, and use cross-validation
to tune parameters. The classical method consists in minimizing the root mean
square error over a training subset of the ratings which provides a
factorization of the matrix of ratings, interpreted as a latent representation
of items and users. Our contribution in this paper is five-fold. First, we make
explicit the issues raised by this kind of batch setting for users or items
with very few ratings. Then, we propose an online setting closer to the actual
use of recommender systems; this setting is inspired by the bandit framework.
The proposed methodology can be used to turn any recommender system dataset
(such as Netflix, MovieLens, ...) into a sequential dataset. Then, we make
explicit a strong and insightful link between contextual bandit algorithms and matrix
factorization; this leads us to a new algorithm that tackles the
exploration/exploitation dilemma associated to the cold start problem in a
strikingly new perspective. Finally, experimental evidence confirms that our
algorithm is effective in dealing with the cold start problem on publicly
available datasets. Overall, the goal of this paper is to bridge the gap
between recommender systems based on matrix factorizations and those based on
contextual bandits.
| [
"J\\'er\\'emie Mary (INRIA Lille - Nord Europe, LIFL), Romaric Gaudel\n (INRIA Lille - Nord Europe, LIFL), Preux Philippe (INRIA Lille - Nord Europe,\n LIFL)",
"['Jérémie Mary' 'Romaric Gaudel' 'Preux Philippe']"
]
|
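The bandit recommender record above (1407.2806) states that any static ratings dataset can be turned into a sequential one. The sketch below shows one plausible replay-style conversion under assumptions of my own: a policy proposes an item for a random user, and feedback is counted only when that (user, item) pair exists in the historical log. The `recommend` policy, the rating density, and the loop length are all hypothetical placeholders, not the authors' protocol.

```python
# Illustrative replay-style loop that turns a sparse ratings log into a
# sequential (bandit-style) stream of interactions. Sketch only; not the
# authors' exact methodology.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items = 100, 50
ratings = np.full((n_users, n_items), np.nan)
observed = rng.random((n_users, n_items)) < 0.05          # sparse historical log
ratings[observed] = rng.integers(1, 6, size=observed.sum())

def recommend(user, history):
    # Placeholder policy (hypothetical): pick a uniformly random item.
    return rng.integers(n_items)

history, rewards = [], []
for t in range(10000):
    user = rng.integers(n_users)
    item = recommend(user, history)
    if not np.isnan(ratings[user, item]):                  # feedback only if logged
        rewards.append(ratings[user, item])
        history.append((user, item, ratings[user, item]))

print(len(rewards), "usable interactions, mean reward:", np.mean(rewards))
```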
cs.DB cs.AI cs.IR cs.LG | 10.1016/j.knosys.2014.04.044 | 1407.2845 | null | null | http://arxiv.org/abs/1407.2845v1 | 2014-07-10T16:14:11Z | 2014-07-10T16:14:11Z | XML Matchers: approaches and challenges | Schema Matching, i.e. the process of discovering semantic correspondences
between concepts adopted in different data source schemas, has been a key topic
in Database and Artificial Intelligence research areas for many years. In the
past, it was investigated largely for classical database models
(e.g., E/R schemas, relational databases, etc.). However, in recent years,
the widespread adoption of XML in the most disparate application fields pushed
a growing number of researchers to design XML-specific Schema Matching
approaches, called XML Matchers, aiming at finding semantic matchings between
concepts defined in DTDs and XSDs. XML Matchers do not just take well-known
techniques originally designed for other data models and apply them on
DTDs/XSDs, but they exploit specific XML features (e.g., the hierarchical
structure of a DTD/XSD) to improve the performance of the Schema Matching
process. The design of XML Matchers is currently a well-established research
area. The main goal of this paper is to provide a detailed description and
classification of XML Matchers. We first describe to what extent the
specificities of DTDs/XSDs impact the Schema Matching task. Then we
introduce a template, called XML Matcher Template, that describes the main
components of an XML Matcher, their role and behavior. We illustrate how each
of these components has been implemented in some popular XML Matchers. We
consider our XML Matcher Template as the baseline for objectively comparing
approaches that, at first glance, might appear as unrelated. The introduction
of this template can be useful in the design of future XML Matchers. Finally,
we analyze commercial tools implementing XML Matchers and introduce two
challenging issues strictly related to this topic, namely XML source clustering
and uncertainty management in XML Matchers.
| [
"['Santa Agreste' 'Pasquale De Meo' 'Emilio Ferrara' 'Domenico Ursino']",
"Santa Agreste, Pasquale De Meo, Emilio Ferrara, Domenico Ursino"
]
|
stat.ML cs.CV cs.LG math.SP math.ST stat.TH | null | 1407.2904 | null | null | http://arxiv.org/pdf/1407.2904v1 | 2014-07-10T19:04:49Z | 2014-07-10T19:04:49Z | An eigenanalysis of data centering in machine learning | Many pattern recognition methods rely on statistical information from
centered data, with the eigenanalysis of an empirical central moment, such as
the covariance matrix in principal component analysis (PCA), as well as partial
least squares regression, canonical-correlation analysis and Fisher
discriminant analysis. Recently, many researchers have advocated working on
non-centered data. This is the case, for instance, with the singular value
decomposition approach, with the (kernel) entropy component analysis, with the
information-theoretic learning framework, and even with nonnegative matrix
factorization. Moreover, one can also consider a non-centered PCA by using the
second-order non-central moment.
The main purpose of this paper is to bridge the gap between these two
viewpoints in designing machine learning methods. To provide a study at the
cornerstone of kernel-based machines, we conduct an eigenanalysis of the inner
product matrices from centered and non-centered data. We derive several results
connecting their eigenvalues and their eigenvectors. Furthermore, we explore
the outer product matrices, by providing several results connecting the largest
eigenvectors of the covariance matrix and its non-centered counterpart. These
results lay the groundwork to several extensions beyond conventional centering,
with the weighted mean shift, the rank-one update, and the multidimensional
scaling. Experiments conducted on simulated and real data illustrate the
relevance of this work.
| [
"['Paul Honeine']",
"Paul Honeine"
]
|
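The centering record above (1407.2904) compares the eigen-structure of inner-product matrices built from centered versus non-centered data. The sketch below only constructs those two Gram matrices and prints their leading eigenvalues so the objects under study are concrete; it does not reproduce the paper's theorems, and the sample size, dimension, and mean offset are arbitrary choices.

```python
# Gram (inner-product) matrices from non-centered and centered data, the two
# objects whose eigenanalysis the paper compares. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5)) + 3.0      # 50 samples, 5 features, nonzero mean

n = X.shape[0]
H = np.eye(n) - np.ones((n, n)) / n         # centering matrix

K = X @ X.T                                  # non-centered Gram matrix
Kc = H @ K @ H                               # centered Gram matrix = (X - mean)(X - mean)^T

eig = np.sort(np.linalg.eigvalsh(K))[::-1]
eig_c = np.sort(np.linalg.eigvalsh(Kc))[::-1]
print("top eigenvalues (non-centered):", np.round(eig[:5], 2))
print("top eigenvalues (centered):   ", np.round(eig_c[:5], 2))
```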
cs.IR cs.LG | null | 1407.2919 | null | null | http://arxiv.org/pdf/1407.2919v1 | 2014-07-09T01:39:03Z | 2014-07-09T01:39:03Z | Collaborative Recommendation with Auxiliary Data: A Transfer Learning
View | Intelligent recommendation technology has been playing an increasingly
important role in various industry applications such as e-commerce product
promotion and Internet advertisement display. Besides users' feedback (e.g.,
numerical ratings) on items as usually exploited by some typical recommendation
algorithms, there are often some additional data such as users' social circles
and other behaviors. Such auxiliary data are usually related to users'
preferences on items behind the numerical ratings. Collaborative recommendation
with auxiliary data (CRAD) aims to leverage such additional information so as
to improve the personalization services, which have received much attention
from both researchers and practitioners.
Transfer learning (TL) is proposed to extract and transfer knowledge from
some auxiliary data in order to assist the learning task on some target data.
In this paper, we consider the CRAD problem from a transfer learning view,
especially on how to achieve knowledge transfer from some auxiliary data.
First, we give a formal definition of transfer learning for CRAD (TL-CRAD).
Second, we extend the existing categorization of TL techniques (i.e., adaptive,
collective and integrative knowledge transfer algorithm styles) with three
knowledge transfer strategies (i.e., prediction rule, regularization and
constraint). Third, we propose a novel generic knowledge transfer framework for
TL-CRAD. Fourth, we describe some representative works of each specific
knowledge transfer strategy of each algorithm style in detail, which are
expected to inspire further works. Finally, we conclude the paper with some
summary discussions and several future directions.
| [
"Weike Pan",
"['Weike Pan']"
]
|
cs.CV cs.AI cs.IR cs.LG | null | 1407.2987 | null | null | http://arxiv.org/pdf/1407.2987v1 | 2014-07-10T23:52:44Z | 2014-07-10T23:52:44Z | FAME: Face Association through Model Evolution | We attack the problem of learning face models for public faces from
weakly-labelled images collected from the web by querying a name. The data is
very noisy even after face detection, with several irrelevant faces
corresponding to other people. We propose a novel method, Face Association
through Model Evolution (FAME), which is able to prune the data in an iterative
way, allowing the face models associated with a name to evolve. The idea is based on
capturing discriminativeness and representativeness of each instance and
eliminating the outliers. The final models are used to classify faces on novel
datasets with possibly different characteristics. On benchmark datasets, our
results are comparable to or better than state-of-the-art studies for the task
of face identification.
| [
"['Eren Golge' 'Pinar Duygulu']",
"Eren Golge and Pinar Duygulu"
]
|
cs.LG cs.CV | null | 1407.3026 | null | null | http://arxiv.org/pdf/1407.3026v1 | 2014-07-11T04:56:49Z | 2014-07-11T04:56:49Z | An SVM Based Approach for Cardiac View Planning | We consider the problem of automatically prescribing oblique planes (short
axis, 4 chamber and 2 chamber views) in Cardiac Magnetic Resonance Imaging
(MRI). A concern with technologist-driven acquisitions of these planes is the
quality and time taken for the total examination. We propose an automated
solution incorporating anatomical features external to the cardiac region. The
solution uses support vector machine regression models wherein complexity and
feature selection are optimized using multi-objective genetic algorithms.
Additionally, we examine the robustness of our approach by training our models
on images with additive Rician-Gaussian mixtures at varying Signal to Noise
(SNR) levels. Our approach has shown promising results, with an angular
deviation of less than 15 degrees in 90% of cases across oblique planes, measured
in terms of average 6-fold cross validation performance -- this is generally
within acceptable bounds of variation as specified by clinicians.
| [
"['Ramasubramanian Sundararajan' 'Hima Patel' 'Dattesh Shanbhag'\n 'Vivek Vaidya']",
"Ramasubramanian Sundararajan, Hima Patel, Dattesh Shanbhag, Vivek\n Vaidya"
]
|
cs.CV cs.LG cs.NE | null | 1407.3068 | null | null | http://arxiv.org/pdf/1407.3068v2 | 2014-07-28T08:22:50Z | 2014-07-11T08:56:54Z | Deep Networks with Internal Selective Attention through Feedback
Connections | Traditional convolutional neural networks (CNN) are stationary and
feedforward. They neither change their parameters during evaluation nor use
feedback from higher to lower layers. Real brains, however, do. So does our
Deep Attention Selective Network (dasNet) architecture. DasNet's feedback
structure can dynamically alter its convolutional filter sensitivities during
classification. It harnesses the power of sequential processing to improve
classification performance, by allowing the network to iteratively focus its
internal attention on some of its convolutional filters. Feedback is trained
through direct policy search in a huge million-dimensional parameter space,
through scalable natural evolution strategies (SNES). On the CIFAR-10 and
CIFAR-100 datasets, dasNet outperforms the previous state-of-the-art model.
| [
"Marijn Stollenga, Jonathan Masci, Faustino Gomez, Juergen Schmidhuber",
"['Marijn Stollenga' 'Jonathan Masci' 'Faustino Gomez'\n 'Juergen Schmidhuber']"
]
|
cs.DS cs.LG stat.ML | null | 1407.3242 | null | null | http://arxiv.org/pdf/1407.3242v1 | 2014-07-11T18:24:15Z | 2014-07-11T18:24:15Z | Density Adaptive Parallel Clustering | In this paper we introduce a new nearest-neighbour-based approach to
clustering and compare it with previous solutions; the resulting algorithm,
which takes inspiration from both DBSCAN and minimum-spanning-tree approaches,
is deterministic yet simpler and faster, and does not require setting the
number of clusters k in advance.
| [
"['Marcello La Rocca']",
"Marcello La Rocca"
]
|
cs.AI cs.LG cs.NE cs.RO | 10.1016/j.ins.2014.05.001 | 1407.3269 | null | null | http://arxiv.org/abs/1407.3269v1 | 2014-07-11T14:21:22Z | 2014-07-11T14:21:22Z | Multiple chaotic central pattern generators with learning for legged
locomotion and malfunction compensation | An originally chaotic system can be controlled into various periodic
dynamics. When it is implemented into a legged robot's locomotion control as a
central pattern generator (CPG), sophisticated gait patterns arise so that the
robot can perform various walking behaviors. However, such a single chaotic CPG
controller has difficulties dealing with leg malfunction. Specifically, in the
scenarios presented here, its movement permanently deviates from the desired
trajectory. To address this problem, we extend the single chaotic CPG to
multiple CPGs with learning. The learning mechanism is based on a simulated
annealing algorithm. In a normal situation, the CPGs synchronize and their
dynamics are identical. With leg malfunction or disability, the CPGs lose
synchronization leading to independent dynamics. In this case, the learning
mechanism is applied to automatically adjust the remaining legs' oscillation
frequencies so that the robot adapts its locomotion to deal with the
malfunction. As a consequence, the trajectory produced by the multiple chaotic
CPGs resembles the original trajectory far better than the one produced by only
a single CPG. The performance of the system is evaluated first in a physical
simulation of a quadruped as well as a hexapod robot and finally in a real
six-legged walking machine called AMOSII. The experimental results presented
here reveal that using multiple CPGs with learning is an effective approach for
adaptive locomotion generation where, for instance, different body parts have
to perform independent movements for malfunction compensation.
| [
"Guanjiao Ren, Weihai Chen, Sakyasingha Dasgupta, Christoph\n Kolodziejski, Florentin W\\\"org\\\"otter, Poramate Manoonpong",
"['Guanjiao Ren' 'Weihai Chen' 'Sakyasingha Dasgupta'\n 'Christoph Kolodziejski' 'Florentin Wörgötter' 'Poramate Manoonpong']"
]
|
stat.ML cs.LG math.ST stat.TH | null | 1407.3289 | null | null | http://arxiv.org/pdf/1407.3289v2 | 2014-10-31T18:30:18Z | 2014-07-11T20:32:34Z | Altitude Training: Strong Bounds for Single-Layer Dropout | Dropout training, originally designed for deep neural networks, has been
successful on high-dimensional single-layer natural language tasks. This paper
proposes a theoretical explanation for this phenomenon: we show that, under a
generative Poisson topic model with long documents, dropout training improves
the exponent in the generalization bound for empirical risk minimization.
Dropout achieves this gain much like a marathon runner who practices at
altitude: once a classifier learns to perform reasonably well on training
examples that have been artificially corrupted by dropout, it will do very well
on the uncorrupted test set. We also show that, under similar conditions,
dropout preserves the Bayes decision boundary and should therefore induce
minimal bias in high dimensions.
| [
"Stefan Wager, William Fithian, Sida Wang, and Percy Liang",
"['Stefan Wager' 'William Fithian' 'Sida Wang' 'Percy Liang']"
]
|
cs.LG cs.IT math.IT math.ST stat.CO stat.TH | null | 1407.3334 | null | null | http://arxiv.org/pdf/1407.3334v1 | 2014-07-12T01:30:59Z | 2014-07-12T01:30:59Z | Offline to Online Conversion | We consider the problem of converting offline estimators into an online
predictor or estimator with small extra regret. Formally this is the problem of
merging a collection of probability measures over strings of length 1,2,3,...
into a single probability measure over infinite sequences. We describe various
approaches and their pros and cons on various examples. As a side-result we
give an elementary non-heuristic purely combinatorial derivation of Turing's
famous estimator. Our main technical contribution is to determine the
computational complexity of online estimators with good guarantees in general.
| [
"Marcus Hutter",
"['Marcus Hutter']"
]
|
cs.AI cs.LG | null | 1407.3341 | null | null | http://arxiv.org/pdf/1407.3341v1 | 2014-07-12T04:10:43Z | 2014-07-12T04:10:43Z | Extreme State Aggregation Beyond MDPs | We consider a Reinforcement Learning setup where an agent interacts with an
environment in observation-reward-action cycles without any (esp. MDP)
assumptions on the environment. State aggregation and more generally feature
reinforcement learning is concerned with mapping histories/raw-states to
reduced/aggregated states. The idea behind both is that the resulting reduced
process (approximately) forms a small stationary finite-state MDP, which can
then be efficiently solved or learnt. We considerably generalize existing
aggregation results by showing that even if the reduced process is not an MDP,
the (q-)value functions and (optimal) policies of an associated MDP with same
state-space size solve the original problem, as long as the solution can
approximately be represented as a function of the reduced states. This implies
an upper bound on the required state space size that holds uniformly for all RL
problems. It may also explain why RL algorithms designed for MDPs sometimes
perform well beyond MDPs.
| [
"Marcus Hutter",
"['Marcus Hutter']"
]
|
stat.ML cs.LG | null | 1407.3422 | null | null | http://arxiv.org/pdf/1407.3422v3 | 2016-02-29T00:29:23Z | 2014-07-12T23:57:07Z | A Spectral Algorithm for Inference in Hidden Semi-Markov Models | Hidden semi-Markov models (HSMMs) are latent variable models which allow
latent state persistence and can be viewed as a generalization of the popular
hidden Markov models (HMMs). In this paper, we introduce a novel spectral
algorithm to perform inference in HSMMs. Unlike expectation maximization (EM),
our approach correctly estimates the probability of a given observation sequence
based on a set of training sequences. Our approach is based on estimating
moments from the sample, whose number of dimensions depends only
logarithmically on the maximum length of the hidden state persistence.
Moreover, the algorithm requires only a few matrix inversions and is therefore
computationally efficient. Empirical evaluations on synthetic and real data
demonstrate the advantage of the algorithm over EM in terms of speed and
accuracy, especially for large datasets.
| [
"['Igor Melnyk' 'Arindam Banerjee']",
"Igor Melnyk and Arindam Banerjee"
]
|
cs.RO cs.AI cs.LG cs.NE q-bio.NC | 10.1038/nature14422 | 1407.3501 | null | null | http://arxiv.org/abs/1407.3501v4 | 2015-05-27T22:43:04Z | 2014-07-13T19:06:08Z | Robots that can adapt like animals | As robots leave the controlled environments of factories to autonomously
function in more complex, natural environments, they will have to respond to
the inevitable fact that they will become damaged. However, while animals can
quickly adapt to a wide variety of injuries, current robots cannot "think
outside the box" to find a compensatory behavior when damaged: they are limited
to their pre-specified self-sensing abilities, can diagnose only anticipated
failure modes, and require a pre-programmed contingency plan for every type of
potential damage, an impracticality for complex robots. Here we introduce an
intelligent trial and error algorithm that allows robots to adapt to damage in
less than two minutes, without requiring self-diagnosis or pre-specified
contingency plans. Before deployment, a robot exploits a novel algorithm to
create a detailed map of the space of high-performing behaviors: This map
represents the robot's intuitions about what behaviors it can perform and their
value. If the robot is damaged, it uses these intuitions to guide a
trial-and-error learning algorithm that conducts intelligent experiments to
rapidly discover a compensatory behavior that works in spite of the damage.
Experiments reveal successful adaptations for a legged robot injured in five
different ways, including damaged, broken, and missing legs, and for a robotic
arm with joints broken in 14 different ways. This new technique will enable
more robust, effective, autonomous robots, and suggests principles that animals
may use to adapt to injury.
| [
"Antoine Cully, Jeff Clune, Danesh Tarapore, Jean-Baptiste Mouret",
"['Antoine Cully' 'Jeff Clune' 'Danesh Tarapore' 'Jean-Baptiste Mouret']"
]
|
stat.ML cs.LG | null | 1407.3619 | null | null | http://arxiv.org/pdf/1407.3619v1 | 2014-07-14T12:14:08Z | 2014-07-14T12:14:08Z | On the Power of Adaptivity in Matrix Completion and Approximation | We consider the related tasks of matrix completion and matrix approximation
from missing data and propose adaptive sampling procedures for both problems.
We show that adaptive sampling allows one to eliminate standard incoherence
assumptions on the matrix row space that are necessary for passive sampling
procedures. For exact recovery of a low-rank matrix, our algorithm judiciously
selects a few columns to observe in full and, with few additional measurements,
projects the remaining columns onto their span. This algorithm exactly recovers
an $n \times n$ rank $r$ matrix using $O(nr\mu_0 \log^2(r))$ observations,
where $\mu_0$ is a coherence parameter on the column space of the matrix. In
addition to completely eliminating any row space assumptions that have pervaded
the literature, this algorithm enjoys a better sample complexity than any
existing matrix completion algorithm. To certify that this improvement is due
to adaptive sampling, we establish that row space coherence is necessary for
passive sampling algorithms to achieve non-trivial sample complexity bounds.
For constructing a low-rank approximation to a high-rank input matrix, we
propose a simple algorithm that thresholds the singular values of a zero-filled
version of the input matrix. The algorithm computes an approximation that is
nearly as good as the best rank-$r$ approximation using $O(nr\mu \log^2(n))$
samples, where $\mu$ is a slightly different coherence parameter on the matrix
columns. Again we eliminate assumptions on the row space.
| [
"['Akshay Krishnamurthy' 'Aarti Singh']",
"Akshay Krishnamurthy and Aarti Singh"
]
|
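The adaptive-sampling record above (1407.3619) describes a low-rank approximation algorithm that simply thresholds the singular values of a zero-filled version of the partially observed matrix. The sketch below illustrates that idea; the rescaling by the sampling rate, the fixed rank-$r$ truncation in place of a data-driven threshold, and the synthetic low-rank target are all simplifying assumptions rather than the paper's exact procedure.

```python
# Zero-fill the unobserved entries, take an SVD, and keep only the top-r
# singular values. Illustrative sketch of the thresholding idea above.
import numpy as np

rng = np.random.default_rng(0)
n, r = 100, 5
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank-r target

p = 0.3                                                          # sampling probability (assumption)
mask = rng.random((n, n)) < p
M_zero_filled = np.where(mask, M, 0.0) / p                       # rescale so the fill is unbiased in expectation

U, s, Vt = np.linalg.svd(M_zero_filled, full_matrices=False)
s[r:] = 0.0                                                      # keep only the top-r singular values
M_hat = (U * s) @ Vt

# Compare against the best rank-r approximation computed from the full matrix.
Uf, sf, Vtf = np.linalg.svd(M, full_matrices=False)
sf[r:] = 0.0
best = (Uf * sf) @ Vtf
print(np.linalg.norm(M - M_hat) / np.linalg.norm(M),
      np.linalg.norm(M - best) / np.linalg.norm(M))
```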
cs.LG cs.DB | null | 1407.3685 | null | null | http://arxiv.org/pdf/1407.3685v1 | 2014-07-14T15:01:57Z | 2014-07-14T15:01:57Z | Finding Motif Sets in Time Series | Time-series motifs are representative subsequences that occur frequently in a
time series; a motif set is the set of subsequences deemed to be instances of a
given motif. We focus on finding motif sets. Our motivation is to detect motif
sets in household electricity-usage profiles, representing repeated patterns of
household usage.
We propose three algorithms for finding motif sets. Two are greedy algorithms
based on pairwise comparison, and the third uses a heuristic measure of set
quality to find the motif set directly. We compare these algorithms on
simulated datasets and on electricity-usage data. We show that Scan MK, the
simplest way of using the best-matching pair to find motif sets, is less
accurate on our synthetic data than Set Finder and Cluster MK, although the
latter is very sensitive to parameter settings. We qualitatively analyse the
outputs for the electricity-usage data and demonstrate that both Scan MK and
Set Finder can discover useful motif sets in such data.
| [
"['Anthony Bagnall' 'Jon Hills' 'Jason Lines']",
"Anthony Bagnall, Jon Hills and Jason Lines"
]
|
quant-ph cs.LG | 10.1140/epjst/e2015-02349-9 | 1407.3897 | null | null | http://arxiv.org/abs/1407.3897v2 | 2014-10-02T19:46:35Z | 2014-07-15T07:22:13Z | Bayesian Network Structure Learning Using Quantum Annealing | We introduce a method for the problem of learning the structure of a Bayesian
network using the quantum adiabatic algorithm. We do so by introducing an
efficient reformulation of a standard posterior-probability scoring function on
graphs as a pseudo-Boolean function, which is equivalent to a system of 2-body
Ising spins, as well as suitable penalty terms for enforcing the constraints
necessary for the reformulation; our proposed method requires $\mathcal O(n^2)$
qubits for $n$ Bayesian network variables. Furthermore, we prove lower bounds
on the necessary weighting of these penalty terms. The logical structure
resulting from the mapping has the appealing property that it is
instance-independent for a given number of Bayesian network variables, as well
as being independent of the number of data cases.
| [
"Bryan O'Gorman, Alejandro Perdomo-Ortiz, Ryan Babbush, Alan\n Aspuru-Guzik, and Vadim Smelyanskiy",
"[\"Bryan O'Gorman\" 'Alejandro Perdomo-Ortiz' 'Ryan Babbush'\n 'Alan Aspuru-Guzik' 'Vadim Smelyanskiy']"
]
|
math.ST cs.LG stat.ME stat.TH | null | 1407.3939 | null | null | http://arxiv.org/pdf/1407.3939v1 | 2014-07-15T11:12:54Z | 2014-07-15T11:12:54Z | Analysis of purely random forests bias | Random forests are a very effective and commonly used statistical method, but
their full theoretical analysis is still an open problem. As a first step,
simplified models such as purely random forests have been introduced, in order
to shed light on the good performance of random forests. In this paper, we
study the approximation error (the bias) of some purely random forest models in
a regression framework, focusing in particular on the influence of the number
of trees in the forest. Under some regularity assumptions on the regression
function, we show that the bias of an infinite forest decreases at a faster
rate (with respect to the size of each tree) than a single tree. As a
consequence, infinite forests attain a strictly better risk rate (with respect
to the sample size) than single trees. Furthermore, our results allow us to
derive a minimum number of trees sufficient to reach the same rate as an infinite
forest. As a by-product of our analysis, we also show a link between the bias
of purely random forests and the bias of some kernel estimators.
| [
"['Sylvain Arlot' 'Robin Genuer']",
"Sylvain Arlot (DI-ENS, INRIA Paris - Rocquencourt), Robin Genuer\n (ISPED, INRIA Bordeaux - Sud-Ouest)"
]
|
cs.LG cs.DS stat.ML | null | 1407.4070 | null | null | http://arxiv.org/pdf/1407.4070v1 | 2014-07-15T17:47:44Z | 2014-07-15T17:47:44Z | Fast matrix completion without the condition number | We give the first algorithm for Matrix Completion whose running time and
sample complexity is polynomial in the rank of the unknown target matrix,
linear in the dimension of the matrix, and logarithmic in the condition number
of the matrix. To the best of our knowledge, all previous algorithms either
incurred a quadratic dependence on the condition number of the unknown matrix
or a quadratic dependence on the dimension of the matrix in the running time.
Our algorithm is based on a novel extension of Alternating Minimization which
we show has theoretical guarantees under standard assumptions even in the
presence of noise.
| [
"['Moritz Hardt' 'Mary Wootters']",
"Moritz Hardt and Mary Wootters"
]
|
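The matrix-completion record above (1407.4070) builds on Alternating Minimization. The sketch below is a vanilla alternating least-squares loop on the observed entries, shown only to make the base technique concrete; the paper's extension (including its initialization and noise handling) is not reproduced, and the ridge term, rank, and iteration count are assumptions.

```python
# Plain alternating minimization (alternating least squares) for matrix
# completion over observed entries. Sketch of the base technique only.
import numpy as np

rng = np.random.default_rng(0)
n, r, p = 60, 3, 0.4
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < p                       # observed entries

U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))
lam = 1e-3                                          # small ridge term for stability (assumption)

for _ in range(30):
    # Fix V and solve a ridge least-squares problem for each row of U, then vice versa.
    for i in range(n):
        cols = mask[i]
        Vi = V[cols]
        U[i] = np.linalg.solve(Vi.T @ Vi + lam * np.eye(r), Vi.T @ M[i, cols])
    for j in range(n):
        rows = mask[:, j]
        Uj = U[rows]
        V[j] = np.linalg.solve(Uj.T @ Uj + lam * np.eye(r), Uj.T @ M[rows, j])

M_hat = U @ V.T
print("relative error:", np.linalg.norm(M - M_hat) / np.linalg.norm(M))
```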
cs.PL cs.LG | null | 1407.4075 | null | null | http://arxiv.org/pdf/1407.4075v1 | 2014-07-14T17:55:07Z | 2014-07-14T17:55:07Z | Finding representative sets of optimizations for adaptive
multiversioning applications | Iterative compilation is a widely adopted technique to optimize programs for
different constraints such as performance, code size and power consumption in
rapidly evolving hardware and software environments. However, in the case of
statically compiled programs, it is often restricted to optimizations for a
specific dataset and may not be applicable to applications that exhibit
different run-time behavior across program phases, multiple datasets or when
executed in heterogeneous, reconfigurable and virtual environments. Several
frameworks have been recently introduced to tackle these problems and enable
run-time optimization and adaptation for statically compiled programs based on
static function multiversioning and monitoring of online program behavior. In
this article, we present a novel technique to select a minimal set of
representative optimization variants (function versions) for such frameworks
while avoiding performance loss across available datasets and code-size
explosion. We developed a novel mapping mechanism using popular decision tree
or rule-induction-based machine learning techniques to rapidly select the best
code versions at run-time based on dataset features and minimize selection overhead.
These techniques enable creation of self-tuning static binaries or libraries
adaptable to changing behavior and environments at run-time using staged
compilation that do not require complex recompilation frameworks while
effectively outperforming traditional single-version non-adaptable code.
| [
"Lianjie Luo and Yang Chen and Chengyong Wu and Shun Long and Grigori\n Fursin",
"['Lianjie Luo' 'Yang Chen' 'Chengyong Wu' 'Shun Long' 'Grigori Fursin']"
]
|
stat.CO cs.DS cs.IR cs.LG stat.ML | null | 1407.4416 | null | null | http://arxiv.org/pdf/1407.4416v1 | 2014-07-16T18:27:02Z | 2014-07-16T18:27:02Z | In Defense of MinHash Over SimHash | MinHash and SimHash are the two widely adopted Locality Sensitive Hashing
(LSH) algorithms for large-scale data processing applications. Deciding which
LSH to use for a particular problem at hand is an important question, which has
no clear answer in the existing literature. In this study, we provide a
theoretical answer (validated by experiments) that MinHash virtually always
outperforms SimHash when the data are binary, as is common in practice, for
example in search.
The collision probability of MinHash is a function of resemblance similarity
($\mathcal{R}$), while the collision probability of SimHash is a function of
cosine similarity ($\mathcal{S}$). To provide a common basis for comparison, we
evaluate retrieval results in terms of $\mathcal{S}$ for both MinHash and
SimHash. This evaluation is valid as we can prove that MinHash is a valid LSH
with respect to $\mathcal{S}$, by using a general inequality $\mathcal{S}^2\leq
\mathcal{R}\leq \frac{\mathcal{S}}{2-\mathcal{S}}$. Our worst case analysis can
show that MinHash significantly outperforms SimHash in the high-similarity region.
Interestingly, our intensive experiments reveal that MinHash is also
substantially better than SimHash even in datasets where most of the data
points are not too similar to each other. This is partly because, in practical
data, often $\mathcal{R}\geq \frac{\mathcal{S}}{z-\mathcal{S}}$ holds where $z$
is only slightly larger than 2 (e.g., $z\leq 2.1$). Our restricted worst case
analysis by assuming $\frac{\mathcal{S}}{z-\mathcal{S}}\leq \mathcal{R}\leq
\frac{\mathcal{S}}{2-\mathcal{S}}$ shows that MinHash indeed significantly
outperforms SimHash even in the low-similarity region.
We believe the results in this paper will provide valuable guidelines for
search in practice, especially when the data are sparse.
| [
"['Anshumali Shrivastava' 'Ping Li']",
"Anshumali Shrivastava and Ping Li"
]
|
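The MinHash/SimHash record above (1407.4416) hinges on the inequality $\mathcal{S}^2 \leq \mathcal{R} \leq \frac{\mathcal{S}}{2-\mathcal{S}}$ between resemblance and cosine similarity on binary data. The snippet below only checks that inequality numerically on random binary vectors; the MinHash and SimHash schemes themselves are not implemented, and the vector length and density ranges are arbitrary.

```python
# Numerical check of S^2 <= R <= S / (2 - S) for binary vectors, where
# R is the resemblance (Jaccard) similarity and S the cosine similarity.
import numpy as np

rng = np.random.default_rng(0)

for _ in range(10000):
    x = rng.random(100) < rng.uniform(0.05, 0.5)
    y = rng.random(100) < rng.uniform(0.05, 0.5)
    a = np.sum(x & y)                      # size of the intersection
    f1, f2 = np.sum(x), np.sum(y)
    if a == 0:
        continue
    R = a / (f1 + f2 - a)                  # resemblance; MinHash collision probability
    S = a / np.sqrt(f1 * f2)               # cosine similarity for binary vectors
    assert S**2 <= R + 1e-12 and R <= S / (2 - S) + 1e-12

print("inequality held on all sampled pairs")
```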
cs.CV cs.IT cs.LG cs.NE math.IT stat.ML | null | 1407.4420 | null | null | http://arxiv.org/pdf/1407.4420v2 | 2016-03-27T20:44:42Z | 2014-07-16T18:46:41Z | Kernel Nonnegative Matrix Factorization Without the Curse of the
Pre-image - Application to Unmixing Hyperspectral Images | The nonnegative matrix factorization (NMF) is widely used in signal and image
processing, including bio-informatics, blind source separation and
hyperspectral image analysis in remote sensing. A great challenge arises when
dealing with a nonlinear formulation of the NMF. Within the framework of kernel
machines, the models suggested in the literature do not allow the
representation of the factorization matrices, which is a fallout of the curse
of the pre-image. In this paper, we propose a novel kernel-based model for the
NMF that does not suffer from the pre-image problem, by investigating the
estimation of the factorization matrices directly in the input space. For
different kernel functions, we describe two schemes for iterative algorithms:
an additive update rule based on a gradient descent scheme and a multiplicative
update rule in the same spirit as in the Lee and Seung algorithm. Within the
proposed framework, we develop several extensions to incorporate constraints,
including sparseness, smoothness, and spatial regularization with a
total-variation-like penalty. The effectiveness of the proposed method is
demonstrated with the problem of unmixing hyperspectral images, using
well-known real images and results with state-of-the-art techniques.
| [
"Fei Zhu, Paul Honeine, Maya Kallas",
"['Fei Zhu' 'Paul Honeine' 'Maya Kallas']"
]
|
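The kernel NMF record above (1407.4420) describes a multiplicative update rule "in the same spirit as in the Lee and Seung algorithm". For context, the sketch below shows the classic Lee–Seung multiplicative updates for plain linear NMF under the Frobenius objective; the paper's kernelized, pre-image-free updates and its sparseness/smoothness extensions are not reproduced here.

```python
# Classic Lee-Seung multiplicative updates for linear NMF (Frobenius loss),
# shown as background for the abstract above; not the paper's kernel variant.
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((40, 30))          # nonnegative data matrix
r = 5
W = rng.random((40, r)) + 1e-3
H = rng.random((r, 30)) + 1e-3
eps = 1e-9                        # avoids division by zero

for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

print("reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```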
cs.LG | null | 1407.4422 | null | null | http://arxiv.org/pdf/1407.4422v1 | 2014-07-16T18:50:40Z | 2014-07-16T18:50:40Z | Subspace Restricted Boltzmann Machine | The subspace Restricted Boltzmann Machine (subspaceRBM) is a third-order
Boltzmann machine where multiplicative interactions are between one visible and
two hidden units. There are two kinds of hidden units, namely, gate units and
subspace units. The subspace units reflect variations of a pattern in data and
the gate unit is responsible for activating the subspace units. Additionally,
the gate unit can be seen as a pooling feature. We evaluate the behavior of
subspaceRBM through experiments with MNIST digit recognition task, measuring
reconstruction error and classification error.
| [
"['Jakub M. Tomczak' 'Adam Gonczarek']",
"Jakub M. Tomczak and Adam Gonczarek"
]
|
stat.ML cs.LG stat.AP | null | 1407.4430 | null | null | http://arxiv.org/pdf/1407.4430v1 | 2014-07-16T19:05:55Z | 2014-07-16T19:05:55Z | Sequential Logistic Principal Component Analysis (SLPCA): Dimensional
Reduction in Streaming Multivariate Binary-State System | Sequential or online dimensional reduction is of interest due to the
explosion of streaming-data-based applications and the requirement of adaptive
statistical modeling in many emerging fields, such as the modeling of energy
end-use profiles. Principal Component Analysis (PCA) is the classical way of
dimensional reduction. However, traditional Singular Value Decomposition (SVD)
based PCA fails to model data which largely deviates from Gaussian
distribution. The Bregman Divergence was recently introduced to achieve a
generalized PCA framework. If the random variable under dimensional reduction
follows Bernoulli distribution, which occurs in many emerging fields, the
generalized PCA is called Logistic PCA (LPCA). In this paper, we extend the
batch LPCA to a sequential version (i.e. SLPCA), based on the sequential convex
optimization theory. The convergence property of this algorithm is discussed
compared to the batch version of LPCA (i.e. BLPCA), as well as its performance
in reducing the dimension for multivariate binary-state systems. Its
application in building energy end-use profile modeling is also investigated.
| [
"Zhaoyi Kang and Costas J. Spanos",
"['Zhaoyi Kang' 'Costas J. Spanos']"
]
|
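The SLPCA record above (1407.4430) builds on logistic PCA: binary data modeled through a low-rank matrix of natural parameters under a Bernoulli (logistic) likelihood. The sketch below fits that batch objective with plain gradient descent so the loss being generalized is explicit; the paper's sequential convex-optimization algorithm (SLPCA) is not reproduced, and the rank, step size, and iteration count are illustrative assumptions.

```python
# Gradient-descent sketch of batch logistic PCA: binary X, natural parameters
# Theta = U V^T, Bernoulli negative log-likelihood. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 200, 30, 3

# Binary data generated from a planted low-rank logit structure.
Theta_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, d))
X = (rng.random((n, d)) < 1.0 / (1.0 + np.exp(-Theta_true))).astype(float)

U = 0.01 * rng.standard_normal((n, r))
V = 0.01 * rng.standard_normal((d, r))
lr = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    Theta = U @ V.T
    G = sigmoid(Theta) - X            # gradient of the negative log-likelihood w.r.t. Theta
    U -= lr * (G @ V) / n
    V -= lr * (G.T @ U) / n

nll = np.mean(np.logaddexp(0.0, U @ V.T) - X * (U @ V.T))
print("average negative log-likelihood:", nll)
```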
stat.ML cs.LG | null | 1407.4443 | null | null | http://arxiv.org/pdf/1407.4443v2 | 2016-11-14T12:38:45Z | 2014-07-16T19:44:15Z | On the Complexity of Best Arm Identification in Multi-Armed Bandit
Models | The stochastic multi-armed bandit model is a simple abstraction that has
proven useful in many different contexts in statistics and machine learning.
Whereas the achievable limit in terms of regret minimization is now well known,
our aim is to contribute to a better understanding of the performance in terms
of identifying the m best arms. We introduce generic notions of complexity for
the two dominant frameworks considered in the literature: fixed-budget and
fixed-confidence settings. In the fixed-confidence setting, we provide the
first known distribution-dependent lower bound on the complexity that involves
information-theoretic quantities and holds when m is larger than 1 under
general assumptions. In the specific case of two-armed bandits, we derive
refined lower bounds in both the fixed-confidence and fixed-budget settings,
along with matching algorithms for Gaussian and Bernoulli bandit models. These
results show in particular that the complexity of the fixed-budget setting may
be smaller than the complexity of the fixed-confidence setting, contradicting
the familiar behavior observed when testing fully specified alternatives. In
addition, we also provide improved sequential stopping rules that have
guaranteed error probabilities and shorter average running times. The proofs
rely on two technical results that are of independent interest : a deviation
lemma for self-normalized sums (Lemma 19) and a novel change of measure
inequality for bandit models (Lemma 1).
| [
"['Emilie Kaufmann' 'Olivier Cappé' 'Aurélien Garivier']",
"Emilie Kaufmann (SEQUEL, LTCI), Olivier Capp\\'e (LTCI), Aur\\'elien\n Garivier (IMT)"
]
|
cs.IT cs.LG math.IT math.OC math.ST stat.ML stat.TH | null | 1407.4446 | null | null | http://arxiv.org/pdf/1407.4446v3 | 2015-09-23T03:32:33Z | 2014-07-16T19:55:51Z | Probabilistic Group Testing under Sum Observations: A Parallelizable
2-Approximation for Entropy Loss | We consider the problem of group testing with sum observations and noiseless
answers, in which we aim to locate multiple objects by querying the number of
objects in each of a sequence of chosen sets. We study a probabilistic setting
with entropy loss, in which we assume a joint Bayesian prior density on the
locations of the objects and seek to choose the sets queried to minimize the
expected entropy of the Bayesian posterior distribution after a fixed number of
questions. We present a new non-adaptive policy, called the dyadic policy,
show that it is optimal among non-adaptive policies, and show that it is
within a factor of two of
optimal among adaptive policies. This policy is quick to compute, its
nonadaptive nature makes it easy to parallelize, and our bounds show it
performs well even when compared with adaptive policies. We also study an
adaptive greedy policy, which maximizes the one-step expected reduction in
entropy, and show that it performs at least as well as the dyadic policy,
offering greater query efficiency but reduced parallelism. Numerical
experiments demonstrate that both procedures outperform a divide-and-conquer
benchmark policy from the literature, called sequential bifurcation, and show
how these procedures may be applied in a stylized computer vision problem.
| [
"['Weidong Han' 'Purnima Rajan' 'Peter I. Frazier' 'Bruno M. Jedynak']",
"Weidong Han, Purnima Rajan, Peter I. Frazier, Bruno M. Jedynak"
]
|
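The group-testing entry above centres on the dyadic policy, a non-adaptive design whose queries can be issued in parallel. The snippet below sketches one common dyadic-style construction in which each query counts the objects whose location has a particular bit set; the bit-level queries, domain size, and hidden locations are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(1)

B = 6                                   # domain of size 2**B (assumed)
num_objects = 3
locations = rng.choice(2 ** B, size=num_objects, replace=False)

# Bit-level "dyadic" queries: query b counts the objects whose location has bit b set.
# All B queries are fixed in advance, so they can be asked in parallel (non-adaptive).
answers = [int(sum((int(loc) >> b) & 1 for loc in locations)) for b in range(B)]

print("hidden locations:", sorted(int(loc) for loc in locations))
print("sum observation for each bit-query:", answers)
```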
cs.LG | null | 1407.4668 | null | null | http://arxiv.org/pdf/1407.4668v1 | 2014-07-17T13:51:55Z | 2014-07-17T13:51:55Z | A feature construction framework based on outlier detection and
discriminative pattern mining | No matter the expressive power and sophistication of supervised learning
algorithms, their effectiveness is restricted by the features describing the
data. This is not a new insight in ML and many methods for feature selection,
transformation, and construction have been developed. But while general
techniques for feature selection and transformation, i.e. dimensionality
reduction, continue to be developed, work on feature construction, i.e.
enriching the data, is by now mainly the domain of image recognition
(particularly character recognition) and NLP.
In this work, we propose a new general framework for feature construction.
The need for feature construction in a data set is indicated by class outliers
and discriminative pattern mining is used to derive features on their
k-neighborhoods. We instantiate the framework with LOF and C4.5-Rules, and
evaluate the usefulness of the derived features on a diverse collection of UCI
data sets. The derived features are more often useful than ones derived by
DC-Fringe, and our approach is much less likely to overfit. But while a weak
learner, Naive Bayes, benefits strongly from the feature construction, the
effect is less pronounced for C4.5, and almost vanishes for an SVM learner.
Keywords: feature construction, classification, outlier detection
| [
"['Albrecht Zimmermann']",
"Albrecht Zimmermann"
]
|
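The feature-construction entry above pairs an outlier detector (LOF) with rule learning on the outliers' k-neighborhoods. A rough sketch of that kind of pipeline with scikit-learn is shown below; LocalOutlierFactor and a shallow decision tree stand in for the paper's LOF and C4.5-Rules components, and the dataset, neighborhood size, and the way rule predictions are appended as features are assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.neighbors import LocalOutlierFactor, NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
k = 25  # neighborhood size (assumed)

# Step 1: flag "class outliers" -- points that look unusual within their own class.
outliers = []
for c in np.unique(y):
    idx = np.where(y == c)[0]
    flags = LocalOutlierFactor(n_neighbors=k).fit_predict(X[idx])
    outliers.extend(idx[flags == -1])

# Step 2: for each class outlier, learn a small discriminative rule on its
# k-neighborhood and append the rule's prediction as a new binary feature.
nn = NearestNeighbors(n_neighbors=k).fit(X)
new_cols = []
for i in outliers:
    neigh = nn.kneighbors(X[i:i + 1])[1].ravel()
    if len(np.unique(y[neigh])) < 2:
        continue  # nothing to discriminate in this neighborhood
    rule = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X[neigh], y[neigh])
    new_cols.append(rule.predict(X))

X_enriched = np.column_stack([X] + new_cols) if new_cols else X
print("features before/after construction:", X.shape[1], "->", X_enriched.shape[1])
```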
stat.ME cs.LG stat.ML | null | 1407.4729 | null | null | http://arxiv.org/pdf/1407.4729v3 | 2018-03-27T19:02:45Z | 2014-07-17T16:27:36Z | Sparse Partially Linear Additive Models | The generalized partially linear additive model (GPLAM) is a flexible and
interpretable approach to building predictive models. It combines features in
an additive manner, allowing each to have either a linear or nonlinear effect
on the response. However, the choice of which features to treat as linear or
nonlinear is typically assumed known. Thus, to make a GPLAM a viable approach
in situations in which little is known $a~priori$ about the features, one must
overcome two primary model selection challenges: deciding which features to
include in the model and determining which of these features to treat
nonlinearly. We introduce the sparse partially linear additive model (SPLAM),
which combines model fitting and $both$ of these model selection challenges
into a single convex optimization problem. SPLAM provides a bridge between the
lasso and sparse additive models. Through a statistical oracle inequality and
thorough simulation, we demonstrate that SPLAM can outperform other methods
across a broad spectrum of statistical regimes, including the high-dimensional
($p\gg N$) setting. We develop efficient algorithms that are applied to real
data sets with half a million samples and over 45,000 features with excellent
predictive performance.
| [
"Yin Lou, Jacob Bien, Rich Caruana, Johannes Gehrke",
"['Yin Lou' 'Jacob Bien' 'Rich Caruana' 'Johannes Gehrke']"
]
|
cs.CV cs.LG | null | 1407.4739 | null | null | http://arxiv.org/pdf/1407.4739v1 | 2014-07-17T17:10:06Z | 2014-07-17T17:10:06Z | An landcover fuzzy logic classification by maximumlikelihood | Remote sensing is nowadays one of the most widely used technologies in many
sectors, and it works with different kinds of imagery, such as multispectral,
hyperspectral, or ultraspectral images. Image classification is one of the
significant methods for analysing remote sensing data. In this work we combine
maximum likelihood classification with fuzzy logic, experimenting with spatial
and spectral-texture fuzzy methods and their sub-methods for image
classification.
| [
"['T. Sarath' 'G. Nagalakshmi']",
"T.Sarath, G.Nagalakshmi"
]
|
cs.CV cs.LG cs.NE | null | 1407.4764 | null | null | http://arxiv.org/pdf/1407.4764v3 | 2014-11-17T12:10:23Z | 2014-07-17T18:29:38Z | Efficient On-the-fly Category Retrieval using ConvNets and GPUs | We investigate the gains in precision and speed that can be obtained by
using Convolutional Networks (ConvNets) for on-the-fly retrieval - where
classifiers are learnt at run time for a textual query from downloaded images,
and used to rank large image or video datasets.
We make three contributions: (i) we present an evaluation of state-of-the-art
image representations for object category retrieval over standard benchmark
datasets containing 1M+ images; (ii) we show that ConvNets can be used to
obtain features which are incredibly performant, and yet much lower dimensional
than previous state-of-the-art image representations, and that their
dimensionality can be reduced further without loss in performance by
compression using product quantization or binarization. Consequently, features
with the state-of-the-art performance on large-scale datasets of millions of
images can fit in the memory of even a commodity GPU card; (iii) we show that
an SVM classifier can be learnt within a ConvNet framework on a GPU in parallel
with downloading the new training images, allowing for a continuous refinement
of the model as more images become available, and simultaneous training and
ranking. The outcome is an on-the-fly system that significantly outperforms its
predecessors in terms of: precision of retrieval, memory requirements, and
speed, facilitating accurate on-the-fly learning and ranking in under a second
on a single GPU.
| [
"['Ken Chatfield' 'Karen Simonyan' 'Andrew Zisserman']",
"Ken Chatfield, Karen Simonyan and Andrew Zisserman"
]
|
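The on-the-fly retrieval entry above combines pre-computed ConvNet features, aggressive compression, and a linear SVM trained at query time. The sketch below illustrates only the compression-plus-ranking step on random vectors; the sign-based binarization, feature dimensionality, and use of scikit-learn's LinearSVC are assumptions standing in for the paper's GPU pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-ins for ConvNet descriptors (assumed dimensions; real features would
# come from a trained network).
d = 128
pos = rng.normal(loc=0.5, size=(40, d))    # downloaded images for the textual query
neg = rng.normal(loc=0.0, size=(200, d))   # pool of negative images
database = rng.normal(size=(5000, d))      # large corpus to be ranked

def binarize(F):
    # Compress descriptors to +/-1 codes (one cheap form of binarization).
    return np.sign(F).astype(np.float32)

X_train = binarize(np.vstack([pos, neg]))
y_train = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])

# Linear SVM learnt on the fly for this query.
clf = LinearSVC(C=1.0).fit(X_train, y_train)

# Rank the whole database by the classifier score.
scores = clf.decision_function(binarize(database))
top10 = np.argsort(-scores)[:10]
print("top-10 database indices for this query:", top10)
```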
cs.IR cs.AI cs.LG | null | 1407.4832 | null | null | http://arxiv.org/pdf/1407.4832v1 | 2014-07-16T12:07:36Z | 2014-07-16T12:07:36Z | Collaborative Filtering Ensemble for Personalized Name Recommendation | Out of thousands of names to choose from, picking the right one for your
child is a daunting task. In this work, our objective is to help parents make
an informed decision while choosing a name for their baby. We follow a
recommender system approach and combine, in an ensemble, the individual
rankings produced by simple collaborative filtering algorithms in order to
produce a personalized list of names that meets the individual parents' taste.
Our experiments were conducted using real-world data collected from the query
logs of 'nameling' (nameling.net), an online portal for searching and exploring
names, which corresponds to the dataset released in the context of the ECML
PKDD Discovery Challenge 2013. Our approach is intuitive, easy to implement, and
features fast training and prediction steps.
| [
"Bernat Coma-Puig and Ernesto Diaz-Aviles and Wolfgang Nejdl",
"['Bernat Coma-Puig' 'Ernesto Diaz-Aviles' 'Wolfgang Nejdl']"
]
|
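The name-recommendation entry above ensembles the rankings produced by several simple collaborative-filtering algorithms. A tiny sketch of one way to combine ranked lists (Borda-style rank aggregation) is given below; the toy rankings and the equal weighting are assumptions for illustration, not the ensemble actually used in the paper.

```python
from collections import defaultdict

# Ranked name lists from three hypothetical base recommenders for one user
# (most preferred first); the lists themselves are invented for the example.
rankings = [
    ["emma", "lina", "anna", "mia"],
    ["lina", "emma", "sofia", "anna"],
    ["anna", "emma", "lina", "nora"],
]

# Borda-style aggregation: a name ranked r-th in a list of length L gets L - r points.
scores = defaultdict(float)
for ranking in rankings:
    L = len(ranking)
    for r, name in enumerate(ranking):
        scores[name] += L - r

ensemble = sorted(scores, key=scores.get, reverse=True)
print("aggregated recommendation:", ensemble)
```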
cs.CV cs.LG cs.NE | null | 1407.4979 | null | null | http://arxiv.org/pdf/1407.4979v1 | 2014-07-18T13:07:16Z | 2014-07-18T13:07:16Z | Deep Metric Learning for Practical Person Re-Identification | Various hand-crafted features and metric learning methods prevail in the
field of person re-identification. Compared to these methods, this paper
proposes a more general way that can learn a similarity metric from image
pixels directly. By using a "siamese" deep neural network, the proposed method
can jointly learn the color feature, texture feature and metric in a unified
framework. The network has a symmetric structure with two sub-networks which
are connected by a cosine function. To deal with the large variations of
person images,
binomial deviance is used to evaluate the cost between similarities and labels,
which is proved to be robust to outliers.
Compared to existing research, a more practical setting is studied in the
experiments: training and testing on different datasets (cross-dataset
person re-identification). Both in "intra dataset" and "cross dataset"
settings, the superiority of the proposed method is illustrated on VIPeR and
PRID.
| [
"['Dong Yi' 'Zhen Lei' 'Stan Z. Li']",
"Dong Yi and Zhen Lei and Stan Z. Li"
]
|
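The person re-identification entry above trains a siamese network whose cosine similarities are penalized with a binomial deviance cost. The snippet below evaluates one common form of that cost on a batch of similarities; the margin and scaling hyperparameters, and the similarity values themselves, are assumptions for illustration.

```python
import numpy as np

def binomial_deviance(similarities, labels, alpha=2.0, beta=0.5, neg_cost=1.0):
    """One common form of the binomial deviance cost.

    similarities : cosine similarities for image pairs, in [-1, 1]
    labels       : +1 for matching pairs, -1 for non-matching pairs
    alpha, beta  : scaling/translation hyperparameters (assumed values)
    neg_cost     : extra weight for negative pairs
    """
    weights = np.where(labels > 0, 1.0, neg_cost)
    return np.mean(weights * np.log1p(np.exp(-alpha * (similarities - beta) * labels)))

sims = np.array([0.9, 0.7, 0.2, -0.3])     # toy siamese-network outputs
labels = np.array([1, 1, -1, -1])
print(f"binomial deviance on the toy batch: {binomial_deviance(sims, labels):.3f}")
```

The smooth log1p(exp(.)) form keeps the penalty bounded in gradient, which is one reason it is reported to be robust to outlying pairs.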
cs.LG cs.CG | 10.1145/3105576 | 1407.5093 | null | null | http://arxiv.org/abs/1407.5093v1 | 2014-07-18T06:42:35Z | 2014-07-18T06:42:35Z | Classification of Passes in Football Matches using Spatiotemporal Data | A knowledgeable observer of a game of football (soccer) can make a subjective
evaluation of the quality of passes made between players during the game. We
investigate the problem of producing an automated system to make the same
evaluation of passes. We present a model that constructs numerical predictor
variables from spatiotemporal match data using feature functions based on
methods from computational geometry, and then learns a classification function
from labelled examples of the predictor variables. Furthermore, the learned
classifiers are analysed to determine if there is a relationship between the
complexity of the algorithm that computed the predictor variable and the
importance of the variable to the classifier. Experimental results show that we
are able to produce a classifier with 85.8% accuracy on classifying passes as
Good, OK or Bad, and that the predictor variables computed using complex
methods from computational geometry are of moderate importance to the learned
classifiers. Finally, we show that the inter-rater agreement on pass
classification between the machine classifier and a human observer is of
similar magnitude to the agreement between two observers.
| [
"['Michael Horton' 'Joachim Gudmundsson' 'Sanjay Chawla' 'Joël Estephan']",
"Michael Horton, Joachim Gudmundsson, Sanjay Chawla, Jo\\\"el Estephan"
]
|
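The pass-classification entry above builds predictor variables from spatiotemporal match data with computational-geometry feature functions and then learns a classifier from labelled passes. The sketch below shows one hypothetical predictor of that flavour — the distance from the nearest opponent to the passing lane — fed to a generic classifier; the feature, player coordinates, and labels are invented for illustration and are not the paper's actual variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dist_point_to_segment(p, a, b):
    # Shortest distance from point p to the segment a-b (the passing lane).
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def lane_pressure(passer, receiver, opponents):
    # Hypothetical predictor: how close the nearest opponent gets to the pass line.
    return min(dist_point_to_segment(o, passer, receiver) for o in opponents)

rng = np.random.default_rng(0)
features, labels = [], []
for _ in range(300):
    passer, receiver = rng.uniform(0, 100, 2), rng.uniform(0, 100, 2)
    opponents = rng.uniform(0, 100, size=(10, 2))
    pressure = lane_pressure(passer, receiver, opponents)
    length = np.linalg.norm(receiver - passer)
    features.append([pressure, length])
    # Toy labelling rule standing in for human annotations of Good / OK / Bad.
    labels.append("Good" if pressure > 8 else ("OK" if pressure > 3 else "Bad"))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, labels)
print("importances [lane pressure, pass length]:", clf.feature_importances_)
```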
cs.LG stat.ML | null | 1407.5155 | null | null | http://arxiv.org/pdf/1407.5155v4 | 2015-08-22T12:46:49Z | 2014-07-19T06:50:19Z | Sparse and spurious: dictionary learning with noise and outliers | A popular approach within the signal processing and machine learning
communities consists in modelling signals as sparse linear combinations of
atoms selected from a learned dictionary. While this paradigm has led to
numerous empirical successes in various fields ranging from image to audio
processing, there have only been a few theoretical arguments supporting this
evidence. In particular, sparse coding, or sparse dictionary learning, relies
on a non-convex procedure whose local minima have not been fully analyzed yet.
In this paper, we consider a probabilistic model of sparse signals, and show
that, with high probability, sparse coding admits a local minimum around the
reference dictionary generating the signals. Our study takes into account the
case of over-complete dictionaries, noisy signals, and possible outliers, thus
extending previous work limited to noiseless settings and/or under-complete
dictionaries. The analysis we conduct is non-asymptotic and makes it possible
to understand how the key quantities of the problem, such as the coherence or
the level of noise, can scale with respect to the dimension of the signals, the
number of atoms, the sparsity and the number of observations.
| [
"['Rémi Gribonval' 'Rodolphe Jenatton' 'Francis Bach']",
"R\\'emi Gribonval (PANAMA), Rodolphe Jenatton (CMAP), Francis Bach\n (SIERRA, LIENS)"
]
|
stat.ML cs.LG math.ST stat.TH | null | 1407.5158 | null | null | http://arxiv.org/pdf/1407.5158v2 | 2014-12-04T11:19:07Z | 2014-07-19T07:04:08Z | Tight convex relaxations for sparse matrix factorization | Based on a new atomic norm, we propose a new convex formulation for sparse
matrix factorization problems in which the number of nonzero elements of the
factors is assumed fixed and known. The formulation counts sparse PCA with
multiple factors, subspace clustering and low-rank sparse bilinear regression
as potential applications. We compute slow rates and an upper bound on the
statistical dimension of the suggested norm for rank 1 matrices, showing that
its statistical dimension is an order of magnitude smaller than the usual
$\ell_1$-norm, trace norm and their combinations. Even though our convex
formulation is in theory hard and does not lead to provably polynomial time
algorithmic schemes, we propose an active set algorithm leveraging the
structure of the convex problem to solve it and show promising numerical
results.
| [
"['Emile Richard' 'Guillaume Obozinski' 'Jean-Philippe Vert']",
"Emile Richard, Guillaume Obozinski (LIGM), Jean-Philippe Vert (CBIO)"
]
|
cs.CV cs.LG | 10.1109/TIP.2016.2514503 | 1407.5245 | null | null | http://arxiv.org/abs/1407.5245v2 | 2016-01-19T03:27:59Z | 2014-07-20T04:42:50Z | Feature and Region Selection for Visual Learning | Visual learning problems such as object classification and action recognition
are typically approached using extensions of the popular bag-of-words (BoW)
model. Despite its great success, it is unclear what visual features the BoW
model is learning: Which regions in the image or video are used to discriminate
among classes? Which are the most discriminative visual words? Answering these
questions is fundamental for understanding existing BoW models and inspiring
better models for visual recognition.
To answer these questions, this paper presents a method for feature selection
and region selection in the visual BoW model. This allows for an intermediate
visualization of the features and regions that are important for visual
learning. The main idea is to assign latent weights to the features or regions,
and jointly optimize these latent variables with the parameters of a classifier
(e.g., support vector machine). There are four main benefits of our approach:
(1) Our approach accommodates non-linear additive kernels such as the popular
$\chi^2$ and intersection kernel; (2) our approach is able to handle both
regions in images and spatio-temporal regions in videos in a unified way; (3)
the feature selection problem is convex, and both problems can be solved using
a scalable reduced gradient method; (4) we point out strong connections with
multiple kernel learning and multiple instance learning approaches.
Experimental results in the PASCAL VOC 2007, MSR Action Dataset II and YouTube
illustrate the benefits of our approach.
| [
"['Ji Zhao' 'Liantao Wang' 'Ricardo Cabral' 'Fernando De la Torre']",
"Ji Zhao, Liantao Wang, Ricardo Cabral, Fernando De la Torre"
]
|
cs.LG cs.AI stat.ML | null | 1407.5358 | null | null | http://arxiv.org/pdf/1407.5358v1 | 2014-07-21T01:20:45Z | 2014-07-21T01:20:45Z | Practical Kernel-Based Reinforcement Learning | Kernel-based reinforcement learning (KBRL) stands out among reinforcement
learning algorithms for its strong theoretical guarantees. By casting the
learning problem as a local kernel approximation, KBRL provides a way of
computing a decision policy which is statistically consistent and converges to
a unique solution. Unfortunately, the model constructed by KBRL grows with the
number of sample transitions, resulting in a computational cost that precludes
its application to large-scale or on-line domains. In this paper we introduce
an algorithm that turns KBRL into a practical reinforcement learning tool.
Kernel-based stochastic factorization (KBSF) builds on a simple idea: when a
transition matrix is represented as the product of two stochastic matrices, one
can swap the factors of the multiplication to obtain another transition matrix,
potentially much smaller, which retains some fundamental properties of its
precursor. KBSF exploits such an insight to compress the information contained
in KBRL's model into an approximator of fixed size. This makes it possible to
build an approximation that takes into account both the difficulty of the
problem and the associated computational cost. KBSF's computational complexity
is linear in the number of sample transitions, which is the best one can do
without discarding data. Moreover, the algorithm's simple mechanics allow for a
fully incremental implementation that makes the amount of memory used
independent of the number of sample transitions. The result is a kernel-based
reinforcement learning algorithm that can be applied to large-scale problems in
both off-line and on-line regimes. We derive upper bounds for the distance
between the value functions computed by KBRL and KBSF using the same data. We
also illustrate the potential of our algorithm in an extensive empirical study
in which KBSF is applied to difficult tasks based on real-world data.
| [
"['André M. S. Barreto' 'Doina Precup' 'Joelle Pineau']",
"Andr\\'e M. S. Barreto, Doina Precup, and Joelle Pineau"
]
|
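The KBSF entry above hinges on one observation: if a large transition matrix is the product of two stochastic matrices, swapping the factors yields a much smaller matrix that is still stochastic. A minimal numerical check of that property is below; the matrix sizes and random factors are assumptions for the example, not KBRL's kernel-derived matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_row_stochastic(rows, cols):
    M = rng.uniform(size=(rows, cols))
    return M / M.sum(axis=1, keepdims=True)

n, m = 1000, 20            # many sample transitions, few representative states (assumed)
D = random_row_stochastic(n, m)
K = random_row_stochastic(m, n)

P_big = D @ K              # n x n transition matrix over sample transitions
P_small = K @ D            # m x m transition matrix after swapping the factors

print("P_big rows sum to 1:  ", np.allclose(P_big.sum(axis=1), 1.0))
print("P_small rows sum to 1:", np.allclose(P_small.sum(axis=1), 1.0))
print("sizes:", P_big.shape, "->", P_small.shape)
```

Row-stochasticity survives the swap because a product of row-stochastic matrices is row-stochastic, which is what makes working with the much smaller swapped product meaningful.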
cs.LO cs.AI cs.LG cs.PL | 10.4204/EPTCS.157.10 | 1407.5397 | null | null | http://arxiv.org/abs/1407.5397v1 | 2014-07-21T07:28:49Z | 2014-07-21T07:28:49Z | Are There Good Mistakes? A Theoretical Analysis of CEGIS | Counterexample-guided inductive synthesis (CEGIS) is used to synthesize
programs from a candidate space of programs. The technique is guaranteed to
terminate and synthesize the correct program if the space of candidate programs
is finite. But the technique may or may not terminate with the correct program
if the candidate space of programs is infinite. In this paper, we perform a
theoretical analysis of the counterexample-guided inductive synthesis technique. We
investigate whether the set of candidate spaces for which the correct program
can be synthesized using CEGIS depends on the counterexamples used in inductive
synthesis, that is, whether there are good mistakes which would increase the
synthesis power. We investigate whether the use of minimal counterexamples
instead of arbitrary counterexamples expands the set of candidate spaces of
programs for which inductive synthesis can successfully synthesize a correct
program. We consider two kinds of counterexamples: minimal counterexamples and
history bounded counterexamples. The history bounded counterexample used in any
iteration of CEGIS is bounded by the examples used in previous iterations of
inductive synthesis. We examine the relative change in power of inductive
synthesis in both cases. We show that the synthesis technique using minimal
counterexamples (MinCEGIS) has the same synthesis power as CEGIS, but the
synthesis technique using history-bounded counterexamples (HCEGIS) has
different power than CEGIS: neither dominates the other.
| [
"['Susmit Jha' 'Sanjit A. Seshia']",
"Susmit Jha (Strategic CAD Labs, Intel), Sanjit A. Seshia (EECS, UC\n Berkeley)"
]
|
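The CEGIS entry above analyses how the choice of counterexamples affects counterexample-guided inductive synthesis. The toy loop below shows the basic CEGIS skeleton over a finite candidate space, with a verifier that returns an arbitrary counterexample; the candidate programs, specification, and input domain are invented for illustration.

```python
# Toy specification: the target behaviour over a small finite input domain.
DOMAIN = range(-5, 6)
spec = lambda x: abs(x)

# Finite candidate space of programs (assumed for the example).
candidates = [
    lambda x: x,
    lambda x: -x,
    lambda x: x * x,
    lambda x: x if x >= 0 else -x,
]

def synthesize(candidates, spec, domain):
    examples = []  # accumulated (input, expected output) pairs
    while True:
        # Inductive step: pick a candidate consistent with all examples so far.
        consistent = [c for c in candidates if all(c(x) == y for x, y in examples)]
        if not consistent:
            return None
        guess = consistent[0]
        # Verification step: look for a counterexample over the whole domain.
        cex = next((x for x in domain if guess(x) != spec(x)), None)
        if cex is None:
            return guess
        examples.append((cex, spec(cex)))

program = synthesize(candidates, spec, DOMAIN)
print("synthesized a program:", program is not None)
print("matches the spec everywhere:", all(program(x) == spec(x) for x in DOMAIN))
```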
cs.LG stat.ML | null | 1407.5599 | null | null | http://arxiv.org/pdf/1407.5599v4 | 2015-09-10T16:40:45Z | 2014-07-21T19:05:47Z | Scalable Kernel Methods via Doubly Stochastic Gradients | The general perception is that kernel methods are not scalable, and neural
nets are the methods of choice for nonlinear learning problems. Or have we
simply not tried hard enough for kernel methods? Here we propose an approach
that scales up kernel methods using a novel concept called "doubly stochastic
functional gradients". Our approach relies on the fact that many kernel methods
can be expressed as convex optimization problems, and we solve the problems by
making two unbiased stochastic approximations to the functional gradient, one
using random training points and another using random functions associated with
the kernel, and then descending using this noisy functional gradient. We show
that a function produced by this procedure after $t$ iterations converges to
the optimal function in the reproducing kernel Hilbert space in rate $O(1/t)$,
and achieves a generalization performance of $O(1/\sqrt{t})$. This double
stochasticity also allows us to avoid keeping the support vectors and to
implement the algorithm in a small memory footprint, which is linear in number
of iterations and independent of data dimension. Our approach can readily scale
kernel methods up to the regimes which are dominated by neural nets. We show
that our method can achieve competitive performance to neural nets in datasets
such as 8 million handwritten digits from MNIST, 2.3 million energy materials
from MolecularSpace, and 1 million photos from ImageNet.
| [
"Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria-Florina\n Balcan, Le Song",
"['Bo Dai' 'Bo Xie' 'Niao He' 'Yingyu Liang' 'Anant Raj'\n 'Maria-Florina Balcan' 'Le Song']"
]
|
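The doubly stochastic gradients entry above scales kernel methods by pairing a random training point with a random feature drawn from the kernel's spectral distribution at every step. The sketch below runs that idea for a Gaussian kernel with random Fourier features on a toy regression problem; the step-size schedule, squared loss, and data are assumptions, and the real algorithm regenerates random features from stored seeds instead of keeping them all in memory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (assumed).
n, sigma = 2000, 1.0
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=n)

def random_feature(x, w, b):
    # Random Fourier feature for the Gaussian kernel exp(-||x - x'||^2 / (2 sigma^2)).
    return np.sqrt(2.0) * np.cos(x @ w + b)

# Doubly stochastic functional gradient descent: at step t, sample one data
# point AND one random feature, then take a gradient step on the squared loss.
T = 3000
ws = rng.normal(scale=1.0 / sigma, size=(T, 1))
bs = rng.uniform(0, 2 * np.pi, size=T)
alphas = np.zeros(T)

def predict(x, t):
    # f_t(x) = sum_{i<t} alpha_i * phi_i(x), using the features sampled so far.
    if t == 0:
        return 0.0
    return np.dot(alphas[:t], random_feature(x, ws[:t].T, bs[:t]))

for t in range(T):
    i = rng.integers(n)                       # random training point
    err = predict(X[i], t) - y[i]             # residual of the current function
    alphas[t] = -(2.0 / (t + 1)) * err * random_feature(X[i], ws[t], bs[t])

print("prediction at x=0.5:", float(predict(np.array([0.5]), T)),
      " target:", float(np.sin(1.0)))
```

The decaying step size 2/(t+1) is only one simple choice; in practice convergence depends on careful step sizes and regularization.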
cs.AI cs.LG | null | 1407.5656 | null | null | http://arxiv.org/pdf/1407.5656v2 | 2014-08-19T21:06:27Z | 2014-07-21T20:26:32Z | PGMHD: A Scalable Probabilistic Graphical Model for Massive Hierarchical
Data Problems | In the big data era, scalability has become a crucial requirement for any
useful computational model. Probabilistic graphical models are very useful for
mining and discovering data insights, but they are not scalable enough to be
suitable for big data problems. Bayesian Networks particularly demonstrate this
limitation when their data is represented using few random variables while each
random variable has a massive set of values. With hierarchical data - data that
is arranged in a treelike structure with several levels - one would expect to
see hundreds of thousands or millions of values distributed over even just a
small number of levels. When modeling this kind of hierarchical data across
large data sets, Bayesian networks become infeasible for representing the
probability distributions for the following reasons: i) Each level represents a
single random variable with hundreds of thousands of values, ii) The number of
levels is usually small, so there are also few random variables, and iii) The
structure of the network is predefined since the dependency is modeled top-down
from each parent to each of its child nodes, so the network would contain a
single linear path for the random variables from each parent to each child
node. In this paper we present a scalable probabilistic graphical model to
overcome these limitations for massive hierarchical data. We believe the
proposed model will lead to an easily-scalable, more readable, and expressive
implementation for problems that require probabilistic-based solutions for
massive amounts of hierarchical data. We successfully applied this model to
solve two different challenging probabilistic-based problems on massive
hierarchical data sets for different domains, namely, bioinformatics and latent
semantic discovery over search logs.
| [
"['Khalifeh AlJadda' 'Mohammed Korayem' 'Camilo Ortiz' 'Trey Grainger'\n 'John A. Miller' 'William S. York']",
"Khalifeh AlJadda, Mohammed Korayem, Camilo Ortiz, Trey Grainger, John\n A. Miller, William S. York"
]
|
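The PGMHD entry above models hierarchical data by estimating, for each parent value, a conditional distribution over its child values from top-down co-occurrence counts, which keeps the model scalable even when each level has a huge number of values. A minimal sketch of that counting scheme is below; the toy search-log-style records and the unsmoothed estimates are assumptions for illustration.

```python
from collections import Counter, defaultdict

# Toy hierarchical records: (level-1 value, level-2 value), e.g. a query and a
# clicked job title; the records are invented for the example.
records = [
    ("java developer", "software engineer"),
    ("java developer", "software engineer"),
    ("java developer", "backend engineer"),
    ("registered nurse", "rn case manager"),
    ("registered nurse", "rn case manager"),
    ("registered nurse", "charge nurse"),
]

# Top-down co-occurrence counts: parent value -> Counter over child values.
counts = defaultdict(Counter)
for parent, child in records:
    counts[parent][child] += 1

def conditional(child, parent):
    # P(child | parent) estimated directly from the counts (no smoothing).
    total = sum(counts[parent].values())
    return counts[parent][child] / total if total else 0.0

print("P('software engineer' | 'java developer') =",
      conditional("software engineer", "java developer"))
```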
cs.LG | null | 1407.5908 | null | null | http://arxiv.org/pdf/1407.5908v1 | 2014-07-19T15:16:40Z | 2014-07-19T15:16:40Z | Exploiting Smoothness in Statistical Learning, Sequential Prediction,
and Stochastic Optimization | In the last several years, the intimate connection between convex
optimization and learning problems, in both statistical and sequential
frameworks, has shifted the focus of algorithmic machine learning to examine
this interplay. In particular, on one hand, this intertwinement brings forward
new challenges in reassessment of the performance of learning algorithms
including generalization and regret bounds under the assumptions imposed by
convexity such as analytical properties of loss functions (e.g., Lipschitzness,
strong convexity, and smoothness). On the other hand, the emergence of datasets
of an unprecedented size demands the development of novel and more efficient
optimization algorithms to tackle large-scale learning problems.
The overarching goal of this thesis is to reassess the smoothness of loss
functions in statistical learning, sequential prediction/online learning, and
stochastic optimization and explicate its consequences. In particular we
examine how smoothness of loss function could be beneficial or detrimental in
these settings in terms of sample complexity, statistical consistency, regret
analysis, and convergence rate, and investigate how smoothness can be leveraged
to devise more efficient learning algorithms.
| [
"Mehrdad Mahdavi",
"['Mehrdad Mahdavi']"
]
|