categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list) |
---|---|---|---|---|---|---|---|---|---|---|
cs.LG stat.ML | null | 1409.8576 | null | null | http://arxiv.org/pdf/1409.8576v1 | 2014-09-30T14:55:07Z | 2014-09-30T14:55:07Z | Data Imputation through the Identification of Local Anomalies | We introduce a comprehensive and statistical framework in a model free
setting for a complete treatment of localized data corruptions due to severe
noise sources, e.g., an occluder in the case of a visual recording. Within this
framework, we propose i) a novel algorithm to efficiently separate, i.e.,
detect and localize, possible corruptions from a given suspicious data instance
and ii) a Maximum A Posteriori (MAP) estimator to impute the corrupted data. As
a generalization to Euclidean distance, we also propose a novel distance
measure, which is based on the ranked deviations among the data attributes and
empirically shown to be superior in separating the corruptions. Our algorithm
first splits the suspicious instance into parts through a binary partitioning
tree in the space of data attributes and iteratively tests those parts to
detect local anomalies using the nominal statistics extracted from an
uncorrupted (clean) reference data set. Once each part is labeled as anomalous
vs normal, the corresponding binary patterns over this tree that characterize
corruptions are identified and the affected attributes are imputed. Under a
certain conditional independency structure assumed for the binary patterns, we
analytically show that the false alarm rate of the introduced algorithm in
detecting the corruptions is independent of the data and can be directly set
without any parameter tuning. The proposed framework is tested over several
well-known machine learning data sets with synthetically generated corruptions;
and experimentally shown to produce remarkable improvements for classification
purposes, with strong corruption separation capabilities. Our
experiments also indicate that the proposed algorithms outperform the typical
approaches and are robust to varying training phase conditions.
| [
"['Huseyin Ozkan' 'Ozgun S. Pelvan' 'Suleyman S. Kozat']",
"Huseyin Ozkan, Ozgun S. Pelvan and Suleyman S. Kozat"
]
|
math.OC cs.LG cs.SI stat.ML | null | 1409.8606 | null | null | http://arxiv.org/pdf/1409.8606v1 | 2014-09-30T15:49:59Z | 2014-09-30T15:49:59Z | Distributed Detection : Finite-time Analysis and Impact of Network
Topology | This paper addresses the problem of distributed detection in multi-agent
networks. Agents receive private signals about an unknown state of the world.
The underlying state is globally identifiable, yet informative signals may be
dispersed throughout the network. Using an optimization-based framework, we
develop an iterative local strategy for updating individual beliefs. In
contrast to the existing literature which focuses on asymptotic learning, we
provide a finite-time analysis. Furthermore, we introduce a Kullback-Leibler
cost to compare the efficiency of the algorithm to its centralized counterpart.
Our bounds on the cost are expressed in terms of network size, spectral gap,
centrality of each agent and relative entropy of agents' signal structures. A
key observation is that distributing more informative signals to central agents
results in a faster learning rate. Furthermore, optimizing the weights, we can
speed up learning by improving the spectral gap. We also quantify the effect of
link failures on learning speed in symmetric networks. We finally provide
numerical simulations which verify our theoretical results.
| [
"Shahin Shahrampour, Alexander Rakhlin, Ali Jadbabaie",
"['Shahin Shahrampour' 'Alexander Rakhlin' 'Ali Jadbabaie']"
]
|
stat.ML cs.CV cs.LG | null | 1410.0095 | null | null | http://arxiv.org/pdf/1410.0095v1 | 2014-10-01T02:37:12Z | 2014-10-01T02:37:12Z | Riemannian Multi-Manifold Modeling | This paper advocates a novel framework for segmenting a dataset in a
Riemannian manifold $M$ into clusters lying around low-dimensional submanifolds
of $M$. Important examples of $M$, for which the proposed clustering algorithm
is computationally efficient, are the sphere, the set of positive definite
matrices, and the Grassmannian. The clustering problem with these examples of
$M$ is already useful for numerous application domains such as action
identification in video sequences, dynamic texture clustering, brain fiber
segmentation in medical imaging, and clustering of deformed images. The
proposed clustering algorithm constructs a data-affinity matrix by thoroughly
exploiting the intrinsic geometry and then applies spectral clustering. The
intrinsic local geometry is encoded by local sparse coding and more importantly
by directional information of local tangent spaces and geodesics. Theoretical
guarantees are established for a simplified variant of the algorithm even when
the clusters intersect. To avoid complication, these guarantees assume that the
underlying submanifolds are geodesic. Extensive validation on synthetic and
real data demonstrates the resiliency of the proposed method against deviations
from the theoretical model as well as its superior performance over
state-of-the-art techniques.
| [
"Xu Wang, Konstantinos Slavakis, Gilad Lerman",
"['Xu Wang' 'Konstantinos Slavakis' 'Gilad Lerman']"
]
|
cs.LG stat.ML | null | 1410.0123 | null | null | http://arxiv.org/pdf/1410.0123v1 | 2014-10-01T06:55:11Z | 2014-10-01T06:55:11Z | Deep Tempering | Restricted Boltzmann Machines (RBMs) are one of the fundamental building
blocks of deep learning. Approximate maximum likelihood training of RBMs
typically necessitates sampling from these models. In many training scenarios,
computationally efficient Gibbs sampling procedures are crippled by poor
mixing. In this work we propose a novel method of sampling from Boltzmann
machines that demonstrates a computationally efficient way to promote mixing.
Our approach leverages an under-appreciated property of deep generative models
such as the Deep Belief Network (DBN), where Gibbs sampling from deeper levels
of the latent variable hierarchy results in dramatically increased ergodicity.
Our approach is thus to train an auxiliary latent hierarchical model, based on
the DBN. When used in conjunction with parallel-tempering, the method is
asymptotically guaranteed to simulate samples from the target RBM. Experimental
results confirm the effectiveness of this sampling strategy in the context of
RBM training.
| [
"Guillaume Desjardins, Heng Luo, Aaron Courville and Yoshua Bengio",
"['Guillaume Desjardins' 'Heng Luo' 'Aaron Courville' 'Yoshua Bengio']"
]
|
cs.AI cs.CL cs.CV cs.LG | null | 1410.0210 | null | null | http://arxiv.org/pdf/1410.0210v4 | 2015-05-05T17:39:10Z | 2014-10-01T12:59:16Z | A Multi-World Approach to Question Answering about Real-World Scenes
based on Uncertain Input | We propose a method for automatically answering questions about images by
bringing together recent advances from natural language processing and computer
vision. We combine discrete reasoning with uncertain predictions by a
multi-world approach that represents uncertainty about the perceived world in a
bayesian framework. Our approach can handle human questions of high complexity
about realistic scenes and replies with a range of answers like counts, object
classes, instances and lists of them. The system is directly trained from
question-answer pairs. We establish a first benchmark for this task that can be
seen as a modern attempt at a visual Turing test.
| [
"['Mateusz Malinowski' 'Mario Fritz']",
"Mateusz Malinowski and Mario Fritz"
]
|
cs.DS cs.LG | null | 1410.0260 | null | null | http://arxiv.org/pdf/1410.0260v3 | 2015-03-13T17:31:21Z | 2014-10-01T15:41:11Z | ASKIT: Approximate Skeletonization Kernel-Independent Treecode in High
Dimensions | We present a fast algorithm for kernel summation problems in high-dimensions.
These problems appear in computational physics, numerical approximation,
non-parametric statistics, and machine learning. In our context, the sums
depend on a kernel function that is a pair potential defined on a dataset of
points in a high-dimensional Euclidean space. A direct evaluation of the sum
scales quadratically with the number of points. Fast kernel summation methods
can reduce this cost to linear complexity, but the constants involved do not
scale well with the dimensionality of the dataset.
The main algorithmic components of fast kernel summation algorithms are the
separation of the kernel sum between near and far field (which is the basis for
pruning) and the efficient and accurate approximation of the far field.
We introduce novel methods for pruning and approximating the far field. Our
far field approximation requires only kernel evaluations and does not use
analytic expansions. Pruning is not done using bounding boxes but rather
combinatorially using a sparsified nearest-neighbor graph of the input. The
time complexity of our algorithm depends linearly on the ambient dimension. The
error in the algorithm depends on the low-rank approximability of the far
field, which in turn depends on the kernel function and on the intrinsic
dimensionality of the distribution of the points. The error of the far field
approximation does not depend on the ambient dimension.
We present the new algorithm along with experimental results that demonstrate
its performance. We report results for Gaussian kernel sums for 100 million
points in 64 dimensions, for one million points in 1000 dimensions, and for
problems in which the Gaussian kernel has a variable bandwidth. To the best of
our knowledge, all of these experiments are impossible or prohibitively
expensive with existing fast kernel summation methods.
| [
"William B. March, Bo Xiao, George Biros",
"['William B. March' 'Bo Xiao' 'George Biros']"
]
|
cs.CV cs.LG | null | 1410.0311 | null | null | http://arxiv.org/pdf/1410.0311v2 | 2015-03-03T04:38:37Z | 2014-08-26T07:23:04Z | $\ell_1$-K-SVD: A Robust Dictionary Learning Algorithm With Simultaneous
Update | We develop a dictionary learning algorithm by minimizing the $\ell_1$
distortion metric on the data term, which is known to be robust for
non-Gaussian noise contamination. The proposed algorithm exploits the idea of
iterative minimization of weighted $\ell_2$ error. We refer to this algorithm
as $\ell_1$-K-SVD, where the dictionary atoms and the corresponding sparse
coefficients are simultaneously updated to minimize the $\ell_1$ objective,
resulting in noise-robustness. We demonstrate through experiments that the
$\ell_1$-K-SVD algorithm results in higher atom recovery rate compared with the
K-SVD and the robust dictionary learning (RDL) algorithm proposed by Lu et al.,
both in Gaussian and non-Gaussian noise conditions. We also show that, for
fixed values of sparsity, number of dictionary atoms, and data-dimension, the
$\ell_1$-K-SVD algorithm outperforms the K-SVD and RDL algorithms when the
training set available is small. We apply the proposed algorithm for denoising
natural images corrupted by additive Gaussian and Laplacian noise. The images
denoised using $\ell_1$-K-SVD are observed to have slightly higher peak
signal-to-noise ratio (PSNR) over K-SVD for Laplacian noise, but the
improvement in structural similarity index (SSIM) is significant (approximately
$0.1$) for lower values of input PSNR, indicating the efficacy of the $\ell_1$
metric.
| [
"Subhadip Mukherjee, Rupam Basu, and Chandra Sekhar Seelamantula",
"['Subhadip Mukherjee' 'Rupam Basu' 'Chandra Sekhar Seelamantula']"
]
|
stat.ML cs.LG | null | 1410.0334 | null | null | http://arxiv.org/pdf/1410.0334v1 | 2014-10-01T19:09:02Z | 2014-10-01T19:09:02Z | Domain adaptation of weighted majority votes via perturbed
variation-based self-labeling | In machine learning, the domain adaptation problem arrives when the test
(target) and the train (source) data are generated from different
distributions. A key applied issue is thus the design of algorithms able to
generalize on a new distribution, for which we have no label information. We
focus on learning classification models defined as a weighted majority vote
over a set of real-valued functions. In this context, Germain et al. (2013)
have shown that a measure of disagreement between these functions is crucial to
control. The core of this measure is a theoretical bound--the C-bound (Lacasse
et al., 2007)--which involves the disagreement and leads to a well performing
majority vote learning algorithm in the usual non-adaptive supervised setting:
MinCq. In this work, we propose a framework to extend MinCq to a domain
adaptation scenario. This procedure takes advantage of the recent perturbed
variation divergence between distributions proposed by Harel and Mannor (2012).
Justified by a theoretical bound on the target risk of the vote, we provide to
MinCq a target sample labeled thanks to a perturbed variation-based
self-labeling focused on the regions where the source and target marginals
appear similar. We also study the influence of our self-labeling, from which we
deduce an original process for tuning the hyperparameters. Finally, our
framework called PV-MinCq shows very promising results on a rotation and
translation synthetic problem.
| [
"['Emilie Morvant']",
"Emilie Morvant (LHC)"
]
|
stat.ML cs.LG math.OC | null | 1410.0342 | null | null | http://arxiv.org/pdf/1410.0342v4 | 2015-05-05T18:53:24Z | 2014-10-01T19:31:40Z | Generalized Low Rank Models | Principal components analysis (PCA) is a well-known technique for
approximating a tabular data set by a low rank matrix. Here, we extend the idea
of PCA to handle arbitrary data sets consisting of numerical, Boolean,
categorical, ordinal, and other data types. This framework encompasses many
well known techniques in data analysis, such as nonnegative matrix
factorization, matrix completion, sparse and robust PCA, $k$-means, $k$-SVD,
and maximum margin matrix factorization. The method handles heterogeneous data
sets, and leads to coherent schemes for compressing, denoising, and imputing
missing entries across all data types simultaneously. It also admits a number
of interesting interpretations of the low rank factors, which allow clustering
of examples or of features. We propose several parallel algorithms for fitting
generalized low rank models, and describe implementations and numerical
results.
| [
"Madeleine Udell, Corinne Horn, Reza Zadeh and Stephen Boyd",
"['Madeleine Udell' 'Corinne Horn' 'Reza Zadeh' 'Stephen Boyd']"
]
|
cs.LG stat.ML | null | 1410.0440 | null | null | http://arxiv.org/pdf/1410.0440v1 | 2014-10-02T02:28:04Z | 2014-10-02T02:28:04Z | Scalable Nonlinear Learning with Adaptive Polynomial Expansions | Can we effectively learn a nonlinear representation in time comparable to
linear learning? We describe a new algorithm that explicitly and adaptively
expands higher-order interaction features over base linear representations. The
algorithm is designed for extreme computational efficiency, and an extensive
experimental study shows that its computation/prediction tradeoff ability
compares very favorably against strong baselines.
| [
"['Alekh Agarwal' 'Alina Beygelzimer' 'Daniel Hsu' 'John Langford'\n 'Matus Telgarsky']",
"Alekh Agarwal, Alina Beygelzimer, Daniel Hsu, John Langford, Matus\n Telgarsky"
]
|
cs.NE cs.LG q-bio.NC | 10.1109/ICASSP.2014.6853969 | 1410.0446 | null | null | http://arxiv.org/abs/1410.0446v1 | 2014-10-02T03:41:53Z | 2014-10-02T03:41:53Z | Identification of Dynamic functional brain network states Through Tensor
Decomposition | With the advances in high resolution neuroimaging, there has been a growing
interest in the detection of functional brain connectivity. Complex network
theory has been proposed as an attractive mathematical representation of
functional brain networks. However, most of the current studies of functional
brain networks have focused on the computation of graph theoretic indices for
static networks, i.e. long-time averages of connectivity networks. It is
well-known that functional connectivity is a dynamic process and the
construction and reorganization of the networks is key to understanding human
cognition. Therefore, there is a growing need to track dynamic functional brain
networks and identify time intervals over which the network is
quasi-stationary. In this paper, we present a tensor decomposition based method
to identify temporally invariant 'network states' and find a common topographic
representation for each state. The proposed methods are applied to
electroencephalogram (EEG) data during the study of error-related negativity
(ERN).
| [
"['Arash Golibagh Mahyari' 'Selin Aviyente']",
"Arash Golibagh Mahyari, Selin Aviyente"
]
|
cs.LG cs.NE | null | 1410.0510 | null | null | http://arxiv.org/pdf/1410.0510v1 | 2014-10-02T10:58:17Z | 2014-10-02T10:58:17Z | Deep Sequential Neural Network | Neural Networks sequentially build high-level features through their
successive layers. We propose here a new neural network model where each layer
is associated with a set of candidate mappings. When an input is processed, at
each layer, one mapping among these candidates is selected according to a
sequential decision process. The resulting model is structured according to a
DAG like architecture, so that a path from the root to a leaf node defines a
sequence of transformations. Instead of considering global transformations,
like in classical multilayer networks, this model allows us to learn a set
of local transformations. It is thus able to process data with different
characteristics through specific sequences of such local transformations,
increasing the expression power of this model w.r.t a classical multilayered
network. The learning algorithm is inspired from policy gradient techniques
coming from the reinforcement learning domain and is used here instead of the
classical back-propagation based gradient descent techniques. Experiments on
different datasets show the relevance of this approach.
| [
"['Ludovic Denoyer' 'Patrick Gallinari']",
"Ludovic Denoyer and Patrick Gallinari"
]
|
stat.ML cs.LG | null | 1410.0576 | null | null | http://arxiv.org/pdf/1410.0576v1 | 2014-10-02T14:49:59Z | 2014-10-02T14:49:59Z | Mapping Energy Landscapes of Non-Convex Learning Problems | In many statistical learning problems, the target functions to be optimized
are highly non-convex in various model spaces and thus are difficult to
analyze. In this paper, we compute \emph{Energy Landscape Maps} (ELMs) which
characterize and visualize an energy function with a tree structure, in which
each leaf node represents a local minimum and each non-leaf node represents the
barrier between adjacent energy basins. The ELM also associates each node with
the estimated probability mass and volume for the corresponding energy basin.
We construct ELMs by adopting the generalized Wang-Landau algorithm and
multi-domain sampler that simulates a Markov chain traversing the model space
by dynamically reweighting the energy function. We construct ELMs in the model
space for two classic statistical learning problems: i) clustering with
Gaussian mixture models or Bernoulli templates; and ii) bi-clustering. We
propose a way to measure the difficulties (or complexity) of these learning
problems and study how various conditions affect the landscape complexity, such
as separability of the clusters, the number of examples, and the level of
supervision; and we also visualize the behaviors of different algorithms, such
as K-means, EM, two-step EM and Swendsen-Wang cuts, in the energy landscapes.
| [
"['Maria Pavlovskaia' 'Kewei Tu' 'Song-Chun Zhu']",
"Maria Pavlovskaia, Kewei Tu and Song-Chun Zhu"
]
|
stat.ML cs.LG cs.NE | null | 1410.0630 | null | null | http://arxiv.org/pdf/1410.0630v1 | 2014-10-02T18:09:42Z | 2014-10-02T18:09:42Z | Deep Directed Generative Autoencoders | For discrete data, the likelihood $P(x)$ can be rewritten exactly and
parametrized into $P(X = x) = P(X = x | H = f(x)) P(H = f(x))$ if $P(X | H)$
has enough capacity to put no probability mass on any $x'$ for which $f(x')\neq
f(x)$, where $f(\cdot)$ is a deterministic discrete function. The log of the
first factor gives rise to the log-likelihood reconstruction error of an
autoencoder with $f(\cdot)$ as the encoder and $P(X|H)$ as the (probabilistic)
decoder. The log of the second term can be seen as a regularizer on the encoded
activations $h=f(x)$, e.g., as in sparse autoencoders. Both encoder and decoder
can be represented by a deep neural network and trained to maximize the average
of the optimal log-likelihood $\log p(x)$. The objective is to learn an encoder
$f(\cdot)$ that maps $X$ to $f(X)$ that has a much simpler distribution than
$X$ itself, estimated by $P(H)$. This "flattens the manifold" or concentrates
probability mass in a smaller number of (relevant) dimensions over which the
distribution factorizes. Generating samples from the model is straightforward
using ancestral sampling. One challenge is that regular back-propagation cannot
be used to obtain the gradient on the parameters of the encoder, but we find
that using the straight-through estimator works well here. We also find that
although optimizing a single level of such architecture may be difficult, much
better results can be obtained by pre-training and stacking them, gradually
transforming the data distribution into one that is more easily captured by a
simple parametric model.
| [
"Sherjil Ozair and Yoshua Bengio",
"['Sherjil Ozair' 'Yoshua Bengio']"
]
|
stat.ML cs.LG math.CO | null | 1410.0633 | null | null | http://arxiv.org/pdf/1410.0633v3 | 2015-05-24T17:31:55Z | 2014-10-02T18:20:04Z | Deterministic Conditions for Subspace Identifiability from Incomplete
Sampling | Consider a generic $r$-dimensional subspace of $\mathbb{R}^d$, $r<d$, and
suppose that we are only given projections of this subspace onto small subsets
of the canonical coordinates. The paper establishes necessary and sufficient
deterministic conditions on the subsets for subspace identifiability.
| [
"['Daniel L. Pimentel-Alarcón' 'Robert D. Nowak' 'Nigel Boston']",
"Daniel L. Pimentel-Alarc\\'on, Robert D. Nowak, Nigel Boston"
]
|
cs.NE cs.LG | null | 1410.0640 | null | null | http://arxiv.org/pdf/1410.0640v3 | 2014-10-06T20:48:29Z | 2014-10-02T18:38:11Z | Term-Weighting Learning via Genetic Programming for Text Classification | This paper describes a novel approach to learning term-weighting schemes
(TWSs) in the context of text classification. In text mining a TWS determines
the way in which documents will be represented in a vector space model, before
applying a classifier. Whereas acceptable performance has been obtained with
standard TWSs (e.g., Boolean and term-frequency schemes), the definition of
TWSs has been traditionally an art. Further, it is still a difficult task to
determine what is the best TWS for a particular problem, and it is not yet
clear whether better schemes than those currently available can be generated
by combining known TWSs. We propose in this article a genetic program that aims
at learning effective TWSs that can improve the performance of current schemes
in text classification. The genetic program learns how to combine a set of
basic units to give rise to discriminative TWSs. We report an extensive
experimental study comprising data sets from thematic and non-thematic text
classification as well as from image classification. Our study shows the
validity of the proposed method; in fact, we show that TWSs learned with the
genetic program outperform traditional schemes and other TWSs proposed in
recent works. Further, we show that TWSs learned from a specific domain can be
effectively used for other tasks.
| [
"['Hugo Jair Escalante' 'Mauricio A. García-Limón' 'Alicia Morales-Reyes'\n 'Mario Graff' 'Manuel Montes-y-Gómez' 'Eduardo F. Morales']",
"Hugo Jair Escalante, Mauricio A. Garc\\'ia-Lim\\'on, Alicia\n Morales-Reyes, Mario Graff, Manuel Montes-y-G\\'omez, Eduardo F. Morales"
]
|
cs.NA cs.CV cs.IT cs.LG math.IT math.OC math.ST stat.TH | null | 1410.0719 | null | null | http://arxiv.org/pdf/1410.0719v2 | 2014-10-09T07:55:35Z | 2014-10-02T21:40:08Z | Proceedings of the second "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST'14) | The implicit objective of the biennial "international - Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and town center. iTWIST'14 has gathered about
70 international participants and has featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problem; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
| [
"['L. Jacques' 'C. De Vleeschouwer' 'Y. Boursier' 'P. Sudhakar' 'C. De Mol'\n 'A. Pizurica' 'S. Anthoine' 'P. Vandergheynst' 'P. Frossard' 'C. Bilen'\n 'S. Kitic' 'N. Bertin' 'R. Gribonval' 'N. Boumal' 'B. Mishra'\n 'P. -A. Absil' 'R. Sepulchre' 'S. Bundervoet' 'C. Schretter' 'A. Dooms'\n 'P. Schelkens' 'O. Chabiron' 'F. Malgouyres' 'J. -Y. Tourneret'\n 'N. Dobigeon' 'P. Chainais' 'C. Richard' 'B. Cornelis' 'I. Daubechies'\n 'D. Dunson' 'M. Dankova' 'P. Rajmic' 'K. Degraux' 'V. Cambareri'\n 'B. Geelen' 'G. Lafruit' 'G. Setti' 'J. -F. Determe' 'J. Louveaux'\n 'F. Horlin' 'A. Drémeau' 'P. Heas' 'C. Herzet' 'V. Duval' 'G. Peyré'\n 'A. Fawzi' 'M. Davies' 'N. Gillis' 'S. A. Vavasis' 'C. Soussen'\n 'L. Le Magoarou' 'J. Liang' 'J. Fadili' 'A. Liutkus' 'D. Martina'\n 'S. Gigan' 'L. Daudet' 'M. Maggioni' 'S. Minsker' 'N. Strawn' 'C. Mory'\n 'F. Ngole' 'J. -L. Starck' 'I. Loris' 'S. Vaiter' 'M. Golbabaee'\n 'D. Vukobratovic']",
"L. Jacques, C. De Vleeschouwer, Y. Boursier, P. Sudhakar, C. De Mol,\n A. Pizurica, S. Anthoine, P. Vandergheynst, P. Frossard, C. Bilen, S. Kitic,\n N. Bertin, R. Gribonval, N. Boumal, B. Mishra, P.-A. Absil, R. Sepulchre, S.\n Bundervoet, C. Schretter, A. Dooms, P. Schelkens, O. Chabiron, F. Malgouyres,\n J.-Y. Tourneret, N. Dobigeon, P. Chainais, C. Richard, B. Cornelis, I.\n Daubechies, D. Dunson, M. Dankova, P. Rajmic, K. Degraux, V. Cambareri, B.\n Geelen, G. Lafruit, G. Setti, J.-F. Determe, J. Louveaux, F. Horlin, A.\n Dr\\'emeau, P. Heas, C. Herzet, V. Duval, G. Peyr\\'e, A. Fawzi, M. Davies, N.\n Gillis, S. A. Vavasis, C. Soussen, L. Le Magoarou, J. Liang, J. Fadili, A.\n Liutkus, D. Martina, S. Gigan, L. Daudet, M. Maggioni, S. Minsker, N. Strawn,\n C. Mory, F. Ngole, J.-L. Starck, I. Loris, S. Vaiter, M. Golbabaee, D.\n Vukobratovic"
]
|
cs.CV cs.AI cs.LG cs.NE stat.ML | null | 1410.0736 | null | null | http://arxiv.org/pdf/1410.0736v4 | 2015-05-16T03:36:32Z | 2014-10-03T01:17:20Z | HD-CNN: Hierarchical Deep Convolutional Neural Network for Large Scale
Visual Recognition | In image classification, visual separability between different object
categories is highly uneven, and some categories are more difficult to
distinguish than others. Such difficult categories demand more dedicated
classifiers. However, existing deep convolutional neural networks (CNN) are
trained as flat N-way classifiers, and few efforts have been made to leverage
the hierarchical structure of categories. In this paper, we introduce
hierarchical deep CNNs (HD-CNNs) by embedding deep CNNs into a category
hierarchy. An HD-CNN separates easy classes using a coarse category classifier
while distinguishing difficult classes using fine category classifiers. During
HD-CNN training, component-wise pretraining is followed by global finetuning
with a multinomial logistic loss regularized by a coarse category consistency
term. In addition, conditional executions of fine category classifiers and
layer parameter compression make HD-CNNs scalable for large-scale visual
recognition. We achieve state-of-the-art results on both CIFAR100 and
large-scale ImageNet 1000-class benchmark datasets. In our experiments, we
build up three different HD-CNNs and they lower the top-1 error of the standard
CNNs by 2.65%, 3.1% and 1.1%, respectively.
| [
"Zhicheng Yan, Hao Zhang, Robinson Piramuthu, Vignesh Jagadeesh, Dennis\n DeCoste, Wei Di, Yizhou Yu",
"['Zhicheng Yan' 'Hao Zhang' 'Robinson Piramuthu' 'Vignesh Jagadeesh'\n 'Dennis DeCoste' 'Wei Di' 'Yizhou Yu']"
]
|
cs.LG | null | 1410.0741 | null | null | http://arxiv.org/pdf/1410.0741v1 | 2014-10-03T01:59:25Z | 2014-10-03T01:59:25Z | Generalized Laguerre Reduction of the Volterra Kernel for Practical
Identification of Nonlinear Dynamic Systems | The Volterra series can be used to model a large subset of nonlinear, dynamic
systems. A major drawback is the number of coefficients required to model such
systems. In order to reduce the number of required coefficients, Laguerre
polynomials are used to estimate the Volterra kernels. Existing literature
proposes algorithms for a fixed number of Volterra kernels, and Laguerre
series. This paper presents a novel algorithm for generalized calculation of
the finite order Volterra-Laguerre (VL) series for a MIMO system. An example
addresses the utility of the algorithm in practical application.
| [
"['Brett W. Israelsen' 'Dale A. Smith']",
"Brett W. Israelsen, Dale A. Smith"
]
|
cs.NE cs.LG cs.MS | null | 1410.0759 | null | null | http://arxiv.org/pdf/1410.0759v3 | 2014-12-18T01:13:16Z | 2014-10-03T06:16:43Z | cuDNN: Efficient Primitives for Deep Learning | We present a library of efficient implementations of deep learning
primitives. Deep learning workloads are computationally intensive, and
optimizing their kernels is difficult and time-consuming. As parallel
architectures evolve, kernels must be reoptimized, which makes maintaining
codebases difficult over time. Similar issues have long been addressed in the
HPC community by libraries such as the Basic Linear Algebra Subroutines (BLAS).
However, there is no analogous library for deep learning. Without such a
library, researchers implementing deep learning workloads on parallel
processors must create and optimize their own implementations of the main
computational kernels, and this work must be repeated as new parallel
processors emerge. To address this problem, we have created a library similar
in intent to BLAS, with optimized routines for deep learning workloads. Our
implementation contains routines for GPUs, although similarly to the BLAS
library, these routines could be implemented for other platforms. The library
is easy to integrate into existing frameworks, and provides optimized
performance and memory usage. For example, integrating cuDNN into Caffe, a
popular framework for convolutional networks, improves performance by 36% on a
standard model while also reducing memory consumption.
| [
"Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen,\n John Tran, Bryan Catanzaro, Evan Shelhamer",
"['Sharan Chetlur' 'Cliff Woolley' 'Philippe Vandermersch' 'Jonathan Cohen'\n 'John Tran' 'Bryan Catanzaro' 'Evan Shelhamer']"
]
|
cs.NE cs.LG | null | 1410.0781 | null | null | http://arxiv.org/pdf/1410.0781v3 | 2014-12-07T15:51:28Z | 2014-10-03T08:47:03Z | SimNets: A Generalization of Convolutional Networks | We present a deep layered architecture that generalizes classical
convolutional neural networks (ConvNets). The architecture, called SimNets, is
driven by two operators, one being a similarity function whose family contains
the convolution operator used in ConvNets, and the other is a new soft
max-min-mean operator called MEX that realizes classical operators like ReLU
and max pooling, but has additional capabilities that make SimNets a powerful
generalization of ConvNets. Three interesting properties emerge from the
architecture: (i) the basic input to hidden layer to output machinery contains
as special cases kernel machines with the Exponential and Generalized Gaussian
kernels, the output units being "neurons in feature space" (ii) in its general
form, the basic machinery has a higher abstraction level than kernel machines,
and (iii) initializing networks using unsupervised learning is natural.
Experiments demonstrate the capability of achieving state of the art accuracy
with networks that are an order of magnitude smaller than comparable ConvNets.
| [
"Nadav Cohen and Amnon Shashua",
"['Nadav Cohen' 'Amnon Shashua']"
]
|
stat.ML cs.IR cs.LG | null | 1410.0908 | null | null | http://arxiv.org/pdf/1410.0908v1 | 2014-10-03T16:38:53Z | 2014-10-03T16:38:53Z | Probit Normal Correlated Topic Models | The logistic normal distribution has recently been adapted via the
transformation of multivariate Gaussian variables to model the topical
distribution of documents in the presence of correlations among topics. In this
paper, we propose a probit normal alternative approach to modelling correlated
topical structures. Our use of the probit model in the context of topic
discovery is novel, as many authors have so far concentrated solely on the
logistic model partly due to the formidable inefficiency of the multinomial
probit model even in the case of very small topical spaces. We herein
circumvent the inefficiency of multinomial probit estimation by using an
adaptation of the diagonal orthant multinomial probit in the topic models
context, resulting in the ability of our topic modelling scheme to handle
corpuses with a large number of latent topics. An additional and very important
benefit of our method lies in the fact that unlike with the logistic normal
model whose non-conjugacy leads to the need for sophisticated sampling schemes,
our approach exploits the natural conjugacy inherent in the auxiliary
formulation of the probit model to achieve greater simplicity. The application
of our proposed scheme to a well known Associated Press corpus not only helps
discover a large number of meaningful topics but also reveals the capturing of
compellingly intuitive correlations among certain topics. Besides, our proposed
approach lends itself to even further scalability thanks to various existing
high performance algorithms and architectures capable of handling millions of
documents.
| [
"Xingchen Yu and Ernest Fokoue",
"['Xingchen Yu' 'Ernest Fokoue']"
]
|
cs.LG cs.AI math.OC stat.ML | null | 1410.0949 | null | null | http://arxiv.org/pdf/1410.0949v3 | 2015-01-27T05:15:20Z | 2014-10-03T19:38:16Z | Tight Regret Bounds for Stochastic Combinatorial Semi-Bandits | A stochastic combinatorial semi-bandit is an online learning problem where at
each step a learning agent chooses a subset of ground items subject to
constraints, and then observes stochastic weights of these items and receives
their sum as a payoff. In this paper, we close the problem of computationally
and sample efficient learning in stochastic combinatorial semi-bandits. In
particular, we analyze a UCB-like algorithm for solving the problem, which is
known to be computationally efficient; and prove $O(K L (1 / \Delta) \log n)$
and $O(\sqrt{K L n \log n})$ upper bounds on its $n$-step regret, where $L$ is
the number of ground items, $K$ is the maximum number of chosen items, and
$\Delta$ is the gap between the expected returns of the optimal and best
suboptimal solutions. The gap-dependent bound is tight up to a constant factor
and the gap-free bound is tight up to a polylogarithmic factor.
| [
"['Branislav Kveton' 'Zheng Wen' 'Azin Ashkan' 'Csaba Szepesvari']",
"Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvari"
]
|
cs.LG math.ST stat.ML stat.TH | null | 1410.0996 | null | null | http://arxiv.org/pdf/1410.0996v1 | 2014-10-03T23:30:16Z | 2014-10-03T23:30:16Z | Minimax Analysis of Active Learning | This work establishes distribution-free upper and lower bounds on the minimax
label complexity of active learning with general hypothesis classes, under
various noise models. The results reveal a number of surprising facts. In
particular, under the noise model of Tsybakov (2004), the minimax label
complexity of active learning with a VC class is always asymptotically smaller
than that of passive learning, and is typically significantly smaller than the
best previously-published upper bounds in the active learning literature. In
high-noise regimes, it turns out that all active learning problems of a given
VC dimension have roughly the same minimax label complexity, which contrasts
with well-known results for bounded noise. In low-noise regimes, we find that
the label complexity is well-characterized by a simple combinatorial complexity
measure we call the star number. Interestingly, we find that almost all of the
complexity measures previously explored in the active learning literature have
worst-case values exactly equal to the star number. We also propose new active
learning strategies that nearly achieve these minimax label complexities.
| [
"Steve Hanneke and Liu Yang",
"['Steve Hanneke' 'Liu Yang']"
]
|
stat.ML cs.AI cs.LG | null | 1410.1068 | null | null | http://arxiv.org/pdf/1410.1068v1 | 2014-10-04T17:36:58Z | 2014-10-04T17:36:58Z | Gamma Processes, Stick-Breaking, and Variational Inference | While most Bayesian nonparametric models in machine learning have focused on
the Dirichlet process, the beta process, or their variants, the gamma process
has recently emerged as a useful nonparametric prior in its own right. Current
inference schemes for models involving the gamma process are restricted to
MCMC-based methods, which limits their scalability. In this paper, we present a
variational inference framework for models involving gamma process priors. Our
approach is based on a novel stick-breaking constructive definition of the
gamma process. We prove correctness of this stick-breaking process by using the
characterization of the gamma process as a completely random measure (CRM), and
we explicitly derive the rate measure of our construction using Poisson process
machinery. We also derive error bounds on the truncation of the infinite
process required for variational inference, similar to the truncation analyses
for other nonparametric models based on the Dirichlet and beta processes. Our
representation is then used to derive a variational inference algorithm for a
particular Bayesian nonparametric latent structure formulation known as the
infinite Gamma-Poisson model, where the latent variables are drawn from a gamma
process prior with Poisson likelihoods. Finally, we present results for our
algorithms on nonnegative matrix factorization tasks on document corpora, and
show that we compare favorably to both sampling-based techniques and
variational approaches based on beta-Bernoulli priors.
| [
"['Anirban Roychowdhury' 'Brian Kulis']",
"Anirban Roychowdhury, Brian Kulis"
]
|
cs.CV cs.CL cs.LG | null | 1410.1090 | null | null | http://arxiv.org/pdf/1410.1090v1 | 2014-10-04T20:24:34Z | 2014-10-04T20:24:34Z | Explain Images with Multimodal Recurrent Neural Networks | In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model
for generating novel sentence descriptions to explain the content of images. It
directly models the probability distribution of generating a word given
previous words and the image. Image descriptions are generated by sampling from
this distribution. The model consists of two sub-networks: a deep recurrent
neural network for sentences and a deep convolutional network for images. These
two sub-networks interact with each other in a multimodal layer to form the
whole m-RNN model. The effectiveness of our model is validated on three
benchmark datasets (IAPR TC-12, Flickr 8K, and Flickr 30K). Our model
outperforms the state-of-the-art generative method. In addition, the m-RNN
model can be applied to retrieval tasks for retrieving images or sentences, and
achieves significant performance improvement over the state-of-the-art methods
which directly optimize the ranking objective function for retrieval.
| [
"Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Alan L. Yuille",
"['Junhua Mao' 'Wei Xu' 'Yi Yang' 'Jiang Wang' 'Alan L. Yuille']"
]
|
cs.LG | null | 1410.1103 | null | null | http://arxiv.org/pdf/1410.1103v3 | 2016-03-06T20:42:02Z | 2014-10-05T00:51:59Z | Online Ranking with Top-1 Feedback | We consider a setting where a system learns to rank a fixed set of $m$ items.
The goal is to produce good item rankings for users with diverse interests who
interact online with the system for $T$ rounds. We consider a novel top-$1$
feedback model: at the end of each round, the relevance score for only the top
ranked object is revealed. However, the performance of the system is judged on
the entire ranked list. We provide a comprehensive set of results regarding
learnability under this challenging setting. For PairwiseLoss and DCG, two
popular ranking measures, we prove that the minimax regret is
$\Theta(T^{2/3})$. Moreover, the minimax regret is achievable using an
efficient strategy that only spends $O(m \log m)$ time per round. The same
efficient strategy achieves $O(T^{2/3})$ regret for Precision@$k$.
Surprisingly, we show that for normalized versions of these ranking measures,
i.e., AUC, NDCG \& MAP, no online ranking algorithm can have sublinear regret.
| [
"Sougata Chaudhuri and Ambuj Tewari",
"['Sougata Chaudhuri' 'Ambuj Tewari']"
]
|
cs.LG cs.AI stat.ML | null | 1410.1141 | null | null | http://arxiv.org/pdf/1410.1141v2 | 2014-10-28T19:14:37Z | 2014-10-05T10:54:07Z | On the Computational Efficiency of Training Neural Networks | It is well-known that neural networks are computationally hard to train. On
the other hand, in practice, modern day neural networks are trained efficiently
using SGD and a variety of tricks that include different activation functions
(e.g. ReLU), over-specification (i.e., train networks which are larger than
needed), and regularization. In this paper we revisit the computational
complexity of training neural networks from a modern perspective. We provide
both positive and negative results, some of which yield new provably efficient
and practical algorithms for training certain types of neural networks.
| [
"Roi Livni and Shai Shalev-Shwartz and Ohad Shamir",
"['Roi Livni' 'Shai Shalev-Shwartz' 'Ohad Shamir']"
]
|
cs.NE cs.LG | null | 1410.1165 | null | null | http://arxiv.org/pdf/1410.1165v3 | 2015-04-09T01:22:49Z | 2014-10-05T14:46:47Z | Understanding Locally Competitive Networks | Recently proposed neural network activation functions such as rectified
linear, maxout, and local winner-take-all have allowed for faster and more
effective training of deep neural architectures on large and complex datasets.
The common trait among these functions is that they implement local competition
between small groups of computational units within a layer, so that only part
of the network is activated for any given input pattern. In this paper, we
attempt to visualize and understand this self-modularization, and suggest a
unified explanation for the beneficial properties of such networks. We also
show how our insights can be directly useful for efficiently performing
retrieval over large datasets using neural networks.
| [
"Rupesh Kumar Srivastava, Jonathan Masci, Faustino Gomez, J\\\"urgen\n Schmidhuber",
"['Rupesh Kumar Srivastava' 'Jonathan Masci' 'Faustino Gomez'\n 'Jürgen Schmidhuber']"
]
|
cs.CR cs.DS cs.LG | null | 1410.1228 | null | null | http://arxiv.org/pdf/1410.1228v2 | 2015-02-20T19:29:47Z | 2014-10-05T23:55:22Z | Interactive Fingerprinting Codes and the Hardness of Preventing False
Discovery | We show an essentially tight bound on the number of adaptively chosen
statistical queries that a computationally efficient algorithm can answer
accurately given $n$ samples from an unknown distribution. A statistical query
asks for the expectation of a predicate over the underlying distribution, and
an answer to a statistical query is accurate if it is "close" to the correct
expectation over the distribution. This question was recently studied by Dwork
et al., who showed how to answer $\tilde{\Omega}(n^2)$ queries efficiently, and
also by Hardt and Ullman, who showed that answering $\tilde{O}(n^3)$ queries is
hard. We close the gap between the two bounds and show that, under a standard
hardness assumption, there is no computationally efficient algorithm that,
given $n$ samples from an unknown distribution, can give valid answers to
$O(n^2)$ adaptively chosen statistical queries. An implication of our results
is that computationally efficient algorithms for answering arbitrary,
adaptively chosen statistical queries may as well be differentially private.
We obtain our results using a new connection between the problem of answering
adaptively chosen statistical queries and a combinatorial object called an
interactive fingerprinting code. In order to optimize our hardness result, we
give a new Fourier-analytic approach to analyzing fingerprinting codes that is
simpler, more flexible, and yields better parameters than previous
constructions.
| [
"['Thomas Steinke' 'Jonathan Ullman']",
"Thomas Steinke and Jonathan Ullman"
]
|
cs.LG cs.AI cs.IR | null | 1410.1462 | null | null | http://arxiv.org/pdf/1410.1462v1 | 2014-10-06T17:10:23Z | 2014-10-06T17:10:23Z | Top Rank Optimization in Linear Time | Bipartite ranking aims to learn a real-valued ranking function that orders
positive instances before negative instances. Recent efforts of bipartite
ranking are focused on optimizing ranking accuracy at the top of the ranked
list. Most existing approaches are either to optimize task specific metrics or
to extend the ranking loss by emphasizing more on the error associated with the
top ranked instances, leading to a high computational cost that is super-linear
in the number of training instances. We propose a highly efficient approach,
titled TopPush, for optimizing accuracy at the top that has computational
complexity linear in the number of training instances. We present a novel
analysis that bounds the generalization error for the top ranked instances for
the proposed approach. Empirical study shows that the proposed approach is
highly competitive to the state-of-the-art approaches and is 10-100 times
faster.
| [
"Nan Li and Rong Jin and Zhi-Hua Zhou",
"['Nan Li' 'Rong Jin' 'Zhi-Hua Zhou']"
]
|
cs.LG | null | 1410.1784 | null | null | http://arxiv.org/pdf/1410.1784v1 | 2014-10-02T12:10:40Z | 2014-10-02T12:10:40Z | Stochastic Discriminative EM | Stochastic discriminative EM (sdEM) is an online-EM-type algorithm for
discriminative training of probabilistic generative models belonging to the
exponential family. In this work, we introduce and justify this algorithm as a
stochastic natural gradient descent method, i.e. a method which accounts for
the information geometry in the parameter space of the statistical model. We
show how this learning algorithm can be used to train probabilistic generative
models by minimizing different discriminative loss functions, such as the
negative conditional log-likelihood and the Hinge loss. The resulting models
trained by sdEM are always generative (i.e. they define a joint probability
distribution) and, in consequence, allow us to deal with missing data and latent
variables in a principled way either when being learned or when making
predictions. The performance of this method is illustrated by several text
classification problems for which a multinomial naive Bayes and a latent
Dirichlet allocation based classifier are learned using different
discriminative loss functions.
| [
"Andres R. Masegosa",
"['Andres R. Masegosa']"
]
|
cs.LG cs.SI | null | 1410.1940 | null | null | http://arxiv.org/pdf/1410.1940v1 | 2014-10-07T23:11:37Z | 2014-10-07T23:11:37Z | GLAD: Group Anomaly Detection in Social Media Analysis- Extended
Abstract | Traditional anomaly detection on social media mostly focuses on individual
point anomalies while anomalous phenomena usually occur in groups. Therefore it
is valuable to study the collective behavior of individuals and detect group
anomalies. Existing group anomaly detection approaches rely on the assumption
that the groups are known, which can hardly be true in real world social media
applications. In this paper, we take a generative approach by proposing a
hierarchical Bayes model: Group Latent Anomaly Detection (GLAD) model. GLAD
takes both pair-wise and point-wise data as input, automatically infers the
groups and detects group anomalies simultaneously. To account for the dynamic
properties of the social media data, we further generalize GLAD to its dynamic
extension d-GLAD. We conduct extensive experiments to evaluate our models on
both synthetic and real world datasets. The empirical results demonstrate that
our approach is effective and robust in discovering latent groups and detecting
group anomalies.
| [
"['Qi' 'Yu' 'Xinran He' 'Yan Liu']",
"Qi (Rose) Yu, Xinran He and Yan Liu"
]
|
cs.CL cs.LG | null | 1410.2045 | null | null | http://arxiv.org/pdf/1410.2045v1 | 2014-10-08T10:01:47Z | 2014-10-08T10:01:47Z | Supervised learning Methods for Bangla Web Document Categorization | This paper explores the use of machine learning approaches, or more
specifically, four supervised learning methods, namely Decision Tree (C4.5),
K-Nearest Neighbour (KNN), Na\"ive Bayes (NB), and Support Vector Machine (SVM)
for categorization of Bangla web documents. This is a task of automatically
sorting a set of documents into categories from a predefined set. Whereas a
wide range of methods have been applied to English text categorization,
relatively few studies have been conducted on Bangla language text
categorization. Hence, we attempt to analyze the efficiency of those four
methods for categorization of Bangla documents. In order to validate, Bangla
corpus from various websites has been developed and used as examples for the
experiment. For Bangla, empirical results support that all four methods produce
satisfactory performance with SVM attaining good result in terms of high
dimensional and relatively noisy document feature vectors.
| [
"['Ashis Kumar Mandal' 'Rikta Sen']",
"Ashis Kumar Mandal and Rikta Sen"
]
|
cs.LG | null | 1410.2191 | null | null | http://arxiv.org/pdf/1410.2191v1 | 2014-10-03T09:25:43Z | 2014-10-03T09:25:43Z | Learning manifold to regularize nonnegative matrix factorization | In this chapter we discuss how to learn an optimal manifold presentation to
regularize nonnegative matrix factorization (NMF) for data representation
problems. NMF, which tries to represent a nonnegative data matrix as a product of two low-rank
nonnegative matrices, has been a popular method for data representation due to
its ability to explore the latent part-based structure of data. Recent study
shows that lots of data distributions have manifold structures, and we should
respect the manifold structure when the data are represented. Recently,
manifold regularized NMF used a nearest neighbor graph to regulate the learning
of factorization parameter matrices and has shown its advantage over
traditional NMF methods for data representation problems. However, how to
construct an optimal graph to present the manifold properly remains a
difficult problem due to the graph model selection, noisy features, and
nonlinearly distributed data. In this chapter, we introduce three effective methods to
solve these problems of graph construction for manifold regularized NMF.
Multiple graph learning is proposed to solve the problem of graph model
selection, adaptive graph learning via feature selection is proposed to solve
the problem of constructing a graph from noisy features, while multi-kernel
learning-based graph construction is used to solve the problem of learning a
graph from nonlinearly distributed data.
| [
"Jim Jing-Yan Wang, Xin Gao",
"['Jim Jing-Yan Wang' 'Xin Gao']"
]
|
cs.CV cs.LG | 10.1109/TNNLS.2015.2423694 | 1410.2386 | null | null | http://arxiv.org/abs/1410.2386v2 | 2015-04-16T05:36:23Z | 2014-10-09T08:50:31Z | Bayesian Robust Tensor Factorization for Incomplete Multiway Data | We propose a generative model for robust tensor factorization in the presence
of both missing data and outliers. The objective is to explicitly infer the
underlying low-CP-rank tensor capturing the global information and a sparse
tensor capturing the local information (also considered as outliers), thus
providing the robust predictive distribution over missing entries. The
low-CP-rank tensor is modeled by multilinear interactions between multiple
latent factors on which the column sparsity is enforced by a hierarchical
prior, while the sparse tensor is modeled by a hierarchical view of Student-$t$
distribution that associates an individual hyperparameter with each element
independently. For model learning, we develop an efficient closed-form
variational inference under a fully Bayesian treatment, which can effectively
prevent the overfitting problem and scales linearly with data size. In contrast
to existing related works, our method can perform model selection automatically
and implicitly without need of tuning parameters. More specifically, it can
discover the groundtruth of CP rank and automatically adapt the sparsity
inducing priors to various types of outliers. In addition, the tradeoff between
the low-rank approximation and the sparse representation can be optimized in
the sense of maximum model evidence. The extensive experiments and comparisons
with many state-of-the-art algorithms on both synthetic and real-world datasets
demonstrate the superiorities of our method from several perspectives.
| [
"['Qibin Zhao' 'Guoxu Zhou' 'Liqing Zhang' 'Andrzej Cichocki'\n 'Shun-ichi Amari']",
"Qibin Zhao, Guoxu Zhou, Liqing Zhang, Andrzej Cichocki, and Shun-ichi\n Amari"
]
|
stat.ML cs.CL cs.LG | null | 1410.2455 | null | null | http://arxiv.org/pdf/1410.2455v3 | 2016-02-04T05:51:59Z | 2014-10-09T13:41:18Z | BilBOWA: Fast Bilingual Distributed Representations without Word
Alignments | We introduce BilBOWA (Bilingual Bag-of-Words without Alignments), a simple
and computationally-efficient model for learning bilingual distributed
representations of words which can scale to large monolingual datasets and does
not require word-aligned parallel training data. Instead it trains directly on
monolingual data and extracts a bilingual signal from a smaller set of raw-text
sentence-aligned data. This is achieved using a novel sampled bag-of-words
cross-lingual objective, which is used to regularize two noise-contrastive
language models for efficient cross-lingual feature learning. We show that
bilingual embeddings learned using the proposed model outperform
state-of-the-art methods on a cross-lingual document classification task as
well as a lexical translation task on WMT11 data.
| [
"['Stephan Gouws' 'Yoshua Bengio' 'Greg Corrado']",
"Stephan Gouws, Yoshua Bengio, Greg Corrado"
]
|
cs.LG cs.IT math.IT stat.ML | null | 1410.2500 | null | null | http://arxiv.org/pdf/1410.2500v6 | 2017-09-15T23:31:16Z | 2014-10-09T15:08:57Z | Speculate-Correct Error Bounds for k-Nearest Neighbor Classifiers | We introduce the speculate-correct method to derive error bounds for local
classifiers. Using it, we show that k nearest neighbor classifiers, in spite of
their famously fractured decision boundaries, have exponential error bounds
with $O(\sqrt{(k + \ln n) / n})$ error bound range for $n$ in-sample examples.
| [
"Eric Bax, Lingjie Weng, Xu Tian",
"['Eric Bax' 'Lingjie Weng' 'Xu Tian']"
]
|
stat.ME cs.IT cs.LG math.IT | null | 1410.2505 | null | null | http://arxiv.org/pdf/1410.2505v2 | 2015-12-31T04:27:22Z | 2014-10-09T15:17:54Z | Recovery of Sparse Signals Using Multiple Orthogonal Least Squares | We study the problem of recovering sparse signals from compressed linear
measurements. This problem, often referred to as sparse recovery or sparse
reconstruction, has generated a great deal of interest in recent years. To
recover the sparse signals, we propose a new method called multiple orthogonal
least squares (MOLS), which extends the well-known orthogonal least squares
(OLS) algorithm by allowing multiple $L$ indices to be chosen per iteration.
Owing to inclusion of multiple support indices in each selection, the MOLS
algorithm converges in much fewer iterations and improves the computational
efficiency over the conventional OLS algorithm. Theoretical analysis shows that
MOLS ($L > 1$) performs exact recovery of all $K$-sparse signals within $K$
iterations if the measurement matrix satisfies the restricted isometry property
(RIP) with isometry constant $\delta_{LK} < \frac{\sqrt{L}}{\sqrt{K} + 2
\sqrt{L}}.$ The recovery performance of MOLS in the noisy scenario is also
studied. It is shown that stable recovery of sparse signals can be achieved
with the MOLS algorithm when the signal-to-noise ratio (SNR) scales linearly
with the sparsity level of input signals.
| [
"['Jian Wang' 'Ping Li']",
"Jian Wang, Ping Li"
]
|
cs.LG cs.CL | null | 1410.2686 | null | null | http://arxiv.org/pdf/1410.2686v2 | 2015-03-11T05:56:51Z | 2014-10-10T06:42:25Z | Polarization Measurement of High Dimensional Social Media Messages With
Support Vector Machine Algorithm Using Mapreduce | In this article, we propose a new Support Vector Machine (SVM) training
algorithm based on a distributed MapReduce technique. In the literature, a large body of research shows that SVM has among the highest generalization performance of the classification algorithms used in machine learning. Moreover, the SVM classifier model is not affected by correlations among the features. However, SVM uses quadratic optimization techniques in its training phase. The SVM algorithm is formulated as a quadratic optimization problem, which has $O(m^3)$ time and $O(m^2)$ space complexity, where $m$ is the training set size. The computation time of SVM training therefore increases rapidly with the number of training instances. For this reason, SVM is not a suitable classification algorithm for large-scale dataset classification. To solve this training problem, we developed a new distributed MapReduce method. Accordingly, (i) the SVM algorithm is trained on each partition of the distributed dataset individually; (ii) the support vectors of the classifier models from every trained node are then merged; and (iii) these two steps are iterated until the classifier model converges to the optimal classifier function. In the implementation phase, a large-scale social media dataset is represented as a TFxIDF matrix. The matrix is used for sentiment analysis to obtain polarization values. Two- and three-class models are created for the classification method. Confusion matrices for each classification model are presented in tables. The social media message corpus consists of messages about 108 public and 66 private universities in Turkey. Twitter is used as the source of the corpus, and Twitter user messages are collected using the Twitter Streaming API. Results are presented in figures and tables.
| [
"['Ferhat Özgür Çatak']",
"Ferhat \\\"Ozg\\\"ur \\c{C}atak"
]
|
cs.LG cs.NA | null | 1410.2786 | null | null | http://arxiv.org/pdf/1410.2786v1 | 2014-10-10T13:56:58Z | 2014-10-10T13:56:58Z | New SVD based initialization strategy for Non-negative Matrix
Factorization | There are two problems that need to be dealt with for Non-negative Matrix Factorization (NMF): choosing a suitable rank for the factorization and providing a good initialization method for NMF algorithms. This paper aims to solve these two problems using Singular Value Decomposition (SVD). First, we extract the number of main components as the rank; this method is inspired by [1, 2]. Second, we use the singular values and their vectors to initialize the NMF algorithm. In 2008, Boutsidis and Gallopoulos [3] proposed the method NNDSVD to enhance the initialization of NMF algorithms. They extracted the positive section and the respective singular triplet information of the unit matrices $\{C^{(j)}\}_{j=1}^{k}$, which were obtained from singular vector pairs. This strategy uses the positive section to cope with negative elements of the singular vectors, but in experiments we found that even replacing negative elements by their absolute values gives better results than NNDSVD. Hence, we propose another SVD-based method to initialize NMF algorithms (SVD-NMF). Numerical experiments on the two face databases ORL and YALE [16, 17] show that our method is better than NNDSVD.
| [
"Hanli Qiao",
"['Hanli Qiao']"
]
|
cs.LG stat.ME | null | 1410.2838 | null | null | http://arxiv.org/pdf/1410.2838v1 | 2014-10-10T16:43:16Z | 2014-10-10T16:43:16Z | Approximate False Positive Rate Control in Selection Frequency for
Random Forest | Random Forest has become one of the most popular tools for feature selection.
Its ability to deal with high-dimensional data makes this algorithm especially
useful for studies in neuroimaging and bioinformatics. Despite its popularity
and wide use, feature selection in Random Forest still lacks a crucial
ingredient: false positive rate control. To date there is no efficient,
principled and computationally light-weight solution to this shortcoming. As a
result, researchers using Random Forest for feature selection have to resort to
using heuristically set thresholds on feature rankings. This article builds an
approximate probabilistic model for the feature selection process in random
forest training, which allows us to compute an estimated false positive rate
for a given threshold on selection frequency. Hence, it presents a principled
way to determine thresholds for the selection of relevant features without any
additional computational load. Experimental analysis with synthetic data
demonstrates that the proposed approach can limit false positive rates on the
order of the desired values and keep false negative rates low. Results show
that this holds even in the presence of a complex correlation structure between
features. Its good statistical properties and light-weight computational needs
make this approach widely applicable to feature selection for a wide range of applications.
| [
"['Ender Konukoglu' 'Melanie Ganz']",
"Ender Konukoglu and Melanie Ganz"
]
|
cs.LO cs.LG math.LO math.PR | 10.1080/11663081.2016.1139967 | 1410.3059 | null | null | http://arxiv.org/abs/1410.3059v2 | 2014-12-12T16:47:40Z | 2014-10-12T07:53:00Z | Computabilities of Validity and Satisfiability in Probability Logics
over Finite and Countable Models | The $\epsilon$-logic (which is called $\epsilon$E-logic in this paper) of
Kuyper and Terwijn is a variant of first order logic with the same syntax, in
which the models are equipped with probability measures and in which the
$\forall x$ quantifier is interpreted as "there exists a set $A$ of measure
$\ge 1 - \epsilon$ such that for each $x \in A$, ...." Previously, Kuyper and
Terwijn proved that the general satisfiability and validity problems for this
logic are, i) for rational $\epsilon \in (0, 1)$, respectively
$\Sigma^1_1$-complete and $\Pi^1_1$-hard, and ii) for $\epsilon = 0$,
respectively decidable and $\Sigma^0_1$-complete. The adjective "general" here
means "uniformly over all languages."
We extend these results in the scenario of finite models. In particular, we
show that the problems of satisfiability by and validity over finite models in
$\epsilon$E-logic are, i) for rational $\epsilon \in (0, 1)$, respectively
$\Sigma^0_1$- and $\Pi^0_1$-complete, and ii) for $\epsilon = 0$, respectively
decidable and $\Pi^0_1$-complete. Although partial results toward the countable
case are also achieved, the computability of $\epsilon$E-logic over countable
models still remains largely unsolved. In addition, most of the results, of
this paper and of Kuyper and Terwijn, do not apply to individual languages with
a finite number of unary predicates. Reducing this requirement continues to be
a major point of research.
On the positive side, we derive the decidability of the corresponding
problems for monadic relational languages --- equality- and function-free
languages with finitely many unary and zero other predicates. This result holds
for all three of the unrestricted, the countable, and the finite model cases.
Applications in computational learning theory, weighted graphs, and neural
networks are discussed in the context of these decidability and undecidability
results.
| [
"Greg Yang",
"['Greg Yang']"
]
|
cs.LG cs.NI | null | 1410.3145 | null | null | http://arxiv.org/pdf/1410.3145v1 | 2014-10-12T20:43:04Z | 2014-10-12T20:43:04Z | Machine Learning Techniques in Cognitive Radio Networks | Cognitive radio is an intelligent radio that can be programmed and configured
dynamically to fully use the frequency resources that are not used by licensed
users. The term describes radio devices that are capable of learning and adapting their transmission to the external radio environment, which means they have some kind of intelligence for monitoring the radio environment, learning from it, and making smart decisions. In this paper, we review some examples of the use of machine learning techniques in cognitive radio networks for implementing such intelligent radios.
| [
"['Peter Hossain' 'Adaulfo Komisarczuk' 'Garin Pawetczak' 'Sarah Van Dijk'\n 'Isabella Axelsen']",
"Peter Hossain, Adaulfo Komisarczuk, Garin Pawetczak, Sarah Van Dijk,\n Isabella Axelsen"
]
|
cs.CG cs.LG math.AT stat.ML | null | 1410.3169 | null | null | http://arxiv.org/pdf/1410.3169v1 | 2014-10-13T00:21:59Z | 2014-10-13T00:21:59Z | Multi-Scale Local Shape Analysis and Feature Selection in Machine
Learning Applications | We introduce a method called multi-scale local shape analysis, or MLSA, for
extracting features that describe the local structure of points within a
dataset. The method uses both geometric and topological features at multiple
levels of granularity to capture diverse types of local information for
subsequent machine learning algorithms operating on the dataset. Using
synthetic and real dataset examples, we demonstrate significant performance
improvement of classification algorithms constructed for these datasets with
correspondingly augmented features.
| [
"Paul Bendich, Ellen Gasparovic, John Harer, Rauf Izmailov, and Linda\n Ness",
"['Paul Bendich' 'Ellen Gasparovic' 'John Harer' 'Rauf Izmailov'\n 'Linda Ness']"
]
|
stat.ML cs.LG | null | 1410.3314 | null | null | http://arxiv.org/pdf/1410.3314v1 | 2014-10-13T14:04:15Z | 2014-10-13T14:04:15Z | Propagation Kernels | We introduce propagation kernels, a general graph-kernel framework for
efficiently measuring the similarity of structured data. Propagation kernels
are based on monitoring how information spreads through a set of given graphs.
They leverage early-stage distributions from propagation schemes such as random
walks to capture structural information encoded in node labels, attributes, and
edge information. This has two benefits. First, off-the-shelf propagation
schemes can be used to naturally construct kernels for many graph types,
including labeled, partially labeled, unlabeled, directed, and attributed
graphs. Second, by leveraging existing efficient and informative propagation
schemes, propagation kernels can be considerably faster than state-of-the-art
approaches without sacrificing predictive performance. We will also show that
if the graphs at hand have a regular structure, for instance when modeling
image or video data, one can exploit this regularity to scale the kernel
computation to large databases of graphs with thousands of nodes. We support
our contributions by exhaustive experiments on a number of real-world graphs
from a variety of application domains.
| [
"['Marion Neumann' 'Roman Garnett' 'Christian Bauckhage'\n 'Kristian Kersting']",
"Marion Neumann and Roman Garnett and Christian Bauckhage and Kristian\n Kersting"
]
|
cs.LG cs.GT | null | 1410.3341 | null | null | http://arxiv.org/pdf/1410.3341v1 | 2014-10-09T03:51:19Z | 2014-10-09T03:51:19Z | Generalization Analysis for Game-Theoretic Machine Learning | For Internet applications like sponsored search, caution needs to be taken
when using machine learning to optimize their mechanisms (e.g., auction) since
self-interested agents in these applications may change their behaviors (and
thus the data distribution) in response to the mechanisms. To tackle this
problem, a framework called game-theoretic machine learning (GTML) was recently
proposed, which first learns a Markov behavior model to characterize agents'
behaviors, and then learns the optimal mechanism by simulating agents' behavior
changes in response to the mechanism. While GTML has demonstrated practical
success, its generalization analysis is challenging because the behavior data
are non-i.i.d. and dependent on the mechanism. To address this challenge,
first, we decompose the generalization error for GTML into the behavior
learning error and the mechanism learning error; second, for the behavior
learning error, we obtain novel non-asymptotic error bounds for both parametric
and non-parametric behavior learning methods; third, for the mechanism learning
error, we derive a uniform convergence bound based on a new concept called
nested covering number of the mechanism space and the generalization analysis
techniques developed for mixing sequences. To the best of our knowledge, this
is the first work on the generalization analysis of GTML, and we believe it has
general implications for the theoretical analysis of other complicated machine
learning problems.
| [
"['Haifang Li' 'Fei Tian' 'Wei Chen' 'Tao Qin' 'Tie-Yan Liu']",
"Haifang Li, Fei Tian, Wei Chen, Tao Qin, Tie-Yan Liu"
]
|
stat.ML cs.LG | null | 1410.3348 | null | null | http://arxiv.org/pdf/1410.3348v1 | 2014-10-13T15:27:45Z | 2014-10-13T15:27:45Z | Fast Multilevel Support Vector Machines | Solving different types of optimization models (including parameters fitting)
for support vector machines on large-scale training data is often an expensive
computational task. This paper proposes a multilevel algorithmic framework that
scales efficiently to very large data sets. Instead of solving the whole
training set in one optimization process, the support vectors are obtained and
gradually refined at multiple levels of coarseness of the data. The proposed
framework includes: (a) construction of hierarchy of large-scale data coarse
representations, and (b) a local processing of updating the hyperplane
throughout this hierarchy. Our multilevel framework substantially improves the
computational time without losing the quality of the classifiers. The algorithms
are demonstrated for both regular and weighted support vector machines.
Experimental results are presented for balanced and imbalanced classification
problems. Quality improvement on several imbalanced data sets has been
observed.
| [
"Talayeh Razzaghi and Ilya Safro",
"['Talayeh Razzaghi' 'Ilya Safro']"
]
|
math.DG cs.LG math.MG stat.ML | null | 1410.3351 | null | null | http://arxiv.org/pdf/1410.3351v5 | 2018-03-21T20:47:22Z | 2014-10-13T15:37:20Z | Ricci Curvature and the Manifold Learning Problem | Consider a sample of $n$ points taken i.i.d from a submanifold $\Sigma$ of
Euclidean space. We show that there is a way to estimate the Ricci curvature of
$\Sigma$ with respect to the induced metric from the sample. Our method is
grounded in the notions of Carr\'e du Champ for diffusion semi-groups, the
theory of Empirical processes and local Principal Component Analysis.
| [
"Antonio G. Ache and Micah W. Warren",
"['Antonio G. Ache' 'Micah W. Warren']"
]
|
cs.DS cs.IT cs.LG math.IT | null | 1410.3386 | null | null | http://arxiv.org/pdf/1410.3386v2 | 2014-10-14T00:27:21Z | 2014-10-13T16:36:10Z | Testing Poisson Binomial Distributions | A Poisson Binomial distribution over $n$ variables is the distribution of the
sum of $n$ independent Bernoullis. We provide a sample near-optimal algorithm
for testing whether a distribution $P$ supported on $\{0,...,n\}$, to which we have sample access, is a Poisson Binomial distribution or far from all Poisson Binomial distributions. The sample complexity of our algorithm is $O(n^{1/4})$, for which we provide a matching lower bound. We note that our sample complexity
improves quadratically upon that of the naive "learn followed by tolerant-test"
approach, while instance optimal identity testing [VV14] is not applicable
since we are looking to simultaneously test against a whole family of
distributions.
| [
"['Jayadev Acharya' 'Constantinos Daskalakis']",
"Jayadev Acharya and Constantinos Daskalakis"
]
|
cs.OS cs.LG cs.SY | null | 1410.3463 | null | null | http://arxiv.org/pdf/1410.3463v1 | 2014-10-13T14:26:28Z | 2014-10-13T14:26:28Z | Mining Block I/O Traces for Cache Preloading with Sparse Temporal
Non-parametric Mixture of Multivariate Poisson | Existing caching strategies, in the storage domain, though well suited to
exploit short-range spatio-temporal patterns, are unable to leverage long-range motifs for improving hit rates. Motivated by this, we investigate novel Bayesian non-parametric modeling (BNP) techniques for count vectors to capture long-range correlations for cache preloading by mining Block I/O traces. Such traces comprise a sequence of memory accesses that can be aggregated into
high-dimensional sparse correlated count vector sequences.
While there are several state of the art BNP algorithms for clustering and
their temporal extensions for prediction, there has been no work on exploring
these for correlated count vectors. Our first contribution addresses this gap
by proposing a DP-based mixture model of Multivariate Poisson (DP-MMVP) and its temporal extension (HMM-DP-MMVP) that captures the full covariance structure of
multivariate count data. However, modeling full covariance structure for count
vectors is computationally expensive, particularly for high dimensional data.
Hence, we exploit sparsity in our count vectors, and as our main contribution,
introduce the Sparse DP mixture of multivariate Poisson (Sparse-DP-MMVP), generalizing our DP-MMVP mixture model, which also leads to more efficient
inference. We then discuss a temporal extension to our model for cache
preloading.
We take the first step towards mining historical data to capture long-range patterns in storage traces for cache preloading. Experimentally, we show a dramatic improvement in hit rates on benchmark traces and lay the groundwork for
further research in storage domain to reduce latencies using data mining
techniques to capture long range motifs.
| [
"Lavanya Sita Tekumalla, Chiranjib Bhattacharyya",
"['Lavanya Sita Tekumalla' 'Chiranjib Bhattacharyya']"
]
|
hep-ph cs.LG hep-ex | 10.1103/PhysRevLett.114.111801 | 1410.3469 | null | null | http://arxiv.org/abs/1410.3469v1 | 2014-10-13T20:00:03Z | 2014-10-13T20:00:03Z | Enhanced Higgs to $\tau^+\tau^-$ Searches with Deep Learning | The Higgs boson is thought to provide the interaction that imparts mass to
the fundamental fermions, but while measurements at the Large Hadron Collider
(LHC) are consistent with this hypothesis, current analysis techniques lack the
statistical power to cross the traditional 5$\sigma$ significance barrier
without more data. \emph{Deep learning} techniques have the potential to
increase the statistical power of this analysis by \emph{automatically}
learning complex, high-level data representations. In this work, deep neural
networks are used to detect the decay of the Higgs to a pair of tau leptons. A
Bayesian optimization algorithm is used to tune the network architecture and
training algorithm hyperparameters, resulting in a deep network of eight
non-linear processing layers that improves upon the performance of shallow
classifiers even without the use of features specifically engineered by
physicists for this application. The improvement in discovery significance is
equivalent to an increase in the accumulated dataset of 25\%.
| [
"Pierre Baldi, Peter Sadowski, Daniel Whiteson",
"['Pierre Baldi' 'Peter Sadowski' 'Daniel Whiteson']"
]
|
cs.LG stat.ML | null | 1410.3595 | null | null | http://arxiv.org/pdf/1410.3595v1 | 2014-10-14T07:29:35Z | 2014-10-14T07:29:35Z | A stochastic behavior analysis of stochastic restricted-gradient descent
algorithm in reproducing kernel Hilbert spaces | This paper presents a stochastic behavior analysis of a kernel-based
stochastic restricted-gradient descent method. The restricted gradient gives a
steepest ascent direction within the so-called dictionary subspace. The
analysis provides the transient and steady-state performance in terms of the mean squared error criterion. It also includes stability conditions in the mean and
mean-square sense. The present study is based on the analysis of the kernel
normalized least mean square (KNLMS) algorithm initially proposed by Chen et
al. Simulation results validate the analysis.
| [
"Masa-aki Takizawa, Masahiro Yukawa, and Cedric Richard",
"['Masa-aki Takizawa' 'Masahiro Yukawa' 'Cedric Richard']"
]
|
stat.ML cond-mat.dis-nn cond-mat.stat-mech cs.LG | 10.7566/JPSJ.84.024801 | 1410.3596 | null | null | http://arxiv.org/abs/1410.3596v2 | 2014-12-04T02:24:07Z | 2014-10-14T07:41:34Z | Detection of cheating by decimation algorithm | We expand the item response theory to study the case of "cheating students"
for a set of exams, trying to detect them by applying a greedy algorithm of
inference. This extended model is closely related to the Boltzmann machine
learning. In this paper we aim to infer the correct biases and interactions of
our model by considering a relatively small number of sets of training data.
Nevertheless, the greedy algorithm that we employed in the present study
exhibits good performance with only a small amount of training data. The key point is
the sparseness of the interactions in our problem in the context of the
Boltzmann machine learning: the existence of cheating students is expected to
be very rare (possibly even in the real world). We compare a standard approach to
infer the sparse interactions in the Boltzmann machine learning to our greedy
algorithm and we find the latter to be superior in several aspects.
| [
"['Shogo Yamanaka' 'Masayuki Ohzeki' 'Aurelien Decelle']",
"Shogo Yamanaka, Masayuki Ohzeki, Aurelien Decelle"
]
|
cs.CL cs.LG | null | 1410.3791 | null | null | http://arxiv.org/pdf/1410.3791v1 | 2014-10-14T18:37:32Z | 2014-10-14T18:37:32Z | POLYGLOT-NER: Massive Multilingual Named Entity Recognition | The increasing diversity of languages used on the web introduces a new level
of complexity to Information Retrieval (IR) systems. We can no longer assume
that textual content is written in one language or even the same language
family. In this paper, we demonstrate how to build massive multilingual
annotators with minimal human expertise and intervention. We describe a system
that builds Named Entity Recognition (NER) annotators for 40 major languages
using Wikipedia and Freebase. Our approach does not require NER human annotated
datasets or language specific resources like treebanks, parallel corpora, and
orthographic rules. The novelty of our approach lies in using only language-agnostic techniques while achieving competitive performance.
Our method learns distributed word representations (word embeddings) which
encode semantic and syntactic features of words in each language. Then, we
automatically generate datasets from Wikipedia link structure and Freebase
attributes. Finally, we apply two preprocessing stages (oversampling and exact
surface form matching) which do not require any linguistic expertise.
Our evaluation is twofold: first, we demonstrate the system performance on
human annotated datasets. Second, for languages where no gold-standard
benchmarks are available, we propose a new method, distant evaluation, based on
statistical machine translation.
| [
"['Rami Al-Rfou' 'Vivek Kulkarni' 'Bryan Perozzi' 'Steven Skiena']",
"Rami Al-Rfou, Vivek Kulkarni, Bryan Perozzi, Steven Skiena"
]
|
stat.ML cond-mat.stat-mech cs.LG cs.NE | null | 1410.3831 | null | null | http://arxiv.org/pdf/1410.3831v1 | 2014-10-14T20:00:09Z | 2014-10-14T20:00:09Z | An exact mapping between the Variational Renormalization Group and Deep
Learning | Deep learning is a broad set of techniques that uses multiple layers of
representation to automatically learn relevant features directly from
structured data. Recently, such techniques have yielded record-breaking results
on a diverse set of difficult machine learning tasks in computer vision, speech
recognition, and natural language processing. Despite the enormous success of
deep learning, relatively little is understood theoretically about why these
techniques are so successful at feature learning and compression. Here, we show
that deep learning is intimately related to one of the most important and
successful techniques in theoretical physics, the renormalization group (RG).
RG is an iterative coarse-graining scheme that allows for the extraction of
relevant features (i.e. operators) as a physical system is examined at
different length scales. We construct an exact mapping between the variational
renormalization group, first introduced by Kadanoff, and deep learning
architectures based on Restricted Boltzmann Machines (RBMs). We illustrate
these ideas using the nearest-neighbor Ising Model in one and two dimensions. Our results suggest that deep learning algorithms may be employing a generalized RG-like scheme to learn relevant features from data.
| [
"Pankaj Mehta and David J. Schwab",
"['Pankaj Mehta' 'David J. Schwab']"
]
|
cs.DS cs.LG stat.ML | null | 1410.3886 | null | null | http://arxiv.org/pdf/1410.3886v1 | 2014-10-14T22:41:20Z | 2014-10-14T22:41:20Z | Tighter Low-rank Approximation via Sampling the Leveraged Element | In this work, we propose a new randomized algorithm for computing a low-rank
approximation to a given matrix. Taking an approach different from existing
literature, our method first involves a specific biased sampling, with an
element being chosen based on the leverage scores of its row and column, and
then involves weighted alternating minimization over the factored form of the
intended low-rank matrix, to minimize error only on these samples. Our method
can leverage input sparsity, yet produce approximations in {\em spectral} (as
opposed to the weaker Frobenius) norm; this combines the best aspects of
otherwise disparate current results, but with a dependence on the condition
number $\kappa = \sigma_1/\sigma_r$. In particular we require $O(nnz(M) +
\frac{n\kappa^2 r^5}{\epsilon^2})$ computations to generate a rank-$r$
approximation to $M$ in spectral norm. In contrast, the best existing method
requires $O(nnz(M)+ \frac{nr^2}{\epsilon^4})$ time to compute an approximation
in Frobenius norm. Besides the tightness in spectral norm, we have a better
dependence on the error $\epsilon$. Our method is naturally and highly
parallelizable.
Our new approach enables two extensions that are interesting on their own.
The first is a new method to directly compute a low-rank approximation (in
efficient factored form) to the product of two given matrices; it computes a
small random set of entries of the product, and then executes weighted
alternating minimization (as before) on these. The sampling strategy is
different because now we cannot access leverage scores of the product matrix
(but instead have to work with input matrices). The second extension is an
improved algorithm with smaller communication complexity for the distributed
PCA setting (where each server has a small set of rows of the matrix and wants to compute a low-rank approximation with a small amount of communication with the other servers).
| [
"Srinadh Bhojanapalli, Prateek Jain, Sujay Sanghavi",
"['Srinadh Bhojanapalli' 'Prateek Jain' 'Sujay Sanghavi']"
]
|
cs.LG cs.IR cs.SI | null | 1410.3915 | null | null | http://arxiv.org/pdf/1410.3915v1 | 2014-10-15T03:10:26Z | 2014-10-15T03:10:26Z | Spotting Suspicious Link Behavior with fBox: An Adversarial Perspective | How can we detect suspicious users in large online networks? Online
popularity of a user or product (via follows, page-likes, etc.) can be
monetized on the premise of higher ad click-through rates or increased sales.
Web services and social networks which incentivize popularity thus suffer from
a major problem of fake connections from link fraudsters looking to make a
quick buck. Typical methods of catching this suspicious behavior use spectral
techniques to spot large groups of often blatantly fraudulent (but sometimes
honest) users. However, small-scale, stealthy attacks may go unnoticed due to
the nature of low-rank eigenanalysis used in practice.
In this work, we take an adversarial approach to find and prove claims about
the weaknesses of modern, state-of-the-art spectral methods and propose fBox,
an algorithm designed to catch small-scale, stealth attacks that slip below the
radar. Our algorithm has the following desirable properties: (a) it has
theoretical underpinnings, (b) it is shown to be highly effective on real data
and (c) it is scalable (linear in the input size). We evaluate fBox on a large,
public 41.7 million node, 1.5 billion edge who-follows-whom social graph from
Twitter in 2010 and with high precision identify many suspicious accounts which
have persisted without suspension even to this day.
| [
"Neil Shah, Alex Beutel, Brian Gallagher, Christos Faloutsos",
"['Neil Shah' 'Alex Beutel' 'Brian Gallagher' 'Christos Faloutsos']"
]
|
cs.LG | null | 1410.3935 | null | null | http://arxiv.org/pdf/1410.3935v1 | 2014-10-15T06:01:03Z | 2014-10-15T06:01:03Z | A Logic-based Approach to Generatively Defined Discriminative Modeling | Conditional random fields (CRFs) are usually specified by graphical models
but in this paper we propose to use probabilistic logic programs and specify
them generatively. Our intention is first to provide a unified approach to CRFs
for complex modeling through the use of a Turing complete language and second
to offer a convenient way of realizing generative-discriminative pairs in
machine learning to compare generative and discriminative models and choose the
best model. We implemented our approach as the D-PRISM language by modifying
PRISM, a logic-based probabilistic modeling language for generative modeling,
while exploiting its dynamic programming mechanism for efficient probability
computation. We tested D-PRISM with logistic regression, a linear-chain CRF and
a CRF-CFG and empirically confirmed their excellent discriminative performance
compared to their generative counterparts, i.e.\ naive Bayes, an HMM and a
PCFG. We also introduced new CRF models, CRF-BNCs and CRF-LCGs. They are CRF
versions of Bayesian network classifiers and probabilistic left-corner grammars, respectively, and are easily implementable in D-PRISM. We empirically showed that
they outperform their generative counterparts as expected.
| [
"['Taisuke Sato' 'Keiichi Kubota' 'Yoshitaka Kameya']",
"Taisuke Sato, Keiichi Kubota, Yoshitaka Kameya"
]
|
cs.LG stat.CO stat.ML | null | 1410.4009 | null | null | http://arxiv.org/pdf/1410.4009v1 | 2014-10-15T11:01:52Z | 2014-10-15T11:01:52Z | Thompson sampling with the online bootstrap | Thompson sampling provides a solution to bandit problems in which new
observations are allocated to arms with the posterior probability that an arm
is optimal. While sometimes easy to implement and asymptotically optimal,
Thompson sampling can be computationally demanding in large scale bandit
problems, and its performance is dependent on the model fit to the observed
data. We introduce bootstrap Thompson sampling (BTS), a heuristic method for
solving bandit problems which modifies Thompson sampling by replacing the
posterior distribution used in Thompson sampling by a bootstrap distribution.
We first explain BTS and show that the performance of BTS is competitive to
Thompson sampling in the well-studied Bernoulli bandit case. Subsequently, we
detail why BTS using the online bootstrap is more scalable than regular
Thompson sampling, and we show through simulation that BTS is more robust to a
misspecified error distribution. BTS is an appealing modification of Thompson
sampling, especially when samples from the posterior are otherwise not
available or are costly.
| [
"['Dean Eckles' 'Maurits Kaptein']",
"Dean Eckles and Maurits Kaptein"
]
|
stat.ML cs.LG cs.NA math.OC | null | 1410.4062 | null | null | http://arxiv.org/pdf/1410.4062v1 | 2014-10-15T13:50:34Z | 2014-10-15T13:50:34Z | Complexity Issues and Randomization Strategies in Frank-Wolfe Algorithms
for Machine Learning | Frank-Wolfe algorithms for convex minimization have recently gained
considerable attention from the Optimization and Machine Learning communities,
as their properties make them a suitable choice in a variety of applications.
However, as each iteration requires to optimize a linear model, a clever
implementation is crucial to make such algorithms viable on large-scale
datasets. For this purpose, approximation strategies based on a random sampling
have been proposed by several researchers. In this work, we perform an
experimental study on the effectiveness of these techniques, analyze possible
alternatives and provide some guidelines based on our results.
| [
"Emanuele Frandi, Ricardo Nanculef, Johan Suykens",
"['Emanuele Frandi' 'Ricardo Nanculef' 'Johan Suykens']"
]
|
cs.LG | null | 1410.4210 | null | null | http://arxiv.org/pdf/1410.4210v1 | 2014-10-15T20:08:21Z | 2014-10-15T20:08:21Z | Two-Layer Feature Reduction for Sparse-Group Lasso via Decomposition of
Convex Sets | Sparse-Group Lasso (SGL) has been shown to be a powerful regression technique
for simultaneously discovering group and within-group sparse patterns by using
a combination of the $\ell_1$ and $\ell_2$ norms. However, in large-scale
applications, the complexity of the regularizers entails great computational
challenges. In this paper, we propose a novel Two-Layer Feature REduction
method (TLFre) for SGL via a decomposition of its dual feasible set. The
two-layer reduction is able to quickly identify the inactive groups and the
inactive features, respectively, which are guaranteed to be absent from the
sparse representation and can be removed from the optimization. Existing
feature reduction methods are only applicable for sparse models with one
sparsity-inducing regularizer. To the best of our knowledge, TLFre is the first one
that is capable of dealing with multiple sparsity-inducing regularizers.
Moreover, TLFre has a very low computational cost and can be integrated with
any existing solvers. We also develop a screening method---called DPC
(DecomPosition of Convex set)---for the nonnegative Lasso problem. Experiments
on both synthetic and real data sets show that TLFre and DPC improve the
efficiency of SGL and nonnegative Lasso by several orders of magnitude.
| [
"['Jie Wang' 'Jieping Ye']",
"Jie Wang and Jieping Ye"
]
|
cs.LG cs.CV | null | 1410.4341 | null | null | http://arxiv.org/pdf/1410.4341v1 | 2014-10-16T09:09:45Z | 2014-10-16T09:09:45Z | Implicit segmentation of Kannada characters in offline handwriting
recognition using hidden Markov models | We describe a method for classification of handwritten Kannada characters
using Hidden Markov Models (HMMs). Kannada script is agglutinative, where
simple shapes are concatenated horizontally to form a character. This results
in a large number of characters making the task of classification difficult.
Character segmentation plays a significant role in reducing the number of
classes. Explicit segmentation techniques suffer when overlapping shapes are
present, which is common in the case of handwritten text. We use HMMs to take
advantage of the agglutinative nature of Kannada script, which allows us to
perform implicit segmentation of characters along with recognition. All the
experiments are performed on the Chars74k dataset that consists of 657
handwritten characters collected across multiple users. Gradient-based features
are extracted from individual characters and are used to train character HMMs.
The use of the implicit segmentation technique at the character level resulted in
an improvement of around 10%. This system also outperformed an existing system
tested on the same dataset by around 16%. Analysis based on learning curves
showed that increasing the training data could result in better accuracy.
Accordingly, we collected additional data and obtained an improvement of 4%
with 6 additional samples.
| [
"Manasij Venkatesh, Vikas Majjagi, and Deepu Vijayasenan",
"['Manasij Venkatesh' 'Vikas Majjagi' 'Deepu Vijayasenan']"
]
|
cs.SI cs.LG stat.ML | null | 1410.4355 | null | null | http://arxiv.org/pdf/1410.4355v4 | 2015-04-20T11:55:53Z | 2014-10-16T09:57:20Z | Multi-Level Anomaly Detection on Time-Varying Graph Data | This work presents a novel modeling and analysis framework for graph
sequences which addresses the challenge of detecting and contextualizing
anomalies in labelled, streaming graph data. We introduce a generalization of
the BTER model of Seshadhri et al. by adding flexibility to community
structure, and use this model to perform multi-scale graph anomaly detection.
Specifically, probability models describing coarse subgraphs are built by
aggregating probabilities at finer levels, and these closely related
hierarchical models simultaneously detect deviations from expectation. This
technique provides insight into a graph's structure and internal context that
may shed light on a detected event. Additionally, this multi-scale analysis
facilitates intuitive visualizations by allowing users to narrow focus from an
anomalous graph to particular subgraphs or nodes causing the anomaly.
For evaluation, two hierarchical anomaly detectors are tested against a
baseline Gaussian method on a series of sampled graphs. We demonstrate that our
graph statistics-based approach outperforms both a distribution-based detector
and the baseline in a labeled setting with community structure, and it
accurately detects anomalies in synthetic and real-world datasets at the node,
subgraph, and graph levels. To illustrate the accessibility of information made
possible via this technique, the anomaly detector and an associated interactive
visualization tool are tested on NCAA football data, where teams and
conferences that moved within the league are identified with perfect recall,
and precision greater than 0.786.
| [
"Robert A. Bridges, John Collins, Erik M. Ferragut, Jason Laska, Blair\n D. Sullivan",
"['Robert A. Bridges' 'John Collins' 'Erik M. Ferragut' 'Jason Laska'\n 'Blair D. Sullivan']"
]
|
stat.ML cs.LG | null | 1410.4391 | null | null | http://arxiv.org/pdf/1410.4391v4 | 2016-12-02T00:05:32Z | 2014-10-16T12:15:17Z | Multivariate Spearman's rho for aggregating ranks using copulas | We study the problem of rank aggregation: given a set of ranked lists, we
want to form a consensus ranking. Furthermore, we consider the case of extreme
lists, i.e., only the ranks of the best or worst elements are known. We impute missing ranks by the average value and generalise Spearman's $\rho$ to extreme
ranks. Our main contribution is the derivation of a non-parametric estimator
for rank aggregation based on multivariate extensions of Spearman's $\rho$, which measures correlation between a set of ranked lists. Multivariate Spearman's $\rho$ is defined using copulas, and we show that the geometric mean of
normalised ranks maximises multivariate correlation. Motivated by this, we
propose a weighted geometric mean approach for learning to rank which has a
closed form least squares solution. When only the best or worst elements of a
ranked list are known, we impute the missing ranks by the average value,
allowing us to apply Spearman's $\rho$. Finally, we demonstrate good performance
on the rank aggregation benchmarks MQ2007 and MQ2008.
| [
"Justin Bedo and Cheng Soon Ong",
"['Justin Bedo' 'Cheng Soon Ong']"
]
|
cs.NI cs.LG | 10.1155/2015/717095 | 1410.4461 | null | null | http://arxiv.org/abs/1410.4461v2 | 2014-11-12T15:51:37Z | 2014-10-16T15:10:59Z | Map Matching based on Conditional Random Fields and Route Preference
Mining for Uncertain Trajectories | In order to improve offline map matching accuracy of low-sampling-rate GPS, a
map matching algorithm based on conditional random fields (CRF) and route
preference mining is proposed. In this algorithm, the road offset distance and the temporal-spatial relationship between the sampling points are used as features of the GPS trajectory in the CRF model, which has the advantage of flexibly integrating context information into the features. When the sampling rate is too low, it is difficult to guarantee effectiveness using only the temporal-spatial context modeled in the CRF, so the route preference of a driver is superposed on the temporal-spatial transition features as a supplement. The experimental results show that this method can improve the matching accuracy, especially in the case of a low sampling rate.
| [
"['Xu Ming' 'Du Yi-man' 'Wu Jian-ping' 'Zhou Yang']",
"Xu Ming, Du Yi-man, Wu Jian-ping, Zhou Yang"
]
|
cs.CV cs.LG | null | 1410.4470 | null | null | http://arxiv.org/pdf/1410.4470v2 | 2014-10-17T06:12:37Z | 2014-10-16T15:51:50Z | MKL-RT: Multiple Kernel Learning for Ratio-trace Problems via Convex
Optimization | In the recent past, automatic selection or combination of kernels (or
features) based on multiple kernel learning (MKL) approaches has been receiving
significant attention from various research communities. Though MKL has been
extensively studied in the context of support vector machines (SVM), it is
relatively less explored for ratio-trace problems. In this paper, we show that
MKL can be formulated as a convex optimization problem for a general class of
ratio-trace problems that encompasses many popular algorithms used in various
computer vision applications. We also provide an optimization procedure that is
guaranteed to converge to the global optimum of the proposed optimization
problem. We experimentally demonstrate that the proposed MKL approach, which we
refer to as MKL-RT, can be successfully used to select features for
discriminative dimensionality reduction and cross-modal retrieval. We also show
that the proposed convex MKL-RT approach performs better than the recently
proposed non-convex MKL-DR approach.
| [
"Raviteja Vemulapalli, Vinay Praneeth Boda, and Rama Chellappa",
"['Raviteja Vemulapalli' 'Vinay Praneeth Boda' 'Rama Chellappa']"
]
|
stat.ML cs.CL cs.LG | null | 1410.4510 | null | null | http://arxiv.org/pdf/1410.4510v2 | 2014-11-21T16:38:59Z | 2014-10-16T17:35:31Z | Graph-Sparse LDA: A Topic Model with Structured Sparsity | Originally designed to model text, topic modeling has become a powerful tool
for uncovering latent structure in domains including medicine, finance, and
vision. The goals for the model vary depending on the application: in some
cases, the discovered topics may be used for prediction or some other
downstream task. In other cases, the content of the topic itself may be of
intrinsic scientific interest.
Unfortunately, even using modern sparse techniques, the discovered topics are
often difficult to interpret due to the high dimensionality of the underlying
space. To improve topic interpretability, we introduce Graph-Sparse LDA, a
hierarchical topic model that leverages knowledge of relationships between
words (e.g., as encoded by an ontology). In our model, topics are summarized by
a few latent concept-words from the underlying graph that explain the observed
words. Graph-Sparse LDA recovers sparse, interpretable summaries on two
real-world biomedical datasets while matching state-of-the-art prediction
performance.
| [
"Finale Doshi-Velez and Byron Wallace and Ryan Adams",
"['Finale Doshi-Velez' 'Byron Wallace' 'Ryan Adams']"
]
|
cs.LG | 10.1016/j.neucom.2015.06.065 | 1410.4573 | null | null | http://arxiv.org/abs/1410.4573v1 | 2014-10-16T20:04:49Z | 2014-10-16T20:04:49Z | Learning a hyperplane regressor by minimizing an exact bound on the VC
dimension | The capacity of a learning machine is measured by its Vapnik-Chervonenkis
dimension, and learning machines with a low VC dimension generalize better. It
is well known that the VC dimension of SVMs can be very large or unbounded,
even though they generally yield state-of-the-art learning performance. In this
paper, we show how to learn a hyperplane regressor by minimizing an exact, or $\Theta$, bound on its VC dimension. The proposed approach, termed the Minimal Complexity Machine (MCM) Regressor, involves solving a simple linear programming problem. Experimental results show that, on a number of benchmark datasets, the proposed approach yields regressors with error rates much lower than those obtained with conventional SVM regressors, while often
using fewer support vectors. On some benchmark datasets, the number of support
vectors is less than one tenth the number used by SVMs, indicating that the MCM
does indeed learn simpler representations.
| [
"Jayadeva, Suresh Chandra, Siddarth Sabharwal, and Sanjit S. Batra",
"['Jayadeva' 'Suresh Chandra' 'Siddarth Sabharwal' 'Sanjit S. Batra']"
]
|
cs.LG cs.NE cs.NI stat.ML | null | 1410.4599 | null | null | http://arxiv.org/pdf/1410.4599v2 | 2014-10-23T21:55:30Z | 2014-10-16T22:29:12Z | Non-parametric Bayesian Learning with Deep Learning Structure and Its
Applications in Wireless Networks | In this paper, we present an infinite hierarchical non-parametric Bayesian
model to extract the hidden factors over observed data, where the number of
hidden factors for each layer is unknown and can be potentially infinite.
Moreover, the number of layers can also be infinite. We construct the model
structure that allows continuous values for the hidden factors and weights,
which makes the model suitable for various applications. We use the
Metropolis-Hastings method to infer the model structure. The performance of the algorithm is then evaluated experimentally. Simulation results show that
the model fits the underlying structure of simulated data.
| [
"Erte Pan and Zhu Han",
"['Erte Pan' 'Zhu Han']"
]
|
cs.LG cs.AI | null | 1410.4604 | null | null | http://arxiv.org/pdf/1410.4604v1 | 2014-10-16T23:30:08Z | 2014-10-16T23:30:08Z | Domain-Independent Optimistic Initialization for Reinforcement Learning | In Reinforcement Learning (RL), it is common to use optimistic initialization
of value functions to encourage exploration. However, such an approach
generally depends on the domain, viz., the scale of the rewards must be known,
and the feature representation must have a constant norm. We present a simple
approach that performs optimistic initialization with less dependence on the
domain.
| [
"['Marlos C. Machado' 'Sriram Srinivasan' 'Michael Bowling']",
"Marlos C. Machado, Sriram Srinivasan and Michael Bowling"
]
|
cs.NE cs.AI cs.LG | null | 1410.4615 | null | null | http://arxiv.org/pdf/1410.4615v3 | 2015-02-19T15:33:35Z | 2014-10-17T01:35:12Z | Learning to Execute | Recurrent Neural Networks (RNNs) with Long Short-Term Memory units (LSTM) are
widely used because they are expressive and are easy to train. Our interest
lies in empirically evaluating the expressiveness and the learnability of LSTMs
in the sequence-to-sequence regime by training them to evaluate short computer
programs, a domain that has traditionally been seen as too complex for neural
networks. We consider a simple class of programs that can be evaluated with a
single left-to-right pass using constant memory. Our main result is that LSTMs
can learn to map the character-level representations of such programs to their
correct outputs. Notably, it was necessary to use curriculum learning, and
while conventional curriculum learning proved ineffective, we developed a new
variant of curriculum learning that improved our networks' performance in all
experimental conditions. The improved curriculum had a dramatic impact on an
addition problem, making it possible to train an LSTM to add two 9-digit
numbers with 99% accuracy.
| [
"Wojciech Zaremba, Ilya Sutskever",
"['Wojciech Zaremba' 'Ilya Sutskever']"
]
|
cs.CV cs.LG | null | 1410.4673 | null | null | http://arxiv.org/pdf/1410.4673v1 | 2014-10-17T09:40:20Z | 2014-10-17T09:40:20Z | KCRC-LCD: Discriminative Kernel Collaborative Representation with
Locality Constrained Dictionary for Visual Categorization | We consider the image classification problem via kernel collaborative
representation classification with locality constrained dictionary (KCRC-LCD).
Specifically, we propose a kernel collaborative representation classification
(KCRC) approach in which kernel method is used to improve the discrimination
ability of collaborative representation classification (CRC). We then measure
the similarities between the query and atoms in the global dictionary in order
to construct a locality constrained dictionary (LCD) for KCRC. In addition, we
discuss several similarity measure approaches in LCD and further present a
simple yet effective unified similarity measure whose superiority is validated
in experiments. There are several appealing aspects associated with LCD. First,
LCD can be nicely incorporated under the framework of KCRC. The LCD similarity
measure can be kernelized under KCRC, which theoretically links CRC and LCD
under the kernel method. Second, KCRC-LCD becomes more scalable to both the
training set size and the feature dimension. An example shows that KCRC is able to perfectly classify data with a certain distribution, while conventional CRC fails
completely. Comprehensive experiments on many public datasets also show that
KCRC-LCD is a robust discriminative classifier with both excellent performance
and good scalability, being comparable or outperforming many other
state-of-the-art approaches.
| [
"['Weiyang Liu' 'Zhiding Yu' 'Lijia Lu' 'Yandong Wen' 'Hui Li'\n 'Yuexian Zou']",
"Weiyang Liu, Zhiding Yu, Lijia Lu, Yandong Wen, Hui Li and Yuexian Zou"
]
|
cs.LG stat.ML | null | 1410.4744 | null | null | http://arxiv.org/pdf/1410.4744v1 | 2014-10-17T14:43:43Z | 2014-10-17T14:43:43Z | mS2GD: Mini-Batch Semi-Stochastic Gradient Descent in the Proximal
Setting | We propose a mini-batching scheme for improving the theoretical complexity
and practical performance of semi-stochastic gradient descent applied to the
problem of minimizing a strongly convex composite function represented as the
sum of an average of a large number of smooth convex functions and a simple nonsmooth convex function. Our method first performs a deterministic step
(computation of the gradient of the objective function at the starting point),
followed by a large number of stochastic steps. The process is repeated a few
times, with the last iterate becoming the new starting point. The novelty of our method is the introduction of mini-batching into the computation of the stochastic
steps. In each step, instead of choosing a single function, we sample $b$
functions, compute their gradients, and compute the direction based on this. We
analyze the complexity of the method and show that the method benefits from two
speedup effects. First, we prove that as long as $b$ is below a certain
threshold, we can reach predefined accuracy with less overall work than without
mini-batching. Second, our mini-batching scheme admits a simple parallel
implementation, and hence is suitable for further acceleration by
parallelization.
| [
"Jakub Kone\\v{c}n\\'y, Jie Liu, Peter Richt\\'arik, Martin Tak\\'a\\v{c}",
"['Jakub Konečný' 'Jie Liu' 'Peter Richtárik' 'Martin Takáč']"
]
|
stat.ML cs.LG | null | 1410.4777 | null | null | http://arxiv.org/pdf/1410.4777v1 | 2014-10-17T15:55:46Z | 2014-10-17T15:55:46Z | A Hierarchical Multi-Output Nearest Neighbor Model for Multi-Output
Dependence Learning | Multi-Output Dependence (MOD) learning is a generalization of standard
classification problems that allows for multiple outputs that are dependent on
each other. A primary issue that arises in the context of MOD learning is that
for any given input pattern there can be multiple correct output patterns. This
changes the learning task from function approximation to relation
approximation. Previous algorithms do not consider this problem, and thus
cannot be readily applied to MOD problems. To perform MOD learning, we
introduce the Hierarchical Multi-Output Nearest Neighbor model (HMONN) that
employs a basic learning model for each output and a modified nearest neighbor
approach to refine the initial results.
| [
"Richard G. Morris and Tony Martinez and Michael R. Smith",
"['Richard G. Morris' 'Tony Martinez' 'Michael R. Smith']"
]
|
math.OC cs.LG stat.ML | null | 1410.4828 | null | null | http://arxiv.org/pdf/1410.4828v1 | 2014-10-17T19:19:29Z | 2014-10-17T19:19:29Z | Generalized Conditional Gradient for Sparse Estimation | Structured sparsity is an important modeling tool that expands the
applicability of convex formulations for data analysis; however, it also creates significant challenges for efficient algorithm design. In this paper we
investigate the generalized conditional gradient (GCG) algorithm for solving
structured sparse optimization problems---demonstrating that, with some
enhancements, it can provide a more efficient alternative to current state of
the art approaches. After providing a comprehensive overview of the convergence
properties of GCG, we develop efficient methods for evaluating polar operators,
a subroutine that is required in each GCG iteration. In particular, we show how
the polar operator can be efficiently evaluated in two important scenarios:
dictionary learning and structured sparse estimation. A further improvement is
achieved by interleaving GCG with fixed-rank local subspace optimization. A
series of experiments on matrix completion, multi-class classification,
multi-view dictionary learning and overlapping group lasso shows that the
proposed method can significantly reduce the training cost of current
alternatives.
| [
"['Yaoliang Yu' 'Xinhua Zhang' 'Dale Schuurmans']",
"Yaoliang Yu, Xinhua Zhang, and Dale Schuurmans"
]
|
cs.DC cs.LG stat.ML | null | 1410.4984 | null | null | http://arxiv.org/pdf/1410.4984v1 | 2014-10-18T18:12:57Z | 2014-10-18T18:12:57Z | Gaussian Process Models with Parallelization and GPU acceleration | In this work, we present an extension of Gaussian process (GP) models with
sophisticated parallelization and GPU acceleration. The parallelization scheme
arises naturally from the modular computational structure w.r.t. datapoints in
the sparse Gaussian process formulation. Additionally, the computational
bottleneck is implemented with GPU acceleration for further speed up. Combining
both techniques allows applying Gaussian process models to millions of
datapoints. The efficiency of our algorithm is demonstrated with a synthetic
dataset. Its source code has been integrated into our popular software library
GPy.
| [
"['Zhenwen Dai' 'Andreas Damianou' 'James Hensman' 'Neil Lawrence']",
"Zhenwen Dai, Andreas Damianou, James Hensman, Neil Lawrence"
]
|
cs.PF cs.LG | null | 1410.5102 | null | null | http://arxiv.org/pdf/1410.5102v1 | 2014-10-19T18:32:37Z | 2014-10-19T18:32:37Z | On Bootstrapping Machine Learning Performance Predictors via Analytical
Models | Performance modeling typically relies on two antithetic methodologies: white
box models, which exploit knowledge of a system's internals and capture its dynamics using analytical approaches, and black box techniques, which infer
relations among the input and output variables of a system based on the
evidence gathered during an initial training phase. In this paper we
investigate a technique, which we name Bootstrapping, which aims at reconciling
these two methodologies and at compensating the cons of the one with the pros
of the other. We thoroughly analyze the design space of this gray box modeling
technique, and identify a number of algorithmic and parametric trade-offs which
we evaluate via two realistic case studies, a Key-Value Store and a Total Order
Broadcast service.
| [
"['Diego Didona' 'Paolo Romano']",
"Diego Didona and Paolo Romano"
]
|
cs.LG stat.ML | null | 1410.5137 | null | null | http://arxiv.org/pdf/1410.5137v2 | 2014-10-21T08:45:56Z | 2014-10-20T02:29:27Z | On Iterative Hard Thresholding Methods for High-dimensional M-Estimation | The use of M-estimators in generalized linear regression models in high
dimensional settings requires risk minimization with hard $L_0$ constraints. Of
the known methods, the class of projected gradient descent (also known as
iterative hard thresholding (IHT)) methods is known to offer the fastest and
most scalable solutions. However, the current state-of-the-art is only able to
analyze these methods in extremely restrictive settings which do not hold in
high dimensional statistical models. In this work we bridge this gap by
providing the first analysis for IHT-style methods in the high dimensional
statistical setting. Our bounds are tight and match known minimax lower bounds.
Our results rely on a general analysis framework that enables us to analyze
several popular hard thresholding style algorithms (such as HTP, CoSaMP, SP) in
the high dimensional regression setting. We also extend our analysis to a large
family of "fully corrective methods" that includes two-stage and partial
hard-thresholding algorithms. We show that our results hold for the problem of
sparse regression, as well as low-rank matrix recovery.
| [
"Prateek Jain, Ambuj Tewari, Purushottam Kar",
"['Prateek Jain' 'Ambuj Tewari' 'Purushottam Kar']"
]
|
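A minimal NumPy sketch of the IHT-style iteration analyzed in the entry above, applied to sparse least-squares regression; the function name `iht`, the step-size choice, and the toy data are illustrative assumptions, not the authors' code.

```python
import numpy as np

def iht(X, y, k, step=None, iters=200):
    """Iterative hard thresholding for sparse least-squares regression.

    Each iteration takes a gradient step on 0.5 * ||y - X w||^2 and then
    keeps only the k largest-magnitude coefficients (hard thresholding).
    """
    n, d = X.shape
    if step is None:
        # conservative step size from the squared spectral norm of X
        step = 1.0 / (np.linalg.norm(X, 2) ** 2)
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ w - y)
        w = w - step * grad
        keep = np.argpartition(np.abs(w), -k)[-k:]
        mask = np.zeros(d, dtype=bool)
        mask[keep] = True
        w[~mask] = 0.0        # zero out everything outside the top-k support
    return w

# toy usage: recover a 3-sparse signal from noisy linear measurements
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
w_true = np.zeros(50)
w_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(100)
print(np.nonzero(iht(X, y, k=3))[0])      # expected support: [3, 17, 42]
```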
cs.LG | null | 1410.5329 | null | null | http://arxiv.org/pdf/1410.5329v4 | 2017-02-14T19:14:01Z | 2014-10-16T22:11:34Z | Naive Bayes and Text Classification I - Introduction and Theory | Naive Bayes classifiers, a family of classifiers that are based on the
popular Bayes' probability theorem, are known for creating simple yet
well-performing models, especially in the fields of document classification and
disease prediction. In this article, we will look at the main concepts of naive
Bayes classification in the context of document categorization.
| [
"['Sebastian Raschka']",
"Sebastian Raschka"
]
|
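Since the article above is introductory, a compact worked example may help; the following sketch (illustrative only, with made-up toy counts) implements multinomial naive Bayes with Laplace smoothing over word-count document vectors.

```python
import numpy as np

def fit_nb(X, y, alpha=1.0):
    """Fit a multinomial naive Bayes model on word-count vectors."""
    classes = np.unique(y)
    log_prior = np.log(np.array([(y == c).mean() for c in classes]))
    # Laplace-smoothed per-class word counts -> per-class word likelihoods
    counts = np.array([X[y == c].sum(axis=0) + alpha for c in classes])
    log_like = np.log(counts / counts.sum(axis=1, keepdims=True))
    return classes, log_prior, log_like

def predict_nb(model, X):
    classes, log_prior, log_like = model
    # log P(c) + sum_w count(w) * log P(w | c), per document and class
    scores = X @ log_like.T + log_prior
    return classes[np.argmax(scores, axis=1)]

X = np.array([[2, 0, 1], [0, 3, 0], [1, 0, 2], [0, 2, 1]])   # toy term counts
y = np.array([0, 1, 0, 1])
model = fit_nb(X, y)
print(predict_nb(model, X))
```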
cs.LG | null | 1410.5330 | null | null | http://arxiv.org/pdf/1410.5330v1 | 2014-10-17T00:50:42Z | 2014-10-17T00:50:42Z | An Overview of General Performance Metrics of Binary Classifier Systems | This document provides a brief overview of different metrics and terminology
that are used to measure the performance of binary classification systems.
| [
"['Sebastian Raschka']",
"Sebastian Raschka"
]
|
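For concreteness, the metrics surveyed in such overviews can all be derived from the 2x2 confusion matrix; a minimal sketch (the function name and counts are illustrative):

```python
def binary_metrics(tp, fp, fn, tn):
    """Common performance metrics from a 2x2 confusion matrix."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0          # sensitivity / TPR
    specificity = tn / (tn + fp) if tn + fp else 0.0     # TNR
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return dict(accuracy=accuracy, precision=precision, recall=recall,
                specificity=specificity, f1=f1)

print(binary_metrics(tp=40, fp=10, fn=5, tn=45))
```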
cs.DS cs.LG cs.NA math.NA stat.CO stat.ML | null | 1410.5392 | null | null | http://arxiv.org/pdf/1410.5392v1 | 2014-10-20T18:59:58Z | 2014-10-20T18:59:58Z | Scalable Parallel Factorizations of SDD Matrices and Efficient Sampling
for Gaussian Graphical Models | Motivated by a sampling problem basic to computational statistical inference,
we develop a nearly optimal algorithm for a fundamental problem in spectral
graph theory and numerical analysis. Given an $n\times n$ SDDM matrix
$\mathbf{M}$, and a constant $-1 \leq p \leq 1$, our algorithm gives efficient
access to a sparse $n\times n$ linear operator $\tilde{\mathbf{C}}$ such that
$${\mathbf{M}}^{p} \approx \tilde{\mathbf{C}} \tilde{\mathbf{C}}^\top.$$ The
solution is based on factoring $\mathbf{M}$ into a product of simple and
sparse matrices using squaring and spectral sparsification. For ${\mathbf{M}}$
with $m$ non-zero entries, our algorithm takes work nearly-linear in $m$, and
polylogarithmic depth on a parallel machine with $m$ processors. This gives the
first sampling algorithm that only requires nearly linear work and $n$ i.i.d.
random univariate Gaussian samples to generate i.i.d. random samples for
$n$-dimensional Gaussian random fields with SDDM precision matrices. For
sampling this natural subclass of Gaussian random fields, it is optimal in the
randomness and nearly optimal in the work and parallel complexity. In addition,
our sampling algorithm can be directly extended to Gaussian random fields with
SDD precision matrices.
| [
"Dehua Cheng, Yu Cheng, Yan Liu, Richard Peng and Shang-Hua Teng",
"['Dehua Cheng' 'Yu Cheng' 'Yan Liu' 'Richard Peng' 'Shang-Hua Teng']"
]
|
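The sampling problem motivating the factorization above is, at small scale, usually handled with a dense Cholesky factor of the precision matrix. The sketch below shows only that standard baseline (not the paper's nearly-linear-work parallel algorithm) to make the input/output contract concrete; the toy matrix is illustrative.

```python
import numpy as np

def sample_from_precision(M, n_samples, seed=0):
    """Draw x ~ N(0, M^{-1}) given a symmetric positive definite precision M.

    Dense baseline: factor M = L L^T and solve L^T x = z with z ~ N(0, I);
    then Cov(x) = (L L^T)^{-1} = M^{-1}.
    """
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(M)
    Z = rng.standard_normal((M.shape[0], n_samples))
    return np.linalg.solve(L.T, Z)                 # shape (n, n_samples)

# tiny SDD-like precision matrix (a path-graph Laplacian plus a diagonal shift)
M = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  2.0]]) + 0.1 * np.eye(3)
X = sample_from_precision(M, n_samples=200_000)
print(np.round(np.cov(X), 2))                      # should approximate inv(M)
```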
stat.ML cs.DS cs.IR cs.LG | null | 1410.5410 | null | null | http://arxiv.org/pdf/1410.5410v2 | 2014-11-13T20:48:36Z | 2014-10-20T19:54:58Z | Improved Asymmetric Locality Sensitive Hashing (ALSH) for Maximum Inner
Product Search (MIPS) | Recently it was shown that the problem of Maximum Inner Product Search (MIPS)
can be solved efficiently, as it admits provably sub-linear hashing algorithms. Asymmetric
transformations before hashing were the key in solving MIPS which was otherwise
hard. In the prior work, the authors use asymmetric transformations which
convert the problem of approximate MIPS into the problem of approximate near
neighbor search which can be efficiently solved using hashing. In this work, we
provide a different transformation which converts the problem of approximate
MIPS into the problem of approximate cosine similarity search which can be
efficiently solved using signed random projections. Theoretical analysis shows
that the new scheme is significantly better than the original scheme for MIPS.
Experimental evaluations strongly support the theoretical findings.
| [
"['Anshumali Shrivastava' 'Ping Li']",
"Anshumali Shrivastava and Ping Li"
]
|
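To make the reduction concrete, here is one simple asymmetric transformation that turns inner-product search into cosine-similarity search, followed by signed-random-projection hashing. This is an illustrative construction under the stated scaling assumptions, not necessarily the exact transformation proposed in the paper.

```python
import numpy as np

def preprocess_database(X):
    # scale so every row has norm at most 1, then append a component that
    # makes every transformed row exactly unit norm
    X = X / np.max(np.linalg.norm(X, axis=1))
    pad = np.sqrt(np.clip(1.0 - np.sum(X**2, axis=1, keepdims=True), 0.0, None))
    return np.hstack([X, pad])

def preprocess_query(q):
    # normalized query padded with 0: its inner product with a transformed
    # database point is monotone in the original inner product <q, x>
    q = q / np.linalg.norm(q)
    return np.append(q, 0.0)

def srp_hash(V, R):
    # signed random projections: collision probability decreases with angle
    return (V @ R.T > 0).astype(np.uint8)

rng = np.random.default_rng(0)
X, q = rng.standard_normal((1000, 32)), rng.standard_normal(32)
P, Q = preprocess_database(X), preprocess_query(q)
R = rng.standard_normal((64, P.shape[1]))          # 64 hash bits
hamming = (srp_hash(P, R) != srp_hash(Q[None, :], R)).sum(axis=1)
print(np.argmin(hamming), np.argmax(X @ q))        # usually agree
```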
cs.LO cs.LG | null | 1410.5467 | null | null | http://arxiv.org/pdf/1410.5467v1 | 2014-10-20T21:16:52Z | 2014-10-20T21:16:52Z | Machine Learning of Coq Proof Guidance: First Experiments | We report the results of the first experiments with learning proof
dependencies from the formalizations done with the Coq system. We explain the
process of obtaining the dependencies from the Coq proofs, the characterization
of formulas that is used for the learning, and the evaluation method. Various
machine learning methods are compared on a dataset of 5021 toplevel Coq proofs
coming from the CoRN repository. The best resulting method covers on average
75% of the needed proof dependencies among the first 100 predictions, which is
comparable to the performance of similar initial experiments on other large-theory
corpora.
| [
"Cezary Kaliszyk, Lionel Mamane, Josef Urban",
"['Cezary Kaliszyk' 'Lionel Mamane' 'Josef Urban']"
]
|
cs.LG | null | 1410.5473 | null | null | http://arxiv.org/pdf/1410.5473v2 | 2015-01-13T16:58:03Z | 2014-10-20T21:32:05Z | Feature Selection Based on Confidence Machine | In machine learning and pattern recognition, feature selection has been a hot
topic in the literature. Unsupervised feature selection is challenging due to
the loss of labels, which would otherwise supply the relevant information. How
to define an appropriate metric is the key question for feature selection. We
propose a filter method for unsupervised feature selection which is based on
the Confidence Machine. The Confidence Machine offers an estimate of confidence
in a feature's reliability. In this paper, we provide the mathematical model of
the Confidence Machine in the context
of feature selection, which maximizes the relevance and minimizes the
redundancy of the selected feature. We compare our method against classic
feature selection methods Laplacian Score, Pearson Correlation and Principal
Component Analysis on benchmark data sets. The experimental results demonstrate
the efficiency and effectiveness of our method.
| [
"['Chang Liu' 'Yi Xu']",
"Chang Liu and Yi Xu"
]
|
cs.CL cs.LG stat.ML | null | 1410.5491 | null | null | http://arxiv.org/pdf/1410.5491v1 | 2014-10-20T22:28:55Z | 2014-10-20T22:28:55Z | Using Mechanical Turk to Build Machine Translation Evaluation Sets | Building machine translation (MT) test sets is a relatively expensive task.
As MT becomes increasingly desired for more and more language pairs and more
and more domains, it becomes necessary to build test sets for each case. In
this paper, we investigate using Amazon's Mechanical Turk (MTurk) to make MT
test sets cheaply. We find that MTurk can be used to make test sets much
cheaper than professionally-produced test sets. More importantly, in
experiments with multiple MT systems, we find that the MTurk-produced test sets
yield essentially the same conclusions regarding system performance as the
professionally-produced test sets yield.
| [
"Michael Bloodgood and Chris Callison-Burch",
"['Michael Bloodgood' 'Chris Callison-Burch']"
]
|
stat.ML cs.DS cs.IR cs.LG | null | 1410.5518 | null | null | http://arxiv.org/pdf/1410.5518v3 | 2015-06-08T19:30:35Z | 2014-10-21T02:00:34Z | On Symmetric and Asymmetric LSHs for Inner Product Search | We consider the problem of designing locality sensitive hashes (LSH) for
inner product similarity, and of the power of asymmetric hashes in this
context. Shrivastava and Li argue that there is no symmetric LSH for the
problem and propose an asymmetric LSH based on different mappings for query and
database points. However, we show there does exist a simple symmetric LSH that
enjoys stronger guarantees and better empirical performance than the asymmetric
LSH they suggest. We also show a variant of the settings where asymmetry is
in fact needed, but there a different asymmetric LSH is required.
| [
"Behnam Neyshabur, Nathan Srebro",
"['Behnam Neyshabur' 'Nathan Srebro']"
]
|
cs.LG cs.AI | null | 1410.5557 | null | null | http://arxiv.org/pdf/1410.5557v1 | 2014-10-21T07:24:03Z | 2014-10-21T07:24:03Z | Where do goals come from? A Generic Approach to Autonomous Goal-System
Development | Goals express agents' intentions and allow them to organize their behavior
based on low-dimensional abstractions of high-dimensional world states. How can
agents develop such goals autonomously? This paper proposes a detailed
conceptual and computational account of this longstanding problem. We argue for
considering goals as high-level abstractions of lower-level intention mechanisms
such as rewards and values, and point out that goals need to be considered
alongside a detection of the agent's own actions' effects. We propose Latent Goal
Analysis as a computational learning formulation thereof, and show
constructively that any reward or value function can be explained by goals and
such self-detection as latent mechanisms. We first show that learned goals
provide a highly effective dimensionality reduction in a practical
reinforcement learning problem. Then, we investigate a developmental scenario
in which entirely task-unspecific rewards induced by visual saliency lead to
self and goal representations that constitute goal-directed reaching.
| [
"Matthias Rolf and Minoru Asada",
"['Matthias Rolf' 'Minoru Asada']"
]
|
stat.ML cs.LG | null | 1410.5684 | null | null | http://arxiv.org/pdf/1410.5684v1 | 2014-10-21T14:36:26Z | 2014-10-21T14:36:26Z | Regularizing Recurrent Networks - On Injected Noise and Norm-based
Methods | Advancements in parallel processing have led to a surge in multilayer
perceptrons' (MLP) applications and deep learning in the past decades.
Recurrent Neural Networks (RNNs) give additional representational power to
feedforward MLPs by providing a way to treat sequential data. However, RNNs are
hard to train using conventional error backpropagation methods because of the
difficulty in relating inputs over many time-steps. Regularization approaches
from the MLP sphere, like dropout and noisy weight training, have been
insufficiently applied and tested on simple RNNs. Moreover, solutions have been
proposed to improve convergence in RNNs but not enough to improve the long term
dependency remembering capabilities thereof.
In this study, we aim to empirically evaluate the remembering and
generalization ability of RNNs on polyphonic musical datasets. The models are
trained with injected noise, random dropout, norm-based regularizers and their
respective performances compared to well-initialized plain RNNs and advanced
regularization methods like fast-dropout. We conclude with evidence that
training with noise does not improve performance as conjectured by a few works
in RNN optimization before ours.
| [
"Saahil Ognawala and Justin Bayer",
"['Saahil Ognawala' 'Justin Bayer']"
]
|
cs.LO cs.LG | null | 1410.5703 | null | null | http://arxiv.org/pdf/1410.5703v1 | 2014-10-17T19:57:42Z | 2014-10-17T19:57:42Z | Robust Multidimensional Mean-Payoff Games are Undecidable | Mean-payoff games play a central role in quantitative synthesis and
verification. In a single-dimensional game a weight is assigned to every
transition and the objective of the protagonist is to assure a non-negative
limit-average weight. In the multidimensional setting, a weight vector is
assigned to every transition and the objective of the protagonist is to satisfy
a boolean condition over the limit-average weight of each dimension, e.g.,
$\mathrm{LimAvg}(x_1) \leq 0 \vee \mathrm{LimAvg}(x_2) \geq 0 \wedge \mathrm{LimAvg}(x_3) \geq 0$. We
recently proved that when one of the players is restricted to finite-memory
strategies then the decidability of determining the winner is inter-reducible
with Hilbert's Tenth problem over rationals (a fundamental long-standing open
problem). In this work we allow arbitrary (infinite-memory) strategies for both
players and we show that the problem is undecidable.
| [
"Yaron Velner",
"['Yaron Velner']"
]
|
cs.DC cs.LG | null | 1410.5784 | null | null | http://arxiv.org/pdf/1410.5784v1 | 2014-10-18T05:00:31Z | 2014-10-18T05:00:31Z | Optimal Feature Selection from VMware ESXi 5.1 Feature Set | A study of VMware ESXi 5.1 server has been carried out to find the optimal
set of parameters that indicate the usage of the server's different resources.
Feature selection algorithms have been used to extract the optimum set of
parameters from the data obtained from the VMware ESXi 5.1 server using the
esxtop command. Multiple virtual machines (VMs) are running on the server. The
K-means algorithm is used for clustering the VMs. The goodness of each cluster
is assessed by the Davies-Bouldin index and the Dunn index, respectively. The
best cluster is then identified by these indices, and its features are taken as
the set of optimal parameters.
| [
"Amartya Hatua",
"['Amartya Hatua']"
]
|
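A minimal scikit-learn sketch of the clustering-and-scoring step described above; the resource-usage matrix and the range of k are made-up placeholders, and since the Dunn index is not available in scikit-learn, only the Davies-Bouldin score is shown.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

# Hypothetical resource-usage matrix: rows = VMs, columns = esxtop metrics
# (names and data are illustrative, not from the paper).
rng = np.random.default_rng(0)
X = rng.random((40, 6))

best = None
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = davies_bouldin_score(X, labels)   # lower is better
    if best is None or score < best[1]:
        best = (k, score, labels)

print(f"best k by Davies-Bouldin: {best[0]} (score {best[1]:.3f})")
```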
physics.med-ph cs.LG stat.AP stat.ME | 10.1109/ICASSP.2014.6854728 | 1410.5801 | null | null | http://arxiv.org/abs/1410.5801v1 | 2014-10-20T16:57:17Z | 2014-10-20T16:57:17Z | Artifact reduction in multichannel pervasive EEG using hybrid WPT-ICA
and WPT-EMD signal decomposition techniques | In order to reduce the muscle artifacts in multi-channel pervasive
Electroencephalogram (EEG) signals, we here propose and compare two hybrid
algorithms by combining the concept of wavelet packet transform (WPT),
empirical mode decomposition (EMD) and Independent Component Analysis (ICA).
The signal cleaning performances of WPT-EMD and WPT-ICA algorithms have been
compared using a signal-to-noise ratio (SNR)-like criterion for artifacts. The
algorithms have been tested on multiple trials of four different artifact cases
viz. eye-blinking and muscle artifacts including left and right hand movement
and head-shaking.
| [
"['Valentina Bono' 'Wasifa Jamal' 'Saptarshi Das' 'Koushik Maharatna']",
"Valentina Bono, Wasifa Jamal, Saptarshi Das, Koushik Maharatna"
]
|
cs.CY cs.LG physics.data-an stat.AP stat.ML | 10.1145/2647868.2654933 | 1410.5816 | null | null | http://arxiv.org/abs/1410.5816v1 | 2014-10-21T18:54:53Z | 2014-10-21T18:54:53Z | Daily Stress Recognition from Mobile Phone Data, Weather Conditions and
Individual Traits | Research has proven that stress reduces quality of life and causes many
diseases. For this reason, several researchers devised stress detection systems
based on physiological parameters. However, these systems require that
obtrusive sensors are continuously carried by the user. In our paper, we
propose an alternative approach providing evidence that daily stress can be
reliably recognized based on behavioral metrics, derived from the user's mobile
phone activity and from additional indicators, such as the weather conditions
(data pertaining to transitory properties of the environment) and the
personality traits (data concerning permanent dispositions of individuals). Our
multifactorial statistical model, which is person-independent, obtains an
accuracy of 72.28% for a 2-class daily stress recognition problem. The model is
efficient to implement in most multimedia applications due to its highly
reduced, low-dimensional feature space (32 dimensions). Moreover, we identify and
discuss the indicators which have strong predictive power.
| [
"Andrey Bogomolov, Bruno Lepri, Michela Ferron, Fabio Pianesi, Alex\n (Sandy) Pentland",
"['Andrey Bogomolov' 'Bruno Lepri' 'Michela Ferron' 'Fabio Pianesi' 'Alex'\n 'Pentland']"
]
|
cs.CL cs.LG stat.ML | null | 1410.5877 | null | null | http://arxiv.org/pdf/1410.5877v1 | 2014-10-21T22:55:48Z | 2014-10-21T22:55:48Z | Bucking the Trend: Large-Scale Cost-Focused Active Learning for
Statistical Machine Translation | We explore how to improve machine translation systems by adding more
translation data in situations where we already have substantial resources. The
main challenge is how to buck the trend of diminishing returns that is commonly
encountered. We present an active learning-style data solicitation algorithm to
meet this challenge. We test it, gathering annotations via Amazon Mechanical
Turk, and find that we get an order of magnitude increase in the rate of
performance improvement.
| [
"Michael Bloodgood and Chris Callison-Burch",
"['Michael Bloodgood' 'Chris Callison-Burch']"
]
|
cs.LG stat.ML | null | 1410.5884 | null | null | http://arxiv.org/pdf/1410.5884v1 | 2014-10-21T23:32:24Z | 2014-10-21T23:32:24Z | Mean-Field Networks | The mean field algorithm is a widely used approximate inference algorithm for
graphical models whose exact inference is intractable. In each iteration of
mean field, the approximate marginals for each variable are updated by getting
information from the neighbors. This process can be equivalently converted into
a feedforward network, with each layer representing one iteration of mean field
and with tied weights on all layers. This conversion enables a few natural
extensions, e.g. untying the weights in the network. In this paper, we study
these mean field networks (MFNs), and use them as inference tools as well as
discriminative models. Preliminary experimental results show that MFNs can learn
to do inference very efficiently and perform significantly better than mean
field as discriminative models.
| [
"['Yujia Li' 'Richard Zemel']",
"Yujia Li and Richard Zemel"
]
|
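The mean-field iteration that this entry unrolls into network layers can be sketched, for a pairwise binary MRF, as follows (an illustrative toy model with random parameters, not the authors' implementation). One unrolled iteration corresponds to one layer, and sharing W across iterations corresponds to the tied-weights case.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mean_field(W, b, n_iters=10):
    """Parallel mean-field updates for a pairwise binary MRF
    p(x) proportional to exp(b.x + 0.5 x^T W x), x in {0,1}^d.

    Unrolling the loop gives a feedforward network with one layer per
    iteration and the same (tied) weights W in every layer.
    """
    q = np.full(b.shape, 0.5)          # initial marginals q_i = P(x_i = 1)
    for _ in range(n_iters):
        q = sigmoid(b + W @ q)         # each unit pools its neighbors' marginals
    return q

d = 5
rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))
W = (W + W.T) / 2                      # symmetric couplings
np.fill_diagonal(W, 0.0)               # no self-interaction
b = rng.standard_normal(d)
print(mean_field(W, b))
```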
stat.ML cs.LG | null | 1410.5920 | null | null | http://arxiv.org/pdf/1410.5920v1 | 2014-10-22T06:09:58Z | 2014-10-22T06:09:58Z | Active Regression by Stratification | We propose a new active learning algorithm for parametric linear regression
with random design. We provide finite sample convergence guarantees for general
distributions in the misspecified model. This is the first active learner for
this setting that provably can improve over passive learning. Unlike other
learning settings (such as classification), in regression the passive learning
rate of $O(1/\epsilon)$ cannot in general be improved upon. Nonetheless, the
so-called 'constant' in the rate of convergence, which is characterized by a
distribution-dependent risk, can be improved in many cases. For a given
distribution, achieving the optimal risk requires prior knowledge of the
distribution. Following the stratification technique advocated in Monte-Carlo
function integration, our active learner approaches the optimal risk using
piecewise constant approximations.
| [
"Sivan Sabato and Remi Munos",
"['Sivan Sabato' 'Remi Munos']"
]
|
cs.LG | null | 1410.6093 | null | null | http://arxiv.org/pdf/1410.6093v1 | 2014-10-22T16:13:36Z | 2014-10-22T16:13:36Z | Cosine Similarity Measure According to a Convex Cost Function | In this paper, we describe a new vector similarity measure associated with a
convex cost function. Given two vectors, we determine the surface normals of
the convex function at the vectors. The angle between the two surface normals
is the similarity measure. The convex cost function can be the negative entropy
function, the total variation (TV) function, or the filtered variation function. The
convex cost function need not be differentiable everywhere. In general, we need
to compute the gradient of the cost function to compute the surface normals. If
the gradient does not exist at a given vector, it is possible to use the
subgradients, and the normal producing the smallest angle between the two
vectors is used to compute the similarity measure.
| [
"Osman Gunay, Cem Emre Akbas, A. Enis Cetin",
"['Osman Gunay' 'Cem Emre Akbas' 'A. Enis Cetin']"
]
|
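A small sketch of this idea, using the differentiable negative-entropy cost on positive vectors; the epsilon guard and the function names are assumptions for illustration. With the quadratic cost f(v) = ||v||^2 / 2 the surface normal is v itself, so the measure reduces to the ordinary cosine similarity.

```python
import numpy as np

def grad_neg_entropy(v, eps=1e-12):
    # gradient (surface normal) of f(v) = sum_i v_i log v_i, for positive v
    return np.log(v + eps) + 1.0

def convex_cosine_angle(u, v, grad=grad_neg_entropy):
    """Angle (in degrees) between the surface normals of a convex cost
    function evaluated at u and v; smaller angle means more similar."""
    gu, gv = grad(u), grad(v)
    cos_angle = gu @ gv / (np.linalg.norm(gu) * np.linalg.norm(gv))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

u = np.array([0.7, 0.2, 0.1])
v = np.array([0.5, 0.3, 0.2])
print(convex_cosine_angle(u, v))   # small angle under the entropy geometry
```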
stat.ML cs.LG math.OC stat.AP | 10.1109/TSG.2015.2469098 | 1410.6095 | null | null | http://arxiv.org/abs/1410.6095v1 | 2014-10-22T16:14:38Z | 2014-10-22T16:14:38Z | Online Energy Price Matrix Factorization for Power Grid Topology
Tracking | Grid security and open markets are two major smart grid goals. Transparency
of market data facilitates a competitive and efficient energy environment, yet
it may also reveal critical physical system information. Recovering the grid
topology based solely on publicly available market data is explored here.
Real-time energy prices are calculated as the Lagrange multipliers of
network-constrained economic dispatch; that is, via a linear program (LP)
typically solved every 5 minutes. Granted the grid Laplacian is a parameter of
this LP, one could infer such a topology-revealing matrix upon observing
successive LP dual outcomes. The matrix of spatio-temporal prices is first
shown to factor as the product of the inverse Laplacian times a sparse matrix.
Leveraging results from sparse matrix decompositions, topology recovery schemes
with complementary strengths are subsequently formulated. Solvers scalable to
high-dimensional and streaming market data are devised. Numerical validation
using real load data on the IEEE 30-bus grid provide useful input for current
and future market designs.
| [
"['Vassilis Kekatos' 'Georgios B. Giannakis' 'Ross Baldick']",
"Vassilis Kekatos, Georgios B. Giannakis, and Ross Baldick"
]
|
cs.LG stat.ML | null | 1410.6382 | null | null | http://arxiv.org/pdf/1410.6382v1 | 2014-10-23T14:55:09Z | 2014-10-23T14:55:09Z | Attribute Efficient Linear Regression with Data-Dependent Sampling | In this paper we analyze a budgeted learning setting, in which the learner
can only choose and observe a small subset of the attributes of each training
example. We develop efficient algorithms for ridge and lasso linear regression,
which utilize the geometry of the data by a novel data-dependent sampling
scheme. When the learner has prior knowledge on the second moments of the
attributes, the optimal sampling probabilities can be calculated precisely, and
result in data-dependent improvement factors for the excess risk over the
state-of-the-art that may be as large as $O(\sqrt{d})$, where $d$ is the
problem's dimension. Moreover, under reasonable assumptions our algorithms can
use fewer attributes than full-information algorithms, which is the main concern
in budgeted learning settings. To the best of our knowledge, these are the
first algorithms able to do so in our setting. Where no such prior knowledge is
available, we develop a simple estimation technique that, given a sufficient
amount of training examples, achieves similar improvements. We complement our
theoretical analysis with experiments on several data sets which support our
claims.
| [
"Doron Kukliansky, Ohad Shamir",
"['Doron Kukliansky' 'Ohad Shamir']"
]
|
math.OC cs.LG | null | 1410.6387 | null | null | http://arxiv.org/pdf/1410.6387v1 | 2014-10-23T15:05:44Z | 2014-10-23T15:05:44Z | On Lower and Upper Bounds in Smooth Strongly Convex Optimization - A
Unified Approach via Linear Iterative Methods | In this thesis we develop a novel framework to study smooth and strongly
convex optimization algorithms, both deterministic and stochastic. Focusing on
quadratic functions we are able to examine optimization algorithms as a
recursive application of linear operators. This, in turn, reveals a powerful
connection between a class of optimization algorithms and the analytic theory
of polynomials whereby new lower and upper bounds are derived. In particular,
we present a new and natural derivation of Nesterov's well-known Accelerated
Gradient Descent method by employing simple 'economic' polynomials. This rather
natural interpretation of AGD contrasts with earlier ones which lacked a
simple, yet solid, motivation. Lastly, whereas existing lower bounds are only
valid when the dimensionality scales with the number of iterations, our lower
bound holds in the natural regime where the dimensionality is fixed.
| [
"['Yossi Arjevani']",
"Yossi Arjevani"
]
|