title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
A density-sensitive hierarchical clustering method | cs.LG | We define a hierarchical clustering method: $\alpha$-unchaining single
linkage or $SL(\alpha)$. The input of this algorithm is a finite metric space
and a certain parameter $\alpha$. This method is sensitive to the density of
the distribution and offers some solution to the so-called chaining effect. We
also define a modified version, $SL^*(\alpha)$, to treat the chaining through
points or small blocks. We study the theoretical properties of these methods
and offer some theoretical background for the treatment of chaining effects.
| Álvaro Martínez-Pérez | null | 1210.6292 | null | null |
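As a point of reference for the abstract above, here is a minimal sketch of classic (unmodified) single linkage on a finite metric space, using SciPy. The $\alpha$-unchaining step that defines $SL(\alpha)$ is specific to the paper and is not implemented here; the toy point cloud and the cut height of 1.0 are arbitrary choices.

```python
# Baseline single-linkage clustering; the alpha-unchaining modification of
# SL(alpha) is NOT implemented here -- this only shows the method it modifies.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

points = np.random.RandomState(0).randn(30, 2)     # a toy finite metric space
Z = linkage(pdist(points), method="single")        # single-linkage dendrogram
labels = fcluster(Z, t=1.0, criterion="distance")  # cut the dendrogram at 1.0
print(labels)
```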
MLPACK: A Scalable C++ Machine Learning Library | cs.MS cs.CV cs.LG | MLPACK is a state-of-the-art, scalable, multi-platform C++ machine learning
library released in late 2011 offering both a simple, consistent API accessible
to novice users and high performance and flexibility to expert users by
leveraging modern features of C++. MLPACK provides cutting-edge algorithms
whose benchmarks exhibit far better performance than other leading machine
learning libraries. MLPACK version 1.0.3, licensed under the LGPL, is available
at http://www.mlpack.org.
| Ryan R. Curtin, James R. Cline, N.P. Slagle, William B. March,
Parikshit Ram, Nishant A. Mehta, Alexander G. Gray | null | 1210.6293 | null | null |
High quality topic extraction from business news explains abnormal
financial market volatility | stat.ML cs.LG cs.SI physics.soc-ph q-fin.ST | Understanding the mutual relationships between information flows and social
activity in society today is one of the cornerstones of the social sciences. In
financial economics, the key issue in this regard is understanding and
quantifying how news of all possible types (geopolitical, environmental,
social, financial, economic, etc.) affect trading and the pricing of firms in
organized stock markets. In this article, we seek to address this issue by
performing an analysis of more than 24 million news records provided by
Thomson Reuters and of their relationship with trading activity for 206 major
stocks in the S&P US stock index. We show that the whole landscape of news that
affect stock price movements can be automatically summarized via simple
regularized regressions between trading activity and news information pieces
decomposed, with the help of simple topic modeling techniques, into their
"thematic" features. Using these methods, we are able to estimate and quantify
the impacts of news on trading. We introduce network-based visualization
techniques to represent the whole landscape of news information associated with
a basket of stocks. The examination of the words that are representative of the
topic distributions confirms that our method is able to extract the significant
pieces of information influencing the stock market. Our results show that one
of the most puzzling stylized facts in financial economics, namely that at
certain times trading volumes appear to be "abnormally large," can be partially
explained by the flow of news. In this sense, our results prove that there is
no "excess trading" when we restrict to times when the news is genuinely novel
and provides relevant financial information.
| Ryohei Hisano, Didier Sornette, Takayuki Mizuno, Takaaki Ohnishi,
Tsutomu Watanabe | 10.1371/journal.pone.0064846 | 1210.6321 | null | null |
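A minimal analogue of the pipeline described above: extract topics with a simple topic model, then fit a regularized (sparse) regression of trading activity on the topic features. This is a sketch of the general idea, not the authors' exact method; the headlines and trading-activity numbers below are hypothetical.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Lasso

news = ["fed raises interest rates", "oil supply shock hits markets",
        "strong quarterly earnings", "rates and inflation outlook",
        "crude oil prices surge", "earnings beat analyst estimates"]
volume = np.array([1.2, 3.4, 0.8, 1.5, 3.1, 0.9])  # hypothetical trading activity

X = CountVectorizer().fit_transform(news)
topics = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(X)
reg = Lasso(alpha=0.01).fit(topics, volume)        # sparse topics -> volume map
print(reg.coef_)                                   # which topics drive volume
```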
Topic-Level Opinion Influence Model (TOIM): An Investigation Using
Tencent Micro-Blogging | cs.SI cs.CY cs.LG | Mining user opinion from Micro-Blogging has been extensively studied on the
most popular social networking sites such as Twitter and Facebook in the U.S.,
but few studies have been done on Micro-Blogging websites in other countries
(e.g. China). In this paper, we analyze the social opinion influence on
Tencent, one of the largest Micro-Blogging websites in China, endeavoring to
unveil the behavior patterns of Chinese Micro-Blogging users. This paper
proposes a Topic-Level Opinion Influence Model (TOIM) that simultaneously
incorporates topic factor and social direct influence in a unified
probabilistic framework. Based on TOIM, two topic level opinion influence
propagation and aggregation algorithms are developed to consider the indirect
influence: CP (Conservative Propagation) and NCP (Non-Conservative
Propagation). Users' historical social interaction records are leveraged by
TOIM to construct their progressive opinions and neighbors' opinion influence
through a statistical learning process, which can be further utilized to
predict users' future opinions on some specific topics. To evaluate and test
this proposed model, an experiment was designed and a sub-dataset from Tencent
Micro-Blogging was used. The experimental results show that TOIM outperforms
baseline methods in predicting users' opinions. CP and NCP show no significant
difference from each other, and both could significantly improve the recall and
F1-measure of TOIM.
| Daifeng Li, Ying Ding, Xin Shuai, Golden Guo-zheng Sun, Jie Tang,
Zhipeng Luo, Jingwei Zhang and Guo Zhang | null | 1210.6497 | null | null |
Neural Networks for Complex Data | cs.NE cs.LG stat.ML | Artificial neural networks are simple and efficient machine learning tools.
Defined originally in the traditional setting of simple vector data, neural
network models have evolved to address more and more difficulties of complex
real world problems, ranging from time evolving data to sophisticated data
structures such as graphs and functions. This paper summarizes advances on
those themes from the last decade, with a focus on results obtained by members
of the SAMM team of Université Paris 1.
| Marie Cottrell (SAMM), Madalina Olteanu (SAMM), Fabrice Rossi (SAMM),
Joseph Rynkiewicz (SAMM), Nathalie Villa-Vialaneix (SAMM) | 10.1007/s13218-012-0207-2 | 1210.6511 | null | null |
Clustering hidden Markov models with variational HEM | cs.LG cs.CV stat.ML | The hidden Markov model (HMM) is a widely-used generative model that copes
with sequential data, assuming that each observation is conditioned on the
state of a hidden Markov chain. In this paper, we derive a novel algorithm to
cluster HMMs based on the hierarchical EM (HEM) algorithm. The proposed
algorithm i) clusters a given collection of HMMs into groups of HMMs that are
similar, in terms of the distributions they represent, and ii) characterizes
each group by a "cluster center", i.e., a novel HMM that is representative of
the group, in a manner that is consistent with the underlying generative model
of the HMM. To cope with intractable inference in the E-step, the HEM algorithm
is formulated as a variational optimization problem, and efficiently solved for
the HMM case by leveraging an appropriate variational approximation. The
benefits of the proposed algorithm, which we call variational HEM (VHEM), are
demonstrated on several tasks involving time-series data, such as hierarchical
clustering of motion capture sequences, and automatic annotation and retrieval
of music and of online hand-writing data, showing improvements over current
methods. In particular, our variational HEM algorithm effectively leverages
large amounts of data when learning annotation models by using an efficient
hierarchical estimation procedure, which reduces learning times and memory
requirements, while improving model robustness through better regularization.
| Emanuele Coviello and Antoni B. Chan and Gert R.G. Lanckriet | null | 1210.6707 | null | null |
Nested Hierarchical Dirichlet Processes | stat.ML cs.LG | We develop a nested hierarchical Dirichlet process (nHDP) for hierarchical
topic modeling. The nHDP is a generalization of the nested Chinese restaurant
process (nCRP) that allows each word to follow its own path to a topic node
according to a document-specific distribution on a shared tree. This alleviates
the rigid, single-path formulation of the nCRP, allowing a document to more
easily express thematic borrowings as a random effect. We derive a stochastic
variational inference algorithm for the model, in addition to a greedy subtree
selection method for each document, which allows for efficient inference using
massive collections of text documents. We demonstrate our algorithm on 1.8
million documents from The New York Times and 3.3 million documents from
Wikipedia.
| John Paisley, Chong Wang, David M. Blei and Michael I. Jordan | 10.1109/TPAMI.2014.2318728 | 1210.6738 | null | null |
Structured Sparsity Models for Multiparty Speech Recovery from
Reverberant Recordings | cs.LG cs.SD | We tackle the multi-party speech recovery problem through modeling the
acoustics of reverberant chambers. Our approach exploits structured sparsity
models to perform room modeling and speech recovery. We propose a scheme for
characterizing the room acoustics from the unknown competing speech sources
relying on localization of the early images of the speakers by sparse
approximation of the spatial spectra of the virtual sources in a free-space
model. The images are then clustered exploiting the low-rank structure of the
spectro-temporal components belonging to each source. This enables us to
identify the early support of the room impulse response function and its unique
map to the room geometry. To further tackle the ambiguity of the reflection
ratios, we propose a novel formulation of the reverberation model and estimate
the absorption coefficients through a convex optimization exploiting joint
sparsity model formulated upon spatio-spectral sparsity of concurrent speech
representation. The acoustic parameters are then incorporated for separating
individual speech signals through either structured sparse recovery or inverse
filtering the acoustic channels. The experiments conducted on real data
recordings demonstrate the effectiveness of the proposed approach for
multi-party speech recovery and recognition.
| Afsaneh Asaei, Mohammad Golbabaee, Hervé Bourlard, Volkan Cevher | null | 1210.6766 | null | null |
Predicting Near-Future Churners and Win-Backs in the Telecommunications
Industry | cs.CE cs.LG | In this work, we present the strategies and techniques that we have
developed for predicting the near-future churners and win-backs for a telecom
company. On a large-scale and real-world database containing customer profiles
and some transaction data from a telecom company, we first analyzed the data
schema, developed feature computation strategies and then extracted a large set
of relevant features that can be associated with the customer churning and
returning behaviors. Our features include both the original driver factors as
well as some derived features. We evaluated our features on the
imbalance-corrected (i.e., under-sampled) dataset and compared a large number of
existing machine learning tools, especially decision tree-based classifiers,
for predicting the churners and win-backs. We find that the RandomForest and
SimpleCart learning algorithms generally perform well and tend to provide us
with highly competitive prediction performance. Among the top-15 driver factors
that signal the churn behavior, we find that the service utilization, e.g. last
two months' download and upload volume, last three months' average upload and
download, and payment-related factors are the most indicative features for
predicting if churn will happen soon. Such features can collectively tell
discrepancies between the service plans, payments and the dynamically changing
utilization needs of the customers. Our proposed features and their
computational strategy exhibit reasonable precision performance to predict
churn behavior in near future.
| Clifton Phua, Hong Cao, João Bártolo Gomes, Minh Nhut Nguyen | null | 1210.6891 | null | null |
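A sketch of the evaluation setup mentioned above: correct the class imbalance by under-sampling the majority class, then fit a tree-ensemble classifier. Synthetic data stands in for the proprietary telecom database, and all sizes and hyperparameters are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.95], random_state=0)
churn = np.where(y == 1)[0]                      # minority class: churners
stay = np.where(y == 0)[0]
rng = np.random.RandomState(0)
keep = np.concatenate([churn, rng.choice(stay, len(churn), replace=False)])
Xb, yb = X[keep], y[keep]                        # imbalance-corrected dataset
Xtr, Xte, ytr, yte = train_test_split(Xb, yb, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print(clf.score(Xte, yte))
```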
Enhancing the functional content of protein interaction networks | q-bio.MN cs.CE cs.LG q-bio.GN stat.ML | Protein interaction networks are a promising type of data for studying
complex biological systems. However, despite the rich information embedded in
these networks, they face important data quality challenges of noise and
incompleteness that adversely affect the results obtained from their analysis.
Here, we explore the use of the concept of common neighborhood similarity
(CNS), which is a form of local structure in networks, to address these issues.
Although several CNS measures have been proposed in the literature, an
understanding of their relative efficacies for the analysis of interaction
networks has been lacking. We follow the framework of graph transformation to
convert the given interaction network into a transformed network corresponding
to a variety of CNS measures evaluated. The effectiveness of each measure is
then estimated by comparing the quality of protein function predictions
obtained from its corresponding transformed network with those from the
original network. Using a large set of S. cerevisiae interactions, and a set of
136 GO terms, we find that several of the transformed networks produce more
accurate predictions than those obtained from the original network. In
particular, the $HC.cont$ measure proposed here performs particularly well for
this task. Further investigation reveals that the two major factors
contributing to this improvement are the abilities of CNS measures, especially
$HC.cont$, to prune out noisy edges and introduce new links between
functionally related proteins.
| Gaurav Pandey and Sahil Manocha and Gowtham Atluri and Vipin Kumar | null | 1210.6912 | null | null |
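To make the graph-transformation framework concrete, here is a sketch using one simple CNS measure, the Jaccard similarity of closed neighborhoods; the paper's $HC.cont$ measure is not reproduced, the karate-club graph stands in for an interaction network, and the 0.3 pruning threshold is an arbitrary choice.

```python
# Transform an interaction network: edges in T are weighted by a common-
# neighborhood similarity, and dissimilar (likely noisy) pairs are pruned.
import networkx as nx

G = nx.karate_club_graph()                   # stand-in interaction network
T = nx.Graph()
for u in G:
    for v in G:
        if u < v:
            Nu, Nv = set(G[u]) | {u}, set(G[v]) | {v}
            w = len(Nu & Nv) / len(Nu | Nv)  # Jaccard CNS of closed neighborhoods
            if w > 0.3:                      # prune dissimilar pairs
                T.add_edge(u, v, weight=w)
print(T.number_of_edges())
```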
User-level Weibo Recommendation incorporating Social Influence based on
Semi-Supervised Algorithm | cs.SI cs.CY cs.LG | Tencent Weibo, as one of the most popular micro-blogging services in China,
has attracted millions of users, who produce 30-60 million weibos (similar to
tweets on Twitter) daily. With the resulting overload of user-generated content,
Tencent users find it increasingly hard to browse and find valuable
information in time. In this paper, we propose a Factor Graph based
weibo recommendation algorithm TSI-WR (Topic-Level Social Influence based Weibo
Recommendation), which could help Tencent users find the most suitable
information. The main innovation is that we consider both direct and indirect
social influence from topic level based on social balance theory. The main
advantages of adopting this strategy are that it could first build a more
accurate description of latent relationship between two users with weak
connections, which could help to solve the data sparsity problem; second
provide a more accurate recommendation for a certain user from a wider range.
Other meaningful contextual information is also combined into our model,
including users' profiles, users' influence, the content of weibos, and the
topic information of weibos. We also design a semi-supervised algorithm to
further reduce the influence of data sparsity. The experiments show that all the selected
variables are important and the proposed model outperforms several baseline
methods.
| Daifeng Li, Zhipeng Luo, Golden Guo-zheng Sun, Jie Tang, Jingwei Zhang | null | 1210.7047 | null | null |
Large-Scale Sparse Principal Component Analysis with Application to Text
Data | stat.ML cs.LG math.OC | Sparse PCA provides a linear combination of a small number of features that
maximizes variance across data. Although Sparse PCA has apparent advantages
compared to PCA, such as better interpretability, it is generally thought to be
computationally much more expensive. In this paper, we demonstrate the
surprising fact that sparse PCA can be easier than PCA in practice, and that it
can be reliably applied to very large data sets. This comes from a rigorous
feature elimination pre-processing result, coupled with the favorable fact that
features in real-life data typically have exponentially decreasing variances,
which allows for many features to be eliminated. We introduce a fast block
coordinate ascent algorithm with much better computational complexity than the
existing first-order ones. We provide experimental results obtained on text
corpora involving millions of documents and hundreds of thousands of features.
These results illustrate how Sparse PCA can help organize a large corpus of
text data in a user-interpretable way, providing an attractive alternative
approach to topic models.
| Youwei Zhang, Laurent El Ghaoui | null | 1210.7054 | null | null |
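An illustration of the two ingredients above: variance-based feature elimination as a pre-process, followed by sparse PCA on the surviving features. This uses scikit-learn's generic solver rather than the paper's block coordinate ascent algorithm, and the synthetic data, threshold, and sparsity penalty are arbitrary.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.RandomState(0)
X = rng.randn(500, 200) * np.exp(-np.arange(200) / 20.0)  # decaying variances
var = X.var(axis=0)
keep = var > 0.01 * var.max()        # eliminate near-constant features first
spca = SparsePCA(n_components=5, alpha=1.0, random_state=0).fit(X[:, keep])
print(keep.sum(), float(np.mean(spca.components_ != 0)))  # survivors, sparsity
```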
Selective Transfer Learning for Cross Domain Recommendation | cs.LG cs.IR stat.ML | Collaborative filtering (CF) aims to predict users' ratings on items
according to historical user-item preference data. In many real-world
applications, preference data are usually sparse, which would make models
overfit and fail to give accurate predictions. Recently, several research works
show that by transferring knowledge from some manually selected source domains,
the data sparseness problem could be mitigated. However for most cases, parts
of source domain data are not consistent with the observations in the target
domain, which may misguide the target domain model building. In this paper, we
propose a novel criterion based on empirical prediction error and its variance
to better capture the consistency across domains in CF settings. Consequently,
we embed this criterion into a boosting framework to perform selective
knowledge transfer. Compared with several state-of-the-art methods, we show that
our proposed selective transfer learning framework can significantly improve
the accuracy of rating prediction tasks on several real-world recommendation
tasks.
| Zhongqi Lu and Erheng Zhong and Lili Zhao and Wei Xiang and Weike Pan
and Qiang Yang | null | 1210.7056 | null | null |
A Multiscale Framework for Challenging Discrete Optimization | cs.CV cs.LG math.OC stat.ML | Current state-of-the-art discrete optimization methods lag behind when
it comes to challenging contrast-enhancing discrete energies (i.e., favoring
different labels for neighboring variables). This work suggests a multiscale
approach for these challenging problems. Deriving an algebraic representation
allows us to coarsen any pair-wise energy using any interpolation in a
principled algebraic manner. Furthermore, we propose an energy-aware
interpolation operator that efficiently exposes the multiscale landscape of the
energy yielding an effective coarse-to-fine optimization scheme. Results on
challenging contrast-enhancing energies show significant improvement over
state-of-the-art methods.
| Shai Bagon and Meirav Galun | null | 1210.7070 | null | null |
Discrete Energy Minimization, beyond Submodularity: Applications and
Approximations | cs.CV cs.LG math.OC stat.ML | In this thesis I explore challenging discrete energy minimization problems
that arise mainly in the context of computer vision tasks. This work motivates
the use of such "hard-to-optimize" non-submodular functionals, and proposes
methods and algorithms to cope with the NP-hardness of their optimization.
Consequently, this thesis revolves around two axes: applications and
approximations. The applications axis motivates the use of such
"hard-to-optimize" energies by introducing new tasks. As the energies become
less constrained and structured one gains more expressive power for the
objective function achieving more accurate models. Results show how
challenging, hard-to-optimize, energies are more adequate for certain computer
vision applications. To overcome the resulting challenging optimization tasks
the second axis of this thesis proposes approximation algorithms to cope with
the NP-hardness of the optimization. Experiments show that these new methods
yield good results for representative challenging problems.
| Shai Bagon | null | 1210.7362 | null | null |
Recognizing Static Signs from the Brazilian Sign Language: Comparing
Large-Margin Decision Directed Acyclic Graphs, Voting Support Vector Machines
and Artificial Neural Networks | cs.CV cs.LG stat.ML | In this paper, we explore and detail our experiments in a
high-dimensionality, multi-class image classification problem often found in
the automatic recognition of Sign Languages. Here, our efforts are directed
towards comparing the characteristics, advantages and drawbacks of creating and
training Support Vector Machines disposed in a Directed Acyclic Graph and
Artificial Neural Networks to classify signs from the Brazilian Sign Language
(LIBRAS). We explore how the different heuristics, hyperparameters and
multi-class decision schemes affect the performance, efficiency and ease of use
for each classifier. We provide hyperparameter surface maps capturing accuracy
and efficiency, comparisons between DDAGs and 1-vs-1 SVMs, and effects of
heuristics when training ANNs with Resilient Backpropagation. We report
statistically significant results using Cohen's Kappa statistic for contingency
tables.
| César Roberto de Souza, Ednaldo Brigante Pizzolato, Mauro dos Santos
Anjo | null | 1210.7461 | null | null |
Tensor decompositions for learning latent variable models | cs.LG math.NA stat.ML | This work considers a computationally and statistically efficient parameter
estimation method for a wide class of latent variable models---including
Gaussian mixture models, hidden Markov models, and latent Dirichlet
allocation---which exploits a certain tensor structure in their low-order
observable moments (typically, of second- and third-order). Specifically,
parameter estimation is reduced to the problem of extracting a certain
(orthogonal) decomposition of a symmetric tensor derived from the moments; this
decomposition can be viewed as a natural generalization of the singular value
decomposition for matrices. Although tensor decompositions are generally
intractable to compute, the decomposition of these specially structured tensors
can be efficiently obtained by a variety of approaches, including power
iterations and maximization approaches (similar to the case of matrices). A
detailed analysis of a robust tensor power method is provided, establishing an
analogue of Wedin's perturbation theorem for the singular vectors of matrices.
This implies a robust and computationally tractable estimation approach for
several popular latent variable models.
| Anima Anandkumar and Rong Ge and Daniel Hsu and Sham M. Kakade and
Matus Telgarsky | null | 1210.7559 | null | null |
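The core loop of the tensor power method described above is short; the following is a bare-bones numpy sketch (no robustness enhancements and no deflation across multiple components), applied to a synthetic orthogonally decomposable tensor.

```python
import numpy as np

def tensor_power_iteration(T, n_iter=100, seed=0):
    v = np.random.RandomState(seed).randn(T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = np.einsum("ijk,j,k->i", T, v, v)     # apply T(I, v, v)
        v /= np.linalg.norm(v)
    lam = np.einsum("ijk,i,j,k->", T, v, v, v)   # eigenvalue T(v, v, v)
    return lam, v

# Synthetic orthogonally decomposable tensor: sum_i lam_i * e_i (x) e_i (x) e_i
T = np.zeros((3, 3, 3))
for i, lam in enumerate([3.0, 2.0, 1.0]):
    e = np.eye(3)[i]
    T += lam * np.einsum("i,j,k->ijk", e, e, e)
print(tensor_power_iteration(T))  # converges to one (eigenvalue, eigenvector) pair
```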
Text Classification with Compression Algorithms | cs.LG | This work concerns a comparison of SVM kernel methods in text categorization
tasks. In particular, I define a kernel function that estimates the similarity
between two objects from their compressed lengths. In fact, compression
algorithms can detect arbitrarily long dependencies within text strings.
Text vectorization loses information during feature extraction and is highly
sensitive to the textual language; compression-based methods, by contrast, are
language independent and require no text preprocessing. Moreover, the accuracy
computed on the datasets (Web-KB, 20ng and Reuters-21578) is, in some cases,
greater than that of Gaussian, linear and polynomial kernels. The method's
limits are the computational time complexity of the Gram matrix and its very
poor performance on non-textual datasets.
| Antonio Giuliano Zippo | null | 1210.7657 | null | null |
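One common way to turn compressed lengths into a similarity is the normalized compression distance; below is a sketch using zlib. The paper's exact kernel construction may differ, so treat this only as an illustration of compression-based similarity.

```python
import zlib

def clen(s: str) -> int:
    # Length of the zlib-compressed byte string.
    return len(zlib.compress(s.encode()))

def compression_similarity(x: str, y: str) -> float:
    # Normalized compression distance, converted into a similarity in [0, 1].
    ncd = (clen(x + y) - min(clen(x), clen(y))) / max(clen(x), clen(y))
    return 1.0 - ncd

print(compression_similarity("the cat sat on the mat", "the cat sat on a mat"))
```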
Learning in the Model Space for Fault Diagnosis | cs.LG cs.AI | The emergence of large scaled sensor networks facilitates the collection of
large amounts of real-time data to monitor and control complex engineering
systems. However, in many cases the collected data may be incomplete or
inconsistent, while the underlying environment may be time-varying or
un-formulated. In this paper, we have developed an innovative cognitive fault
diagnosis framework that tackles the above challenges. This framework
investigates fault diagnosis in the model space instead of in the signal space.
Learning in the model space is implemented by fitting a series of models using
a series of signal segments selected with a rolling window. By investigating
the learning techniques in the fitted model space, faulty models can be
discriminated from healthy models using a one-class learning algorithm. The
framework enables us to construct a fault library when unknown faults occur,
which can be regarded as cognitive fault isolation. This paper also
theoretically investigates how to measure the pairwise distance between two
models in the model space and incorporates the model distance into the learning
algorithm in the model space. The results on three benchmark applications and
one simulated model for the Barcelona water distribution network have confirmed
the effectiveness of the proposed framework.
| Huanhuan Chen, Peter Tino, Xin Yao, and Ali Rodan | null | 1210.8291 | null | null |
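A minimal sketch of the model-space idea: fit a small model on each rolling window of the signal and run a one-class learner on the fitted coefficients. The AR(3) model class, window size, and step are arbitrary assumptions, and the synthetic "fault" is a change in the signal's dynamics.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def ar_coeffs(seg, p=3):
    # Least-squares AR(p) fit: seg[t] ~ a1*seg[t-1] + ... + ap*seg[t-p]
    X = np.column_stack([seg[p - k:len(seg) - k] for k in range(1, p + 1)])
    return np.linalg.lstsq(X, seg[p:], rcond=None)[0]

rng = np.random.RandomState(0)
t = np.arange(2000)
signal = np.sin(0.1 * t) + 0.05 * rng.randn(2000)
signal[1500:] = np.sin(0.5 * t[1500:]) + 0.05 * rng.randn(500)  # fault at t=1500

models = np.array([ar_coeffs(signal[i:i + 100]) for i in range(0, 1900, 50)])
detector = OneClassSVM(nu=0.1).fit(models[:20])   # train on healthy windows only
print(detector.predict(models))                   # -1 flags faulty windows
```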
Temporal Autoencoding Restricted Boltzmann Machine | stat.ML cs.AI cs.LG | Much work has been done refining and characterizing the receptive fields
learned by deep learning algorithms. A lot of this work has focused on the
development of Gabor-like filters learned when enforcing sparsity constraints
on a natural image dataset. Little work however has investigated how these
filters might expand to the temporal domain, namely through training on natural
movies. Here we investigate exactly this problem in established temporal deep
learning algorithms as well as a new learning paradigm suggested here, the
Temporal Autoencoding Restricted Boltzmann Machine (TARBM).
| Chris H\"ausler, Alex Susemihl | null | 1210.8353 | null | null |
First Experiments with PowerPlay | cs.AI cs.LG | Like a scientist or a playing child, PowerPlay not only learns new skills to
solve given problems, but also invents new interesting problems by itself. By
design, it continually comes up with the fastest to find, initially novel, but
eventually solvable tasks. It also continually simplifies or compresses or
speeds up solutions to previous tasks. Here we describe first experiments with
PowerPlay. A self-delimiting recurrent neural network SLIM RNN is used as a
general computational problem solving architecture. Its connection weights can
encode arbitrary, self-delimiting, halting or non-halting programs affecting
both environment (through effectors) and internal states encoding abstractions
of event sequences. Our PowerPlay-driven SLIM RNN learns to become an
increasingly general solver of self-invented problems, continually adding new
problem solving procedures to its growing skill repertoire. Extending a recent
conference paper, we identify interesting, emerging, developmental stages of
our open-ended system. We also show how it automatically self-modularizes,
frequently re-using code for previously invented skills, always trying to
invent novel tasks that can be quickly validated because they do not require
too many weight changes affecting too many previous tasks.
| Rupesh Kumar Srivastava, Bas R. Steunebrink and Jürgen Schmidhuber | null | 1210.8385 | null | null |
Venn-Abers predictors | cs.LG stat.ML | This paper continues the study, both theoretical and empirical, of the method of
Venn prediction, concentrating on binary prediction problems. Venn predictors
produce probability-type predictions for the labels of test objects which are
guaranteed to be well calibrated under the standard assumption that the
observations are generated independently from the same distribution. We give a
simple formalization and proof of this property. We also introduce Venn-Abers
predictors, a new class of Venn predictors based on the idea of isotonic
regression, and report promising empirical results both for Venn-Abers
predictors and for their more computationally efficient simplified version.
| Vladimir Vovk and Ivan Petej | null | 1211.0025 | null | null |
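The Venn-Abers idea fits in a few lines: fit isotonic regression on the calibration scores twice, once with the test object tentatively labeled 0 and once labeled 1, and report the resulting pair of probabilities. A minimal sketch with scikit-learn and made-up classifier scores (not the authors' more efficient simplified version):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def venn_abers(cal_scores, cal_labels, test_score):
    probs = []
    for tentative in (0, 1):
        s = np.append(cal_scores, test_score)    # add the test object...
        t = np.append(cal_labels, tentative)     # ...with a tentative label
        iso = IsotonicRegression(out_of_bounds="clip").fit(s, t)
        probs.append(iso.predict([test_score])[0])
    return tuple(probs)                          # (p0, p1) probability interval

scores = np.array([-2.0, -1.0, -0.5, -0.2, 0.1, 0.3, 1.1, 2.5])  # made-up scores
labels = np.array([0, 0, 1, 0, 1, 0, 1, 1])
print(venn_abers(scores, labels, 0.0))
```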
Understanding the Interaction between Interests, Conversations and
Friendships in Facebook | cs.SI cs.LG stat.ML | In this paper, we explore salient questions about user interests,
conversations and friendships in the Facebook social network, using a novel
latent space model that integrates several data types. A key challenge of
studying Facebook's data is the wide range of data modalities such as text,
network links, and categorical labels. Our latent space model seamlessly
combines all three data modalities over millions of users, allowing us to study
the interplay between user friendships, interests, and higher-order
network-wide social trends on Facebook. The recovered insights not only answer
our initial questions, but also reveal surprising facts about user interests in
the context of Facebook's ecosystem. We also confirm that our results are
significant with respect to evidential information from the study subjects.
| Qirong Ho, Rong Yan, Rajat Raina, Eric P. Xing | null | 1211.0028 | null | null |
The Emerging Field of Signal Processing on Graphs: Extending
High-Dimensional Data Analysis to Networks and Other Irregular Domains | cs.DM cs.LG cs.SI | In applications such as social, energy, transportation, sensor, and neuronal
networks, high-dimensional data naturally reside on the vertices of weighted
graphs. The emerging field of signal processing on graphs merges algebraic and
spectral graph theoretic concepts with computational harmonic analysis to
process such signals on graphs. In this tutorial overview, we outline the main
challenges of the area, discuss different ways to define graph spectral
domains, which are the analogues to the classical frequency domain, and
highlight the importance of incorporating the irregular structures of graph
data domains when processing signals on graphs. We then review methods to
generalize fundamental operations such as filtering, translation, modulation,
dilation, and downsampling to the graph setting, and survey the localized,
multiscale transforms that have been proposed to efficiently extract
information from high-dimensional data on graphs. We conclude with a brief
discussion of open issues and possible extensions.
| David I Shuman, Sunil K. Narang, Pascal Frossard, Antonio Ortega, and
Pierre Vandergheynst | 10.1109/MSP.2012.2235192 | 1211.0053 | null | null |
Iterative Hard Thresholding Methods for $l_0$ Regularized Convex Cone
Programming | math.OC cs.LG math.NA stat.CO stat.ML | In this paper we consider $l_0$ regularized convex cone programming problems.
In particular, we first propose an iterative hard thresholding (IHT) method and
its variant for solving $l_0$ regularized box constrained convex programming.
We show that the sequence generated by these methods converges to a local
minimizer. Also, we establish the iteration complexity of the IHT method for
finding an $\epsilon$-local-optimal solution. We then propose a method for
solving $l_0$ regularized convex cone programming by applying the IHT method to
its quadratic penalty relaxation and establish its iteration complexity for
finding an $\epsilon$-approximate local minimizer. Finally, we propose a
variant of this method in which the associated penalty parameter is dynamically
updated, and show that every accumulation point is a local minimizer of the
problem.
| Zhaosong Lu | null | 1211.0056 | null | null |
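For the unconstrained least-squares special case, the basic IHT step is a gradient step followed by hard thresholding. The sketch below uses the closely related k-sparse projection variant (keep the k largest-magnitude entries) rather than the paper's $l_0$-penalty formulation, on synthetic data.

```python
import numpy as np

def iht(A, b, k, n_iter=200):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # conservative step size
    for _ in range(n_iter):
        x = x + step * A.T @ (b - A @ x)         # gradient step
        idx = np.argsort(np.abs(x))[:-k]         # hard threshold: zero out all
        x[idx] = 0.0                             # but the k largest entries
    return x

rng = np.random.RandomState(0)
A = rng.randn(50, 100)
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [1.0, -2.0, 1.5]
print(np.nonzero(iht(A, A @ x_true, k=3))[0])    # should recover {3, 40, 77}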
Extension of TSVM to Multi-Class and Hierarchical Text Classification
Problems With General Losses | cs.LG | Transductive SVM (TSVM) is a well known semi-supervised large margin learning
method for binary text classification. In this paper we extend this method to
multi-class and hierarchical classification problems. We point out that the
determination of labels of unlabeled examples with fixed classifier weights is
a linear programming problem. We devise an efficient technique for solving it.
The method is applicable to general loss functions. We demonstrate the value of
the new method using large margin loss on a number of multi-class and
hierarchical classification datasets. For maxent loss we show empirically that
our method is better than expectation regularization/constraint and posterior
regularization methods, and competitive with the version of entropy
regularization method which uses label constraints.
| Sathiya Keerthi Selvaraj, Sundararajan Sellamanickam, Shirish Shevade | null | 1211.0210 | null | null |
Deep Gaussian Processes | stat.ML cs.LG math.PR | In this paper we introduce deep Gaussian process (GP) models. Deep GPs are a
deep belief network based on Gaussian process mappings. The data is modeled as
the output of a multivariate GP. The inputs to that Gaussian process are then
governed by another GP. A single layer model is equivalent to a standard GP or
the GP latent variable model (GP-LVM). We perform inference in the model by
approximate variational marginalization. This results in a strict lower bound
on the marginal likelihood of the model which we use for model selection
(number of layers and nodes per layer). Deep belief networks are typically
applied to relatively large data sets using stochastic gradient descent for
optimization. Our fully Bayesian treatment allows for the application of deep
models even when data is scarce. Model selection by our variational bound shows
that a five-layer hierarchy is justified even when modelling a digit data set
containing only 150 examples.
| Andreas C. Damianou, Neil D. Lawrence | null | 1211.0358 | null | null |
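Sampling from a two-layer deep GP prior makes the construction above concrete: draw a function from one GP, then use its outputs as the inputs to a second GP. The variational inference machinery that the paper contributes is much harder and is not shown; this is only a prior sample with an assumed RBF covariance.

```python
import numpy as np

def rbf_cov(x, lengthscale=1.0):
    d = x[:, None] - x[None, :]                  # pairwise differences
    return np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.RandomState(0)
x = np.linspace(-3, 3, 100)
# Layer 1: a GP draw over the inputs; layer 2: a GP draw over layer 1's output.
f1 = rng.multivariate_normal(np.zeros(100), rbf_cov(x) + 1e-8 * np.eye(100))
f2 = rng.multivariate_normal(np.zeros(100), rbf_cov(f1) + 1e-8 * np.eye(100))
print(f2[:5])   # f2 is a draw from a two-layer deep GP prior evaluated at x
```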
Learning curves for multi-task Gaussian process regression | cs.LG cond-mat.dis-nn stat.ML | We study the average case performance of multi-task Gaussian process (GP)
regression as captured in the learning curve, i.e. the average Bayes error for
a chosen task versus the total number of examples $n$ for all tasks. For GP
covariances that are the product of an input-dependent covariance function and
a free-form inter-task covariance matrix, we show that accurate approximations
for the learning curve can be obtained for an arbitrary number of tasks $T$. We
use these to study the asymptotic learning behaviour for large $n$.
Surprisingly, multi-task learning can be asymptotically essentially useless, in
the sense that examples from other tasks help only when the degree of
inter-task correlation, $\rho$, is near its maximal value $\rho=1$. This effect
is most extreme for learning of smooth target functions as described by e.g.
squared exponential kernels. We also demonstrate that when learning many tasks,
the learning curves separate into an initial phase, where the Bayes error on
each task is reduced down to a plateau value by "collective learning" even
though most tasks have not seen examples, and a final decay that occurs once
the number of examples is proportional to the number of tasks.
| Simon R. F. Ashton and Peter Sollich | null | 1211.0439 | null | null |
Ordinal Rating of Network Performance and Inference by Matrix Completion | cs.NI cs.LG | This paper addresses the large-scale acquisition of end-to-end network
performance. We make two distinct contributions: ordinal rating of network
performance and inference by matrix completion. The former reduces measurement
costs and unifies various metrics, which eases their processing in applications.
The latter enables scalable and accurate inference with no requirement of
structural information about the network or geometric constraints. By combining
both, the acquisition problem bears strong similarities to recommender systems.
This paper investigates the applicability of various matrix factorization
models used in recommender systems. We found that the simple regularized matrix
factorization is not only practical but also produces accurate results that are
beneficial for peer selection.
| Wei Du and Yongjun Liao and and Pierre Geurts and Guy Leduc | null | 1211.0447 | null | null |
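A sketch of the simple regularized matrix factorization that the paper found practical, fit by gradient steps on the observed entries of a synthetic low-rank "performance" matrix; the rank, step size, and regularization weight are arbitrary choices.

```python
import numpy as np

rng = np.random.RandomState(0)
true = rng.rand(30, 5) @ rng.rand(5, 30)       # low-rank performance matrix
mask = rng.rand(30, 30) < 0.3                  # only 30% of pairs are measured

U, V = 0.1 * rng.randn(30, 5), 0.1 * rng.randn(30, 5)
lam, lr = 0.1, 0.02                            # regularization and step size
for _ in range(3000):
    E = mask * (true - U @ V.T)                # error on observed entries only
    U, V = U + lr * (E @ V - lam * U), V + lr * (E.T @ U - lam * V)
print(np.abs((true - U @ V.T)[~mask]).mean())  # error on unmeasured pairs
```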
Partition Tree Weighting | cs.IT cs.LG math.IT stat.ML | This paper introduces the Partition Tree Weighting technique, an efficient
meta-algorithm for piecewise stationary sources. The technique works by
performing Bayesian model averaging over a large class of possible partitions
of the data into locally stationary segments. It uses a prior, closely related
to the Context Tree Weighting technique of Willems, that is well suited to data
compression applications. Our technique can be applied to any coding
distribution at an additional time and space cost only logarithmic in the
sequence length. We provide a competitive analysis of the redundancy of our
method, and explore its application in a variety of settings. The order of the
redundancy and the complexity of our algorithm matches those of the best
competitors available in the literature, and the new algorithm exhibits a
superior complexity-performance trade-off in our experiments.
| Joel Veness, Martha White, Michael Bowling, András György | null | 1211.0587 | null | null |
The complexity of learning halfspaces using generalized linear methods | cs.LG cs.DS | Many popular learning algorithms (e.g., regression, Fourier-transform based
algorithms, kernel SVM and kernel ridge regression) operate by reducing the
problem to a convex optimization problem over a vector space of functions.
These methods offer the currently best approach to several central problems
such as learning halfspaces and learning DNFs. In addition, they are widely
used in numerous application domains. Despite their importance, there are still
very few proof techniques to show limits on the power of these algorithms.
We study the performance of this approach in the problem of (agnostically and
improperly) learning halfspaces with margin $\gamma$. Let $\mathcal{D}$ be a
distribution over labeled examples. The $\gamma$-margin error of a hyperplane
$h$ is the probability of an example to fall on the wrong side of $h$ or at a
distance $\le\gamma$ from it. The $\gamma$-margin error of the best $h$ is
denoted $\mathrm{Err}_\gamma(\mathcal{D})$. An $\alpha(\gamma)$-approximation
algorithm receives $\gamma,\epsilon$ as input and, using i.i.d. samples of
$\mathcal{D}$, outputs a classifier with error rate $\le
\alpha(\gamma)\mathrm{Err}_\gamma(\mathcal{D}) + \epsilon$. Such an algorithm
is efficient if it uses $\mathrm{poly}(\frac{1}{\gamma},\frac{1}{\epsilon})$
samples and runs in time polynomial in the sample size.
The best approximation ratio achievable by an efficient algorithm is
$O\left(\frac{1/\gamma}{\sqrt{\log(1/\gamma)}}\right)$ and is achieved using an
algorithm from the above class. Our main result shows that the approximation
ratio of every efficient algorithm from this family must be $\ge
\Omega\left(\frac{1/\gamma}{\mathrm{poly}\left(\log\left(1/\gamma\right)\right)}\right)$,
essentially matching the best known upper bound.
| Amit Daniely and Nati Linial and Shai Shalev-Shwartz | null | 1211.0616 | null | null |
Stochastic ADMM for Nonsmooth Optimization | cs.LG math.OC stat.ML | We present a stochastic setting for optimization problems with nonsmooth
convex separable objective functions over linear equality constraints. To solve
such problems, we propose a stochastic Alternating Direction Method of
Multipliers (ADMM) algorithm. Our algorithm applies to a more general class of
nonsmooth convex functions that does not necessarily have a closed-form
solution by minimizing the augmented function directly. We also demonstrate the
rates of convergence for our algorithm under various structural assumptions of
the stochastic functions: $O(1/\sqrt{t})$ for convex functions and $O(\log
t/t)$ for strongly convex functions. Compared to previous literature, we
establish the convergence rate of ADMM algorithm, for the first time, in terms
of both the objective value and the feasibility violation.
| Hua Ouyang, Niao He, Alexander Gray | null | 1211.0632 | null | null |
Discussion: Latent variable graphical model selection via convex
optimization | math.ST cs.LG stat.ML stat.TH | Discussion of "Latent variable graphical model selection via convex
optimization" by Venkat Chandrasekaran, Pablo A. Parrilo and Alan S. Willsky
[arXiv:1008.1290].
| Ming Yuan | 10.1214/12-AOS979 | 1211.0801 | null | null |
Rejoinder: Latent variable graphical model selection via convex
optimization | math.ST cs.LG stat.ML stat.TH | Rejoinder to "Latent variable graphical model selection via convex
optimization" by Venkat Chandrasekaran, Pablo A. Parrilo and Alan S. Willsky
[arXiv:1008.1290].
| Venkat Chandrasekaran, Pablo A. Parrilo, Alan S. Willsky | 10.1214/12-AOS1020 | 1211.0835 | null | null |
Comparing K-Nearest Neighbors and Potential Energy Method in
classification problem. A case study using KNN applet by E.M. Mirkes and real
life benchmark data sets | stat.ML cs.LG | The K-nearest neighbors (KNN) method is used in many supervised learning
classification problems. The Potential Energy (PE) method has also been
developed for classification problems, based on a physical metaphor. The energy
potentials used in the experiments are the Yukawa potential and the Gaussian
potential. In this paper, I use both the applet and a MATLAB program with
real-life benchmark data to analyze the performance of KNN and the PE method in
classification problems. The
results show that in general, KNN and PE methods have similar performance. In
particular, PE with the Yukawa potential performs worse than KNN in denser
regions of the data distribution. When the Gaussian potential is applied, PE
and KNN behave similarly. The indicators used are correlation coefficients and
information
gain.
| Yanshan Shi | null | 1211.0879 | null | null |
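A minimal Gaussian-potential classifier of the kind compared above: each class exerts a summed Gaussian "attraction" on a query point, and the query is assigned to the class with the strongest total potential. The sum-of-potentials decision rule and the bandwidth h are assumptions; the exact rule in the applet may differ.

```python
import numpy as np

def pe_predict(X_train, y_train, X_query, h=1.0):
    classes = np.unique(y_train)
    preds = []
    for q in X_query:
        d2 = ((X_train - q) ** 2).sum(axis=1)      # squared distances to q
        pot = np.exp(-d2 / (2 * h ** 2))           # Gaussian potential per point
        preds.append(classes[np.argmax([pot[y_train == c].sum() for c in classes])])
    return np.array(preds)

X = np.array([[0, 0], [0, 1], [5, 5], [6, 5]], dtype=float)
y = np.array([0, 0, 1, 1])
print(pe_predict(X, y, np.array([[0.5, 0.5], [5.5, 5.0]])))   # expect [0 1]
```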
APPLE: Approximate Path for Penalized Likelihood Estimators | stat.ML cs.LG | In high-dimensional data analysis, penalized likelihood estimators are shown
to provide superior results in both variable selection and parameter
estimation. A new algorithm, APPLE, is proposed for calculating the Approximate
Path for Penalized Likelihood Estimators. Both the convex penalty (such as
LASSO) and the nonconvex penalty (such as SCAD and MCP) cases are considered.
APPLE efficiently computes the solution path for the penalized likelihood
estimator using a hybrid of the modified predictor-corrector method and the
coordinate-descent algorithm. APPLE is compared with several well-known
packages via simulation and analysis of two gene expression data sets.
| Yi Yu and Yang Feng | null | 1211.0889 | null | null |
Algorithm Runtime Prediction: Methods & Evaluation | cs.AI cs.LG cs.PF stat.ML | Perhaps surprisingly, it is possible to predict how long an algorithm will
take to run on a previously unseen input, using machine learning techniques to
build a model of the algorithm's runtime as a function of problem-specific
instance features. Such models have important applications to algorithm
analysis, portfolio-based algorithm selection, and the automatic configuration
of parameterized algorithms. Over the past decade, a wide variety of techniques
have been studied for building such models. Here, we describe extensions and
improvements of existing models, new families of models, and -- perhaps most
importantly -- a much more thorough treatment of algorithm parameters as model
inputs. We also comprehensively describe new and existing features for
predicting algorithm runtime for propositional satisfiability (SAT), travelling
salesperson (TSP) and mixed integer programming (MIP) problems. We evaluate
these innovations through the largest empirical analysis of its kind, comparing
to a wide range of runtime modelling techniques from the literature. Our
experiments consider 11 algorithms and 35 instance distributions; they also
span a very wide range of SAT, MIP, and TSP instances, with the least
structured having been generated uniformly at random and the most structured
having emerged from real industrial applications. Overall, we demonstrate that
our new models yield substantially better runtime predictions than previous
approaches in terms of their generalization to new problem instances, to new
algorithms from a parameterized space, and to both simultaneously.
| Frank Hutter, Lin Xu, Holger H. Hoos, Kevin Leyton-Brown | null | 1211.0906 | null | null |
Learning using Local Membership Queries | cs.LG cs.AI | We introduce a new model of membership query (MQ) learning, where the
learning algorithm is restricted to query points that are \emph{close} to
random examples drawn from the underlying distribution. The learning model is
intermediate between the PAC model (Valiant, 1984) and the PAC+MQ model (where
the queries are allowed to be arbitrary points).
Membership query algorithms are not popular among machine learning
practitioners. Apart from the obvious difficulty of adaptively querying
labelers, it has also been observed that querying \emph{unnatural} points leads
to increased noise from human labelers (Lang and Baum, 1992). This motivates
our study of learning algorithms that make queries that are close to examples
generated from the data distribution.
We restrict our attention to functions defined on the $n$-dimensional Boolean
hypercube and say that a membership query is local if its Hamming distance from
some example in the (random) training data is at most $O(\log(n))$. We show the
following results in this model:
(i) The class of sparse polynomials (with coefficients in R) over $\{0,1\}^n$
is polynomial time learnable under a large class of \emph{locally smooth}
distributions using $O(\log(n))$-local queries. This class also includes the
class of $O(\log(n))$-depth decision trees.
(ii) The class of polynomial-sized decision trees is polynomial time
learnable under product distributions using $O(\log(n))$-local queries.
(iii) The class of polynomial size DNF formulas is learnable under the
uniform distribution using $O(\log(n))$-local queries in time
$n^{O(\log(\log(n)))}$.
(iv) In addition we prove a number of results relating the proposed model to
the traditional PAC model and the PAC+MQ model.
| Pranjal Awasthi, Vitaly Feldman, Varun Kanade | null | 1211.0996 | null | null |
Algorithms and Hardness for Robust Subspace Recovery | cs.CC cs.DS cs.IT cs.LG math.IT | We consider a fundamental problem in unsupervised learning called
\emph{subspace recovery}: given a collection of $m$ points in $\mathbb{R}^n$,
if many but not necessarily all of these points are contained in a
$d$-dimensional subspace $T$ can we find it? The points contained in $T$ are
called {\em inliers} and the remaining points are {\em outliers}. This problem
has received considerable attention in computer science and in statistics. Yet
efficient algorithms from computer science are not robust to {\em adversarial}
outliers, and the estimators from robust statistics are hard to compute in high
dimensions.
Are there algorithms for subspace recovery that are both robust to outliers
and efficient? We give an algorithm that finds $T$ when it contains more than a
$\frac{d}{n}$ fraction of the points. Hence, for say $d = n/2$ this estimator
is both easy to compute and well-behaved when there are a constant fraction of
outliers. We prove that it is Small Set Expansion hard to find $T$ when the
fraction of errors is any larger, thus giving evidence that our estimator is an
{\em optimal} compromise between efficiency and robustness.
As it turns out, this basic problem has a surprising number of connections to
other areas including small set expansion, matroid theory and functional
analysis that we make use of here.
| Moritz Hardt and Ankur Moitra | null | 1211.1041 | null | null |
Soft (Gaussian CDE) regression models and loss functions | cs.LG stat.ML | Regression, unlike classification, has lacked a comprehensive and effective
approach to deal with cost-sensitive problems by reusing (rather than
re-training) general regression models. In this paper, a wide variety of
cost-sensitive problems in regression (such as bids, asymmetric losses and
rejection rules) can be solved effectively by a lightweight but powerful
approach, consisting of: (1) the conversion of any traditional one-parameter
crisp regression model into a two-parameter soft regression model, seen as a
normal conditional density estimator, by the use of newly-introduced enrichment
methods; and (2) the reframing of an enriched soft regression model to new
contexts by an instance-dependent optimisation of the expected loss derived
from the conditional normal distribution.
| Jose Hernandez-Orallo | null | 1211.1043 | null | null |
Active and passive learning of linear separators under log-concave
distributions | cs.LG math.ST stat.ML stat.TH | We provide new results concerning label efficient, polynomial time, passive
and active learning of linear separators. We prove that active learning
provides an exponential improvement over PAC (passive) learning of homogeneous
linear separators under nearly log-concave distributions. Building on this, we
provide a computationally efficient PAC algorithm with optimal (up to a
constant factor) sample complexity for such problems. This resolves an open
question concerning the sample complexity of efficient PAC algorithms under the
uniform distribution in the unit ball. Moreover, it provides the first bound
for a polynomial-time PAC algorithm that is tight for an interesting infinite
class of hypothesis functions under a general and natural class of
data-distributions, providing significant progress towards a longstanding open
question.
We also provide new bounds for active and passive learning in the case that
the data might not be linearly separable, both in the agnostic case and
under the Tsybakov low-noise condition. To derive our results, we provide new
structural results for (nearly) log-concave distributions, which might be of
independent interest as well.
| Maria Florina Balcan and Philip M. Long | null | 1211.1082 | null | null |
Visual Transfer Learning: Informal Introduction and Literature Overview | cs.CV cs.LG | Transfer learning techniques are important to handle small training sets and
to allow for quick generalization even from only a few examples. The following
paper is the introduction as well as the literature overview part of my thesis
related to the topic of transfer learning for visual recognition problems.
| Erik Rodner | null | 1211.1127 | null | null |
Handwritten digit recognition by bio-inspired hierarchical networks | cs.LG cs.CV q-bio.NC | The human brain processes information showing learning and prediction
abilities but the underlying neuronal mechanisms still remain unknown.
Recently, many studies have shown that neuronal networks are capable of both
generalization and association of sensory inputs. In this paper, following a
body of neurophysiological evidence, we propose a learning framework with
strong biological plausibility that mimics prominent functions of cortical
circuitries. We developed the Inductive Conceptual Network (ICN), a
hierarchical bio-inspired network able to learn invariant patterns using
Variable-order Markov Models implemented in its nodes. The outputs of the
top-most node of the ICN hierarchy, representing the highest input
generalization, allow for automatic classification of inputs. We found that the
ICN clustered MNIST images with an error of 5.73% and USPS images with an error
of 12.56%.
| Antonio G. Zippo, Giuliana Gelsomino, Sara Nencini, Gabriele E. M.
Biella | null | 1211.1255 | null | null |
Random walk kernels and learning curves for Gaussian process regression
on random graphs | stat.ML cond-mat.dis-nn cond-mat.stat-mech cs.LG | We consider learning on graphs, guided by kernels that encode similarity
between vertices. Our focus is on random walk kernels, the analogues of squared
exponential kernels in Euclidean spaces. We show that on large, locally
treelike, graphs these have some counter-intuitive properties, specifically in
the limit of large kernel lengthscales. We consider using these kernels as
covariance matrices of e.g.\ Gaussian processes (GPs). In this situation one
typically scales the prior globally to normalise the average of the prior
variance across vertices. We demonstrate that, in contrast to the Euclidean
case, this generically leads to significant variation in the prior variance
across vertices, which is undesirable from the probabilistic modelling point of
view. We suggest the random walk kernel should be normalised locally, so that
each vertex has the same prior variance, and analyse the consequences of this
by studying learning curves for Gaussian process regression. Numerical
calculations as well as novel theoretical predictions for the learning curves
using belief propagation make it clear that one obtains distinctly different
probabilistic models depending on the choice of normalisation. Our method for
predicting the learning curves using belief propagation is significantly more
accurate than previous approximations and should become exact in the limit of
large random graphs.
| Matthew Urry and Peter Sollich | null | 1211.1328 | null | null |
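A sketch of the local normalisation argued for above. It uses the p-step random walk kernel $K = ((a-1)I + D^{-1/2}AD^{-1/2})^p$, one standard construction (the paper's parametrization may differ); after dividing by the per-vertex standard deviations, every vertex has unit prior variance.

```python
import numpy as np
import networkx as nx

G = nx.random_regular_graph(3, 50, seed=0)
A = nx.to_numpy_array(G)
d = A.sum(axis=1)
S = A / np.sqrt(np.outer(d, d))                           # D^{-1/2} A D^{-1/2}
a, p = 2.0, 10
K = np.linalg.matrix_power((a - 1) * np.eye(50) + S, p)   # p-step RW kernel
K_local = K / np.sqrt(np.outer(np.diag(K), np.diag(K)))   # local normalisation
print(np.allclose(np.diag(K_local), 1.0))                 # unit prior variance
```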
K-Plane Regression | cs.LG | In this paper, we present a novel algorithm for piecewise linear regression
which can learn continuous as well as discontinuous piecewise linear functions.
The main idea is to repeatedly partition the data and learn a linear model in
each partition. While a simple algorithm incorporating this idea does not work
well, an interesting modification results in a good algorithm. The proposed
algorithm is similar in spirit to $k$-means clustering algorithm. We show that
our algorithm can also be viewed as an EM algorithm for maximum likelihood
estimation of parameters under a reasonable probability model. We empirically
demonstrate the effectiveness of our approach by comparing its performance with
the state-of-the-art regression learning algorithms on some real-world datasets.
| Naresh Manwani, P. S. Sastry | 10.1016/j.ins.2014.08.058 | 1211.1513 | null | null |
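A sketch of the basic k-means-style alternation the abstract starts from: assign each point to the hyperplane with the smallest residual, then refit each hyperplane by least squares. The paper's modification that makes this simple scheme work well is not shown; the toy piecewise-linear target is an assumption.

```python
import numpy as np

def k_plane_regression(X, y, k=2, n_iter=20, seed=0):
    rng = np.random.RandomState(seed)
    Xb = np.column_stack([X, np.ones(len(X))])   # append an intercept column
    W = rng.randn(k, Xb.shape[1])                # k candidate linear models
    for _ in range(n_iter):
        resid = (Xb @ W.T - y[:, None]) ** 2     # squared residual per plane
        assign = resid.argmin(axis=1)            # assign points to best plane
        for j in range(k):                       # refit each plane by least squares
            if (assign == j).any():
                W[j] = np.linalg.lstsq(Xb[assign == j], y[assign == j], rcond=None)[0]
    return W, assign

X = np.linspace(-2, 2, 200)[:, None]
y = np.where(X[:, 0] < 0, -2 * X[:, 0], 3 * X[:, 0])   # piecewise linear target
W, assign = k_plane_regression(X, y)
print(np.round(W, 2))   # rows should approximate slopes -2 and 3
```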
Explosion prediction of oil gas using SVM and Logistic Regression | cs.CE cs.LG | The prevention of dangerous chemical accidents is a primary problem of
industrial manufacturing. Among accidents involving dangerous chemicals, the
oil gas explosion plays an important role. The essential task of explosion
prevention is to better estimate the explosion limit of a given oil gas. In
this paper, Support Vector Machines (SVM) and Logistic Regression (LR) are used
to predict the explosion of oil gas. LR can get the explicit probability
formula of explosion, and the explosive range of the concentrations of oil gas
according to the concentration of oxygen. Meanwhile, SVM gives higher accuracy
of prediction. Furthermore, considering practical requirements, the effects
of the penalty parameter on the distribution of the two types of errors are discussed.
| Xiaofei Wang, Mingming Zhang, Liyong Shen, Suixiang Gao | null | 1211.1526 | null | null |
Image denoising with multi-layer perceptrons, part 1: comparison with
existing algorithms and with bounds | cs.CV cs.LG | Image denoising can be described as the problem of mapping from a noisy image
to a noise-free image. The best currently available denoising methods
approximate this mapping with cleverly engineered algorithms. In this work we
attempt to learn this mapping directly with plain multi-layer perceptrons (MLPs)
applied to image patches. We will show that by training on large image
databases we are able to outperform the current state-of-the-art image
denoising methods. In addition, our method achieves results that are superior
to one type of theoretical bound and goes a long way toward closing the gap
with a second type of theoretical bound. Our approach is easily adapted to less
extensively studied types of noise, such as mixed Poisson-Gaussian noise, JPEG
artifacts, salt-and-pepper noise and noise resembling stripes, for which we
achieve excellent results as well. We will show that combining a block-matching
procedure with MLPs can further improve the results on certain images. In a
second paper, we detail the training trade-offs and the inner mechanisms of our
MLPs.
| Harold Christopher Burger, Christian J. Schuler, Stefan Harmeling | null | 1211.1544 | null | null |
A Riemannian geometry for low-rank matrix completion | cs.LG cs.NA math.OC | We propose a new Riemannian geometry for fixed-rank matrices that is
specifically tailored to the low-rank matrix completion problem. Exploiting the
degree of freedom of a quotient space, we tune the metric on our search space
to the particular least square cost function. At one level, it illustrates in a
novel way how to exploit the versatile framework of optimization on quotient
manifold. At another level, our algorithm can be considered as an improved
version of LMaFit, the state-of-the-art Gauss-Seidel algorithm. We develop
necessary tools needed to perform both first-order and second-order
optimization. In particular, we propose gradient descent schemes (steepest
descent and conjugate gradient) and trust-region algorithms. We also show that,
thanks to the simplicity of the cost function, it is numerically cheap to
perform an exact line search given a search direction, which makes our
algorithms competitive with the state-of-the-art on standard low-rank matrix
completion instances.
| B. Mishra, K. Adithya Apuroop and R. Sepulchre | null | 1211.1550 | null | null |
Image denoising with multi-layer perceptrons, part 2: training
trade-offs and analysis of their mechanisms | cs.CV cs.LG | Image denoising can be described as the problem of mapping from a noisy image
to a noise-free image. In another paper, we show that multi-layer perceptrons
can achieve outstanding image denoising performance for various types of noise
(additive white Gaussian noise, mixed Poisson-Gaussian noise, JPEG artifacts,
salt-and-pepper noise and noise resembling stripes). In this work we discuss in
detail which trade-offs have to be considered during the training procedure. We
will show how to achieve good results and which pitfalls to avoid. By analysing
the activation patterns of the hidden units we are able to make observations
regarding the functioning principle of multi-layer perceptrons trained for
image denoising.
| Harold Christopher Burger, Christian J. Schuler, Stefan Harmeling | null | 1211.1552 | null | null |
Learning Monocular Reactive UAV Control in Cluttered Natural
Environments | cs.RO cs.CV cs.LG cs.SY | Autonomous navigation for large Unmanned Aerial Vehicles (UAVs) is fairly
straightforward, as expensive sensors and monitoring devices can be employed.
In contrast, obstacle avoidance remains a challenging task for Micro Aerial
Vehicles (MAVs) which operate at low altitude in cluttered environments. Unlike
large vehicles, MAVs can only carry very light sensors, such as cameras, making
autonomous navigation through obstacles much more challenging. In this paper,
we describe a system that navigates a small quadrotor helicopter autonomously
at low altitude through natural forest environments. Using only a single cheap
camera to perceive the environment, we are able to maintain a constant velocity
of up to 1.5 m/s. Given a small set of human pilot demonstrations, we use recent
state-of-the-art imitation learning techniques to train a controller that can
avoid trees by adapting the MAV's heading. We demonstrate the performance of our
system in a more controlled environment indoors, and in real natural forest
environments outdoors.
| Stephane Ross, Narek Melik-Barkhudarov, Kumar Shaurya Shankar, Andreas
Wendel, Debadeepta Dey, J. Andrew Bagnell, Martial Hebert | null | 1211.1690 | null | null |
Blind Signal Separation in the Presence of Gaussian Noise | cs.LG cs.DS stat.ML | A prototypical blind signal separation problem is the so-called cocktail
party problem, with n people talking simultaneously and n different microphones
within a room. The goal is to recover each speech signal from the microphone
inputs. Mathematically this can be modeled by assuming that we are given
samples from an n-dimensional random variable X=AS, where S is a vector whose
coordinates are independent random variables corresponding to each speaker. The
objective is to recover the matrix A^{-1} given random samples from X. A range
of techniques collectively known as Independent Component Analysis (ICA) have
been proposed to address this problem in the signal processing and machine
learning literature. Many of these techniques are based on using the kurtosis
or other cumulants to recover the components.
In this paper we propose a new algorithm for solving the blind signal
separation problem in the presence of additive Gaussian noise, when we are
given samples from X=AS+\eta, where \eta is drawn from an unknown, not
necessarily spherical n-dimensional Gaussian distribution. Our approach is
based on a method for decorrelating a sample with additive Gaussian noise under
the assumption that the underlying distribution is a linear transformation of a
distribution with independent components. Our decorrelation routine is based on
the properties of cumulant tensors and can be combined with any standard
cumulant-based method for ICA to get an algorithm that is provably robust in
the presence of Gaussian noise. We derive polynomial bounds for the sample
complexity and error propagation of our method.
| Mikhail Belkin, Luis Rademacher, James Voss | null | 1211.1716 | null | null |
Inverse problems in approximate uniform generation | cs.CC cs.DS cs.LG | We initiate the study of \emph{inverse} problems in approximate uniform
generation, focusing on uniform generation of satisfying assignments of various
types of Boolean functions. In such an inverse problem, the algorithm is given
uniform random satisfying assignments of an unknown function $f$ belonging to a
class $\mathcal{C}$ of Boolean functions, and the goal is to output a probability
distribution $D$ which is $\epsilon$-close, in total variation distance, to the
uniform distribution over $f^{-1}(1)$.
Positive results: We prove a general positive result establishing sufficient
conditions for efficient inverse approximate uniform generation for a class
$\mathcal{C}$. We define a new type of algorithm called a \emph{densifier} for $\mathcal{C}$, and
show (roughly speaking) how to combine (i) a densifier, (ii) an approximate
counting / uniform generation algorithm, and (iii) a Statistical Query learning
algorithm, to obtain an inverse approximate uniform generation algorithm. We
apply this general result to obtain a $\mathrm{poly}(n,1/\epsilon)$-time algorithm for the
class of halfspaces; and a $\mathrm{quasipoly}(n,1/\epsilon)$-time algorithm for the class
of $\mathrm{poly}(n)$-size DNF formulas.
Negative results: We prove a general negative result establishing that the
existence of certain types of signature schemes in cryptography implies the
hardness of certain inverse approximate uniform generation problems. This
implies that there are no {subexponential}-time inverse approximate uniform
generation algorithms for 3-CNF formulas; for intersections of two halfspaces;
for degree-2 polynomial threshold functions; and for monotone 2-CNF formulas.
Finally, we show that there is no general relationship between the complexity
of the "forward" approximate uniform generation problem and the complexity of
the inverse problem for a class $\mathcal{C}$ -- it is possible for either one to be
easy while the other is hard.
| Anindya De, Ilias Diakonikolas, Rocco A. Servedio | null | 1211.1722 | null | null |
Algorithm for Missing Values Imputation in Categorical Data with Use of
Association Rules | cs.LG | This paper presents algorithm for missing values imputation in categorical
data. The algorithm is based on using association rules and is presented in
three variants. Experiments show better imputation accuracy with the proposed
algorithm than with imputation by the most common attribute value.
| Ji\v{r}\'i Kaiser | null | 1211.1799 | null | null |
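A small pandas sketch contrasting the most-common-value baseline with a one-antecedent, rule-style conditional imputation in the spirit of the algorithm above; the column names and table are made up.

```python
import pandas as pd

df = pd.DataFrame({
    "color": ["red", "red", "blue", None, "blue"],
    "shape": ["circle", "circle", "square", "circle", "square"],
})

# Baseline: fill with the global mode of the column
baseline = df["color"].fillna(df["color"].mode()[0])

# Rule-style: mode of 'color' conditioned on the row's 'shape'
# (a one-antecedent association rule, shape -> color)
cond_mode = df.groupby("shape")["color"].transform(lambda s: s.mode()[0])
rule_based = df["color"].fillna(cond_mode)
```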
Tangent-based manifold approximation with locally linear models | cs.LG cs.CV | In this paper, we consider the problem of manifold approximation with affine
subspaces. Our objective is to discover a set of low dimensional affine
subspaces that represents manifold data accurately while preserving the
manifold's structure. For this purpose, we employ a greedy technique that
partitions manifold samples into groups that can be each approximated by a low
dimensional subspace. We start by considering each manifold sample as a
different group and we use the difference of tangents to determine appropriate
group merges. We repeat this procedure until we reach the desired number of
sample groups. The best low dimensional affine subspaces corresponding to the
final groups constitute our approximate manifold representation. Our
experiments verify the effectiveness of the proposed scheme and show its
superior performance compared to state-of-the-art methods for manifold
approximation.
| Sofia Karygianni and Pascal Frossard | 10.1016/j.sigpro.2014.03.047 | 1211.1893 | null | null |
LAGE: A Java Framework to reconstruct Gene Regulatory Networks from
Large-Scale Continuous Expression Data | cs.LG cs.CE q-bio.QM stat.ML | LAGE is a systematic framework developed in Java. The motivation of LAGE is
to provide a scalable and parallel solution for reconstructing Gene Regulatory
Networks (GRNs) from continuous gene expression data for very large numbers of
genes. The basic idea of our framework is motivated by the philosophy of
divide-and-conquer. Specifically, LAGE recursively partitions genes into
multiple overlapping communities of much smaller size, learns intra-community
GRNs separately, and then merges them. Besides, the complete information about
the overlapping communities is produced as a byproduct, which can be used to
mine meaningful functional modules in biological networks.
| Yang Lu and Mengying Wang and Kenny Q. Zhu and Bo Yuan | null | 1211.2073 | null | null |
Efficient Monte Carlo Methods for Multi-Dimensional Learning with
Classifier Chains | cs.LG stat.CO stat.ML | Multi-dimensional classification (MDC) is the supervised learning problem
where an instance is associated with multiple classes, rather than with a
single class, as in traditional classification problems. Since these classes
are often strongly correlated, modeling the dependencies between them allows
MDC methods to improve their performance - at the expense of an increased
computational cost. In this paper we focus on the classifier chains (CC)
approach for modeling dependencies, one of the most popular and highest-
performing methods for multi-label classification (MLC), a particular case of
MDC which involves only binary classes (i.e., labels). The original CC
algorithm makes a greedy approximation, and is fast but tends to propagate
errors along the chain. Here we present novel Monte Carlo schemes, both for
finding a good chain sequence and performing efficient inference. Our
algorithms remain tractable for high-dimensional data sets and obtain the best
predictive performance across several real data sets.
| Jesse Read, Luca Martino, David Luengo | 10.1016/j.patcog.2013.10.006 | 1211.2190 | null | null |
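A hedged scikit-learn sketch of searching over random chain orders; the paper's Monte Carlo schemes for chain-sequence search and inference are more elaborate than this simple validation-score selection.

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multioutput import ClassifierChain

X, Y = make_multilabel_classification(n_samples=300, n_labels=3, random_state=0)
Xtr, Xva, Ytr, Yva = train_test_split(X, Y, random_state=0)

best_score, best_chain = -np.inf, None
for seed in range(10):                      # Monte Carlo over chain orders
    chain = ClassifierChain(LogisticRegression(max_iter=1000),
                            order="random", random_state=seed)
    chain.fit(Xtr, Ytr)
    score = chain.score(Xva, Yva)           # exact-match (subset) accuracy
    if score > best_score:
        best_score, best_chain = score, chain
```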
Efficient learning of simplices | cs.LG cs.DS stat.ML | We show an efficient algorithm for the following problem: Given uniformly
random points from an arbitrary n-dimensional simplex, estimate the simplex.
The size of the sample and the number of arithmetic operations of our algorithm
are polynomial in n. This answers a question of Frieze, Jerrum and Kannan
[FJK]. Our result can also be interpreted as efficiently learning the
intersection of n+1 half-spaces in R^n in the model where the intersection is
bounded and we are given polynomially many uniform samples from it. Our proof
uses the local search technique from Independent Component Analysis (ICA), also
used by [FJK]. Unlike these previous algorithms, which were based on analyzing
the fourth moment, ours is based on the third moment.
We also show a direct connection between the problem of learning a simplex
and ICA: a simple randomized reduction to ICA from the problem of learning a
simplex. The connection is based on a known representation of the uniform
measure on a simplex. Similar representations lead to a reduction from the
problem of learning an affine transformation of an n-dimensional l_p ball to
ICA.
| Joseph Anderson, Navin Goyal, Luis Rademacher | null | 1211.2227 | null | null |
No-Regret Algorithms for Unconstrained Online Convex Optimization | cs.LG | Some of the most compelling applications of online convex optimization,
including online prediction and classification, are unconstrained: the natural
feasible set is R^n. Existing algorithms fail to achieve sub-linear regret in
this setting unless constraints on the comparator point x^* are known in
advance. We present algorithms that, without such prior knowledge, offer
near-optimal regret bounds with respect to any choice of x^*. In particular,
regret with respect to x^* = 0 is constant. We then prove lower bounds showing
that our guarantees are near-optimal in this setting.
| Matthew Streeter and H. Brendan McMahan | null | 1211.2260 | null | null |
Probabilistic Combination of Classifier and Cluster Ensembles for
Non-transductive Learning | cs.LG stat.ML | Unsupervised models can provide supplementary soft constraints to help
classify new target data under the assumption that similar objects in the
target set are more likely to share the same class label. Such models can also
help detect possible differences between training and target distributions,
which is useful in applications where concept drift may take place. This paper
describes a Bayesian framework that takes as input class labels from existing
classifiers (designed based on labeled data from the source domain), as well as
cluster labels from a cluster ensemble operating solely on the target data to
be classified, and yields a consensus labeling of the target data. This
framework is particularly useful when the statistics of the target data drift
or change from those of the training data. We also show that the proposed
framework is privacy-aware and allows performing distributed learning when
data/models have sharing restrictions. Experiments show that our framework can
yield superior results to those provided by applying classifier ensembles only.
| Ayan Acharya, Eduardo R. Hruschka, Joydeep Ghosh, Badrul Sarwar,
Jean-David Ruvini | null | 1211.2304 | null | null |
Hybrid methodology for hourly global radiation forecasting in
Mediterranean area | cs.NE cs.LG physics.ao-ph stat.AP | The renewable energies prediction and particularly global radiation
forecasting is a challenge studied by a growing number of research teams. This
paper proposes an original technique to model the insolation time series based
on combining Artificial Neural Network (ANN) and Auto-Regressive and Moving
Average (ARMA) model. While the ANN, by its non-linear nature, is effective at
predicting cloudy days, ARMA techniques are better suited to sunny days without
cloud occurrences. Thus, three hybrid models are suggested: the first simply
uses ARMA for the six months of spring and summer and an optimized ANN for the
rest of the year; the second model is equivalent to the first but with seasonal
learning; the last model switches depending on the error incurred during the
previous hour. These models were used to forecast the hourly global radiation
for five places in the Mediterranean area. The forecasting performance was
compared among several models: the three models mentioned above, and the best
ANN and ARMA for each location. In the best configuration, the coupling of ANN
and ARMA allows an improvement of more than 1%, with a maximum in autumn (3.4%)
and a minimum in winter (0.9%), where ANN alone performs best.
| Cyril Voyant (SPE, CHD Castellucio), Marc Muselli (SPE), Christophe
Paoli (SPE), Marie Laure Nivet (SPE) | 10.1016/j.renene.2012.10.049 | 1211.2378 | null | null |
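A rough sketch of the first hybrid model described above (ARMA in spring/summer, ANN otherwise), assuming statsmodels and scikit-learn; the synthetic series, lag structure, and model orders are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
N = 24 * 365
hours = np.arange(N)
# Toy hourly "global radiation": clipped diurnal cycle plus noise
radiation = np.clip(np.sin(2 * np.pi * hours / 24), 0, None) \
            + 0.1 * rng.standard_normal(N)

day = (hours // 24) % 365
is_summer = (day >= 80) & (day < 264)        # roughly spring + summer

# ARMA(2,2) fitted on the sunny part of the year
arma = ARIMA(radiation[is_summer], order=(2, 0, 2)).fit()

# ANN on 24 lagged values for the rest of the year
lagged = np.column_stack([radiation[k:N - 24 + k] for k in range(24)])
target = radiation[24:]
mask = ~is_summer[24:]
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=300)
ann.fit(lagged[mask], target[mask])
```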
Measures of Entropy from Data Using Infinitely Divisible Kernels | cs.LG cs.IT math.IT stat.ML | Information theory provides principled ways to analyze different inference
and learning problems such as hypothesis testing, clustering, dimensionality
reduction, classification, among others. However, the use of information
theoretic quantities as test statistics, that is, as quantities obtained from
empirical data, poses a challenging estimation problem that often leads to
strong simplifications such as Gaussian models, or the use of plug-in density
estimators that are restricted to certain representations of the data. In this
paper, a framework to non-parametrically obtain measures of entropy directly
from data using operators in reproducing kernel Hilbert spaces defined by
infinitely divisible kernels is presented. The entropy functionals, which bear
resemblance with quantum entropies, are defined on positive definite matrices
and satisfy similar axioms to those of Renyi's definition of entropy.
Convergence of the proposed estimators follows from concentration results on
the difference between the ordered spectrum of the Gram matrices and the
integral operators associated to the population quantities. In this way,
capitalizing on both the axiomatic definition of entropy and on the
representation power of positive definite kernels, the proposed measure of
entropy avoids the estimation of the probability distribution underlying the
data. Moreover, estimators of kernel-based conditional entropy and mutual
information are also defined. Numerical experiments on independence tests
compare favourably with the state of the art.
| Luis G. Sanchez Giraldo and Murali Rao and Jose C. Principe | null | 1211.2459 | null | null |
Random Utility Theory for Social Choice | cs.MA cs.LG stat.ML | Random utility theory models an agent's preferences on alternatives by
drawing a real-valued score on each alternative (typically independently) from
a parameterized distribution, and then ranking the alternatives according to
scores. A special case that has received significant attention is the
Plackett-Luce model, for which fast inference methods for maximum likelihood
estimators are available. This paper develops conditions on general random
utility models that enable fast inference within a Bayesian framework through
MC-EM, providing concave log-likelihood functions and bounded sets of global
maxima solutions. Results on both real-world and simulated data provide support
for the scalability of the approach and capability for model selection among
general random utility models including Plackett-Luce.
| Hossein Azari Soufiani, David C. Parkes, Lirong Xia | null | 1211.2476 | null | null |
Minimal cost feature selection of data with normal distribution
measurement errors | cs.AI cs.LG | Minimal cost feature selection is devoted to obtain a trade-off between test
costs and misclassification costs. This issue has been addressed recently on
nominal data. In this paper, we consider numerical data with measurement errors
and study minimal cost feature selection in this model. First, we build a data
model with normal distribution measurement errors. Second, the neighborhood of
each data item is constructed through the confidence interval. Compared with
discretized intervals, neighborhoods preserve the information in the data more
faithfully. Third, we define a new minimal total cost feature
selection problem through considering the trade-off between test costs and
misclassification costs. Fourth, we propose a backtracking algorithm with
three effective pruning techniques to deal with this problem. The algorithm is
tested on four UCI data sets. Experimental results indicate that the pruning
techniques are effective, and the algorithm is efficient for data sets with
nearly one thousand objects.
| Hong Zhao, Fan Min and William Zhu | null | 1211.2512 | null | null |
Iterative Thresholding Algorithm for Sparse Inverse Covariance
Estimation | stat.CO cs.LG stat.ML | The L1-regularized maximum likelihood estimation problem has recently become
a topic of great interest within the machine learning, statistics, and
optimization communities as a method for producing sparse inverse covariance
estimators. In this paper, a proximal gradient method (G-ISTA) for performing
L1-regularized covariance matrix estimation is presented. Although numerous
algorithms have been proposed for solving this problem, this simple proximal
gradient method is found to have attractive theoretical and numerical
properties. G-ISTA has a linear rate of convergence, resulting in an O(log(1/e))
iteration complexity to reach a tolerance of e. This paper gives eigenvalue
bounds for the G-ISTA iterates, providing a closed-form linear convergence
rate. The rate is shown to be closely related to the condition number of the
optimal point. Numerical convergence results and timing comparisons for the
proposed method are presented. G-ISTA is shown to perform very well, especially
when the optimal point is well-conditioned.
| Dominique Guillot and Bala Rajaratnam and Benjamin T. Rolfs and Arian
Maleki and Ian Wong | null | 1211.2532 | null | null |
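A bare-bones numpy sketch of the proximal gradient iteration above; it uses a fixed step size and omits the line search / backtracking that the actual G-ISTA uses to keep iterates positive definite, so it is only a sketch under benign data.

```python
import numpy as np

def soft_threshold(A, t):
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def gista(S, lam, eta=0.1, n_iter=200):
    Theta = np.diag(1.0 / np.diag(S))         # common PD initialization
    for _ in range(n_iter):
        grad = S - np.linalg.inv(Theta)       # gradient of tr(S*T) - logdet(T)
        Theta = soft_threshold(Theta - eta * grad, eta * lam)  # proximal step
    return Theta

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
S = np.cov(X, rowvar=False)
Theta_hat = gista(S, lam=0.2)                 # sparse inverse covariance estimate
```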
A Comparative Study of Gaussian Mixture Model and Radial Basis Function
for Voice Recognition | cs.LG cs.CV stat.ML | A comparative study of the application of Gaussian Mixture Model (GMM) and
Radial Basis Function (RBF) in biometric recognition of voice has been carried
out and presented. The application of machine learning techniques to biometric
authentication and recognition problems has gained widespread acceptance. In
this research, a GMM model was trained, using Expectation Maximization (EM)
algorithm, on a dataset containing 10 classes of vowels and the model was used
to predict the appropriate classes using a validation dataset. For experimental
validity, the model was compared to the performance of two different versions
of RBF model using the same learning and validation datasets. The results
showed very close recognition accuracy between the GMM and the standard RBF
model, with GMM performing better than the standard RBF by less than 1%; both
models outperformed similar models reported in the literature. The DTREG
version of RBF outperformed the other two models by producing 94.8% recognition
accuracy. In terms of recognition time, the standard RBF was found to be the
fastest among the three models.
| Fatai Adesina Anifowose | null | 1211.2556 | null | null |
Proximal Stochastic Dual Coordinate Ascent | stat.ML cs.LG math.OC | We introduce a proximal version of dual coordinate ascent method. We
demonstrate how the derived algorithmic framework can be used for numerous
regularized loss minimization problems, including $\ell_1$ regularization and
structured output SVM. The convergence rates we obtain match, and sometimes
improve, state-of-the-art results.
| Shai Shalev-Shwartz and Tong Zhang | null | 1211.2717 | null | null |
Deep Attribute Networks | cs.CV cs.LG stat.ML | Obtaining compact and discriminative features is one of the major challenges
in many of the real-world image classification tasks such as face verification
and object recognition. One possible approach is to represent the input image on
the basis of high-level features that carry semantic meaning which humans can
understand. In this paper, a model coined deep attribute network (DAN) is
proposed to address this issue. For an input image, the model outputs the
attributes of the input image without performing any classification. The
efficacy of the proposed model is evaluated on unconstrained face verification
and real-world object recognition tasks using the LFW and the a-PASCAL
datasets. We demonstrate the potential of deep learning for attribute-based
classification by showing comparable results with existing state-of-the-art
results. Once properly trained, the DAN is fast and does away with calculating
low-level features, which may be unreliable and computationally expensive.
| Junyoung Chung, Donghoon Lee, Youngjoo Seo, and Chang D. Yoo | null | 1211.2881 | null | null |
Boosting Simple Collaborative Filtering Models Using Ensemble Methods | cs.IR cs.LG stat.ML | In this paper we examine the effect of applying ensemble learning to the
performance of collaborative filtering methods. We present several systematic
approaches for generating an ensemble of collaborative filtering models based
on a single collaborative filtering algorithm (single-model or homogeneous
ensemble). We present an adaptation of several popular ensemble techniques in
machine learning for the collaborative filtering domain, including bagging,
boosting, fusion and randomness injection. We evaluate the proposed approach on
several types of collaborative filtering base models: k-NN, matrix
factorization and a neighborhood matrix factorization model. Empirical
evaluation shows a prediction improvement compared to all base CF algorithms.
In particular, we show that the performance of an ensemble of simple (weak) CF
models such as k-NN is competitive compared with a single strong CF model (such
as matrix factorization) while requiring an order of magnitude less
computational cost.
| Ariel Bar, Lior Rokach, Guy Shani, Bracha Shapira, Alon Schclar | null | 1211.2891 | null | null |
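A hedged numpy sketch of one of the adapted techniques (bagging) on a user-based k-NN collaborative filter; the similarity measure and toy rating matrix are simplifying assumptions, not the paper's setup.

```python
import numpy as np

def knn_predict(pool, r_user, item, k=5):
    """Mean rating of the k pool users most similar to r_user who rated item."""
    cands = np.where(~np.isnan(pool[:, item]))[0]
    sims = np.array([np.nansum(r_user * pool[u]) for u in cands])
    top = cands[np.argsort(sims)[-k:]]
    return np.nanmean(pool[top, item])

def bagged_knn_predict(R, user, item, n_models=10, seed=0):
    """Average k-NN predictions over bootstrap resamples of the neighbor pool."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_models):
        rows = rng.choice(R.shape[0], R.shape[0], replace=True)  # bootstrap
        preds.append(knn_predict(R[rows], R[user], item))
    return float(np.mean(preds))

rng = np.random.default_rng(1)
R = rng.integers(1, 6, size=(50, 20)).astype(float)   # toy ratings in 1..5
R[rng.random(R.shape) < 0.3] = np.nan                 # 30% missing
print(bagged_knn_predict(R, user=0, item=3))
```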
Shattering-Extremal Systems | math.CO cs.CG cs.DM cs.LG | The Shatters relation and the VC dimension have been investigated since the
early seventies. These concepts have found numerous applications in statistics,
combinatorics, learning theory and computational geometry. Shattering extremal
systems are set-systems with a very rich structure and many different
characterizations. The goal of this thesis is to elaborate on the structure of
these systems.
| Shay Moran | null | 1211.2980 | null | null |
Time-series Scenario Forecasting | stat.ML cs.LG stat.AP | Many applications require the ability to judge uncertainty of time-series
forecasts. Uncertainty is often specified as point-wise error bars around a
mean or median forecast. Due to temporal dependencies, such a method obscures
some information. We would ideally have a way to query the posterior
probability of the entire time-series given the predictive variables, or at a
minimum, be able to draw samples from this distribution. We use a Bayesian
dictionary learning algorithm to statistically generate an ensemble of
forecasts. We show that the algorithm performs as well as a physics-based
ensemble method for temperature forecasts for Houston. We conclude that the
method shows promise for scenario forecasting where physics-based methods are
absent.
| Sriharsha Veeramachaneni | null | 1211.3010 | null | null |
Recovering the Optimal Solution by Dual Random Projection | cs.LG | Random projection has been widely used in data classification. It maps
high-dimensional data into a low-dimensional subspace in order to reduce the
computational cost in solving the related optimization problem. While previous
studies are focused on analyzing the classification performance of using random
projection, in this work, we consider the recovery problem, i.e., how to
accurately recover the optimal solution to the original optimization problem in
the high-dimensional space based on the solution learned from the subspace
spanned by random projections. We present a simple algorithm, termed Dual
Random Projection, that uses the dual solution of the low-dimensional
optimization problem to recover the optimal solution to the original problem.
Our theoretical analysis shows that with a high probability, the proposed
algorithm is able to accurately recover the optimal solution to the original
problem, provided that the data matrix is of low rank or can be well
approximated by a low rank matrix.
| Lijun Zhang, Mehrdad Mahdavi, Rong Jin, Tianbao Yang, Shenghuo Zhu | null | 1211.3046 | null | null |
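A numpy sketch of the dual-recovery idea for the special case of ridge regression, where the dual variables are proportional to the residuals; the explicit low-rank data construction is an assumption matching the theory's conditions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r, m, lam = 200, 5000, 20, 100, 1.0

# Low-rank data matrix, as required by the recovery theory
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, d)) / np.sqrt(r)
w_true = rng.standard_normal(d)
y = X @ w_true

R = rng.standard_normal((d, m)) / np.sqrt(m)     # random projection
Z = X @ R                                        # project the features

# Solve ridge regression in the low-dimensional space
w_low = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ y)

# Dual variables of the low-dim problem, then recovery in the original space
alpha = (y - Z @ w_low) / lam
w_rec = X.T @ alpha

print(np.linalg.norm(X @ w_rec - y) / np.linalg.norm(y))  # small when recovery succeeds
```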
Distributed Non-Stochastic Experts | cs.LG cs.AI | We consider the online distributed non-stochastic experts problem, where the
distributed system consists of one coordinator node that is connected to $k$
sites, and the sites are required to communicate with each other via the
coordinator. At each time-step $t$, one of the $k$ site nodes has to pick an
expert from the set $\{1, \ldots, n\}$, and the same site receives information about
payoffs of all experts for that round. The goal of the distributed system is to
minimize regret at time horizon $T$, while simultaneously keeping communication
to a minimum.
The two extreme solutions to this problem are: (i) Full communication: This
essentially simulates the non-distributed setting to obtain the optimal
$O(\sqrt{\log(n)T})$ regret bound at the cost of $T$ communication. (ii) No
communication: Each site runs an independent copy: the regret is
$O(\sqrt{\log(n)kT})$ and the communication is 0. This paper shows the
difficulty of simultaneously achieving regret asymptotically better than
$\sqrt{kT}$ and communication better than $T$. We give a novel algorithm that
for an oblivious adversary achieves a non-trivial trade-off: regret
$O(\sqrt{k^{5(1+\epsilon)/6} T})$ and communication $O(T/k^{\epsilon})$, for
any value of $\epsilon \in (0, 1/5)$. We also consider a variant of the model,
where the coordinator picks the expert. In this model, we show that the
label-efficient forecaster of Cesa-Bianchi et al. (2005) already gives a
strategy that is near-optimal in the regret vs. communication trade-off.
| Varun Kanade, Zhenming Liu, Bozidar Radunovic | null | 1211.3212 | null | null |
Order-independent constraint-based causal structure learning | stat.ML cs.LG | We consider constraint-based methods for causal structure learning, such as
the PC-, FCI-, RFCI- and CCD- algorithms (Spirtes et al. (2000, 1993),
Richardson (1996), Colombo et al. (2012), Claassen et al. (2013)). The first
step of all these algorithms consists of the PC-algorithm. This algorithm is
known to be order-dependent, in the sense that the output can depend on the
order in which the variables are given. This order-dependence is a minor issue
in low-dimensional settings. We show, however, that it can be very pronounced
in high-dimensional settings, where it can lead to highly variable results. We
propose several modifications of the PC-algorithm (and hence also of the other
algorithms) that remove part or all of this order-dependence. All proposed
modifications are consistent in high-dimensional settings under the same
conditions as their original counterparts. We compare the PC-, FCI-, and
RFCI-algorithms and their modifications in simulation studies and on a yeast
gene expression data set. We show that our modifications yield similar
performance in low-dimensional settings and improved performance in
high-dimensional settings. All software is implemented in the R-package pcalg.
| Diego Colombo and Marloes H. Maathuis | null | 1211.3295 | null | null |
Network Sampling: From Static to Streaming Graphs | cs.SI cs.DS cs.LG physics.soc-ph stat.ML | Network sampling is integral to the analysis of social, information, and
biological networks. Since many real-world networks are massive in size,
continuously evolving, and/or distributed in nature, the network structure is
often sampled in order to facilitate study. For these reasons, a more thorough
and complete understanding of network sampling is critical to support the field
of network science. In this paper, we outline a framework for the general
problem of network sampling, by highlighting the different objectives,
population and units of interest, and classes of network sampling methods. In
addition, we propose a spectrum of computational models for network sampling
methods, ranging from the traditionally studied model based on the assumption
of a static domain to a more challenging model that is appropriate for
streaming domains. We design a family of sampling methods based on the concept
of graph induction that generalize across the full spectrum of computational
models (from static to streaming) while efficiently preserving many of the
topological properties of the input graphs. Furthermore, we demonstrate how
traditional static sampling algorithms can be modified for graph streams for
each of the three main classes of sampling methods: node, edge, and
topology-based sampling. Our experimental results indicate that our proposed
family of sampling methods more accurately preserves the underlying properties
of the graph for both static and streaming graphs. Finally, we study the impact
of network sampling algorithms on the parameter estimation and performance
evaluation of relational classification algorithms.
| Nesreen K. Ahmed and Jennifer Neville and Ramana Kompella | null | 1211.3412 | null | null |
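A brief networkx sketch of node sampling with graph induction under the static computational model; the graph generator and sampling fraction here are arbitrary choices for illustration.

```python
import random
import networkx as nx

def node_sample_induced(G, frac=0.2, seed=0):
    """Sample a node set uniformly, then induce all edges among sampled nodes."""
    random.seed(seed)
    nodes = random.sample(list(G.nodes()), int(frac * G.number_of_nodes()))
    return G.subgraph(nodes).copy()

G = nx.barabasi_albert_graph(1000, 3)
Gs = node_sample_induced(G)
print(Gs.number_of_nodes(), Gs.number_of_edges())
```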
Spectral Clustering: An empirical study of Approximation Algorithms and
its Application to the Attrition Problem | cs.LG math.NA stat.ML | Clustering is the problem of separating a set of objects into groups (called
clusters) so that objects within the same cluster are more similar to each
other than to those in different clusters. Spectral clustering is a now
well-known method for clustering which utilizes the spectrum of the data
similarity matrix to perform this separation. Since the method relies on
solving an eigenvector problem, it is computationally expensive for large
datasets. To overcome this constraint, approximation methods have been
developed which aim to reduce running time while maintaining accurate
classification. In this article, we summarize and experimentally evaluate
several approximation methods for spectral clustering. From an applications
standpoint, we employ spectral clustering to solve the so-called attrition
problem, where one aims to identify from a set of employees those who are
likely to voluntarily leave the company from those who are not. Our study sheds
light on the empirical performance of existing approximate spectral clustering
methods and shows the applicability of these methods in an important business
optimization related problem.
| B. Cung, T. Jin, J. Ramirez, A. Thompson, C. Boutsidis and D. Needell | null | 1211.3444 | null | null |
Accelerated Canonical Polyadic Decomposition by Using Mode Reduction | cs.NA cs.LG math.NA | Canonical Polyadic (or CANDECOMP/PARAFAC, CP) decompositions (CPD) are widely
applied to analyze high order tensors. Existing CPD methods use alternating
least square (ALS) iterations and hence need to unfold tensors to each of the
$N$ modes frequently, which is one major bottleneck of efficiency for
large-scale data and especially when $N$ is large. To overcome this problem, in
this paper we proposed a new CPD method which converts the original $N$th
($N>3$) order tensor to a 3rd-order tensor first. Then the full CPD is realized
by decomposing this mode-reduced tensor, followed by a Khatri-Rao product
projection procedure. This approach is quite efficient, as unfolding to each of
the $N$ modes is avoided, and dimensionality reduction can also be easily
incorporated to further improve the efficiency. We show that, under mild
conditions, any $N$th-order CPD can be converted into a 3rd-order case but
without destroying the essential uniqueness, and theoretically gives the same
results as direct $N$-way CPD methods. Simulations show that, compared with
state-of-the-art CPD methods, the proposed method is more efficient and
escapes from local solutions more easily.
| Guoxu Zhou, Andrzej Cichocki, and Shengli Xie | 10.1109/TNNLS.2013.2271507 | 1211.3500 | null | null |
Sequence Transduction with Recurrent Neural Networks | cs.NE cs.LG stat.ML | Many machine learning tasks can be expressed as the transformation---or
\emph{transduction}---of input sequences into output sequences: speech
recognition, machine translation, protein secondary structure prediction and
text-to-speech to name but a few. One of the key challenges in sequence
transduction is learning to represent both the input and output sequences in a
way that is invariant to sequential distortions such as shrinking, stretching
and translating. Recurrent neural networks (RNNs) are a powerful sequence
learning architecture that has proven capable of learning such representations.
However RNNs traditionally require a pre-defined alignment between the input
and output sequences to perform transduction. This is a severe limitation since
\emph{finding} the alignment is the most difficult aspect of many sequence
transduction problems. Indeed, even determining the length of the output
sequence is often challenging. This paper introduces an end-to-end,
probabilistic sequence transduction system, based entirely on RNNs, that is in
principle able to transform any input sequence into any finite, discrete output
sequence. Experimental results for phoneme recognition are provided on the
TIMIT speech corpus.
| Alex Graves | null | 1211.3711 | null | null |
Objective Improvement in Information-Geometric Optimization | cs.LG cs.AI math.OC stat.ML | Information-Geometric Optimization (IGO) is a unified framework of stochastic
algorithms for optimization problems. Given a family of probability
distributions, IGO turns the original optimization problem into a new
maximization problem on the parameter space of the probability distributions.
IGO updates the parameter of the probability distribution along the natural
gradient, taken with respect to the Fisher metric on the parameter manifold,
aiming at maximizing an adaptive transform of the objective function. IGO
recovers several known algorithms as particular instances: for the family of
Bernoulli distributions IGO recovers PBIL, for the family of Gaussian
distributions the pure rank-mu CMA-ES update is recovered, and for exponential
families in expectation parametrization the cross-entropy/ML method is
recovered. This article provides a theoretical justification for the IGO
framework, by proving that any step size not greater than 1 guarantees monotone
improvement over the course of optimization, in terms of q-quantile values of
the objective function f. The range of admissible step sizes is independent of
f and its domain. We extend the result to cover the case of different step
sizes for blocks of the parameters in the IGO algorithm. Moreover, we prove
that expected fitness improves over time when fitness-proportional selection is
applied, in which case the RPP algorithm is recovered.
| Youhei Akimoto (INRIA Saclay - Ile de France), Yann Ollivier (LRI) | null | 1211.3831 | null | null |
On Calibrated Predictions for Auction Selection Mechanisms | cs.GT cs.LG | Calibration is a basic property for prediction systems, and algorithms for
achieving it are well-studied in both statistics and machine learning. In many
applications, however, the predictions are used to make decisions that select
which observations are made. This makes calibration difficult, as adjusting
predictions to achieve calibration changes future data. We focus on
click-through-rate (CTR) prediction for search ad auctions. Here, CTR
predictions are used by an auction that determines which ads are shown, and we
want to maximize the value generated by the auction.
We show that certain natural notions of calibration can be impossible to
achieve, depending on the details of the auction. We also show that it can be
impossible to maximize auction efficiency while using calibrated predictions.
Finally, we give conditions under which calibration is achievable and
simultaneously maximizes auction efficiency: roughly speaking, bids and queries
must not contain information about CTRs that is not already captured by the
predictions.
| H. Brendan McMahan and Omkar Muralidharan | null | 1211.3955 | null | null |
Lasso Screening Rules via Dual Polytope Projection | cs.LG stat.ML | Lasso is a widely used regression technique to find sparse representations.
When the dimension of the feature space and the number of samples are extremely
large, solving the Lasso problem remains challenging. To improve the efficiency
of solving large-scale Lasso problems, El Ghaoui and his colleagues have
proposed the SAFE rules which are able to quickly identify the inactive
predictors, i.e., predictors that have $0$ components in the solution vector.
Then, the inactive predictors or features can be removed from the optimization
problem to reduce its scale. By transforming the standard Lasso to its dual
form, it can be shown that the inactive predictors include the set of inactive
constraints on the optimal dual solution. In this paper, we propose an
efficient and effective screening rule via Dual Polytope Projections (DPP),
which is mainly based on the uniqueness and nonexpansiveness of the optimal
dual solution due to the fact that the feasible set in the dual space is a
convex and closed polytope. Moreover, we show that our screening rule can be
extended to identify inactive groups in group Lasso. To the best of our
knowledge, there is currently no "exact" screening rule for group Lasso. We
have evaluated our screening rule using synthetic and real data sets. Results
show that our rule is more effective in identifying inactive predictors than
existing state-of-the-art screening rules for Lasso.
| Jie Wang, Peter Wonka, Jieping Ye | null | 1211.3966 | null | null |
The Algebraic Combinatorial Approach for Low-Rank Matrix Completion | cs.LG cs.NA math.AG math.CO stat.ML | We present a novel algebraic combinatorial view on low-rank matrix completion
based on studying relations between a few entries with tools from algebraic
geometry and matroid theory. The intrinsic locality of the approach allows for
the treatment of single entries in a closed theoretical and practical
framework. More specifically, apart from introducing an algebraic combinatorial
theory of low-rank matrix completion, we present probability-one algorithms to
decide whether a particular entry of the matrix can be completed. We also
describe methods to complete that entry from a few others, and to estimate the
error which is incurred by any method completing that entry. Furthermore, we
show how known results on matrix completion and their sampling assumptions can
be related to our new perspective and interpreted in terms of a completability
phase transition.
| Franz J. Kir\'aly, Louis Theran, Ryota Tomioka | null | 1211.4116 | null | null |
Data Clustering via Principal Direction Gap Partitioning | stat.ML cs.LG | We explore the geometrical interpretation of the PCA based clustering
algorithm Principal Direction Divisive Partitioning (PDDP). We give several
examples where this algorithm breaks down, and suggest a new method, gap
partitioning, which takes into account natural gaps in the data between
clusters. Geometric features of the PCA space are derived and illustrated and
experimental results are given which show our method is comparable on the
datasets used in the original paper on PDDP.
| Ralph Abbey, Jeremy Diepenbrock, Amy Langville, Carl Meyer, Shaina
Race, Dexin Zhou | null | 1211.4142 | null | null |
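A short numpy sketch of the gap-partitioning split described above: project onto the first principal direction and cut at the largest gap instead of at the mean (as PDDP does); the two-blob data is a toy example.

```python
import numpy as np

def gap_split(X):
    Xc = X - X.mean(axis=0)
    v = np.linalg.svd(Xc, full_matrices=False)[2][0]  # first principal direction
    proj = Xc @ v
    order = np.argsort(proj)
    gaps = np.diff(proj[order])
    cut = gaps.argmax()                  # split at the largest gap, not the mean
    return order[:cut + 1], order[cut + 1:]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
left, right = gap_split(X)               # recovers the two blobs
```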
Efficiently Learning from Revealed Preference | cs.GT cs.DS cs.LG | In this paper, we consider the revealed preferences problem from a learning
perspective. Every day, a price vector and a budget is drawn from an unknown
distribution, and a rational agent buys his most preferred bundle according to
some unknown utility function, subject to the given prices and budget
constraint. We wish not only to find a utility function which rationalizes a
finite set of observations, but to produce a hypothesis valuation function
which accurately predicts the behavior of the agent in the future. We give
efficient algorithms with polynomial sample-complexity for agents with linear
valuation functions, as well as for agents with linearly separable, concave
valuation functions with bounded second derivative.
| Morteza Zadimoghaddam and Aaron Roth | null | 1211.4150 | null | null |
What Regularized Auto-Encoders Learn from the Data Generating
Distribution | cs.LG stat.ML | What do auto-encoders learn about the underlying data generating
distribution? Recent work suggests that some auto-encoder variants do a good
job of capturing the local manifold structure of data. This paper clarifies
some of these previous observations by showing that minimizing a particular
form of regularized reconstruction error yields a reconstruction function that
locally characterizes the shape of the data generating density. We show that
the auto-encoder captures the score (derivative of the log-density with respect
to the input). It contradicts previous interpretations of reconstruction error
as an energy function. Unlike previous results, the theorems provided here are
completely generic and do not depend on the parametrization of the
auto-encoder: they show what the auto-encoder would tend to if given enough
capacity and examples. These results are for a contractive training criterion
we show to be similar to the denoising auto-encoder training criterion with
small corruption noise, but with contraction applied to the whole
reconstruction function rather than just the encoder. Similarly to score matching,
one can consider the proposed training criterion as a convenient alternative to
maximum likelihood because it does not involve a partition function. Finally,
we show how an approximate Metropolis-Hastings MCMC can be setup to recover
samples from the estimated distribution, and this is confirmed in sampling
experiments.
| Guillaume Alain and Yoshua Bengio | null | 1211.4246 | null | null |
Application of three graph Laplacian based semi-supervised learning
methods to protein function prediction problem | cs.LG cs.CE q-bio.QM stat.ML | Protein function prediction is an important problem in modern biology. In
this paper, the un-normalized, symmetric normalized, and random walk graph
Laplacian based semi-supervised learning methods will be applied to the
integrated network combined from multiple networks to predict the functions of
all yeast proteins in these multiple networks. These multiple networks are
networks created from Pfam domain structure, co-participation in a protein
complex, protein-protein interaction network, genetic interaction network, and
network created from cell cycle gene expression measurements. Multiple networks
are combined with fixed weights instead of using convex optimization to
determine the combination weights, due to the high time complexity of the
convex optimization method. This simple combination method does not affect the
accuracy performance measures of the three semi-supervised learning methods.
Experiment results show that the un-normalized and symmetric normalized graph
Laplacian based methods perform slightly better than random walk graph
Laplacian based method for integrated network. Moreover, the accuracy
performance measures of these three semi-supervised learning methods for
integrated network are much better than the best accuracy performance measures
of these three methods for the individual network.
| Loc Tran | 10.5121/ijbb.2013.3202 | 1211.4289 | null | null |
Bayesian nonparametric models for ranked data | stat.ML cs.LG stat.ME | We develop a Bayesian nonparametric extension of the popular Plackett-Luce
choice model that can handle an infinite number of choice items. Our framework
is based on the theory of random atomic measures, with the prior specified by a
gamma process. We derive a posterior characterization and a simple and
effective Gibbs sampler for posterior simulation. We develop a time-varying
extension of our model, and apply it to the New York Times lists of weekly
bestselling books.
| Francois Caron (INRIA Bordeaux - Sud-Ouest, IMB), Yee Whye Teh | null | 1211.4321 | null | null |
A Sensing Policy Based on Confidence Bounds and a Restless Multi-Armed
Bandit Model | cs.IT cs.LG math.IT | A sensing policy for the restless multi-armed bandit problem with stationary
but unknown reward distributions is proposed. The work is presented in the
context of cognitive radios in which the bandit problem arises when deciding
which parts of the spectrum to sense and exploit. It is shown that the proposed
policy attains asymptotically logarithmic weak regret rate when the rewards are
bounded independent and identically distributed or finite state Markovian.
Simulation results verifying uniformly logarithmic weak regret are also
presented. The proposed policy is a centrally coordinated index policy, in
which the index of a frequency band comprises a sample mean term and a
confidence term. The sample mean term promotes spectrum exploitation whereas
the confidence term encourages exploration. The confidence term is designed
such that the time interval between consecutive sensing instances of any
suboptimal band grows exponentially. This exponential growth between suboptimal
sensing time instances leads to logarithmically growing weak regret. Simulation
results demonstrate that the proposed policy performs better than other similar
methods in the literature.
| Jan Oksanen, Visa Koivunen, H. Vincent Poor | null | 1211.4384 | null | null |
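An illustrative numpy sketch of a sample-mean-plus-confidence index policy of the kind described above; the confidence term here is the standard UCB1 form, whereas the paper designs its own term so that suboptimal sensing instants spread out exponentially.

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.2, 0.5, 0.8])           # unknown band availabilities
K, T = means.size, 5000
counts, sums = np.zeros(K), np.zeros(K)

for t in range(1, T + 1):
    if t <= K:
        band = t - 1                        # sense each band once to start
    else:
        index = sums / counts + np.sqrt(2 * np.log(t) / counts)
        band = index.argmax()               # sample mean + confidence term
    reward = rng.random() < means[band]     # Bernoulli reward from the band
    counts[band] += 1
    sums[band] += reward
```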
Mixture Gaussian Process Conditional Heteroscedasticity | cs.LG stat.ML | Generalized autoregressive conditional heteroscedasticity (GARCH) models have
long been considered as one of the most successful families of approaches for
volatility modeling in financial return series. In this paper, we propose an
alternative approach based on methodologies widely used in the field of
statistical machine learning. Specifically, we propose a novel nonparametric
Bayesian mixture of Gaussian process regression models, each component of which
models the noise variance process that contaminates the observed data as a
separate latent Gaussian process driven by the observed data. This way, we
essentially obtain a mixture Gaussian process conditional heteroscedasticity
(MGPCH) model for volatility modeling in financial return series. We impose a
nonparametric prior with power-law nature over the distribution of the model
mixture components, namely the Pitman-Yor process prior, to allow for better
capturing modeled data distributions with heavy tails and skewness. Finally, we
provide a copula-based approach for obtaining a predictive posterior for the
covariances over the asset returns modeled by means of a postulated MGPCH
model. We evaluate the efficacy of our approach in a number of benchmark
scenarios, and compare its performance to state-of-the-art methodologies.
| Emmanouil A. Platanios and Sotirios P. Chatzis | null | 1211.4410 | null | null |
Hypothesis Testing in Feedforward Networks with Broadcast Failures | cs.IT cs.LG math.IT | Consider a countably infinite set of nodes, which sequentially make decisions
between two given hypotheses. Each node takes a measurement of the underlying
truth, observes the decisions from some immediate predecessors, and makes a
decision between the given hypotheses. We consider two classes of broadcast
failures: 1) each node broadcasts a decision to the other nodes, subject to
random erasure in the form of a binary erasure channel; 2) each node broadcasts
a randomly flipped decision to the other nodes in the form of a binary
symmetric channel. We are interested in whether there exists a decision
strategy consisting of a sequence of likelihood ratio tests such that the node
decisions converge in probability to the underlying truth. In both cases, we
show that if each node only learns from a bounded number of immediate
predecessors, then there does not exist a decision strategy such that the
decisions converge in probability to the underlying truth. However, in case 1,
we show that if each node learns from an unboundedly growing number of
predecessors, then the decisions converge in probability to the underlying
truth, even when the erasure probabilities converge to 1. We also derive the
convergence rate of the error probability. In case 2, we show that if each node
learns from all of its previous predecessors, then the decisions converge in
probability to the underlying truth when the flipping probabilities of the
binary symmetric channels are bounded away from 1/2. In the case where the
flipping probabilities converge to 1/2, we derive a necessary condition on the
convergence rate of the flipping probabilities such that the decisions still
converge to the underlying truth. We also explicitly characterize the
relationship between the convergence rate of the error probability and the
convergence rate of the flipping probabilities.
| Zhenliang Zhang, Edwin K. P. Chong, Ali Pezeshki, and William Moran | 10.1109/JSTSP.2013.2258657 | 1211.4518 | null | null |
Forest Sparsity for Multi-channel Compressive Sensing | cs.LG cs.CV cs.IT math.IT stat.ML | In this paper, we investigate a new compressive sensing model for
multi-channel sparse data where each channel can be represented as a
hierarchical tree and different channels are highly correlated. Therefore, the
full data could follow the forest structure; we call this property
\emph{forest sparsity}. It exploits both intra- and inter-channel correlations
and enriches the family of existing model-based compressive sensing theories.
The proposed theory indicates that only $\mathcal{O}(Tk+\log(N/k))$
measurements are required for multi-channel data with forest sparsity, where
$T$ is the number of channels, $N$ and $k$ are the length and sparsity number
of each channel respectively. This result is much better than
$\mathcal{O}(Tk+T\log(N/k))$ of tree sparsity, $\mathcal{O}(Tk+k\log(N/k))$ of
joint sparsity, and far better than $\mathcal{O}(Tk+Tk\log(N/k))$ of standard
sparsity. In addition, we extend the forest sparsity theory to the multiple
measurement vectors problem, where the measurement matrix is a block-diagonal
matrix. The result shows that the required measurement bound can be the same as
that for dense random measurement matrix, when the data shares equal energy in
each channel. A new algorithm is developed and applied on four example
applications to validate the benefit of the proposed model. Extensive
experiments demonstrate the effectiveness and efficiency of the proposed theory
and algorithm.
| Chen Chen and Yeqing Li and Junzhou Huang | 10.1109/TSP.2014.2318138 | 1211.4657 | null | null |
A unifying representation for a class of dependent random measures | stat.ML cs.LG | We present a general construction for dependent random measures based on
thinning Poisson processes on an augmented space. The framework is not
restricted to dependent versions of a specific nonparametric model, but can be
applied to all models that can be represented using completely random measures.
Several existing dependent random measures can be seen as specific cases of
this framework. Interesting properties of the resulting measures are derived
and the efficacy of the framework is demonstrated by constructing a
covariate-dependent latent feature model and topic model that obtain superior
predictive performance.
| Nicholas J. Foti, Joseph D. Futoma, Daniel N. Rockmore, Sinead
Williamson | null | 1211.4753 | null | null |
A survey of non-exchangeable priors for Bayesian nonparametric models | stat.ML cs.LG | Dependent nonparametric processes extend distributions over measures, such as
the Dirichlet process and the beta process, to give distributions over
collections of measures, typically indexed by values in some covariate space.
Such models are appropriate priors when exchangeability assumptions do not
hold, and instead we want our model to vary fluidly with some set of
covariates. Since the concept of dependent nonparametric processes was
formalized by MacEachern [1], there have been a number of models proposed and
used in the statistics and machine learning literatures. Many of these models
exhibit underlying similarities, an understanding of which, we hope, will help
in selecting an appropriate prior, developing new models, and leveraging
inference techniques.
| Nicholas J. Foti, Sinead Williamson | null | 1211.4798 | null | null |
Domain Adaptations for Computer Vision Applications | cs.CV cs.LG stat.ML | A basic assumption of statistical learning theory is that train and test data
are drawn from the same underlying distribution. Unfortunately, this assumption
does not hold in many applications. Instead, ample labeled data might exist in a
particular `source' domain while inference is needed in another, `target'
domain. Domain adaptation methods leverage labeled data from both domains to
improve classification on unseen data in the target domain. In this work we
survey domain transfer learning methods for various application domains, with
a focus on recent work in Computer Vision.
| Oscar Beijbom | null | 1211.4860 | null | null |
A Traveling Salesman Learns Bayesian Networks | cs.LG stat.ML | Structure learning of Bayesian networks is an important problem that arises
in numerous machine learning applications. In this work, we present a novel
approach for learning the structure of Bayesian networks using the solution of
an appropriately constructed traveling salesman problem. In our approach, one
computes an optimal ordering (partially ordered set) of random variables using
methods for the traveling salesman problem. This ordering significantly reduces
the search space for the subsequent greedy optimization that computes the final
structure of the Bayesian network. We demonstrate our approach of learning
Bayesian networks on real world census and weather datasets. In both cases, we
demonstrate that the approach very accurately captures dependencies between
random variables. We check the accuracy of the predictions based on independent
studies in both application domains.
| Tuhin Sahai, Stefan Klus and Michael Dellnitz | null | 1211.4888 | null | null |
Fast Marginalized Block Sparse Bayesian Learning Algorithm | cs.IT cs.LG math.IT stat.ML | The performance of sparse signal recovery from noise-corrupted,
underdetermined measurements can be improved if both the sparsity and the
correlation structure of signals are exploited. One typical correlation
structure is the intra-block correlation in block sparse signals. To exploit
this structure, a framework called block sparse Bayesian learning (BSBL) has
been proposed recently. Algorithms derived from this framework have shown
superior performance, but they are not very fast, which limits their
applications. This work derives an efficient algorithm from this framework
using a marginalized likelihood maximization method. Compared to existing BSBL
algorithms, it achieves comparable recovery performance but is much faster.
Therefore, it is more suitable for large-scale datasets and for applications
requiring real-time implementation.
| Benyuan Liu, Zhilin Zhang, Hongqi Fan, Qiang Fu | null | 1211.4909 | null | null |
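For context, a sketch of the block sparse generative model that BSBL-type
algorithms operate on; the notation is ours, following the BSBL literature
rather than this specific paper:

    \[
    \mathbf{y} = \boldsymbol{\Phi}\mathbf{x} + \mathbf{v}, \qquad
    \mathbf{v} \sim \mathcal{N}(\mathbf{0}, \lambda \mathbf{I}), \qquad
    \mathbf{x}_i \sim \mathcal{N}(\mathbf{0}, \gamma_i \mathbf{B}_i),
    \quad i = 1, \dots, g,
    \]

where $\mathbf{x}$ is partitioned into $g$ blocks, $\gamma_i \ge 0$ controls
the sparsity of block $i$, and $\mathbf{B}_i$ models its intra-block
correlation. Marginalizing out $\mathbf{x}$ gives

    \[
    \mathbf{y} \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma}_y), \qquad
    \boldsymbol{\Sigma}_y = \lambda \mathbf{I}
      + \boldsymbol{\Phi}\,\mathrm{diag}(\gamma_1 \mathbf{B}_1, \dots,
        \gamma_g \mathbf{B}_g)\,\boldsymbol{\Phi}^{\top},
    \]

and a marginalized (type-II) likelihood method maximizes this Gaussian
likelihood over $\{\gamma_i, \mathbf{B}_i, \lambda\}$.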
Bayesian nonparametric Plackett-Luce models for the analysis of
preferences for college degree programmes | stat.ML cs.LG stat.ME | In this paper we propose a Bayesian nonparametric model for clustering
partial ranking data. We start by developing a Bayesian nonparametric extension
of the popular Plackett-Luce choice model that can handle an infinite number of
choice items. Our framework is based on the theory of random atomic measures,
with the prior specified by a completely random measure. We characterise the
posterior distribution given data, and derive a simple and effective Gibbs
sampler for posterior simulation. We then develop a Dirichlet process mixture
extension of our model and apply it to investigate the clustering of
preferences for college degree programmes amongst Irish secondary school
graduates. The existence of clusters of applicants who have similar preferences
for degree programmes is established, and we determine that the subject matter
and the geographical location of the third-level institution characterise these
clusters.
| Fran\c{c}ois Caron, Yee Whye Teh, Thomas Brendan Murphy | 10.1214/14-AOAS717 | 1211.5037 | null | null |
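For reference, the finite Plackett-Luce model that the paper extends assigns,
to a complete ranking $\rho = (\rho_1, \dots, \rho_K)$ of $K$ items with
positive weights $w_1, \dots, w_K$, the probability

    \[
    P(\rho \mid w) = \prod_{j=1}^{K}
      \frac{w_{\rho_j}}{\sum_{k=j}^{K} w_{\rho_k}},
    \]

i.e., items are drawn one by one without replacement with probability
proportional to their weights. A top-$m$ partial ranking keeps only the first
$m$ factors, with each denominator summing the weights of all items not yet
chosen; the nonparametric extension replaces the finite weight vector with a
random atomic measure so that the number of choice items can be infinite.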
On the difficulty of training Recurrent Neural Networks | cs.LG | There are two widely known issues with properly training Recurrent Neural
Networks: the vanishing and the exploding gradient problems detailed in Bengio
et al. (1994). In this paper we attempt to improve the understanding of the
underlying issues by exploring these problems from an analytical, a geometric,
and a dynamical systems perspective. Our analysis is used to justify a simple
yet effective solution. We propose a gradient norm clipping strategy to deal
with exploding gradients and a soft constraint for the vanishing gradients
problem. We empirically validate our hypothesis and proposed solutions in the
experimental section.
| Razvan Pascanu and Tomas Mikolov and Yoshua Bengio | null | 1211.5063 | null | null |
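A minimal sketch of the gradient norm clipping strategy proposed in this
abstract; the default threshold and the global-norm formulation are our
illustrative choices:

    import numpy as np

    def clip_gradient_norm(grads, threshold=1.0):
        # Rescale a list of gradient arrays so that their global L2 norm
        # does not exceed `threshold`; gradients below the threshold pass
        # through unchanged.
        total_norm = np.sqrt(sum(float((g ** 2).sum()) for g in grads))
        if total_norm > threshold:
            grads = [g * (threshold / total_norm) for g in grads]
        return grads

Rescaling (rather than element-wise truncation) preserves the gradient's
direction while bounding the size of each update step.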
Optimally fuzzy temporal memory | cs.AI cs.LG | Any learner with the ability to predict the future of a structured
time-varying signal must maintain a memory of the recent past. If the signal
has a characteristic timescale relevant to future prediction, the memory can be
a simple shift register---a moving window extending into the past, requiring
storage resources that grow linearly with the timescale to be represented.
However, an independent general-purpose learner cannot know a priori the
characteristic prediction-relevant timescale of the signal. Moreover, many
naturally occurring signals show scale-free long-range correlations, implying
that the natural prediction-relevant timescale is essentially unbounded. Hence
the learner should maintain information from the longest possible timescale
allowed by resource availability. Here we construct a fuzzy memory system that
optimally sacrifices the temporal accuracy of information in a scale-free
fashion in order to represent prediction-relevant information from
exponentially long timescales. Using several illustrative examples, we
demonstrate the advantage of the fuzzy memory system over a shift register in
time series forecasting of natural signals. When the available storage
resources are limited, we suggest that a general-purpose learner would be
better off committing to such a fuzzy memory system.
| Karthik H. Shankar and Marc W. Howard | null | 1211.5189 | null | null |
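An illustrative sketch of the trade-off described in this abstract; this is
our own construction, not the paper's exact memory operator. A shift register
stores the last T values exactly, whereas a fuzzy readout keeps only a
logarithmic number of averages over windows that widen with the lag:

    import numpy as np

    def fuzzy_readout(signal, n_nodes=8, ratio=2.0):
        # Summarize the recent past of `signal` (1-D array, newest value
        # last) at geometrically spaced lags, averaging over windows that
        # widen with the lag, so n_nodes values span ~ratio**n_nodes steps.
        signal = np.asarray(signal, dtype=float)
        out, lag = [], 1
        for _ in range(n_nodes):
            width = max(1, int(lag * (ratio - 1.0)))
            chunk = signal[-(lag + width):-lag]
            out.append(chunk.mean() if chunk.size else 0.0)
            lag = int(np.ceil(lag * ratio))
        return np.array(out)

With ratio 2 and 8 nodes this summarizes roughly the last 255 timesteps with
8 numbers, where a shift register of the same span would need 255; the price
is that temporal accuracy degrades with lag, in a scale-free fashion.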
Service Composition Design Pattern for Autonomic Computing Systems using
Association Rule based Learning and Service-Oriented Architecture | cs.SE cs.DC cs.LG | In this paper we present a Service Injection and Composition Design Pattern
for unstructured peer-to-peer networks. It is designed with aspect-oriented
design patterns and combines the Strategy, Worker Object, and Check-List
design patterns used to design self-adaptive systems. It dynamically applies
self-reconfiguration plans to handle service failures at the servers, without
interruption or intervention by the administrator. When a client requests a
complex service, service composition is performed to fulfil the request. If a
service is not available in memory, it is injected as Aspectual Feature Module
code. We use Service-Oriented Architecture (SOA) with Web Services in Java to
implement the composite design pattern. As far as we know, there are no
studies on the composition of design patterns for the peer-to-peer computing
domain. The pattern is described using a Java-like notation for the classes
and interfaces. Simple UML class and sequence diagrams are depicted.
| Vishnuvardhan Mannava and T. Ramesh | null | 1211.5227 | null | null |
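A highly simplified sketch of the injection-and-composition idea, written in
Python for brevity although the paper works in Java with SOA and Web Services;
every class and method name here is our own hypothetical choice:

    class ServiceRegistry:
        # Resolves named services, injecting a default implementation when
        # a requested service is missing. This stands in for the paper's
        # injection of Aspectual Feature Module code, with no administrator
        # intervention.
        def __init__(self):
            self._services = {}

        def inject(self, name, impl):
            self._services[name] = impl

        def resolve(self, name):
            if name not in self._services:
                self.inject(name, lambda req, n=name: f"default:{n}({req})")
            return self._services[name]

    def compose(registry, request, sub_services):
        # Strategy-style composition: fulfil a complex request by chaining
        # the resolved sub-services in order.
        result = request
        for name in sub_services:
            result = registry.resolve(name)(result)
        return result

For example, compose(ServiceRegistry(), "req", ["parse", "plan", "execute"])
resolves each stage in turn, injecting any missing stage on the fly.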
Analysis of a randomized approximation scheme for matrix multiplication | cs.DS cs.LG cs.NA stat.ML | This note gives a simple analysis of a randomized approximation scheme for
matrix multiplication proposed by Sarlos (2006) based on a random rotation
followed by uniform column sampling. The result follows from a matrix version
of Bernstein's inequality and a tail inequality for quadratic forms in
subgaussian random vectors.
| Daniel Hsu and Sham M. Kakade and Tong Zhang | null | 1211.5414 | null | null |
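A sketch of the rotate-then-sample scheme analysed in this note, assuming a
dense random orthogonal rotation for clarity; Sarlos-style schemes use fast
structured rotations such as the randomized Hadamard transform instead:

    import numpy as np

    def approx_matmul(A, B, k, seed=0):
        # Approximate A @ B (A: m x n, B: n x p): rotate the shared
        # dimension with a random orthogonal matrix, then uniformly sample
        # k of the n rotated coordinates and rescale for unbiasedness.
        rng = np.random.default_rng(seed)
        n = A.shape[1]
        Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
        AQ, QB = A @ Q, Q.T @ B          # A @ B == AQ @ QB exactly
        idx = rng.choice(n, size=k, replace=False)
        return (n / k) * AQ[:, idx] @ QB[idx, :]

The rotation flattens the column norms (leverage) of the shared dimension, so
plain uniform sampling afterwards behaves like importance sampling would on
the original matrices.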