categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence) |
---|---|---|---|---|---|---|---|---|---|---|
cs.CL cs.IR cs.LG stat.ML | null | 1205.2657 | null | null | http://arxiv.org/pdf/1205.2657v1 | 2012-05-09T14:53:11Z | 2012-05-09T14:53:11Z | Multilingual Topic Models for Unaligned Text | We develop the multilingual topic model for unaligned text (MuTo), a
probabilistic model of text that is designed to analyze corpora composed of
documents in two languages. From these documents, MuTo uses stochastic EM to
simultaneously discover both a matching between the languages and multilingual
latent topics. We demonstrate that MuTo is able to find shared topics on
real-world multilingual corpora, successfully pairing related documents across
languages. MuTo provides a new framework for creating multilingual topic models
without needing carefully curated parallel corpora and allows applications
built using the topic model formalism to be applied to a much wider class of
corpora.
| [
"Jordan Boyd-Graber, David Blei",
"['Jordan Boyd-Graber' 'David Blei']"
] |
stat.ML cs.LG | null | 1205.2658 | null | null | http://arxiv.org/pdf/1205.2658v1 | 2012-05-09T14:51:42Z | 2012-05-09T14:51:42Z | Optimization of Structured Mean Field Objectives | In intractable, undirected graphical models, an intuitive way of creating
structured mean field approximations is to select an acyclic tractable
subgraph. We show that the hardness of computing the objective function and
gradient of the mean field objective qualitatively depends on a simple graph
property. If the tractable subgraph has this property (we call such subgraphs
v-acyclic), a very fast block coordinate ascent algorithm is possible. If not,
optimization is harder, but we show a new algorithm based on the construction
of an auxiliary exponential family that can be used to make inference possible
in this case as well. We discuss the advantages and disadvantages of each
regime and compare the algorithms empirically.
| [
"Alexandre Bouchard-Cote, Michael I. Jordan",
"['Alexandre Bouchard-Cote' 'Michael I. Jordan']"
] |
cs.LG stat.ML | null | 1205.2660 | null | null | http://arxiv.org/pdf/1205.2660v1 | 2012-05-09T14:48:34Z | 2012-05-09T14:48:34Z | Alternating Projections for Learning with Expectation Constraints | We present an objective function for learning with unlabeled data that
utilizes auxiliary expectation constraints. We optimize this objective function
using a procedure that alternates between information and moment projections.
Our method provides an alternate interpretation of the posterior regularization
framework (Graca et al., 2008), maintains uncertainty during optimization
unlike constraint-driven learning (Chang et al., 2007), and is more efficient
than generalized expectation criteria (Mann & McCallum, 2008). Applications of
this framework include minimally supervised learning, semisupervised learning,
and learning with constraints that are more expressive than the underlying
model. In experiments, we demonstrate comparable accuracy to generalized
expectation criteria for minimally supervised learning, and use expressive
structural constraints to guide semi-supervised learning, providing a 3%-6%
improvement over state-of-the-art constraint-driven learning.
| [
"['Kedar Bellare' 'Gregory Druck' 'Andrew McCallum']",
"Kedar Bellare, Gregory Druck, Andrew McCallum"
] |
cs.LG | null | 1205.2661 | null | null | http://arxiv.org/pdf/1205.2661v1 | 2012-05-09T14:47:06Z | 2012-05-09T14:47:06Z | REGAL: A Regularization based Algorithm for Reinforcement Learning in
Weakly Communicating MDPs | We provide an algorithm that achieves the optimal regret rate in an unknown
weakly communicating Markov Decision Process (MDP). The algorithm proceeds in
episodes where, in each episode, it picks a policy using regularization based
on the span of the optimal bias vector. For an MDP with S states and A actions
whose optimal bias vector has span bounded by H, we show a regret bound of
$\tilde{O}(HS\sqrt{AT})$. We also relate the span to various diameter-like quantities
associated with the MDP, demonstrating how our results improve on previous
regret bounds.
| [
"['Peter L. Bartlett' 'Ambuj Tewari']",
"Peter L. Bartlett, Ambuj Tewari"
] |
cs.LG stat.ML | null | 1205.2662 | null | null | http://arxiv.org/pdf/1205.2662v1 | 2012-05-09T14:43:32Z | 2012-05-09T14:43:32Z | On Smoothing and Inference for Topic Models | Latent Dirichlet analysis, or topic modeling, is a flexible latent variable
framework for modeling high-dimensional sparse count data. Various learning
algorithms have been developed in recent years, including collapsed Gibbs
sampling, variational inference, and maximum a posteriori estimation, and this
variety motivates the need for careful empirical comparisons. In this paper, we
highlight the close connections between these approaches. We find that the main
differences are attributable to the amount of smoothing applied to the counts.
When the hyperparameters are optimized, the differences in performance among
the algorithms diminish significantly. The ability of these algorithms to
achieve solutions of comparable accuracy gives us the freedom to select
computationally efficient approaches. Using the insights gained from this
comparative study, we show how accurate topic models can be learned in several
seconds on text corpora with thousands of documents.
| [
"['Arthur Asuncion' 'Max Welling' 'Padhraic Smyth' 'Yee Whye Teh']",
"Arthur Asuncion, Max Welling, Padhraic Smyth, Yee Whye Teh"
] |
cs.LG | null | 1205.2664 | null | null | http://arxiv.org/pdf/1205.2664v1 | 2012-05-09T14:42:20Z | 2012-05-09T14:42:20Z | A Bayesian Sampling Approach to Exploration in Reinforcement Learning | We present a modular approach to reinforcement learning that uses a Bayesian
representation of the uncertainty over models. The approach, BOSS (Best of
Sampled Set), drives exploration by sampling multiple models from the posterior
and selecting actions optimistically. It extends previous work by providing a
rule for deciding when to resample and how to combine the models. We show that
our algorithm achieves near-optimal reward with high probability with a sample
complexity that is low relative to the speed at which the posterior
distribution converges during learning. We demonstrate that BOSS performs quite
favorably compared to state-of-the-art reinforcement-learning approaches and
illustrate its flexibility by pairing it with a non-parametric model that
generalizes across states.
| [
"['John Asmuth' 'Lihong Li' 'Michael L. Littman' 'Ali Nouri'\n 'David Wingate']",
"John Asmuth, Lihong Li, Michael L. Littman, Ali Nouri, David Wingate"
] |
cs.LG | null | 1205.2874 | null | null | http://arxiv.org/pdf/1205.2874v3 | 2012-06-30T08:17:09Z | 2012-05-13T15:11:00Z | Decoupling Exploration and Exploitation in Multi-Armed Bandits | We consider a multi-armed bandit problem where the decision maker can explore
and exploit different arms at every round. The exploited arm adds to the
decision maker's cumulative reward (without necessarily observing the reward)
while the explored arm reveals its value. We devise algorithms for this setup
and show that the dependence on the number of arms, k, can be much better than
the standard square root of k dependence, depending on the behavior of the
arms' reward sequences. For the important case of piecewise stationary
stochastic bandits, we show a significant improvement over existing algorithms.
Our algorithms are based on a non-uniform sampling policy, which we show is
essential to the success of any algorithm in the adversarial setup. Finally, we
show some simulation results on an ultra-wide band channel selection inspired
setting indicating the applicability of our algorithms.
| [
"['Orly Avner' 'Shie Mannor' 'Ohad Shamir']",
"Orly Avner, Shie Mannor, Ohad Shamir"
] |
cs.IR cs.LG | null | 1205.2930 | null | null | http://arxiv.org/pdf/1205.2930v1 | 2012-05-14T02:27:52Z | 2012-05-14T02:27:52Z | Density Sensitive Hashing | Nearest neighbors search is a fundamental problem in various research fields
like machine learning, data mining and pattern recognition. Recently,
hashing-based approaches, e.g., Locality Sensitive Hashing (LSH), have proved to
be effective for scalable high-dimensional nearest neighbors search. Many
hashing algorithms find their theoretical roots in random projection. Since these
algorithms generate the hash tables (projections) randomly, a large number of
hash tables (i.e., long codewords) are required in order to achieve both high
precision and recall. To address this limitation, we propose a novel hashing
algorithm called {\em Density Sensitive Hashing} (DSH) in this paper. DSH can
be regarded as an extension of LSH. By exploring the geometric structure of the
data, DSH avoids the purely random projections selection and uses those
projective functions which best agree with the distribution of the data.
Extensive experimental results on real-world data sets have shown that the
proposed method achieves better performance compared to the state-of-the-art
hashing approaches.
| [
"Yue Lin and Deng Cai and Cheng Li",
"['Yue Lin' 'Deng Cai' 'Cheng Li']"
] |
cs.IR cs.DB cs.LG | null | 1205.2958 | null | null | http://arxiv.org/pdf/1205.2958v1 | 2012-05-14T08:28:10Z | 2012-05-14T08:28:10Z | b-Bit Minwise Hashing in Practice: Large-Scale Batch and Online Learning
and Using GPUs for Fast Preprocessing with Simple Hash Functions | In this paper, we study several critical issues which must be tackled before
one can apply b-bit minwise hashing to the volumes of data often used in
industrial applications, especially in the context of search.
1. (b-bit) Minwise hashing requires an expensive preprocessing step that
computes k (e.g., 500) minimal values after applying the corresponding
permutations for each data vector. We developed a parallelization scheme using
GPUs and observed that the preprocessing time can be reduced by a factor of
20-80 and becomes substantially smaller than the data loading time.
2. One major advantage of b-bit minwise hashing is that it can substantially
reduce the amount of memory required for batch learning. However, as online
algorithms become increasingly popular for large-scale learning in the context
of search, it is not clear if b-bit minwise hashing yields significant improvements for
them. This paper demonstrates that $b$-bit minwise hashing provides an
effective data size/dimension reduction scheme and hence it can dramatically
reduce the data loading time for each epoch of the online training process.
This is significant because online learning often requires many (e.g., 10 to
100) epochs to reach a sufficient accuracy.
3. Another critical issue is that for very large data sets it becomes
impossible to store a (fully) random permutation matrix, due to its space
requirements. Our paper is the first study to demonstrate that $b$-bit minwise
hashing implemented using simple hash functions, e.g., the 2-universal (2U) and
4-universal (4U) hash families, can produce very similar learning results as
using fully random permutations. Experiments on datasets of up to 200GB are
presented.
| [
"Ping Li and Anshumali Shrivastava and Arnd Christian Konig",
"['Ping Li' 'Anshumali Shrivastava' 'Arnd Christian Konig']"
] |
cs.CR cs.LG | 10.5121/ijnsa.2012.4106 | 1205.3062 | null | null | http://arxiv.org/abs/1205.3062v1 | 2012-02-08T14:21:02Z | 2012-02-08T14:21:02Z | Malware Detection Module using Machine Learning Algorithms to Assist in
Centralized Security in Enterprise Networks | Malicious software is abundant in a world of innumerable computer users, who
are constantly faced with these threats from various sources like the internet,
local networks and portable drives. Malware is potentially low to high risk and
can cause systems to function incorrectly, steal data and even crash. Malware
may be executable or system library files in the form of viruses, worms,
Trojans, all aimed at breaching the security of the system and compromising
user privacy. Typically, anti-virus software is based on a signature definition
system which keeps updating from the internet and thus keeping track of known
viruses. While this may be sufficient for home-users, a security risk from a
new virus could threaten an entire enterprise network. This paper proposes a
new and more sophisticated antivirus engine that can not only scan files, but
also build knowledge and detect files as potential viruses. This is done by
extracting system API calls made by various normal and harmful executables, and
using machine learning algorithms to classify and hence, rank files on a scale
of security risk. While such a system is processor heavy, it is very effective
when used centrally to protect an enterprise network which may be more prone to
such threats.
| [
"Priyank Singhal, Nataasha Raul",
"['Priyank Singhal' 'Nataasha Raul']"
] |
cs.LG cs.AI stat.ML | null | 1205.3109 | null | null | http://arxiv.org/pdf/1205.3109v4 | 2013-12-18T11:45:49Z | 2012-05-14T17:20:29Z | Efficient Bayes-Adaptive Reinforcement Learning using Sample-Based
Search | Bayesian model-based reinforcement learning is a formally elegant approach to
learning optimal behaviour under model uncertainty, trading off exploration and
exploitation in an ideal way. Unfortunately, finding the resulting
Bayes-optimal policies is notoriously taxing, since the search space becomes
enormous. In this paper we introduce a tractable, sample-based method for
approximate Bayes-optimal planning which exploits Monte-Carlo tree search. Our
approach outperformed prior Bayesian model-based RL algorithms by a significant
margin on several well-known benchmark problems -- because it avoids expensive
applications of Bayes rule within the search tree by lazily sampling models
from the current beliefs. We illustrate the advantages of our approach by
showing it working in an infinite state space domain which is qualitatively out
of reach of almost all previous work in Bayesian exploration.
| [
"['Arthur Guez' 'David Silver' 'Peter Dayan']",
"Arthur Guez and David Silver and Peter Dayan"
] |
cs.CV cs.AI cs.LG | null | 1205.3137 | null | null | http://arxiv.org/pdf/1205.3137v2 | 2012-08-18T04:16:13Z | 2012-05-14T18:52:57Z | Unsupervised Discovery of Mid-Level Discriminative Patches | The goal of this paper is to discover a set of discriminative patches which
can serve as a fully unsupervised mid-level visual representation. The desired
patches need to satisfy two requirements: 1) to be representative, they need to
occur frequently enough in the visual world; 2) to be discriminative, they need
to be different enough from the rest of the visual world. The patches could
correspond to parts, objects, "visual phrases", etc. but are not restricted to
be any one of them. We pose this as an unsupervised discriminative clustering
problem on a huge dataset of image patches. We use an iterative procedure which
alternates between clustering and training discriminative classifiers, while
applying careful cross-validation at each step to prevent overfitting. The
paper experimentally demonstrates the effectiveness of discriminative patches
as an unsupervised mid-level visual representation, suggesting that it could be
used in place of visual words for many tasks. Furthermore, discriminative
patches can also be used in a supervised regime, such as scene classification,
where they demonstrate state-of-the-art performance on the MIT Indoor-67
dataset.
| [
"Saurabh Singh, Abhinav Gupta, Alexei A. Efros",
"['Saurabh Singh' 'Abhinav Gupta' 'Alexei A. Efros']"
] |
cs.LG stat.ML | null | 1205.3181 | null | null | http://arxiv.org/pdf/1205.3181v1 | 2012-05-14T20:10:04Z | 2012-05-14T20:10:04Z | Multiple Identifications in Multi-Armed Bandits | We study the problem of identifying the top $m$ arms in a multi-armed bandit
game. Our proposed solution relies on a new algorithm based on successive
rejects of the seemingly bad arms, and successive accepts of the good ones.
This algorithmic contribution allows us to tackle other multiple identifications
settings that were previously out of reach. In particular we show that this
idea of successive accepts and rejects applies to the multi-bandit best arm
identification problem.
| [
"['Sébastien Bubeck' 'Tengyao Wang' 'Nitin Viswanathan']",
"S\\'ebastien Bubeck, Tengyao Wang, Nitin Viswanathan"
] |
cs.NE cs.CR cs.LG | 10.1016/j.eswa.2011.08.066 | 1205.3441 | null | null | http://arxiv.org/abs/1205.3441v1 | 2012-02-20T10:25:16Z | 2012-02-20T10:25:16Z | Genetic Programming for Multibiometrics | Biometric systems suffer from some drawbacks: a biometric system can provide
in general good performances except with some individuals as its performance
depends highly on the quality of the capture. One solution to solve some of
these problems is to use multibiometrics where different biometric systems are
combined together (multiple captures of the same biometric modality, multiple
feature extraction algorithms, multiple biometric modalities...). In this
paper, we are interested in the application of score-level fusion functions (i.e., we
use a multibiometric authentication scheme which accepts or denies the claimant
for using an application). In the state of the art, the weighted sum of scores
(which is a linear classifier) and the use of an SVM (which is a non-linear
classifier) provided by different biometric systems provide some of the best
performances. We present a new method based on the use of genetic programming
giving similar or better performances (depending on the complexity of the
database). We derive a score fusion function by assembling some classical
primitive functions (+, *, -, ...). We have validated the proposed method on
three significant biometric benchmark datasets from the state of the art.
| [
"['Romain Giot' 'Christophe Rosenberger']",
"Romain Giot (GREYC), Christophe Rosenberger (GREYC)"
] |
cs.LG | null | 1205.3549 | null | null | http://arxiv.org/pdf/1205.3549v2 | 2012-05-17T01:03:19Z | 2012-05-16T03:54:30Z | Normalized Maximum Likelihood Coding for Exponential Family with Its
Applications to Optimal Clustering | We are concerned with the issue of how to calculate the normalized maximum
likelihood (NML) code-length. There is a problem that the normalization term of
the NML code-length may diverge when it is continuous and unbounded, and that a
straightforward computation of it is highly expensive when the data domain is
finite. Previous works have investigated how to calculate the NML
code-length for specific types of distributions. We first propose a general
method for computing the NML code-length for the exponential family. Then we
specifically focus on the Gaussian mixture model (GMM) and propose a new,
efficient method for computing its NML code-length. We develop it by generalizing Rissanen's
re-normalizing technique. Then we apply this method to the clustering issue, in
which a clustering structure is modeled using a GMM, and the main task is to
estimate the optimal number of clusters on the basis of the NML code-length. We
demonstrate using artificial data sets the superiority of the NML-based
clustering over other criteria such as AIC, BIC in terms of the data size
required for high accuracy rate to be achieved.
| [
"So Hirai and Kenji Yamanishi",
"['So Hirai' 'Kenji Yamanishi']"
] |
cs.LG q-fin.PM | null | 1205.3767 | null | null | http://arxiv.org/pdf/1205.3767v3 | 2014-11-04T00:36:57Z | 2012-05-16T19:17:03Z | Universal Algorithm for Online Trading Based on the Method of
Calibration | We present a universal algorithm for online trading in the stock market which
performs asymptotically at least as well as any stationary trading strategy
that computes the investment at each step using a fixed function of the side
information that belongs to a given RKHS (Reproducing Kernel Hilbert Space).
Using a universal kernel, we extend this result for any continuous stationary
strategy. In this learning process, a trader rationally chooses his gambles
using predictions made by a randomized well-calibrated algorithm. Our strategy
is based on Dawid's notion of calibration with more general checking rules and
on some modification of Kakade and Foster's randomized rounding algorithm for
computing the well-calibrated forecasts. We combine the method of randomized
calibration with Vovk's method of defensive forecasting in RKHS. Unlike the
statistical theory, no stochastic assumptions are made about the stock prices.
Our empirical results on historical markets provide strong evidence that this
type of technical trading can "beat the market" if transaction costs are
ignored.
| [
"Vladimir V'yugin and Vladimir Trunov",
"[\"Vladimir V'yugin\" 'Vladimir Trunov']"
] |
cs.AI cs.LG cs.PL | 10.1016/j.artint.2014.08.003 | 1205.3981 | null | null | http://arxiv.org/abs/1205.3981v5 | 2014-07-28T13:41:00Z | 2012-05-17T17:00:00Z | kLog: A Language for Logical and Relational Learning with Kernels | We introduce kLog, a novel approach to statistical relational learning.
Unlike standard approaches, kLog does not represent a probability distribution
directly. It is rather a language to perform kernel-based learning on
expressive logical and relational representations. kLog allows users to specify
learning problems declaratively. It builds on simple but powerful concepts:
learning from interpretations, entity/relationship data modeling, logic
programming, and deductive databases. Access by the kernel to the rich
representation is mediated by a technique we call graphicalization: the
relational representation is first transformed into a graph --- in particular,
a grounded entity/relationship diagram. Subsequently, a choice of graph kernel
defines the feature space. kLog supports mixed numerical and symbolic data, as
well as background knowledge in the form of Prolog or Datalog programs as in
inductive logic programming systems. The kLog framework can be applied to
tackle the same range of tasks that has made statistical relational learning so
popular, including classification, regression, multitask learning, and
collective classification. We also report about empirical comparisons, showing
that kLog can be either more accurate, or much faster at the same level of
accuracy, than Tilde and Alchemy. kLog is GPLv3 licensed and is available at
http://klog.dinfo.unifi.it along with tutorials.
| [
"['Paolo Frasconi' 'Fabrizio Costa' 'Luc De Raedt' 'Kurt De Grave']",
"Paolo Frasconi, Fabrizio Costa, Luc De Raedt, Kurt De Grave"
] |
math.NA cs.LG | 10.1109/TSP.2013.2250968 | 1205.4133 | null | null | http://arxiv.org/abs/1205.4133v2 | 2013-02-20T15:07:18Z | 2012-05-18T10:54:39Z | Constrained Overcomplete Analysis Operator Learning for Cosparse Signal
Modelling | We consider the problem of learning a low-dimensional signal model from a
collection of training samples. The mainstream approach would be to learn an
overcomplete dictionary to provide good approximations of the training samples
using sparse synthesis coefficients. This famous sparse model has a less well
known counterpart, in analysis form, called the cosparse analysis model. In
this new model, signals are characterised by their parsimony in a transformed
domain using an overcomplete (linear) analysis operator. We propose to learn an
analysis operator from a training corpus using a constrained optimisation
framework based on L1 optimisation. The reason for introducing a constraint in
the optimisation framework is to exclude trivial solutions. Although there is
no definitive answer as to which constraint is the most relevant, we
investigate some conventional constraints in the model adaptation field and use
the uniformly normalised tight frame (UNTF) for this purpose. We then derive a
practical learning algorithm, based on projected subgradients and
Douglas-Rachford splitting technique, and demonstrate its ability to robustly
recover a ground truth analysis operator, when provided with a clean training
set, of sufficient size. We also find an analysis operator for images, using
some noisy cosparse signals, which is indeed a more realistic experiment. As
the derived optimisation problem is not a convex program, we often find a local
minimum using such variational methods. Some local optimality conditions are
derived for two different settings, providing preliminary theoretical support
for the well-posedness of the learning problem under appropriate conditions.
| [
"['Mehrdad Yaghoobi' 'Sangnam Nam' 'Remi Gribonval' 'Mike E. Davies']",
"Mehrdad Yaghoobi, Sangnam Nam, Remi Gribonval and Mike E. Davies"
] |
cs.LG math.ST stat.ML stat.TH | null | 1205.4159 | null | null | http://arxiv.org/pdf/1205.4159v2 | 2012-05-25T05:32:57Z | 2012-05-18T13:56:17Z | Theory of Dependent Hierarchical Normalized Random Measures | This paper presents theory for Normalized Random Measures (NRMs), Normalized
Generalized Gammas (NGGs), a particular kind of NRM, and Dependent Hierarchical
NRMs which allow networks of dependent NRMs to be analysed. These have been
used, for instance, for time-dependent topic modelling. In this paper, we first
introduce some mathematical background of completely random measures (CRMs) and
their construction from Poisson processes, and then introduce NRMs and NGGs.
Slice sampling is also introduced for posterior inference. The dependency
operators in Poisson processes and for the corresponding CRMs and NRMs are then
introduced, and posterior inference for the NGG is presented. Finally, we give
dependency and composition results when applying these operators to NRMs so
they can be used in a network with hierarchical and dependent relations.
| [
"Changyou Chen, Wray Buntine and Nan Ding",
"['Changyou Chen' 'Wray Buntine' 'Nan Ding']"
] |
cs.LG cs.AI cs.IR | null | 1205.4213 | null | null | http://arxiv.org/pdf/1205.4213v2 | 2012-06-27T16:25:02Z | 2012-05-18T18:19:13Z | Online Structured Prediction via Coactive Learning | We propose Coactive Learning as a model of interaction between a learning
system and a human user, where both have the common goal of providing results
of maximum utility to the user. At each step, the system (e.g. search engine)
receives a context (e.g. query) and predicts an object (e.g. ranking). The user
responds by correcting the system if necessary, providing a slightly improved
-- but not necessarily optimal -- object as feedback. We argue that such
feedback can often be inferred from observable user behavior, for example, from
clicks in web-search. Evaluating predictions by their cardinal utility to the
user, we propose efficient learning algorithms that have ${\cal
O}(\frac{1}{\sqrt{T}})$ average regret, even though the learning algorithm
never observes cardinal utility values as in conventional online learning. We
demonstrate the applicability of our model and learning algorithms on a movie
recommendation task, as well as ranking for web-search.
| [
"['Pannaga Shivaswamy' 'Thorsten Joachims']",
"Pannaga Shivaswamy and Thorsten Joachims"
] |
stat.ML cs.LG | null | 1205.4217 | null | null | http://arxiv.org/pdf/1205.4217v2 | 2012-07-19T13:59:13Z | 2012-05-18T19:00:51Z | Thompson Sampling: An Asymptotically Optimal Finite Time Analysis | The question of the optimality of Thompson Sampling for solving the
stochastic multi-armed bandit problem had been open since 1933. In this paper
we answer it positively for the case of Bernoulli rewards by providing the
first finite-time analysis that matches the asymptotic rate given in the Lai
and Robbins lower bound for the cumulative regret. The proof is accompanied by
a numerical comparison with other optimal policies, experiments that have been
lacking in the literature until now for the Bernoulli case.
| [
"Emilie Kaufmann, Nathaniel Korda and R\\'emi Munos",
"['Emilie Kaufmann' 'Nathaniel Korda' 'Rémi Munos']"
] |
cs.MA cs.LG | null | 1205.4220 | null | null | http://arxiv.org/pdf/1205.4220v2 | 2013-05-05T22:42:36Z | 2012-05-18T19:09:46Z | Diffusion Adaptation over Networks | Adaptive networks are well-suited to perform decentralized information
processing and optimization tasks and to model various types of self-organized
and complex behavior encountered in nature. Adaptive networks consist of a
collection of agents with processing and learning abilities. The agents are
linked together through a connection topology, and they cooperate with each
other through local interactions to solve distributed optimization, estimation,
and inference problems in real-time. The continuous diffusion of information
across the network enables agents to adapt their performance in relation to
streaming data and network conditions; it also results in improved adaptation
and learning performance relative to non-cooperative agents. This article
provides an overview of diffusion strategies for adaptation and learning over
networks. The article is divided into several sections: 1. Motivation; 2.
Mean-Square-Error Estimation; 3. Distributed Optimization via Diffusion
Strategies; 4. Adaptive Diffusion Strategies; 5. Performance of
Steepest-Descent Diffusion Strategies; 6. Performance of Adaptive Diffusion
Strategies; 7. Comparing the Performance of Cooperative Strategies; 8.
Selecting the Combination Weights; 9. Diffusion with Noisy Information
Exchanges; 10. Extensions and Further Considerations; Appendix A: Properties of
Kronecker Products; Appendix B: Graph Laplacian and Network Connectivity;
Appendix C: Stochastic Matrices; Appendix D: Block Maximum Norm; Appendix E:
Comparison with Consensus Strategies; References.
| [
"Ali H. Sayed",
"['Ali H. Sayed']"
] |
cs.LG | null | 1205.4234 | null | null | http://arxiv.org/pdf/1205.4234v2 | 2012-05-25T10:24:35Z | 2012-05-19T08:16:21Z | Visualization of features of a series of measurements with
one-dimensional cellular structure | This paper describes a method for visualizing periodic constituents and
instability areas in series of measurements, based on a smoothing algorithm
and the concept of one-dimensional cellular automata. The method can be
used in the analysis of time series related to the volumes of thematic
publications in web-space.
| [
"D. V. Lande",
"['D. V. Lande']"
] |
cs.LG cs.AI cs.IT cs.NE math.IT physics.data-an | null | 1205.4295 | null | null | http://arxiv.org/pdf/1205.4295v1 | 2012-05-19T04:25:04Z | 2012-05-19T04:25:04Z | Efficient Methods for Unsupervised Learning of Probabilistic Models | In this thesis I develop a variety of techniques to train, evaluate, and
sample from intractable and high dimensional probabilistic models. Abstract
exceeds arXiv space limitations -- see PDF.
| [
"['Jascha Sohl-Dickstein']",
"Jascha Sohl-Dickstein"
] |
cs.LG stat.ML | null | 1205.4343 | null | null | http://arxiv.org/pdf/1205.4343v2 | 2012-08-26T00:15:55Z | 2012-05-19T16:09:15Z | New Analysis and Algorithm for Learning with Drifting Distributions | We present a new analysis of the problem of learning with drifting
distributions in the batch setting using the notion of discrepancy. We prove
learning bounds based on the Rademacher complexity of the hypothesis set and
the discrepancy of distributions both for a drifting PAC scenario and a
tracking scenario. Our bounds are always tighter and in some cases
substantially improve upon previous ones based on the $L_1$ distance. We also
present a generalization of the standard on-line to batch conversion to the
drifting scenario in terms of the discrepancy and arbitrary convex combinations
of hypotheses. We introduce a new algorithm exploiting these learning
guarantees, which we show can be formulated as a simple QP. Finally, we report
the results of preliminary experiments demonstrating the benefits of this
algorithm.
| [
"Mehryar Mohri and Andres Munoz Medina",
"['Mehryar Mohri' 'Andres Munoz Medina']"
] |
cs.LG cs.DM | null | 1205.4349 | null | null | http://arxiv.org/pdf/1205.4349v1 | 2012-05-19T17:16:53Z | 2012-05-19T17:16:53Z | From Exact Learning to Computing Boolean Functions and Back Again | The goal of the paper is to relate complexity measures associated with the
evaluation of Boolean functions (certificate complexity, decision tree
complexity) and learning dimensions used to characterize exact learning
(teaching dimension, extended teaching dimension). The high level motivation is
to discover non-trivial relations between exact learning of an unknown concept
and testing whether an unknown concept is part of a concept class or not.
Concretely, the goal is to provide lower and upper bounds of complexity
measures for one problem type in terms of the other.
| [
"Sergiu Goschin",
"['Sergiu Goschin']"
] |
cs.IT cs.LG math.IT stat.ME stat.ML | null | 1205.4471 | null | null | http://arxiv.org/pdf/1205.4471v1 | 2012-05-20T23:56:17Z | 2012-05-20T23:56:17Z | Sparse Signal Recovery in the Presence of Intra-Vector and Inter-Vector
Correlation | This work discusses the problem of sparse signal recovery when there is
correlation among the values of non-zero entries. We examine intra-vector
correlation in the context of the block sparse model and inter-vector
correlation in the context of the multiple measurement vector model, as well as
their combination. Algorithms based on the sparse Bayesian learning are
presented and the benefits of incorporating correlation at the algorithm level
are discussed. The impact of correlation on the limits of support recovery is
also discussed, highlighting the different impacts that intra-vector and inter-vector
correlations have on such limits.
| [
"['Bhaskar D. Rao' 'Zhilin Zhang' 'Yuzhe Jin']",
"Bhaskar D. Rao, Zhilin Zhang, Yuzhe Jin"
] |
stat.ML cs.LG stat.AP | null | 1205.4476 | null | null | http://arxiv.org/pdf/1205.4476v3 | 2013-02-22T17:03:20Z | 2012-05-21T01:46:04Z | Soft Rule Ensembles for Statistical Learning | In this article supervised learning problems are solved using soft rule
ensembles. We first review the importance sampling learning ensembles (ISLE)
approach that is useful for generating hard rules. The soft rules are then
obtained with logistic regression from the corresponding hard rules. In order
to deal with the perfect separation problem related to the logistic regression,
Firth's bias corrected likelihood is used. Various examples and simulation
results show that soft rule ensembles can improve predictive performance over
hard rule ensembles.
| [
"['Deniz Akdemir' 'Nicolas Heslot']",
"Deniz Akdemir and Nicolas Heslot"
] |
cs.LG cs.DB | null | 1205.4477 | null | null | http://arxiv.org/pdf/1205.4477v1 | 2012-05-21T01:46:57Z | 2012-05-21T01:46:57Z | Streaming Algorithms for Pattern Discovery over Dynamically Changing
Event Sequences | Discovering frequent episodes over event sequences is an important data
mining task. In many applications, events constituting the data sequence arrive
as a stream, at furious rates, and recent trends (or frequent episodes) can
change and drift due to the dynamical nature of the underlying event generation
process. The ability to detect and track such changing sets of frequent
episodes can be valuable in many application scenarios. Current methods for
frequent episode discovery are typically multipass algorithms, making them
unsuitable in the streaming context. In this paper, we propose a new streaming
algorithm for discovering frequent episodes over a window of recent events in
the stream. Our algorithm processes events as they arrive, one batch at a time,
while discovering the top frequent episodes over a window consisting of several
batches in the immediate past. We derive approximation guarantees for our
algorithm under the condition that frequent episodes are approximately
well-separated from infrequent ones in every batch of the window. We present
extensive experimental evaluations of our algorithm on both real and synthetic
data. We also present comparisons with baselines and adaptations of streaming
algorithms from itemset mining literature.
| [
"Debprakash Patnaik and Naren Ramakrishnan and Srivatsan Laxman and\n Badrish Chandramouli",
"['Debprakash Patnaik' 'Naren Ramakrishnan' 'Srivatsan Laxman'\n 'Badrish Chandramouli']"
] |
cs.LG stat.CO stat.ML | null | 1205.4481 | null | null | http://arxiv.org/pdf/1205.4481v4 | 2012-10-01T16:55:06Z | 2012-05-21T03:29:17Z | Stochastic Smoothing for Nonsmooth Minimizations: Accelerating SGD by
Exploiting Structure | In this work we consider the stochastic minimization of nonsmooth convex loss
functions, a central problem in machine learning. We propose a novel algorithm
called Accelerated Nonsmooth Stochastic Gradient Descent (ANSGD), which
exploits the structure of common nonsmooth loss functions to achieve optimal
convergence rates for a class of problems including SVMs. It is the first
stochastic algorithm that can achieve the optimal O(1/t) rate for minimizing
nonsmooth loss functions (with strong convexity). The fast rates are confirmed
by empirical comparisons, in which ANSGD significantly outperforms previous
subgradient descent algorithms including SGD.
| [
"['Hua Ouyang' 'Alexander Gray']",
"Hua Ouyang, Alexander Gray"
] |
cs.LG stat.ML | null | 1205.4656 | null | null | http://arxiv.org/pdf/1205.4656v2 | 2012-07-24T13:15:47Z | 2012-05-21T16:43:02Z | Conditional mean embeddings as regressors - supplementary | We demonstrate an equivalence between reproducing kernel Hilbert space (RKHS)
embeddings of conditional distributions and vector-valued regressors. This
connection introduces a natural regularized loss function which the RKHS
embeddings minimise, providing an intuitive understanding of the embeddings and
a justification for their use. Furthermore, the equivalence allows the
application of vector-valued regression methods and results to the problem of
learning conditional distributions. Using this link we derive a sparse version
of the embedding by considering alternative formulations. Further, by applying
convergence results for vector-valued regression to the embedding problem we
derive minimax convergence rates which are O(\log(n)/n) -- compared to current
state of the art rates of O(n^{-1/4}) -- and are valid under milder and more
intuitive assumptions. These minimax upper rates coincide with lower rates up
to a logarithmic factor, showing that the embedding method achieves nearly
optimal rates. We study our sparse embedding algorithm in a reinforcement
learning task where the algorithm shows significant improvement in sparsity
over an incomplete Cholesky decomposition.
| [
"['Steffen Grünewälder' 'Guy Lever' 'Luca Baldassarre' 'Sam Patterson'\n 'Arthur Gretton' 'Massimilano Pontil']",
"Steffen Gr\\\"unew\\\"alder, Guy Lever, Luca Baldassarre, Sam Patterson,\n Arthur Gretton, Massimilano Pontil"
] |
cs.LG | null | 1205.4698 | null | null | http://arxiv.org/pdf/1205.4698v2 | 2013-02-07T19:10:14Z | 2012-05-21T19:19:49Z | The Role of Weight Shrinking in Large Margin Perceptron Learning | We introduce into the classical perceptron algorithm with margin a mechanism
that shrinks the current weight vector as a first step of the update. If the
shrinking factor is constant the resulting algorithm may be regarded as a
margin-error-driven version of NORMA with constant learning rate. In this case
we show that the allowed strength of shrinking depends on the value of the
maximum margin. We also consider variable shrinking factors for which there is
no such dependence. In both cases we obtain new generalizations of the
perceptron with margin able to provably attain in a finite number of steps any
desirable approximation of the maximal margin hyperplane. The new approximate
maximum margin classifiers appear experimentally to be very competitive in
2-norm soft margin tasks involving linear kernels.
| [
"['Constantinos Panagiotakopoulos' 'Petroula Tsampouka']",
"Constantinos Panagiotakopoulos and Petroula Tsampouka"
] |
cs.HC cs.LG | 10.1109/VAST.2011.6102474 | 1205.4776 | null | null | http://arxiv.org/abs/1205.4776v1 | 2012-05-22T00:10:45Z | 2012-05-22T00:10:45Z | Visual and semantic interpretability of projections of high dimensional
data for classification tasks | A number of visual quality measures have been introduced in visual analytics
literature in order to automatically select the best views of high dimensional
data from a large number of candidate data projections. These methods generally
concentrate on the interpretability of the visualization and pay little
attention to the interpretability of the projection axes. In this paper, we
argue that interpretability of the visualizations and the feature
transformation functions are both crucial for visual exploration of high
dimensional labeled data. We present a two-part user study to examine these two
related but orthogonal aspects of interpretability. We first study how humans
judge the quality of 2D scatterplots of various datasets with varying number of
classes and provide comparisons with ten automated measures, including a number
of visual quality measures and related measures from various machine learning
fields. We then investigate how the user perception on interpretability of
mathematical expressions relate to various automated measures of complexity
that can be used to characterize data projection functions. We conclude with a
discussion of how automated measures of visual and semantic interpretability of
data projections can be used together for exploratory analysis in
classification tasks.
| [
"Ilknur Icke and Andrew Rosenberg",
"['Ilknur Icke' 'Andrew Rosenberg']"
] |
cs.LG | null | 1205.4810 | null | null | http://arxiv.org/pdf/1205.4810v3 | 2012-07-06T20:56:23Z | 2012-05-22T06:02:09Z | Safe Exploration in Markov Decision Processes | In environments with uncertain dynamics, exploration is necessary to learn how
to perform well. Existing reinforcement learning algorithms provide strong
exploration guarantees, but they tend to rely on an ergodicity assumption. The
essence of ergodicity is that any state is eventually reachable from any other
state by following a suitable policy. This assumption allows for exploration
algorithms that operate by simply favoring states that have rarely been visited
before. For most physical systems this assumption is impractical as the systems
would break before any reasonable exploration has taken place, i.e., most
physical systems don't satisfy the ergodicity assumption. In this paper we
address the need for safe exploration methods in Markov decision processes. We
first propose a general formulation of safety through ergodicity. We show that
imposing safety by restricting attention to the resulting set of guaranteed
safe policies is NP-hard. We then present an efficient algorithm for guaranteed
safe, but potentially suboptimal, exploration. At the core is an optimization
formulation in which the constraints restrict attention to a subset of the
guaranteed safe policies and the objective favors exploration policies. Our
framework is compatible with the majority of previously proposed exploration
methods, which rely on an exploration bonus. Our experiments, which include a
Martian terrain exploration problem, show that our method is able to explore
better than classical exploration methods.
| [
"Teodor Mihai Moldovan, Pieter Abbeel",
"['Teodor Mihai Moldovan' 'Pieter Abbeel']"
] |
cs.LG | null | 1205.4839 | null | null | http://arxiv.org/pdf/1205.4839v5 | 2013-06-20T10:53:42Z | 2012-05-22T08:36:41Z | Off-Policy Actor-Critic | This paper presents the first actor-critic algorithm for off-policy
reinforcement learning. Our algorithm is online and incremental, and its
per-time-step complexity scales linearly with the number of learned weights.
Previous work on actor-critic algorithms is limited to the on-policy setting
and does not take advantage of the recent advances in off-policy gradient
temporal-difference learning. Off-policy techniques, such as Greedy-GQ, enable
a target policy to be learned while following and obtaining data from another
(behavior) policy. For many problems, however, actor-critic methods are more
practical than action value methods (like Greedy-GQ) because they explicitly
represent the policy; consequently, the policy can be stochastic and utilize a
large action space. In this paper, we illustrate how to practically combine the
generality and learning potential of off-policy learning with the flexibility
in action selection given by actor-critic methods. We derive an incremental,
linear time and space complexity algorithm that includes eligibility traces,
prove convergence under assumptions similar to previous off-policy algorithms,
and empirically show better or comparable performance to existing algorithms on
standard reinforcement-learning benchmark problems.
| [
"Thomas Degris, Martha White, Richard S. Sutton",
"['Thomas Degris' 'Martha White' 'Richard S. Sutton']"
] |
cs.LG cs.DS | null | 1205.4891 | null | null | http://arxiv.org/pdf/1205.4891v1 | 2012-05-22T12:25:01Z | 2012-05-22T12:25:01Z | Clustering is difficult only when it does not matter | Numerous papers ask how difficult it is to cluster data. We suggest that the
more relevant and interesting question is how difficult it is to cluster data
sets {\em that can be clustered well}. More generally, despite the ubiquity and
the great importance of clustering, we still do not have a satisfactory
mathematical theory of clustering. In order to properly understand clustering,
it is clearly necessary to develop a solid theoretical basis for the area. For
example, from the perspective of computational complexity theory the clustering
problem seems very hard. Numerous papers introduce various criteria and
numerical measures to quantify the quality of a given clustering. The resulting
conclusions are pessimistic, since it is computationally difficult to find an
optimal clustering of a given data set, if we go by any of these popular
criteria. In contrast, the practitioners' perspective is much more optimistic.
Our explanation for this disparity of opinions is that complexity theory
concentrates on the worst case, whereas in reality we only care for data sets
that can be clustered well.
We introduce a theoretical framework of clustering in metric spaces that
revolves around a notion of "good clustering". We show that if a good
clustering exists, then in many cases it can be efficiently found. Our
conclusion is that contrary to popular belief, clustering should not be
considered a hard task.
| [
"['Amit Daniely' 'Nati Linial' 'Michael Saks']",
"Amit Daniely and Nati Linial and Michael Saks"
] |
cs.CC cs.LG | null | 1205.4893 | null | null | http://arxiv.org/pdf/1205.4893v1 | 2012-05-22T12:30:27Z | 2012-05-22T12:30:27Z | On the practically interesting instances of MAXCUT | The complexity of a computational problem is traditionally quantified based
on the hardness of its worst case. This approach has many advantages and has
led to a deep and beautiful theory. However, from the practical perspective,
this leaves much to be desired. In application areas, practically interesting
instances very often occupy just a tiny part of an algorithm's space of
instances, and the vast majority of instances are simply irrelevant. Addressing
these issues is a major challenge for theoretical computer science which may
make theory more relevant to the practice of computer science.
Following Bilu and Linial, we apply this perspective to MAXCUT, viewed as a
clustering problem. Using a variety of techniques, we investigate practically
interesting instances of this problem. Specifically, we show how to solve in
polynomial time distinguished, metric, expanding and dense instances of MAXCUT
under mild stability assumptions. In particular, $(1+\epsilon)$-stability
(which is optimal) suffices for metric and dense MAXCUT. We also show how to
solve in polynomial time $\Omega(\sqrt{n})$-stable instances of MAXCUT,
substantially improving the best previously known result.
| [
"Yonatan Bilu and Amit Daniely and Nati Linial and Michael Saks",
"['Yonatan Bilu' 'Amit Daniely' 'Nati Linial' 'Michael Saks']"
] |
stat.ML cs.CV cs.LG math.OC | null | 1205.5012 | null | null | http://arxiv.org/pdf/1205.5012v3 | 2013-07-03T23:06:52Z | 2012-05-22T19:20:07Z | Learning Mixed Graphical Models | We consider the problem of learning the structure of a pairwise graphical
model over continuous and discrete variables. We present a new pairwise model
for graphical models with both continuous and discrete variables that is
amenable to structure learning. In previous work, authors have considered
structure learning of Gaussian graphical models and structure learning of
discrete models. Our approach is a natural generalization of these two lines of
work to the mixed case. The penalization scheme involves a novel symmetric use
of the group-lasso norm and follows naturally from a particular parametrization
of the model.
| [
"['Jason D. Lee' 'Trevor J. Hastie']",
"Jason D. Lee and Trevor J. Hastie"
] |
cs.LG stat.ML | null | 1205.5075 | null | null | http://arxiv.org/pdf/1205.5075v2 | 2013-01-18T21:06:49Z | 2012-05-23T00:02:01Z | Efficient Sparse Group Feature Selection via Nonconvex Optimization | Sparse feature selection has been demonstrated to be effective in handling
high-dimensional data. While promising, most of the existing works use convex
methods, which may be suboptimal in terms of the accuracy of feature selection
and parameter estimation. In this paper, we expand a nonconvex paradigm to
sparse group feature selection, which is motivated by applications that require
identifying the underlying group structure and performing feature selection
simultaneously. The main contributions of this article are twofold: (1)
statistically, we introduce a nonconvex sparse group feature selection model
which can reconstruct the oracle estimator. Therefore, consistent feature
selection and parameter estimation can be achieved; (2) computationally, we
propose an efficient algorithm that is applicable to large-scale problems.
Numerical results suggest that the proposed nonconvex method compares favorably
against its competitors on synthetic data and real-world applications, thus
achieving the desired goal of delivering high performance.
| [
"['Shuo Xiang' 'Xiaotong Shen' 'Jieping Ye']",
"Shuo Xiang, Xiaotong Shen, Jieping Ye"
] |
cs.DB cs.LG | null | 1205.5353 | null | null | http://arxiv.org/pdf/1205.5353v1 | 2012-05-24T07:37:28Z | 2012-05-24T07:37:28Z | A hybrid clustering algorithm for data mining | Data clustering is a process of arranging similar data into groups. A
clustering algorithm partitions a data set into several groups such that the
similarity within a group is better than among groups. In this paper, a hybrid
clustering algorithm based on K-means and K-harmonic means (KHM) is described.
The proposed algorithm is tested on five different datasets. The research is
focused on fast and accurate clustering. Its performance is compared with the
traditional K-means & KHM algorithms. The result obtained from the proposed hybrid
algorithm is much better than that of the traditional K-means & KHM algorithms.
| [
"Ravindra Jain",
"['Ravindra Jain']"
] |
cs.AI cs.LG | null | 1205.5367 | null | null | http://arxiv.org/pdf/1205.5367v1 | 2012-05-24T08:43:14Z | 2012-05-24T08:43:14Z | Language-Constraint Reachability Learning in Probabilistic Graphs | The probabilistic graphs framework models the uncertainty inherent in
real-world domains by means of probabilistic edges whose value quantifies the
likelihood of the edge existence or the strength of the link it represents. The
goal of this paper is to provide a learning method to compute the most likely
relationship between two nodes in a framework based on probabilistic graphs. In
particular, given a probabilistic graph we adopted the language-constraint
reachability method to compute the probability of possible interconnections
that may exist between two nodes. Each of these connections may be viewed as a
feature, or a factor, between the two nodes and the corresponding probability
as its weight. Each observed link is considered as a positive instance for its
corresponding link label. Given the training set of observed links, an
L2-regularized Logistic Regression has been adopted to learn a model able to
predict unobserved link labels. The experiments on a real world collaborative
filtering problem proved that the proposed approach achieves better results
than those obtained by adopting classical methods.
| [
"['Claudio Taranto' 'Nicola Di Mauro' 'Floriana Esposito']",
"Claudio Taranto, Nicola Di Mauro, Floriana Esposito"
] |
stat.ML cs.LG | null | 1205.5819 | null | null | http://arxiv.org/pdf/1205.5819v2 | 2012-07-17T04:35:11Z | 2012-05-25T20:38:55Z | Measurability Aspects of the Compactness Theorem for Sample Compression
Schemes | It was proved in 1998 by Ben-David and Litman that a concept space has a
sample compression scheme of size d if and only if every finite subspace has a
sample compression scheme of size d. In the compactness theorem, measurability
of the hypotheses of the created sample compression scheme is not guaranteed;
at the same time measurability of the hypotheses is a necessary condition for
learnability. In this thesis we discuss when a sample compression scheme,
created from compression schemes on finite subspaces via the compactness
theorem, has measurable hypotheses. We show that if X is a standard Borel
space with a d-maximum and universally separable concept class C, then (X,C)
has a sample compression scheme of size d with universally Borel measurable
hypotheses. Additionally we introduce a new variant of compression scheme
called a copy sample compression scheme.
| [
"Damjan Kalajdzievski",
"['Damjan Kalajdzievski']"
] |
stat.ML cs.LG q-bio.GN | null | 1205.6031 | null | null | http://arxiv.org/pdf/1205.6031v2 | 2012-06-25T04:45:37Z | 2012-05-28T05:47:52Z | Towards a Mathematical Foundation of Immunology and Amino Acid Chains | We attempt to set a mathematical foundation of immunology and amino acid
chains. To measure the similarities of these chains, a kernel on strings is
defined using only the sequence of the chains and a good amino acid
substitution matrix (e.g. BLOSUM62). The kernel is used in learning machines to
predict binding affinities of peptides to human leukocyte antigens DR (HLA-DR)
molecules. On both fixed allele (Nielsen and Lund 2009) and pan-allele (Nielsen
et al. 2010) benchmark databases, our algorithm achieves the state-of-the-art
performance. The kernel is also used to define a distance on an HLA-DR allele
set based on which a clustering analysis precisely recovers the serotype
classifications assigned by WHO (Nielsen and Lund 2009, and Marsh et al. 2010).
These results suggest that our kernel relates well the chain structure of both
peptides and HLA-DR molecules to their biological functions, and that it offers
a simple, powerful and promising methodology to immunology and amino acid chain
studies.
| [
"['Wen-Jun Shen' 'Hau-San Wong' 'Quan-Wu Xiao' 'Xin Guo' 'Stephen Smale']",
"Wen-Jun Shen, Hau-San Wong, Quan-Wu Xiao, Xin Guo, Stephen Smale"
] |
stat.ML cs.LG | 10.1109/LSP.2012.2223757 | 1205.6210 | null | null | http://arxiv.org/abs/1205.6210v2 | 2012-10-17T09:20:15Z | 2012-05-28T20:06:45Z | Learning Dictionaries with Bounded Self-Coherence | Sparse coding in learned dictionaries has been established as a successful
approach for signal denoising, source separation and solving inverse problems
in general. A dictionary learning method adapts an initial dictionary to a
particular signal class by iteratively computing an approximate factorization
of a training data matrix into a dictionary and a sparse coding matrix. The
learned dictionary is characterized by two properties: the coherence of the
dictionary to observations of the signal class, and the self-coherence of the
dictionary atoms. A high coherence to the signal class enables the sparse
coding of signal observations with a small approximation error, while a low
self-coherence of the atoms guarantees atom recovery and a more rapid residual
error decay rate for the sparse coding algorithm. The two goals of high signal
coherence and low self-coherence are typically in conflict, therefore one seeks
a trade-off between them, depending on the application. We present a dictionary
learning method with an effective control over the self-coherence of the
trained dictionary, enabling a trade-off between maximizing the sparsity of
codings and approximating an equiangular tight frame.
| [
"Christian D. Sigg and Tomas Dikk and Joachim M. Buhmann",
"['Christian D. Sigg' 'Tomas Dikk' 'Joachim M. Buhmann']"
] |
stat.ML cs.LG stat.CO | null | 1205.6326 | null | null | http://arxiv.org/pdf/1205.6326v2 | 2012-11-05T17:39:32Z | 2012-05-29T10:59:30Z | A Framework for Evaluating Approximation Methods for Gaussian Process
Regression | Gaussian process (GP) predictors are an important component of many Bayesian
approaches to machine learning. However, even a straightforward implementation
of Gaussian process regression (GPR) requires O(n^2) space and O(n^3) time for
a dataset of n examples. Several approximation methods have been proposed, but
there is a lack of understanding of the relative merits of the different
approximations, and in what situations they are most useful. We recommend
assessing the quality of the predictions obtained as a function of the compute
time taken, and comparing to standard baselines (e.g., Subset of Data and
FITC). We empirically investigate four different approximation algorithms on
four different prediction problems, and make our code available to encourage
future comparisons.
| [
"Krzysztof Chalupka, Christopher K. I. Williams and Iain Murray",
"['Krzysztof Chalupka' 'Christopher K. I. Williams' 'Iain Murray']"
] |
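
For readers who want a concrete example of the kind of baseline recommended above, the following sketch implements Subset of Data with scikit-learn's exact GP: fit on a random subset of m points, trading prediction quality for O(m^3) rather than O(n^3) time. The synthetic data and the subset size m = 500 are illustrative assumptions, unrelated to the paper's experiments.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n = 5000
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(n)

# Subset of Data baseline: exact GP regression on m << n randomly chosen points.
m = 500
idx = rng.choice(n, size=m, replace=False)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(0.01),
                              normalize_y=True)
gp.fit(X[idx], y[idx])

X_test = np.linspace(-3, 3, 200).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
print(mean[:5], std[:5])
```
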
cs.LG | null | 1205.6432 | null | null | http://arxiv.org/pdf/1205.6432v2 | 2012-06-01T14:12:58Z | 2012-05-29T17:40:04Z | Multiclass Learning Approaches: A Theoretical Comparison with
Implications | We theoretically analyze and compare the following five popular multiclass
classification methods: One vs. All, All Pairs, Tree-based classifiers, Error
Correcting Output Codes (ECOC) with randomly generated code matrices, and
Multiclass SVM. In the first four methods, the classification is based on a
reduction to binary classification. We consider the case where the binary
classifier comes from a class of VC dimension $d$, and in particular from the
class of halfspaces over $\reals^d$. We analyze both the estimation error and
the approximation error of these methods. Our analysis reveals interesting
conclusions of practical relevance, regarding the success of the different
approaches under various conditions. Our proof technique employs tools from VC
theory to analyze the \emph{approximation error} of hypothesis classes. This is
in sharp contrast to most, if not all, previous uses of VC theory, which only
deal with estimation error.
| [
"['Amit Daniely' 'Sivan Sabato' 'Shai Shalev Shwartz']",
"Amit Daniely and Sivan Sabato and Shai Shalev Shwartz"
] |
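
A small sketch of the first two reductions analyzed above (One vs. All and All Pairs) with linear, halfspace-style base classifiers, using scikit-learn; the data set and solver settings are illustrative assumptions, not part of the paper.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier
from sklearn.svm import LinearSVC

X, y = load_digits(return_X_y=True)   # 10-class problem, linear base learners

ova = OneVsRestClassifier(LinearSVC(dual=False))   # One vs. All reduction
ovo = OneVsOneClassifier(LinearSVC(dual=False))    # All Pairs reduction

for name, clf in [("One vs. All", ova), ("All Pairs", ovo)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```
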
stat.ML cs.LG q-bio.QM | null | 1205.6523 | null | null | http://arxiv.org/pdf/1205.6523v1 | 2012-05-30T01:23:01Z | 2012-05-30T01:23:01Z | Finding Important Genes from High-Dimensional Data: An Appraisal of
Statistical Tests and Machine-Learning Approaches | Over the past decades, statisticians and machine-learning researchers have
developed literally thousands of new tools for the reduction of
high-dimensional data in order to identify the variables most responsible for a
particular trait. These tools have applications in a plethora of settings,
including data analysis in the fields of business, education, forensics, and
biology (such as microarray, proteomics, brain imaging), to name a few.
In the present work, we focus our investigation on the limitations and
potential misuses of certain tools in the analysis of the benchmark colon
cancer data (2,000 variables; Alon et al., 1999) and the prostate cancer data
(6,033 variables; Efron, 2010, 2008). Our analysis demonstrates that models
that produce 100% accuracy measures often select different sets of genes and
cannot stand the scrutiny of parameter estimates and model stability.
Furthermore, we created a host of simulation datasets and "artificial
diseases" to evaluate the reliability of commonly used statistical and data
mining tools. We found that certain widely used models can classify the data
with 100% accuracy without using any of the variables responsible for the
disease. With moderate sample size and suitable pre-screening, stochastic
gradient boosting will be shown to be a superior model for gene selection and
variable screening from high-dimensional datasets.
| [
"Chamont Wang, Jana Gevertz, Chaur-Chin Chen, Leonardo Auslender",
"['Chamont Wang' 'Jana Gevertz' 'Chaur-Chin Chen' 'Leonardo Auslender']"
] |
cs.CV cs.LG | null | 1205.6544 | null | null | http://arxiv.org/pdf/1205.6544v1 | 2012-05-30T05:07:55Z | 2012-05-30T05:07:55Z | A Brief Summary of Dictionary Learning Based Approach for Classification
(revised) | This note presents some representative methods which are based on dictionary
learning (DL) for classification. We do not review the sophisticated methods or
frameworks that involve DL for classification, such as online DL and spatial
pyramid matching (SPM), but rather, we concentrate on the direct DL-based
classification methods. Here, a "direct DL-based method" is an approach that
deals directly with the DL framework by adding meaningful penalty terms. By
listing some representative methods, we can roughly divide them into two
categories, i.e. (1) directly making the dictionary discriminative and (2)
forcing the sparse coefficients to be discriminative so as to boost the
discrimination power of the dictionary. From this taxonomy, we can anticipate
extensions of these methods as future research.
| [
"Shu Kong, Donghui Wang",
"['Shu Kong' 'Donghui Wang']"
] |
cs.IT cs.LG math.IT | null | 1205.6849 | null | null | http://arxiv.org/pdf/1205.6849v1 | 2012-05-30T22:24:50Z | 2012-05-30T22:24:50Z | Beyond $\ell_1$-norm minimization for sparse signal recovery | Sparse signal recovery has been dominated by the basis pursuit denoise (BPDN)
problem formulation for over a decade. In this paper, we propose an algorithm
that outperforms BPDN in finding sparse solutions to underdetermined linear
systems of equations at no additional computational cost. Our algorithm, called
WSPGL1, is a modification of the spectral projected gradient for $\ell_1$
minimization (SPGL1) algorithm in which the sequence of LASSO subproblems are
replaced by a sequence of weighted LASSO subproblems with constant weights
applied to a support estimate. The support estimate is derived from the data
and is updated at every iteration. The algorithm also modifies the Pareto curve
at every iteration to reflect the new weighted $\ell_1$ minimization problem
that is being solved. We demonstrate through extensive simulations that the
sparse recovery performance of our algorithm is superior to that of $\ell_1$
minimization and approaches the recovery performance of iterative re-weighted
$\ell_1$ (IRWL1) minimization of Cand{\`e}s, Wakin, and Boyd, although it does
not match it in general. Moreover, our algorithm has the computational cost of
a single BPDN problem.
| [
"Hassan Mansour",
"['Hassan Mansour']"
] |
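
The abstract compares against iterative re-weighted l1 (IRWL1); the sketch below illustrates that generic reweighting idea, solving each weighted LASSO by rescaling columns, rather than the WSPGL1 algorithm itself. The problem sizes, regularization strength, and the reweighting constant are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, k = 50, 200, 8
A = rng.standard_normal((n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

w = np.ones(m)                       # all-ones weights recover plain l1 minimization
x = np.zeros(m)
for it in range(5):
    # Weighted LASSO min ||A x - b||^2 + lam * sum_i w_i |x_i| via the substitution
    # z_i = w_i x_i, i.e. an ordinary LASSO with columns rescaled by 1 / w_i.
    lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000)
    lasso.fit(A / w, b)
    x = lasso.coef_ / w
    w = 1.0 / (np.abs(x) + 1e-3)     # small coefficients get penalized more next round

print("recovered support:", np.flatnonzero(np.abs(x) > 1e-3))
print("true support:     ", np.flatnonzero(x_true))
```
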
math.ST cs.LG stat.TH | 10.3150/13-BEJ582 | 1206.0068 | null | null | http://arxiv.org/abs/1206.0068v3 | 2015-04-15T05:10:35Z | 2012-06-01T02:26:58Z | Posterior contraction of the population polytope in finite admixture
models | We study the posterior contraction behavior of the latent population
structure that arises in admixture models as the amount of data increases. We
adopt the geometric view of admixture models - alternatively known as topic
models - as a data generating mechanism for points randomly sampled from the
interior of a (convex) population polytope, whose extreme points correspond to
the population structure variables of interest. Rates of posterior contraction
are established with respect to Hausdorff metric and a minimum matching
Euclidean metric defined on polytopes. Tools developed include posterior
asymptotics of hierarchical models and arguments from convex geometry.
| [
"['XuanLong Nguyen']",
"XuanLong Nguyen"
] |
cs.LG stat.ML | null | 1206.0333 | null | null | http://arxiv.org/pdf/1206.0333v1 | 2012-06-02T00:48:27Z | 2012-06-02T00:48:27Z | Sparse Trace Norm Regularization | We study the problem of estimating multiple predictive functions from a
dictionary of basis functions in the nonparametric regression setting. Our
estimation scheme assumes that each predictive function can be estimated in the
form of a linear combination of the basis functions. By assuming that the
coefficient matrix admits a sparse low-rank structure, we formulate the
function estimation problem as a convex program regularized by the trace norm
and the $\ell_1$-norm simultaneously. We propose to solve the convex program
using the accelerated gradient (AG) method and the alternating direction method
of multipliers (ADMM) respectively; we also develop efficient algorithms to
solve the key components in both AG and ADMM. In addition, we conduct
theoretical analysis on the proposed function estimation scheme: we derive a
key property of the optimal solution to the convex program; based on an
assumption on the basis functions, we establish a performance bound of the
proposed function estimation scheme (via the composite regularization).
Simulation studies demonstrate the effectiveness and efficiency of the proposed
algorithms.
| [
"Jianhui Chen and Jieping Ye",
"['Jianhui Chen' 'Jieping Ye']"
] |
cs.IR cs.LG | null | 1206.0335 | null | null | http://arxiv.org/pdf/1206.0335v1 | 2012-06-02T01:37:22Z | 2012-06-02T01:37:22Z | A Route Confidence Evaluation Method for Reliable Hierarchical Text
Categorization | Hierarchical Text Categorization (HTC) is becoming increasingly important
with the rapidly growing amount of text data available in the World Wide Web.
Among the different strategies proposed to cope with HTC, the Local Classifier
per Node (LCN) approach attains good performance by mirroring the underlying
class hierarchy while enforcing a top-down strategy in the testing step.
However, the problem of embedding hierarchical information (parent-child
relationship) to improve the performance of HTC systems still remains open. A
confidence evaluation method for a selected route in the hierarchy is proposed
to evaluate the reliability of the final candidate labels in an HTC system. In
order to take into account the information embedded in the hierarchy, weight
factors are used to reflect the importance of each level. An
acceptance/rejection strategy in the top-down decision making process is
proposed, which improves the overall categorization accuracy by rejecting a
small percentage of samples, i.e., those with low reliability scores. Experimental
results on the Reuters benchmark dataset (RCV1-v2) confirm the effectiveness
of the proposed method compared to other state-of-the-art HTC methods.
| [
"Nima Hatami, Camelia Chira and Giuliano Armano",
"['Nima Hatami' 'Camelia Chira' 'Giuliano Armano']"
] |
cs.CV cs.LG stat.CO | null | 1206.0338 | null | null | http://arxiv.org/pdf/1206.0338v4 | 2014-04-28T13:56:09Z | 2012-06-02T02:44:05Z | Poisson noise reduction with non-local PCA | Photon-limited imaging arises when the number of photons collected by a
sensor array is small relative to the number of detector elements. Photon
limitations are an important concern for many applications such as spectral
imaging, night vision, nuclear medicine, and astronomy. Typically a Poisson
distribution is used to model these observations, and the inherent
heteroscedasticity of the data combined with standard noise removal methods
yields significant artifacts. This paper introduces a novel denoising algorithm
for photon-limited images which combines elements of dictionary learning and
sparse patch-based representations of images. The method employs both an
adaptation of Principal Component Analysis (PCA) for Poisson noise and recently
developed sparsity-regularized convex optimization algorithms for
photon-limited images. A comprehensive empirical evaluation of the proposed
method helps characterize the performance of this approach relative to other
state-of-the-art denoising methods. The results reveal that, despite its
conceptual simplicity, Poisson PCA-based denoising appears to be highly
competitive in very low light regimes.
| [
"Joseph Salmon and Zachary Harmany and Charles-Alban Deledalle and\n Rebecca Willett",
"['Joseph Salmon' 'Zachary Harmany' 'Charles-Alban Deledalle'\n 'Rebecca Willett']"
] |
cs.SI cs.IT cs.LG math.IT | 10.1109/JSTSP.2013.2245859 | 1206.0652 | null | null | http://arxiv.org/abs/1206.0652v4 | 2012-11-21T21:31:48Z | 2012-05-30T18:19:56Z | Learning in Hierarchical Social Networks | We study a social network consisting of agents organized as a hierarchical
M-ary rooted tree, common in enterprise and military organizational structures.
The goal is to aggregate information to solve a binary hypothesis testing
problem. Each agent at a leaf of the tree, and only such an agent, makes a
direct measurement of the underlying true hypothesis. The leaf agent then makes
a decision and sends it to its supervising agent, at the next level of the
tree. Each supervising agent aggregates the decisions from the M members of its
group, produces a summary message, and sends it to its supervisor at the next
level, and so on. Ultimately, the agent at the root of the tree makes an
overall decision. We derive upper and lower bounds for the Type I and II error
probabilities associated with this decision with respect to the number of leaf
agents, which in turn characterize the convergence rates of the Type I, Type II,
and total error probabilities. We also provide a message-passing scheme
involving non-binary message alphabets and characterize the exponent of the
error probability with respect to the message alphabet size.
| [
"Zhenliang Zhang, Edwin K. P. Chong, Ali Pezeshki, William Moran, and\n Stephen D. Howard",
"['Zhenliang Zhang' 'Edwin K. P. Chong' 'Ali Pezeshki' 'William Moran'\n 'Stephen D. Howard']"
] |
math.GT cs.LG stat.ML | null | 1206.0771 | null | null | http://arxiv.org/pdf/1206.0771v1 | 2012-06-04T21:22:26Z | 2012-06-04T21:22:26Z | Topological graph clustering with thin position | A clustering algorithm partitions a set of data points into smaller sets
(clusters) such that each subset is more tightly packed than the whole. Many
approaches to clustering translate the vector data into a graph with edges
reflecting a distance or similarity metric on the points, then look for highly
connected subgraphs. We introduce such an algorithm based on ideas borrowed
from the topological notion of thin position for knots and 3-dimensional
manifolds.
| [
"['Jesse Johnson']",
"Jesse Johnson"
] |
cs.AI cs.LG | null | 1206.0855 | null | null | http://arxiv.org/pdf/1206.0855v1 | 2012-06-05T09:35:44Z | 2012-06-05T09:35:44Z | A Mixed Observability Markov Decision Process Model for Musical Pitch | Partially observable Markov decision processes have been widely used to
provide models for real-world decision making problems. In this paper, we will
provide a method in which a slightly different version of them, the mixed
observability Markov decision process (MOMDP), is applied to our problem.
Basically, we aim at offering a behavioural model for the interaction of
intelligent agents with a musical pitch environment, and we show how a MOMDP
can shed some light on conveniently building a decision-making model for
musical pitch.
| [
"Pouyan Rafiei Fard, Keyvan Yahya",
"['Pouyan Rafiei Fard' 'Keyvan Yahya']"
] |
cs.CC cs.DS cs.LG | null | 1206.0985 | null | null | http://arxiv.org/pdf/1206.0985v1 | 2012-06-05T16:39:29Z | 2012-06-05T16:39:29Z | Nearly optimal solutions for the Chow Parameters Problem and low-weight
approximation of halfspaces | The \emph{Chow parameters} of a Boolean function $f: \{-1,1\}^n \to \{-1,1\}$
are its $n+1$ degree-0 and degree-1 Fourier coefficients. It has been known
since 1961 (Chow, Tannenbaum) that the (exact values of the) Chow parameters of
any linear threshold function $f$ uniquely specify $f$ within the space of all
Boolean functions, but until recently (O'Donnell and Servedio) nothing was
known about efficient algorithms for \emph{reconstructing} $f$ (exactly or
approximately) from exact or approximate values of its Chow parameters. We
refer to this reconstruction problem as the \emph{Chow Parameters Problem.}
Our main result is a new algorithm for the Chow Parameters Problem which,
given (sufficiently accurate approximations to) the Chow parameters of any
linear threshold function $f$, runs in time $\tilde{O}(n^2)\cdot
(1/\eps)^{O(\log^2(1/\eps))}$ and with high probability outputs a
representation of an LTF $f'$ that is $\eps$-close to $f$. The only previous
algorithm (O'Donnell and Servedio) had running time $\poly(n) \cdot
2^{2^{\tilde{O}(1/\eps^2)}}.$
As a byproduct of our approach, we show that for any linear threshold
function $f$ over $\{-1,1\}^n$, there is a linear threshold function $f'$ which
is $\eps$-close to $f$ and has all weights that are integers at most $\sqrt{n}
\cdot (1/\eps)^{O(\log^2(1/\eps))}$. This significantly improves the best
previous result of Diakonikolas and Servedio which gave a $\poly(n) \cdot
2^{\tilde{O}(1/\eps^{2/3})}$ weight bound, and is close to the known lower
bound of $\max\{\sqrt{n},$ $(1/\eps)^{\Omega(\log \log (1/\eps))}\}$ (Goldberg,
Servedio). Our techniques also yield improved algorithms for related problems
in learning theory.
| [
"Anindya De, Ilias Diakonikolas, Vitaly Feldman, Rocco A. Servedio",
"['Anindya De' 'Ilias Diakonikolas' 'Vitaly Feldman' 'Rocco A. Servedio']"
] |
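
For concreteness, a brute-force sketch of the Chow parameters themselves: the n+1 degree-0 and degree-1 Fourier coefficients of a small linear threshold function. The weights below are arbitrary illustrative values, and the enumeration is of course exponential in n, unlike the reconstruction algorithm discussed above.

```python
import itertools
import numpy as np

def chow_parameters(w, theta, n):
    """Degree-0 and degree-1 Fourier coefficients of f(x) = sign(w.x - theta)
    under the uniform distribution on {-1,1}^n."""
    chow = np.zeros(n + 1)
    for point in itertools.product([-1, 1], repeat=n):
        x = np.array(point)
        fx = 1 if np.dot(w, x) - theta >= 0 else -1
        chow[0] += fx          # hat f(empty set) = E[f(x)]
        chow[1:] += fx * x     # hat f({i})       = E[f(x) x_i]
    return chow / 2 ** n

w = np.array([3.0, 2.0, 2.0, 1.0, 1.0])   # hypothetical LTF weights
print(chow_parameters(w, theta=0.5, n=5))
```
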
cs.LG | null | 1206.0994 | null | null | http://arxiv.org/pdf/1206.0994v1 | 2012-04-20T01:58:40Z | 2012-04-20T01:58:40Z | An Optimization Framework for Semi-Supervised and Transfer Learning
using Multiple Classifiers and Clusterers | Unsupervised models can provide supplementary soft constraints to help
classify new, "target" data since similar instances in the target set are more
likely to share the same class label. Such models can also help detect possible
differences between training and target distributions, which is useful in
applications where concept drift may take place, as in transfer learning
settings. This paper describes a general optimization framework that takes as
input class membership estimates from existing classifiers learnt on previously
encountered "source" data, as well as a similarity matrix from a cluster
ensemble operating solely on the target data to be classified, and yields a
consensus labeling of the target data. This framework admits a wide range of
loss functions and classification/clustering methods. It exploits properties of
Bregman divergences in conjunction with Legendre duality to yield a principled
and scalable approach. A variety of experiments show that the proposed
framework can yield results substantially superior to those provided by popular
transductive learning techniques or by naively applying classifiers learnt on
the original task to the target data.
| [
"Ayan Acharya, Eduardo R. Hruschka, Joydeep Ghosh, Sreangsu Acharyya",
"['Ayan Acharya' 'Eduardo R. Hruschka' 'Joydeep Ghosh' 'Sreangsu Acharyya']"
] |
cs.IR cs.LG | 10.5121/ijaia.2012.3205 | 1206.1011 | null | null | http://arxiv.org/abs/1206.1011v1 | 2012-04-06T20:50:59Z | 2012-04-06T20:50:59Z | A Machine Learning Approach For Opinion Holder Extraction In Arabic
Language | Opinion mining aims at extracting useful subjective information from reliable
amounts of text. Opinion holder recognition is a task that has not yet been
considered for the Arabic language. This task essentially requires a deep
understanding of clause structures. Unfortunately, the lack of a robust,
publicly available, Arabic parser further complicates the research. This paper
presents a leading study of opinion holder extraction in Arabic news,
independent from any lexical parsers. We investigate constructing a
comprehensive feature set to compensate the lack of parsing structural
outcomes. The proposed feature set is tuned from English previous works coupled
with our proposed semantic field and named entities features. Our feature
analysis is based on Conditional Random Fields (CRF) and semi-supervised
pattern recognition techniques. Different research models are evaluated via
cross-validation experiments achieving 54.03 F-measure. We publicly release our
own research outcome corpus and lexicon for opinion mining community to
encourage further research.
| [
"Mohamed Elarnaoty, Samir AbdelRahman, and Aly Fahmy",
"['Mohamed Elarnaoty' 'Samir AbdelRahman' 'Aly Fahmy']"
] |
stat.ML cs.LG | null | 1206.1088 | null | null | http://arxiv.org/pdf/1206.1088v2 | 2012-06-23T01:15:47Z | 2012-06-05T23:20:39Z | Bayesian Structure Learning for Markov Random Fields with a Spike and
Slab Prior | In recent years a number of methods have been developed for automatically
learning the (sparse) connectivity structure of Markov Random Fields. These
methods are mostly based on L1-regularized optimization which has a number of
disadvantages such as the inability to assess model uncertainty and expensive
cross-validation to find the optimal regularization parameter. Moreover, the
model's predictive performance may degrade dramatically with a suboptimal value
of the regularization parameter (which is sometimes desirable to induce
sparseness). We propose a fully Bayesian approach based on a "spike and slab"
prior (similar to L0 regularization) that does not suffer from these
shortcomings. We develop an approximate MCMC method combining Langevin dynamics
and reversible jump MCMC to conduct inference in this model. Experiments show
that the proposed model learns a good combination of the structure and
parameter values without the need for separate hyper-parameter tuning.
Moreover, the model's predictive performance is much more robust than L1-based
methods with hyper-parameter settings that induce highly sparse model
structures.
| [
"Yutian Chen, Max Welling",
"['Yutian Chen' 'Max Welling']"
] |
stat.ML cs.LG | null | 1206.1106 | null | null | http://arxiv.org/pdf/1206.1106v2 | 2013-02-18T16:09:50Z | 2012-06-06T02:06:57Z | No More Pesky Learning Rates | The performance of stochastic gradient descent (SGD) depends critically on
how learning rates are tuned and decreased over time. We propose a method to
automatically adjust multiple learning rates so as to minimize the expected
error at any one time. The method relies on local gradient variations across
samples. In our approach, learning rates can increase as well as decrease,
making it suitable for non-stationary problems. Using a number of convex and
non-convex learning tasks, we show that the resulting algorithm matches the
performance of SGD or other adaptive approaches with their best settings
obtained through systematic search, and effectively removes the need for
learning rate tuning.
| [
"Tom Schaul, Sixin Zhang and Yann LeCun",
"['Tom Schaul' 'Sixin Zhang' 'Yann LeCun']"
] |
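
A toy sketch in the spirit of the idea above: per-parameter rates driven by local gradient statistics, growing when successive gradients agree and shrinking when they are mostly noise. It plugs in the known curvature of a separable quadratic where the actual method estimates curvature online, so it illustrates the rate formula only, not the full algorithm; all constants are illustrative assumptions.

```python
import numpy as np

# Toy separable quadratic f(theta) = 0.5 * sum(h * theta**2) with noisy gradients.
rng = np.random.default_rng(0)
h = np.array([1.0, 10.0, 100.0])   # per-coordinate curvature (known here for the toy problem)
theta = np.ones(3)

g_bar = np.zeros(3)                # running mean of the gradient
v_bar = np.ones(3)                 # running mean of the squared gradient
tau = 10.0                         # memory of the running averages

for t in range(500):
    g = h * theta + 0.1 * rng.standard_normal(3)       # stochastic gradient
    g_bar = (1 - 1 / tau) * g_bar + (1 / tau) * g
    v_bar = (1 - 1 / tau) * v_bar + (1 / tau) * g ** 2
    eta = g_bar ** 2 / (h * v_bar + 1e-12)              # large when gradients agree,
    theta -= eta * g                                    # small when they are mostly noise

print("final parameters (annealed toward zero):", theta)
```
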
cs.LG | null | 1206.1121 | null | null | http://arxiv.org/pdf/1206.1121v2 | 2012-09-01T07:40:47Z | 2012-06-06T04:56:47Z | Comparison of the C4.5 and a Naive Bayes Classifier for the Prediction
of Lung Cancer Survivability | Numerous data mining techniques have been developed to extract information
and identify patterns and predict trends from large data sets. In this study,
two classification techniques, the J48 implementation of the C4.5 algorithm and
a Naive Bayes classifier are applied to predict lung cancer survivability from
an extensive data set with fifteen years of patient records. The purpose of the
project is to verify the predictive effectiveness of the two techniques on
real, historical data. Besides the performance outcome that renders J48
marginally better than the Naive Bayes technique, there is a detailed
description of the data and the required pre-processing activities. The
performance results confirm expectations while some of the issues that appeared
during experimentation underscore the value of having domain-specific
understanding to leverage any domain-specific characteristics inherent in the
data.
| [
"George Dimitoglou, James A. Adams, Carol M. Jim",
"['George Dimitoglou' 'James A. Adams' 'Carol M. Jim']"
] |
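
A minimal scikit-learn sketch of the same kind of comparison; note that the tree here is CART rather than the J48/C4.5 implementation used in the study, and the bundled data set merely stands in for the survivability records, so the numbers are not comparable to the paper's.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Stand-in data set; the study itself uses fifteen years of lung cancer records.
X, y = load_breast_cancer(return_X_y=True)

models = {
    "decision tree (CART, stand-in for J48/C4.5)": DecisionTreeClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```
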
cs.LG cs.IR | null | 1206.1147 | null | null | http://arxiv.org/pdf/1206.1147v2 | 2012-06-08T14:07:26Z | 2012-06-06T08:34:43Z | Memory-Efficient Topic Modeling | As one of the simplest probabilistic topic modeling techniques, latent
Dirichlet allocation (LDA) has found many important applications in text
mining, computer vision and computational biology. Recent training algorithms
for LDA can be interpreted within a unified message passing framework. However,
message passing requires storing previous messages with a large amount of
memory space, increasing linearly with the number of documents or the number of
topics. Therefore, the high memory usage is often a major problem for topic
modeling of massive corpora containing a large number of topics. To reduce the
space complexity, we propose a novel algorithm without storing previous
messages for training LDA: tiny belief propagation (TBP). The basic idea of TBP
relates the message passing algorithms with the non-negative matrix
factorization (NMF) algorithms, absorbing the message updating into the
message passing process and thus avoiding the storage of previous messages. Experimental
results on four large data sets confirm that TBP performs comparably well or
even better than current state-of-the-art training algorithms for LDA but with
a much less memory consumption. TBP can do topic modeling when massive corpora
cannot fit in the computer memory, for example, extracting thematic topics from
7 GB PUBMED corpora on a common desktop computer with 2GB memory.
| [
"Jia Zeng, Zhi-Qiang Liu and Xiao-Qin Cao",
"['Jia Zeng' 'Zhi-Qiang Liu' 'Xiao-Qin Cao']"
] |
cs.LG | null | 1206.1208 | null | null | http://arxiv.org/pdf/1206.1208v2 | 2012-06-29T18:56:20Z | 2012-06-06T13:03:31Z | Cumulative Step-size Adaptation on Linear Functions: Technical Report | The CSA-ES is an Evolution Strategy with Cumulative Step size Adaptation,
where the step size is adapted by measuring the length of a so-called cumulative
path. The cumulative path is a combination of the previous steps realized by
the algorithm, where the importance of each step decreases with time. This
article studies the CSA-ES on composites of strictly increasing functions with
affine linear functions through the investigation of its underlying Markov chains.
Rigorous results on the change and the variation of the step size are derived
with and without cumulation. The step-size diverges geometrically fast in most
cases. Furthermore, the influence of the cumulation parameter is studied.
| [
"Alexandre Adrien Chotard (LRI, INRIA Saclay - Ile de France), Anne\n Auger (INRIA Saclay - Ile de France), Nikolaus Hansen (LRI, INRIA Saclay -\n Ile de France, MSR - INRIA)",
"['Alexandre Adrien Chotard' 'Anne Auger' 'Nikolaus Hansen']"
] |
math.OC cs.LG stat.ML | null | 1206.1270 | null | null | http://arxiv.org/pdf/1206.1270v2 | 2013-02-02T23:40:56Z | 2012-06-06T16:42:27Z | Factoring nonnegative matrices with linear programs | This paper describes a new approach, based on linear programming, for
computing nonnegative matrix factorizations (NMFs). The key idea is a
data-driven model for the factorization where the most salient features in the
data are used to express the remaining features. More precisely, given a data
matrix X, the algorithm identifies a matrix C such that X approximately equals
CX and C satisfies some linear constraints. The constraints are chosen to ensure that the
matrix C selects features; these features can then be used to find a low-rank
NMF of X. A theoretical analysis demonstrates that this approach has guarantees
similar to those of the recent NMF algorithm of Arora et al. (2012). In
contrast with this earlier work, the proposed method extends to more general
noise models and leads to efficient, scalable algorithms. Experiments with
synthetic and real datasets provide evidence that the new approach is also
superior in practice. An optimized C++ implementation can factor a
multigigabyte matrix in a matter of minutes.
| [
"['Victor Bittorf' 'Benjamin Recht' 'Christopher Re' 'Joel A. Tropp']",
"Victor Bittorf and Benjamin Recht and Christopher Re and Joel A. Tropp"
] |
stat.ML cs.LG | null | 1206.1402 | null | null | http://arxiv.org/pdf/1206.1402v1 | 2012-06-07T05:14:22Z | 2012-06-07T05:14:22Z | A New Greedy Algorithm for Multiple Sparse Regression | This paper proposes a new algorithm for multiple sparse regression in high
dimensions, where the task is to estimate the support and values of several
(typically related) sparse vectors from a few noisy linear measurements. Our
algorithm is a "forward-backward" greedy procedure that -- uniquely -- operates
on two distinct classes of objects. In particular, we organize our target
sparse vectors as a matrix; our algorithm involves iterative addition and
removal of both (a) individual elements, and (b) entire rows (corresponding to
shared features), of the matrix.
Analytically, we establish that our algorithm manages to recover the supports
(exactly) and values (approximately) of the sparse vectors, under assumptions
similar to existing approaches based on convex optimization. However, our
algorithm has a much smaller computational complexity. Perhaps most
interestingly, it is seen empirically to require visibly fewer samples. Ours
represents the first attempt to extend greedy algorithms to the class of models
that can only/best be represented by a combination of component structural
assumptions (sparse and group-sparse, in our case).
| [
"Ali Jalali and Sujay Sanghavi",
"['Ali Jalali' 'Sujay Sanghavi']"
] |
cs.LG stat.ML | null | 1206.1529 | null | null | http://arxiv.org/pdf/1206.1529v5 | 2013-04-10T08:39:10Z | 2012-06-07T15:33:12Z | Sparse projections onto the simplex | Most learning methods with rank or sparsity constraints use convex
relaxations, which lead to optimization with the nuclear norm or the
$\ell_1$-norm. However, several important learning applications cannot benefit
from this approach as they feature these convex norms as constraints in
addition to the non-convex rank and sparsity constraints. In this setting, we
derive efficient sparse projections onto the simplex and its extension, and
illustrate how to use them to solve high-dimensional learning problems in
quantum tomography, sparse density estimation and portfolio selection with
non-convex constraints.
| [
"Anastasios Kyrillidis, Stephen Becker, Volkan Cevher and, Christoph\n Koch",
"['Anastasios Kyrillidis' 'Stephen Becker' 'Volkan Cevher and'\n 'Christoph Koch']"
] |
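
A small NumPy sketch of the flavor of operator discussed above: a Euclidean projection onto the probability simplex combined with a hard sparsity constraint by keeping the k largest entries. This is an illustrative construction, not necessarily the exact operator or the extensions derived in the paper.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex {x >= 0, sum x = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def sparse_simplex_projection(v, k):
    """Keep the k largest entries of v, project them onto the simplex, zero the rest."""
    x = np.zeros_like(v)
    idx = np.argsort(v)[-k:]
    x[idx] = project_simplex(v[idx])
    return x

v = np.array([0.4, -0.2, 1.3, 0.05, 0.9, -0.7])
print(sparse_simplex_projection(v, k=2))   # nonzero only on the two largest coordinates
```
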
stat.ML cs.DS cs.LG cs.NA math.OC | null | 1206.1623 | null | null | http://arxiv.org/pdf/1206.1623v13 | 2014-03-17T22:08:25Z | 2012-06-07T21:31:23Z | Proximal Newton-type methods for minimizing composite functions | We generalize Newton-type methods for minimizing smooth functions to handle a
sum of two convex functions: a smooth function and a nonsmooth function with a
simple proximal mapping. We show that the resulting proximal Newton-type
methods inherit the desirable convergence behavior of Newton-type methods for
minimizing smooth functions, even when search directions are computed
inexactly. Many popular methods tailored to problems arising in bioinformatics,
signal processing, and statistical learning are special cases of proximal
Newton-type methods, and our analysis yields new convergence results for some
of these methods.
| [
"Jason D. Lee, Yuekai Sun, Michael A. Saunders"
] |
null | null | 1206.1623v | null | null | http://arxiv.org/pdf/1206.1623v13 | 2014-03-17T22:08:25Z | 2012-06-07T21:31:23Z | Proximal Newton-type methods for minimizing composite functions | We generalize Newton-type methods for minimizing smooth functions to handle a sum of two convex functions: a smooth function and a nonsmooth function with a simple proximal mapping. We show that the resulting proximal Newton-type methods inherit the desirable convergence behavior of Newton-type methods for minimizing smooth functions, even when search directions are computed inexactly. Many popular methods tailored to problems arising in bioinformatics, signal processing, and statistical learning are special cases of proximal Newton-type methods, and our analysis yields new convergence results for some of these methods. | [
"['Jason D. Lee' 'Yuekai Sun' 'Michael A. Saunders']"
] |
cs.CV cs.IT cs.LG math.IT | null | 1206.2058 | null | null | http://arxiv.org/pdf/1206.2058v1 | 2012-06-10T21:22:50Z | 2012-06-10T21:22:50Z | Dimension Reduction by Mutual Information Discriminant Analysis | In the past few decades, researchers have proposed many discriminant analysis
(DA) algorithms for the study of high-dimensional data in a variety of
problems. Most DA algorithms for feature extraction are based on
transformations that simultaneously maximize the between-class scatter and
minimize the within-class scatter matrices. This paper presents a novel DA
algorithm for feature extraction using mutual information (MI). However, it is
not always easy to obtain an accurate estimation for high-dimensional MI. In
this paper, we propose an efficient method for feature extraction that is based
on one-dimensional MI estimations. We will refer to this algorithm as mutual
information discriminant analysis (MIDA). The performance of this proposed
method was evaluated using UCI databases. The results indicate that MIDA
provides robust performance over different data sets with different
characteristics and that MIDA always performs better than, or at least
comparable to, the best performing algorithms.
| [
"Ali Shadvar",
"['Ali Shadvar']"
] |
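
The key ingredient above is one-dimensional MI estimation between individual features and the class label; the sketch below ranks features that way with scikit-learn on a bundled data set (an illustrative stand-in for the UCI databases), without implementing the MIDA transformation itself.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.feature_selection import mutual_info_classif

X, y = load_wine(return_X_y=True)

# One-dimensional MI estimates between each feature and the class label,
# used here only to rank features.
mi = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(mi)[::-1]
print("features ranked by mutual information with the label:", ranking)
print("top-5 MI values:", mi[ranking[:5]])
```
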
cs.LG | null | 1206.2190 | null | null | http://arxiv.org/pdf/1206.2190v1 | 2012-06-11T13:00:51Z | 2012-06-11T13:00:51Z | Communication-Efficient Parallel Belief Propagation for Latent Dirichlet
Allocation | This paper presents a novel communication-efficient parallel belief
propagation (CE-PBP) algorithm for training latent Dirichlet allocation (LDA).
Based on the synchronous belief propagation (BP) algorithm, we first develop a
parallel belief propagation (PBP) algorithm on the parallel architecture.
Because the extensive communication delay often causes a low efficiency of
parallel topic modeling, we further use Zipf's law to reduce the total
communication cost in PBP. Extensive experiments on different data sets
demonstrate that CE-PBP achieves a higher topic modeling accuracy and reduces
the communication cost by more than 80% compared to the state-of-the-art
parallel Gibbs sampling (PGS) algorithm.
| [
"['Jian-feng Yan' 'Zhi-Qiang Liu' 'Yang Gao' 'Jia Zeng']",
"Jian-feng Yan, Zhi-Qiang Liu, Yang Gao, Jia Zeng"
] |
cs.LG stat.ML | null | 1206.2248 | null | null | http://arxiv.org/pdf/1206.2248v6 | 2016-02-03T21:13:20Z | 2012-06-11T15:14:59Z | Fast Cross-Validation via Sequential Testing | With the increasing size of today's data sets, finding the right parameter
configuration in model selection via cross-validation can be an extremely
time-consuming task. In this paper we propose an improved cross-validation
procedure which uses nonparametric testing coupled with sequential analysis to
determine the best parameter set on linearly increasing subsets of the data. By
eliminating underperforming candidates quickly and keeping promising candidates
as long as possible, the method speeds up the computation while preserving the
capability of the full cross-validation. Theoretical considerations underline
the statistical power of our procedure. The experimental evaluation shows that
our method reduces the computation time by a factor of up to 120 compared to a
full cross-validation with a negligible impact on the accuracy.
| [
"['Tammo Krueger' 'Danny Panknin' 'Mikio Braun']",
"Tammo Krueger, Danny Panknin, Mikio Braun"
] |
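
A simplified sketch of the elimination idea: evaluate all candidate configurations on linearly growing subsets of the data and drop those that fall clearly behind the current best. The fixed-margin rule and all constants here are illustrative assumptions; the paper's procedure uses a proper nonparametric sequential test instead.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)

alive = {C: True for C in [0.01, 0.1, 1.0, 10.0, 100.0]}   # candidate configurations
subset_sizes = [800, 1600, 2400, 3200, 4000]                # linearly increasing subsets
margin = 0.02                                               # heuristic elimination threshold

for n in subset_sizes:
    idx = rng.choice(len(y), size=n, replace=False)
    scores = {C: cross_val_score(SVC(C=C), X[idx], y[idx], cv=3).mean()
              for C, keep in alive.items() if keep}
    best = max(scores.values())
    for C, s in scores.items():
        if s < best - margin:          # drop clearly underperforming candidates early
            alive[C] = False
    if sum(alive.values()) == 1:
        break

print("surviving C values:", [C for C, keep in alive.items() if keep])
```
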
math.OC cs.LG | null | 1206.2372 | null | null | http://arxiv.org/pdf/1206.2372v2 | 2012-11-18T20:33:10Z | 2012-06-11T20:22:43Z | PRISMA: PRoximal Iterative SMoothing Algorithm | Motivated by learning problems including max-norm regularized matrix
completion and clustering, robust PCA and sparse inverse covariance selection,
we propose a novel optimization algorithm for minimizing a convex objective
which decomposes into three parts: a smooth part, a simple non-smooth Lipschitz
part, and a simple non-smooth non-Lipschitz part. We use a time variant
smoothing strategy that allows us to obtain a guarantee that does not depend on
knowing in advance the total number of iterations nor a bound on the domain.
| [
"Francesco Orabona and Andreas Argyriou and Nathan Srebro",
"['Francesco Orabona' 'Andreas Argyriou' 'Nathan Srebro']"
] |
cs.LG cs.DS cs.FL | null | 1206.2691 | null | null | http://arxiv.org/pdf/1206.2691v1 | 2012-06-13T00:27:36Z | 2012-06-13T00:27:36Z | IDS: An Incremental Learning Algorithm for Finite Automata | We present a new algorithm IDS for incremental learning of deterministic
finite automata (DFA). This algorithm is based on the concept of distinguishing
sequences introduced in (Angluin81). We give a rigorous proof that two versions
of this learning algorithm correctly learn in the limit. Finally we present an
empirical performance analysis that compares these two algorithms, focussing on
learning times and different types of learning queries. We conclude that IDS is
an efficient algorithm for software engineering applications of automata
learning, such as testing and model inference.
| [
"Muddassar A. Sindhu, Karl Meinke",
"['Muddassar A. Sindhu' 'Karl Meinke']"
] |
stat.ML cs.LG | null | 1206.2944 | null | null | http://arxiv.org/pdf/1206.2944v2 | 2012-08-29T06:36:23Z | 2012-06-13T21:23:15Z | Practical Bayesian Optimization of Machine Learning Algorithms | Machine learning algorithms frequently require careful tuning of model
hyperparameters, regularization terms, and optimization parameters.
Unfortunately, this tuning is often a "black art" that requires expert
experience, unwritten rules of thumb, or sometimes brute-force search. Much
more appealing is the idea of developing automatic approaches which can
optimize the performance of a given learning algorithm to the task at hand. In
this work, we consider the automatic tuning problem within the framework of
Bayesian optimization, in which a learning algorithm's generalization
performance is modeled as a sample from a Gaussian process (GP). The tractable
posterior distribution induced by the GP leads to efficient use of the
information gathered by previous experiments, enabling optimal choices about
what parameters to try next. Here we show how the effects of the Gaussian
process prior and the associated inference procedure can have a large impact on
the success or failure of Bayesian optimization. We show that thoughtful
choices can lead to results that exceed expert-level performance in tuning
machine learning algorithms. We also describe new algorithms that take into
account the variable cost (duration) of learning experiments and that can
leverage the presence of multiple cores for parallel experimentation. We show
that these proposed algorithms improve on previous automatic procedures and can
reach or surpass human expert-level optimization on a diverse set of
contemporary algorithms including latent Dirichlet allocation, structured SVMs
and convolutional neural networks.
| [
"Jasper Snoek, Hugo Larochelle and Ryan P. Adams",
"['Jasper Snoek' 'Hugo Larochelle' 'Ryan P. Adams']"
] |
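
A bare-bones sketch of the GP-based loop described above, using expected improvement on a one-dimensional toy objective (a stand-in for a learning algorithm's validation error). The Matérn 5/2 kernel echoes the paper's recommendation, but everything else here, including the grid-based acquisition maximization, is a simplifying assumption.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                      # stand-in for a validation-error surface
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
grid = np.linspace(-3, 3, 400).reshape(-1, 1)
X = rng.uniform(-3, 3, size=(3, 1))    # a few initial evaluations
y = objective(X[:, 0])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for it in range(15):
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement (minimization)
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

print("best point found:", X[np.argmin(y)], "value:", y.min())
```
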
cs.LG stat.ML | null | 1206.3072 | null | null | http://arxiv.org/pdf/1206.3072v1 | 2012-06-14T11:05:55Z | 2012-06-14T11:05:55Z | Statistical Consistency of Finite-dimensional Unregularized Linear
Classification | This manuscript studies statistical properties of linear classifiers obtained
through minimization of an unregularized convex risk over a finite sample.
Although the results are explicitly finite-dimensional, inputs may be passed
through feature maps; in this way, in addition to treating the consistency of
logistic regression, this analysis also handles boosting over a finite weak
learning class with, for instance, the exponential, logistic, and hinge losses.
In this finite-dimensional setting, it is still possible to fit arbitrary
decision boundaries: scaling the complexity of the weak learning class with the
sample size leads to the optimal classification risk almost surely.
| [
"['Matus Telgarsky']",
"Matus Telgarsky"
] |
cs.LG cs.DC | 10.1109/TSP.2012.2232663 | 1206.3099 | null | null | http://arxiv.org/abs/1206.3099v2 | 2012-11-12T23:33:32Z | 2012-06-14T13:10:35Z | Sparse Distributed Learning Based on Diffusion Adaptation | This article proposes diffusion LMS strategies for distributed estimation
over adaptive networks that are able to exploit sparsity in the underlying
system model. The approach relies on convex regularization, common in
compressive sensing, to enhance the detection of sparsity via a diffusive
process over the network. The resulting algorithms endow networks with learning
abilities and allow them to learn the sparse structure from the incoming data
in real-time, and also to track variations in the sparsity of the model. We
provide convergence and mean-square performance analysis of the proposed method
and show under what conditions it outperforms the unregularized diffusion
version. We also show how to adaptively select the regularization parameter.
Simulation results illustrate the advantage of the proposed filters for sparse
data recovery.
| [
"Paolo Di Lorenzo and Ali H. Sayed",
"['Paolo Di Lorenzo' 'Ali H. Sayed']"
] |
stat.ML cs.LG | null | 1206.3137 | null | null | http://arxiv.org/pdf/1206.3137v1 | 2012-06-14T15:21:24Z | 2012-06-14T15:21:24Z | Identifiability and Unmixing of Latent Parse Trees | This paper explores unsupervised learning of parsing models along two
directions. First, which models are identifiable from infinite data? We use a
general technique for numerically checking identifiability based on the rank of
a Jacobian matrix, and apply it to several standard constituency and dependency
parsing models. Second, for identifiable models, how do we estimate the
parameters efficiently? EM suffers from local optima, while recent work using
spectral methods cannot be directly applied since the topology of the parse
tree varies across sentences. We develop a strategy, unmixing, which deals with
this additional complexity for restricted classes of parsing models.
| [
"Daniel Hsu and Sham M. Kakade and Percy Liang",
"['Daniel Hsu' 'Sham M. Kakade' 'Percy Liang']"
] |
cs.LG cs.DS | null | 1206.3204 | null | null | http://arxiv.org/pdf/1206.3204v2 | 2012-06-15T18:11:27Z | 2012-06-14T18:23:46Z | Improved Spectral-Norm Bounds for Clustering | Aiming to unify known results about clustering mixtures of distributions
under separation conditions, Kumar and Kannan [2010] introduced a deterministic
condition for clustering datasets. They showed that this single deterministic
condition encompasses many previously studied clustering assumptions. More
specifically, their proximity condition requires that in the target
$k$-clustering, the projection of a point $x$ onto the line joining its cluster
center $\mu$ and some other center $\mu'$, is a large additive factor closer to
$\mu$ than to $\mu'$. This additive factor can be roughly described as $k$
times the spectral norm of the matrix representing the differences between the
given (known) dataset and the means of the (unknown) target clustering.
Clearly, the proximity condition implies center separation -- the distance
between any two centers must be as large as the above mentioned bound.
In this paper we improve upon the work of Kumar and Kannan along several
axes. First, we weaken the center separation bound by a factor of $\sqrt{k}$,
and secondly we weaken the proximity condition by a factor of $k$. Using these
weaker bounds we still achieve the same guarantees when all points satisfy the
proximity condition. We also achieve better guarantees when only
$(1-\epsilon)$-fraction of the points satisfy the weaker proximity condition.
The bulk of our analysis relies only on center separation under which one can
produce a clustering which (i) has low error, (ii) has low $k$-means cost, and
(iii) has centers very close to the target centers.
Our improved separation condition allows us to match the results of the
Planted Partition Model of McSherry [2001], improve upon the results of
Ostrovsky et al. [2006], and improve separation results for mixtures of Gaussian
models in a particular setting.
| [
"['Pranjal Awasthi' 'Or Sheffet']",
"Pranjal Awasthi, Or Sheffet"
] |
cs.LG stat.ML | null | 1206.3231 | null | null | http://arxiv.org/pdf/1206.3231v1 | 2012-06-13T12:32:13Z | 2012-06-13T12:32:13Z | CORL: A Continuous-state Offset-dynamics Reinforcement Learner | Continuous state spaces and stochastic, switching dynamics characterize a
number of rich, real-world domains, such as robot navigation across varying
terrain. We describe a reinforcement-learning algorithm for learning in these
domains and prove for certain environments the algorithm is probably
approximately correct with a sample complexity that scales polynomially with
the state-space dimension. Unfortunately, no optimal planning techniques exist
in general for such problems; instead we use fitted value iteration to solve
the learned MDP, and include the error due to approximate planning in our
bounds. Finally, we report an experiment using a robotic car driving over
varying terrain to demonstrate that these dynamics representations adequately
capture real-world dynamics and that our algorithm can be used to efficiently
solve such problems.
| [
"['Emma Brunskill' 'Bethany Leffler' 'Lihong Li' 'Michael L. Littman'\n 'Nicholas Roy']",
"Emma Brunskill, Bethany Leffler, Lihong Li, Michael L. Littman,\n Nicholas Roy"
] |
cs.LG cs.DS stat.ML | null | 1206.3236 | null | null | http://arxiv.org/pdf/1206.3236v1 | 2012-06-13T14:17:24Z | 2012-06-13T14:17:24Z | Learning Inclusion-Optimal Chordal Graphs | Chordal graphs can be used to encode dependency models that are representable
by both directed acyclic and undirected graphs. This paper discusses a very
simple and efficient algorithm to learn the chordal structure of a
probabilistic model from data. The algorithm is a greedy hill-climbing search
algorithm that uses the inclusion boundary neighborhood over chordal graphs. In
the limit of a large sample size and under appropriate hypotheses on the
scoring criterion, we prove that the algorithm will find a structure that is
inclusion-optimal when the dependency model of the data-generating distribution
can be represented exactly by an undirected graph. The algorithm is evaluated
on simulated datasets.
| [
"['Vincent Auvray' 'Louis Wehenkel']",
"Vincent Auvray, Louis Wehenkel"
] |
cs.DM cs.LG stat.ML | null | 1206.3237 | null | null | http://arxiv.org/pdf/1206.3237v1 | 2012-06-13T14:17:43Z | 2012-06-13T14:17:43Z | Clique Matrices for Statistical Graph Decomposition and Parameterising
Restricted Positive Definite Matrices | We introduce Clique Matrices as an alternative representation of undirected
graphs, being a generalisation of the incidence matrix representation. Here we
use clique matrices to decompose a graph into a set of possibly overlapping
clusters, defined as well-connected subsets of vertices. The decomposition is
based on a statistical description which encourages clusters to be well
connected and few in number. Inference is carried out using a variational
approximation. Clique matrices also play a natural role in parameterising
positive definite matrices under zero constraints on elements of the matrix. We
show that clique matrices can parameterise all positive definite matrices
restricted according to a decomposable graph and form a structured Factor
Analysis approximation in the non-decomposable case.
| [
"['David Barber']",
"David Barber"
] |
cs.LG stat.ML | null | 1206.3238 | null | null | http://arxiv.org/pdf/1206.3238v1 | 2012-06-13T14:18:22Z | 2012-06-13T14:18:22Z | Greedy Block Coordinate Descent for Large Scale Gaussian Process
Regression | We propose a variable decomposition algorithm -greedy block coordinate
descent (GBCD)- in order to make dense Gaussian process regression practical
for large scale problems. GBCD breaks a large scale optimization into a series
of small sub-problems. The challenge in variable decomposition algorithms is
the identification of a subproblem (the active set of variables) that yields
the largest improvement. We analyze the limitations of existing methods and
cast the active set selection into a zero-norm constrained optimization problem
that we solve using greedy methods. By directly estimating the decrease in the
objective function, we obtain not only efficient approximate solutions for
GBCD, but we are also able to demonstrate that the method is globally
convergent. Empirical comparisons against competing dense methods like
Conjugate Gradient or SMO show that GBCD is an order of magnitude faster.
Comparisons against sparse GP methods show that GBCD is both accurate and
capable of handling datasets of 100,000 samples or more.
| [
"Liefeng Bo, Cristian Sminchisescu",
"['Liefeng Bo' 'Cristian Sminchisescu']"
] |
cs.LG stat.ML | null | 1206.3241 | null | null | http://arxiv.org/pdf/1206.3241v1 | 2012-06-13T15:04:13Z | 2012-06-13T15:04:13Z | Approximating the Partition Function by Deleting and then Correcting for
Model Edges | We propose an approach for approximating the partition function which is
based on two steps: (1) computing the partition function of a simplified model
which is obtained by deleting model edges, and (2) rectifying the result by
applying an edge-by-edge correction. The approach leads to an intuitive
framework in which one can trade-off the quality of an approximation with the
complexity of computing it. It also includes the Bethe free energy
approximation as a degenerate case. We develop the approach theoretically in
this paper and provide a number of empirical results that reveal its practical
utility.
| [
"['Arthur Choi' 'Adnan Darwiche']",
"Arthur Choi, Adnan Darwiche"
] |
cs.LG stat.ML | null | 1206.3242 | null | null | http://arxiv.org/pdf/1206.3242v1 | 2012-06-13T15:04:49Z | 2012-06-13T15:04:49Z | Multi-View Learning in the Presence of View Disagreement | Traditional multi-view learning approaches suffer in the presence of view
disagreement, i.e., when samples in each view do not belong to the same class
due to view corruption, occlusion or other noise processes. In this paper we
present a multi-view learning approach that uses a conditional entropy
criterion to detect view disagreement. Once detected, samples with view
disagreement are filtered and standard multi-view learning methods can be
successfully applied to the remaining samples. Experimental evaluation on
synthetic and audio-visual databases demonstrates that the detection and
filtering of view disagreement considerably increases the performance of
traditional multi-view learning approaches.
| [
"['C. Christoudias' 'Raquel Urtasun' 'Trevor Darrell']",
"C. Christoudias, Raquel Urtasun, Trevor Darrell"
] |
cs.LG stat.ML | null | 1206.3243 | null | null | http://arxiv.org/pdf/1206.3243v1 | 2012-06-13T15:05:35Z | 2012-06-13T15:05:35Z | Bounds on the Bethe Free Energy for Gaussian Networks | We address the problem of computing approximate marginals in Gaussian
probabilistic models by using mean field and fractional Bethe approximations.
As an extension of Welling and Teh (2001), we define the Gaussian fractional
Bethe free energy in terms of the moment parameters of the approximate
marginals and derive an upper and lower bound for it. We give necessary
conditions for the Gaussian fractional Bethe free energies to be bounded from
below. It turns out that the bounding condition is the same as the pairwise
normalizability condition derived by Malioutov et al. (2006) as a sufficient
condition for the convergence of the message passing algorithm. By giving a
counterexample, we disprove the conjecture in Welling and Teh (2001): even when
the Bethe free energy is not bounded from below, it can possess a local minimum
to which the minimization algorithms can converge.
| [
"['Botond Cseke' 'Tom Heskes']",
"Botond Cseke, Tom Heskes"
] |
cs.LG stat.ML | null | 1206.3247 | null | null | http://arxiv.org/pdf/1206.3247v1 | 2012-06-13T15:09:01Z | 2012-06-13T15:09:01Z | Learning Convex Inference of Marginals | Graphical models trained using maximum likelihood are a common tool for
probabilistic inference of marginal distributions. However, this approach
suffers difficulties when either the inference process or the model is
approximate. In this paper, the inference process is first defined to be the
minimization of a convex function, inspired by free energy approximations.
Learning is then done directly in terms of the performance of the inference
process at univariate marginal prediction. The main novelty is that this is a
direct minimization of empirical risk, where the risk measures the accuracy of
predicted marginals.
| [
"Justin Domke",
"['Justin Domke']"
] |
cs.LG stat.ML | null | 1206.3249 | null | null | http://arxiv.org/pdf/1206.3249v1 | 2012-06-13T15:09:50Z | 2012-06-13T15:09:50Z | Projected Subgradient Methods for Learning Sparse Gaussians | Gaussian Markov random fields (GMRFs) are useful in a broad range of
applications. In this paper we tackle the problem of learning a sparse GMRF in
a high-dimensional space. Our approach uses the l1-norm as a regularization on
the inverse covariance matrix. We utilize a novel projected gradient method,
which is faster than previous methods in practice and equal to the best
performing of these in asymptotic complexity. We also extend the l1-regularized
objective to the problem of sparsifying entire blocks within the inverse
covariance matrix. Our methods generalize fairly easily to this case, while
other methods do not. We demonstrate that our extensions give better
generalization performance on two real domains--biological network analysis and
a 2D-shape modeling image task.
| [
"['John Duchi' 'Stephen Gould' 'Daphne Koller']",
"John Duchi, Stephen Gould, Daphne Koller"
] |
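
The objective above, an l1 penalty on the inverse covariance matrix, is the same one that scikit-learn's GraphicalLasso estimates (with a coordinate-descent solver rather than the projected gradient method of the paper), so the following sketch only illustrates the estimation problem; the chain-structured ground truth is an illustrative assumption.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
d = 8
# Hand-built sparse precision matrix: a chain graph (tridiagonal, positive definite).
precision = np.eye(d) * 2.0
for i in range(d - 1):
    precision[i, i + 1] = precision[i + 1, i] = -0.6
cov = np.linalg.inv(precision)
X = rng.multivariate_normal(np.zeros(d), cov, size=1000)

model = GraphicalLasso(alpha=0.05).fit(X)     # l1-regularized inverse covariance estimate
print("estimated precision (rounded):\n", np.round(model.precision_, 2))
```
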
cs.LG stat.ML | null | 1206.3252 | null | null | http://arxiv.org/pdf/1206.3252v1 | 2012-06-13T15:11:36Z | 2012-06-13T15:11:36Z | Convex Point Estimation using Undirected Bayesian Transfer Hierarchies | When related learning tasks are naturally arranged in a hierarchy, an
appealing approach for coping with scarcity of instances is that of transfer
learning using a hierarchical Bayes framework. As fully Bayesian computations
can be difficult and computationally demanding, it is often desirable to use
posterior point estimates that facilitate (relatively) efficient prediction.
However, the hierarchical Bayes framework does not always lend itself naturally
to this maximum a posteriori goal. In this work we propose an undirected
reformulation of hierarchical Bayes that relies on priors in the form of
similarity measures. We introduce the notion of "degree of transfer" weights on
components of these similarity measures, and show how they can be automatically
learned within a joint probabilistic framework. Importantly, our reformulation
results in a convex objective for many learning problems, thus facilitating
optimal posterior point estimation using standard optimization techniques. In
addition, we no longer require proper priors, allowing for flexible and
straightforward specification of joint distributions over transfer hierarchies.
We show that our framework is effective for learning models that are part of
transfer hierarchies for two real-life tasks: object shape modeling using
Gaussian density estimation and document classification.
| [
"['Gal Elidan' 'Ben Packer' 'Geremy Heitz' 'Daphne Koller']",
"Gal Elidan, Ben Packer, Geremy Heitz, Daphne Koller"
] |
cs.IR cs.CL cs.LG stat.ML | null | 1206.3254 | null | null | http://arxiv.org/pdf/1206.3254v1 | 2012-06-13T15:30:14Z | 2012-06-13T15:30:14Z | Latent Topic Models for Hypertext | Latent topic models have been successfully applied as an unsupervised topic
discovery technique in large document collections. With the proliferation of
hypertext document collection such as the Internet, there has also been great
interest in extending these approaches to hypertext [6, 9]. These approaches
typically model links in an analogous fashion to how they model words - the
document-link co-occurrence matrix is modeled in the same way that the
document-word co-occurrence matrix is modeled in standard topic models. In this
paper we present a probabilistic generative model for hypertext document
collections that explicitly models the generation of links. Specifically, links
from a word w to a document d depend directly on how frequent the topic of w is
in d, in addition to the in-degree of d. We show how to perform EM learning on
this model efficiently. By not modeling links as analogous to words, we end up
using far fewer free parameters and obtain better link prediction results.
| [
"Amit Gruber, Michal Rosen-Zvi, Yair Weiss",
"['Amit Gruber' 'Michal Rosen-Zvi' 'Yair Weiss']"
] |
cs.LG stat.ML | null | 1206.3256 | null | null | http://arxiv.org/pdf/1206.3256v1 | 2012-06-13T15:31:21Z | 2012-06-13T15:31:21Z | Multi-View Learning over Structured and Non-Identical Outputs | In many machine learning problems, labeled training data is limited but
unlabeled data is ample. Some of these problems have instances that can be
factored into multiple views, each of which is nearly sufficient for determining
the correct labels. In this paper we present a new algorithm for probabilistic
multi-view learning which uses the idea of stochastic agreement between views
as regularization. Our algorithm works on structured and unstructured problems
and easily generalizes to partial agreement scenarios. For the full agreement
case, our algorithm minimizes the Bhattacharyya distance between the models of
each view, and performs better than CoBoosting and two-view Perceptron on
several flat and structured classification problems.
| [
"Kuzman Ganchev, Joao Graca, John Blitzer, Ben Taskar",
"['Kuzman Ganchev' 'Joao Graca' 'John Blitzer' 'Ben Taskar']"
] |
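The agreement regularizer named in the abstract is the Bhattacharyya distance between the two views' predicted label distributions. The function below computes that distance for discrete distributions; it is only the regularizer term, not the full co-regularized training algorithm, and the example distributions are made up.

```python
# Bhattacharyya distance between two discrete predictive distributions,
# used here as the agreement penalty between views.
import numpy as np

def bhattacharyya_distance(p, q):
    """p, q: discrete distributions over the same label set."""
    bc = np.sum(np.sqrt(p * q))   # Bhattacharyya coefficient, 1 iff p == q
    return -np.log(bc)

p_view1 = np.array([0.8, 0.1, 0.1])
p_view2 = np.array([0.6, 0.3, 0.1])
print(bhattacharyya_distance(p_view1, p_view2))  # 0 iff the views agree exactly
```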
cs.LG stat.ML | null | 1206.3257 | null | null | http://arxiv.org/pdf/1206.3257v1 | 2012-06-13T15:31:57Z | 2012-06-13T15:31:57Z | Constrained Approximate Maximum Entropy Learning of Markov Random Fields | Parameter estimation in Markov random fields (MRFs) is a difficult task, in
which inference over the network is run in the inner loop of a gradient descent
procedure. Replacing exact inference with approximate methods such as loopy
belief propagation (LBP) can suffer from poor convergence. In this paper, we
provide a different approach for combining MRF learning and Bethe
approximation. We consider the dual of maximum likelihood Markov network
learning - maximizing entropy with moment matching constraints - and then
approximate both the objective and the constraints in the resulting
optimization problem. Unlike previous work along these lines (Teh & Welling,
2003), our formulation allows parameter sharing between features in a general
log-linear model, parameter regularization and conditional training. We show
that piecewise training (Sutton & McCallum, 2005) is a very restricted special
case of this formulation. We study two optimization strategies: one based on a
single convex approximation and one that uses repeated convex approximations.
We show results on several real-world networks that demonstrate that these
algorithms can significantly outperform learning with loopy belief propagation and piecewise training. Our
results also provide a framework for analyzing the trade-offs of different
relaxations of the entropy objective and of the constraints.
| [
"['Varun Ganapathi' 'David Vickrey' 'John Duchi' 'Daphne Koller']",
"Varun Ganapathi, David Vickrey, John Duchi, Daphne Koller"
] |
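The abstract works with the dual of maximum likelihood for log-linear models: maximize entropy subject to moment-matching constraints, then approximate both the objective and the constraints. A schematic of that dual, in assumed notation, is:

```latex
% Schematic dual (notation assumed): maximum likelihood for a log-linear MRF
% corresponds to maximum entropy under moment-matching constraints.
\[
\max_{q}\; H(q)
\quad\text{s.t.}\quad
\mathbb{E}_{q}\!\left[f_k(x)\right] \;=\; \frac{1}{N}\sum_{n=1}^{N} f_k\!\left(x^{(n)}\right)
\quad \forall k .
\]
% The approximation then replaces H(q) with the Bethe entropy of local
% pseudomarginals and imposes the (possibly relaxed) constraints on those
% pseudomarginals rather than on a full joint distribution.
```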
cs.LG stat.ML | null | 1206.3259 | null | null | http://arxiv.org/pdf/1206.3259v1 | 2012-06-13T15:33:06Z | 2012-06-13T15:33:06Z | Cumulative distribution networks and the derivative-sum-product
algorithm | We introduce a new type of graphical model called a "cumulative distribution
network" (CDN), which expresses a joint cumulative distribution as a product of
local functions. Each local function can be viewed as providing evidence about
possible orderings, or rankings, of variables. Interestingly, we find that the
conditional independence properties of CDNs are quite different from other
graphical models. We also describe a message-passing algorithm that efficiently
computes conditional cumulative distributions. Due to the unique independence
properties of the CDN, these messages do not in general have a one-to-one
correspondence with messages exchanged in standard algorithms, such as belief
propagation. We demonstrate the application of CDNs for structured ranking
learning using a previously-studied multi-player gaming dataset.
| [
"Jim Huang, Brendan J. Frey",
"['Jim Huang' 'Brendan J. Frey']"
] |
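A cumulative distribution network writes the joint CDF as a product of local functions. The toy below, with assumed functional forms (products of Gaussian marginal CDFs as local factors on a three-variable chain), is only meant to make that factorization concrete; it does not implement the derivative-sum-product message passing.

```python
# Toy CDN: a joint CDF expressed as a product of CDF-like local functions.
import numpy as np
from scipy.stats import norm

def local_phi(x, y):
    # A simple CDF-like local function: product of marginal Gaussian CDFs.
    return norm.cdf(x) * norm.cdf(y)

def joint_cdf(x1, x2, x3):
    # CDN over the chain x1 - x2 - x3: F(x) = phi_a(x1, x2) * phi_b(x2, x3).
    return local_phi(x1, x2) * local_phi(x2, x3)

# Sanity checks: F is nondecreasing in each argument and tends to 1.
print(joint_cdf(0.0, 0.0, 0.0) <= joint_cdf(1.0, 0.0, 0.0))
print(joint_cdf(10.0, 10.0, 10.0))   # approaches 1 as all arguments grow
```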
stat.ML cs.AI cs.LG | null | 1206.3260 | null | null | http://arxiv.org/pdf/1206.3260v1 | 2012-06-13T15:33:32Z | 2012-06-13T15:33:32Z | Causal discovery of linear acyclic models with arbitrary distributions | An important task in data analysis is the discovery of causal relationships
between observed variables. For continuous-valued data, linear acyclic causal
models are commonly used to model the data-generating process, and the
inference of such models is a well-studied problem. However, existing methods
have significant limitations. Methods based on conditional independencies
(Spirtes et al. 1993; Pearl 2000) cannot distinguish between
independence-equivalent models, whereas approaches purely based on Independent
Component Analysis (Shimizu et al. 2006) are inapplicable to data which is
partially Gaussian. In this paper, we generalize and combine the two
approaches, to yield a method able to learn the model structure in many cases
for which the previous methods provide answers that are either incorrect or
not as informative as possible. We give exact graphical conditions for when two
distinct models represent the same family of distributions, and empirically
demonstrate the power of our method through thorough simulations.
| [
"Patrik O. Hoyer, Aapo Hyvarinen, Richard Scheines, Peter L. Spirtes,\n Joseph Ramsey, Gustavo Lacerda, Shohei Shimizu",
"['Patrik O. Hoyer' 'Aapo Hyvarinen' 'Richard Scheines' 'Peter L. Spirtes'\n 'Joseph Ramsey' 'Gustavo Lacerda' 'Shohei Shimizu']"
] |
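The data-generating model addressed here is a linear acyclic structural equation model, x = Bx + e, in which some disturbances may be Gaussian and others not. The snippet below simulates such data for a three-variable chain with made-up coefficients; it illustrates the model class only, not the discovery algorithm.

```python
# Minimal sketch of the data-generating model (not the discovery method):
# a linear acyclic SEM with mixed Gaussian and non-Gaussian disturbances.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Causal order x1 -> x2 -> x3, so B is strictly lower triangular in that order.
e1 = rng.normal(size=n)                 # Gaussian disturbance
e2 = rng.uniform(-1.0, 1.0, size=n)     # non-Gaussian disturbance
e3 = rng.laplace(size=n)                # non-Gaussian disturbance
x1 = e1
x2 = 0.8 * x1 + e2
x3 = -0.5 * x1 + 0.7 * x2 + e3
data = np.column_stack([x1, x2, x3])
print(np.cov(data, rowvar=False))
```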
cs.LG stat.ML | null | 1206.3262 | null | null | http://arxiv.org/pdf/1206.3262v1 | 2012-06-13T15:34:21Z | 2012-06-13T15:34:21Z | Convergent Message-Passing Algorithms for Inference over General Graphs
with Convex Free Energies | Inference problems in graphical models can be represented as a constrained
optimization of a free energy function. It is known that when the Bethe free
energy is used, the fixed points of the belief propagation (BP) algorithm
correspond to the local minima of the free energy. However, BP fails to converge
in many cases of interest. Moreover, the Bethe free energy is non-convex for
graphical models with cycles, thus introducing great difficulty in deriving
efficient algorithms for finding local minima of the free energy for general
graphs. In this paper we introduce two efficient BP-like algorithms, one
sequential and the other parallel, that are guaranteed to converge to the
global minimum, for any graph, over the class of energies known as "convex free
energies". In addition, we propose an efficient heuristic for setting the
parameters of the convex free energy based on the structure of the graph.
| [
"['Tamir Hazan' 'Amnon Shashua']",
"Tamir Hazan, Amnon Shashua"
] |
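For context, a generic region-based free energy with counting numbers can be written as follows (notation assumed, not the paper's exact parameterization); the Bethe choice of counting numbers is generally non-convex, while the algorithms above target counting numbers that make the free energy convex over the local polytope.

```latex
% Schematic region-based free energy with beliefs b over factors (alpha) and
% variables (i) and counting numbers c controlling the entropy approximation.
\[
F(b) \;=\; -\sum_{\alpha}\sum_{x_\alpha} b_\alpha(x_\alpha)\,\ln\psi_\alpha(x_\alpha)
\;-\;\sum_{\alpha} c_\alpha\, H\!\left(b_\alpha\right)
\;-\;\sum_{i} c_i\, H\!\left(b_i\right).
\]
% Bethe corresponds to c_alpha = 1 and c_i = 1 - d_i, where d_i is the number
% of factors touching variable i; "convex free energies" are counting-number
% choices for which F is convex.
```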
cs.LG stat.ML | null | 1206.3269 | null | null | http://arxiv.org/pdf/1206.3269v1 | 2012-06-13T15:37:30Z | 2012-06-13T15:37:30Z | Bayesian Out-Trees | A Bayesian treatment of latent directed graph structure for non-iid data is
provided where each child datum is sampled with a directed conditional
dependence on a single unknown parent datum. The latent graph structure is
assumed to lie in the family of directed out-tree graphs which leads to
efficient Bayesian inference. The latent likelihood of the data and its
gradients are computable in closed form via Tutte's directed matrix tree
theorem using determinants and inverses of the out-Laplacian. This novel
likelihood subsumes iid likelihood, is exchangeable and yields efficient
unsupervised and semi-supervised learning algorithms. In addition to handling
taxonomy and phylogenetic datasets, the out-tree assumption performs
surprisingly well as a semi-parametric density estimator on standard iid
datasets. Experiments with unsupervised and semi-supervised learning are shown
on various UCI and taxonomy datasets.
| [
"['Tony S. Jebara']",
"Tony S. Jebara"
] |
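The closed-form likelihood in this entry rests on Tutte's directed matrix-tree theorem: the weighted sum over spanning out-trees rooted at a node equals a determinant of a minor of the (in-)Laplacian. The helper below computes that determinant for a toy weight matrix; the weights and function name are illustrative, not the paper's likelihood.

```python
# Weighted count of spanning out-trees rooted at `root`, via the directed
# matrix-tree theorem (delete the root's row and column from D_in - W).
import numpy as np

def weighted_out_tree_sum(W, root):
    """W[u, v] = weight of edge u -> v (diagonal ignored)."""
    W = np.array(W, dtype=float)
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=0)) - W          # in-Laplacian: D_in - W
    keep = [i for i in range(W.shape[0]) if i != root]
    return np.linalg.det(L[np.ix_(keep, keep)])

W = [[0.0, 2.0, 1.0],
     [0.5, 0.0, 3.0],
     [1.0, 1.0, 0.0]]
print(weighted_out_tree_sum(W, root=0))
```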
cs.LG stat.ML | null | 1206.3270 | null | null | http://arxiv.org/pdf/1206.3270v1 | 2012-06-13T15:38:07Z | 2012-06-13T15:38:07Z | Estimation and Clustering with Infinite Rankings | This paper presents a natural extension of stagewise ranking to the case
of infinitely many items. We introduce the infinite generalized Mallows model
(IGM), describe its properties and give procedures to estimate it from data.
For estimation of multimodal distributions we introduce the
Exponential-Blurring-Mean-Shift nonparametric clustering algorithm. The
experiments highlight the properties of the new model and demonstrate that
infinite models can be simple, elegant and practical.
| [
"['Marina Meila' 'Le Bao']",
"Marina Meila, Le Bao"
] |
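As background, a finite generalized Mallows ranking can be drawn stagewise: at each stage the next item is chosen from the remaining items of the central ranking with probability decaying geometrically in how far down it sits. The sampler below is a truncated, finite sketch of that construction (parameter names assumed); the paper's contribution is extending it to infinitely many items.

```python
# Finite, truncated sketch of stagewise generalized Mallows sampling.
import numpy as np

def sample_gm_ranking(sigma, thetas, rng):
    """sigma: central ranking (best item first).
    thetas: one dispersion parameter per stage (larger = closer to sigma)."""
    remaining = list(sigma)
    ranking = []
    for theta in thetas:
        # Picking the item s positions down the remaining central order
        # has probability proportional to exp(-theta * s).
        s = np.arange(len(remaining))
        probs = np.exp(-theta * s)
        probs /= probs.sum()
        pick = rng.choice(len(remaining), p=probs)
        ranking.append(remaining.pop(pick))
    return ranking

rng = np.random.default_rng(0)
print(sample_gm_ranking(sigma=list("abcdef"), thetas=[2.0, 2.0, 2.0], rng=rng))
```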
cs.LG stat.ML | null | 1206.3274 | null | null | http://arxiv.org/pdf/1206.3274v1 | 2012-06-13T15:39:51Z | 2012-06-13T15:39:51Z | Small Sample Inference for Generalization Error in Classification Using
the CUD Bound | Confidence measures for the generalization error are crucial when small
training samples are used to construct classifiers. A common approach is to
estimate the generalization error by resampling and then assume the resampled
estimator follows a known distribution to form a confidence set [Kohavi 1995,
Martin 1996, Yang 2006]. Alternatively, one might bootstrap the resampled
estimator of the generalization error to form a confidence set. Unfortunately,
these methods do not reliably provide sets of the desired confidence. The poor
performance appears to be due to the lack of smoothness of the generalization
error as a function of the learned classifier. This results in a non-normal
distribution of the estimated generalization error. We construct a confidence
set for the generalization error by use of a smooth upper bound on the
deviation between the resampled estimate and generalization error. The
confidence set is formed by bootstrapping this upper bound. In cases in which
the approximation class for the classifier can be represented as a parametric
additive model, we provide a computationally efficient algorithm. This method
exhibits superior performance across a series of test and simulated data sets.
| [
"Eric B. Laber, Susan A. Murphy",
"['Eric B. Laber' 'Susan A. Murphy']"
] |
cs.LG cs.CE q-bio.QM | null | 1206.3275 | null | null | http://arxiv.org/pdf/1206.3275v1 | 2012-06-13T15:40:51Z | 2012-06-13T15:40:51Z | Learning Hidden Markov Models for Regression using Path Aggregation | We consider the task of learning mappings from sequential data to real-valued
responses. We present and evaluate an approach to learning a type of hidden
Markov model (HMM) for regression. The learning process involves inferring the
structure and parameters of a conventional HMM, while simultaneously learning a
regression model that maps features that characterize paths through the model
to continuous responses. Our results, in both synthetic and biological domains,
demonstrate the value of jointly learning the two components of our approach.
| [
"['Keith Noto' 'Mark Craven']",
"Keith Noto, Mark Craven"
] |
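The abstract couples an HMM with a regression model over features of paths through the HMM. The toy below, with assumed details (uniform transitions, state occupancy frequencies as the path features, a linear map to the response), only illustrates how path features can feed a regression; it is not the joint structure-and-parameter learning procedure.

```python
# Toy illustration: features of a path through an HMM feed a regression
# that produces a real-valued response.
import numpy as np

rng = np.random.default_rng(0)
n_states, seq_len = 3, 20
trans = np.full((n_states, n_states), 1.0 / n_states)   # toy transition matrix
beta = np.array([0.5, -1.0, 2.0])                        # regression weights

def sample_path(rng):
    path = [rng.integers(n_states)]
    for _ in range(seq_len - 1):
        path.append(rng.choice(n_states, p=trans[path[-1]]))
    return np.array(path)

def path_features(path):
    # State occupancy frequencies as a simple path summary.
    return np.bincount(path, minlength=n_states) / len(path)

path = sample_path(rng)
y = path_features(path) @ beta + rng.normal(scale=0.1)
print(path, y)
```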
cs.LG stat.ML | null | 1206.3279 | null | null | http://arxiv.org/pdf/1206.3279v1 | 2012-06-13T15:42:35Z | 2012-06-13T15:42:35Z | The Phylogenetic Indian Buffet Process: A Non-Exchangeable Nonparametric
Prior for Latent Features | Nonparametric Bayesian models are often based on the assumption that the
objects being modeled are exchangeable. While appropriate in some applications
(e.g., bag-of-words models for documents), exchangeability is sometimes assumed
simply for computational reasons; non-exchangeable models might be a better
choice for applications based on subject matter. Drawing on ideas from
graphical models and phylogenetics, we describe a non-exchangeable prior for a
class of nonparametric latent feature models that is nearly as efficient
computationally as its exchangeable counterpart. Our model is applicable to the
general setting in which the dependencies between objects can be expressed
using a tree, where edge lengths indicate the strength of relationships. We
demonstrate an application to modeling probabilistic choice.
| [
"['Kurt T. Miller' 'Thomas Griffiths' 'Michael I. Jordan']",
"Kurt T. Miller, Thomas Griffiths, Michael I. Jordan"
] |
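For contrast with the non-exchangeable prior proposed above, the snippet below samples from the standard, exchangeable Indian Buffet Process (object i takes an existing feature k with probability m_k / i and adds Poisson(alpha / i) new features); the tree-structured phylogenetic version is not reproduced here.

```python
# Standard (exchangeable) Indian Buffet Process sampler, for contrast.
import numpy as np

def sample_ibp(n_objects, alpha, rng):
    dish_counts = []      # dish_counts[k] = how many objects have feature k
    Z = []
    for i in range(1, n_objects + 1):
        # Existing features: take feature k with probability m_k / i.
        row = [1 if rng.random() < count / i else 0 for count in dish_counts]
        # New features: Poisson(alpha / i) of them, all taken by object i.
        n_new = rng.poisson(alpha / i)
        row.extend([1] * n_new)
        dish_counts = [c + z for c, z in zip(dish_counts, row)] + [1] * n_new
        Z.append(row)
    width = len(dish_counts)
    return np.array([r + [0] * (width - len(r)) for r in Z])

rng = np.random.default_rng(0)
print(sample_ibp(n_objects=5, alpha=2.0, rng=rng))
```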