categories | doi | id | year | venue | link | updated | published | title | abstract | authors
---|---|---|---|---|---|---|---|---|---|---
string | string | string | float64 | string | string | string | string | string | string | sequence
stat.ML cs.LG physics.data-an physics.soc-ph | null | 1207.1115 | null | null | http://arxiv.org/pdf/1207.1115v1 | 2012-07-03T16:20:08Z | 2012-07-03T16:20:08Z | Inferring land use from mobile phone activity | Understanding the spatiotemporal distribution of people within a city is
crucial to many planning applications. Obtaining data to create required
knowledge, currently involves costly survey methods. At the same time
ubiquitous mobile sensors from personal GPS devices to mobile phones are
collecting massive amounts of data on urban systems. The locations,
communications, and activities of millions of people are recorded and stored by
new information technologies. This work utilizes novel dynamic data, generated
by mobile phone users, to measure spatiotemporal changes in population. In the
process, we identify the relationship between land use and dynamic population
over the course of a typical week. A machine learning classification algorithm
is used to identify clusters of locations with similar zoned uses and mobile
phone activity patterns. It is shown that the mobile phone data is capable of
delivering useful information on actual land use that supplements zoning
regulations.
| [
"['Jameson L. Toole' 'Michael Ulm' 'Dietmar Bauer' 'Marta C. Gonzalez']",
"Jameson L. Toole, Michael Ulm, Dietmar Bauer, Marta C. Gonzalez"
] |
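The clustering step described in the abstract above can be illustrated compactly. Below is a minimal sketch, assuming synthetic weekly activity profiles rather than real call records, that groups locations by their temporal signatures with k-means; it is not the authors' pipeline, which jointly uses zoned land uses and phone activity.

```python
# Hedged sketch: cluster locations by synthetic weekly activity profiles.
# Toy stand-in for the paper's pipeline; data and archetypes are invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_locations, hours = 200, 168  # one activity value per hour of the week

# Two invented land-use archetypes: daytime-heavy vs. evening-heavy activity.
office = np.tile(np.r_[np.zeros(8), np.ones(10), np.zeros(6)], 7)
home = 1.0 - office
labels_true = rng.integers(0, 2, n_locations)
profiles = np.where(labels_true[:, None] == 0, office, home)
X = profiles + 0.3 * rng.standard_normal((n_locations, hours))

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print((km.labels_ == labels_true).mean())  # agreement with truth (up to swap)
```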
cs.LG stat.ML | null | 1207.1358 | null | null | http://arxiv.org/pdf/1207.1358v1 | 2012-07-04T12:14:50Z | 2012-07-04T12:14:50Z | Unsupervised spectral learning | In spectral clustering and spectral image segmentation, the data is partitioned
starting from a given matrix of pairwise similarities S. The matrix S is
constructed by hand, or learned on a separate training set. In this paper we
show how to achieve spectral clustering in unsupervised mode. Our algorithm
starts with a set of observed pairwise features, which are possible components
of an unknown, parametric similarity function. This function is learned
iteratively, at the same time as the clustering of the data. The algorithm
shows promising results on synthetic and real data.
| [
"['Susan Shortreed' 'Marina Meila']",
"Susan Shortreed, Marina Meila"
] |
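As context for the abstract above, here is the standard starting point it seeks to automate: spectral clustering from a hand-constructed Gaussian similarity matrix S. A minimal sketch with made-up 2-D data; the paper's contribution is learning S rather than fixing it by hand.

```python
# Minimal sketch: spectral clustering from a hand-built similarity matrix S.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])

sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
S = np.exp(-sq_dists / (2 * 0.5 ** 2))  # Gaussian (RBF) similarities

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(S)
print(labels[:5], labels[-5:])  # the two blobs fall into different clusters
```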
cs.LG stat.ML | null | 1207.1364 | null | null | http://arxiv.org/pdf/1207.1364v1 | 2012-07-04T16:03:10Z | 2012-07-04T16:03:10Z | Learning from Sparse Data by Exploiting Monotonicity Constraints | When training data is sparse, more domain knowledge must be incorporated into
the learning algorithm in order to reduce the effective size of the hypothesis
space. This paper builds on previous work in which knowledge about qualitative
monotonicities was formally represented and incorporated into learning
algorithms (e.g., Clark & Matwin's work with the CN2 rule learning algorithm).
We show how to interpret knowledge of qualitative influences, and in particular
of monotonicities, as constraints on probability distributions, and to
incorporate this knowledge into Bayesian network learning algorithms. We show
that this yields improved accuracy, particularly with very small training sets
(e.g. less than 10 examples).
| [
"['Eric E. Altendorf' 'Angelo C. Restificar' 'Thomas G. Dietterich']",
"Eric E. Altendorf, Angelo C. Restificar, Thomas G. Dietterich"
] |
cs.LG stat.ML | null | 1207.1366 | null | null | http://arxiv.org/pdf/1207.1366v1 | 2012-07-04T16:03:31Z | 2012-07-04T16:03:31Z | Learning Factor Graphs in Polynomial Time & Sample Complexity | We study computational and sample complexity of parameter and structure
learning in graphical models. Our main result shows that the class of factor
graphs with bounded factor size and bounded connectivity can be learned in
polynomial time and with a polynomial number of samples, assuming that the data is
generated by a network in this class. This result covers both parameter
estimation for a known network structure and structure learning. It implies as
a corollary that we can learn factor graphs for both Bayesian networks and
Markov networks of bounded degree, in polynomial time and sample complexity.
Unlike maximum likelihood estimation, our method does not require inference in
the underlying network, and so applies to networks where inference is
intractable. We also show that the error of our learned model degrades
gracefully when the generating distribution is not a member of the target class
of networks.
| [
"Pieter Abbeel, Daphne Koller, Andrew Y. Ng",
"['Pieter Abbeel' 'Daphne Koller' 'Andrew Y. Ng']"
] |
cs.LG stat.ML | null | 1207.1379 | null | null | http://arxiv.org/pdf/1207.1379v1 | 2012-07-04T16:10:01Z | 2012-07-04T16:10:01Z | On the Detection of Concept Changes in Time-Varying Data Stream by
Testing Exchangeability | A martingale framework for concept change detection based on testing data
exchangeability was recently proposed (Ho, 2005). In this paper, we describe
the proposed change-detection test based on Doob's Maximal Inequality and
show that it is an approximation of the sequential probability ratio test
(SPRT). The relationship between the threshold value used in the proposed test
and its size and power is deduced from the approximation. The mean delay time
before a change is detected is estimated using the average sample number of an
SPRT. The performance of the test using various threshold values is examined on
five different data stream scenarios simulated using two synthetic data sets.
Finally, experimental results show that the test is effective in detecting
changes in time-varying data streams simulated using three benchmark data sets.
| [
"Shen-Shyang Ho, Harry Wechsler",
"['Shen-Shyang Ho' 'Harry Wechsler']"
] |
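Since the test above is analyzed as an approximation of the SPRT, a bare-bones SPRT between two Gaussian means may help fix ideas. A toy sketch only; the Gaussian setting and the error rates behind the thresholds are illustrative assumptions, not the martingale test of Ho (2005).

```python
# Bare-bones sequential probability ratio test between two Gaussian means.
import numpy as np

rng = np.random.default_rng(9)
mu0, mu1, sigma = 0.0, 1.0, 1.0
A, B = np.log(19), np.log(1 / 19)      # Wald thresholds for ~5% error rates

llr, t = 0.0, 0
stream = rng.normal(mu1, sigma, 1000)  # data actually comes from H1
for x in stream:
    t += 1
    # Gaussian log-likelihood ratio increment for H1 vs. H0.
    llr += (x * (mu1 - mu0) - 0.5 * (mu1**2 - mu0**2)) / sigma**2
    if llr >= A or llr <= B:
        break
print("decision:", "H1" if llr >= A else "H0", "after", t, "samples")
```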
cs.MS cs.LG stat.ML | null | 1207.1380 | null | null | http://arxiv.org/pdf/1207.1380v1 | 2012-07-04T16:10:18Z | 2012-07-04T16:10:18Z | Bayes Blocks: An Implementation of the Variational Bayesian Building
Blocks Framework | A software library for constructing and learning probabilistic models is
presented. The library offers a set of building blocks from which a large
variety of static and dynamic models can be built. These include hierarchical
models for variances of other variables and many nonlinear models. The
underlying variational Bayesian machinery, providing for fast and robust
estimation but being mathematically rather involved, is almost completely
hidden from the user thus making it very easy to use the library. The building
blocks include Gaussian, rectified Gaussian and mixture-of-Gaussians variables
and computational nodes which can be combined rather freely.
| [
"Markus Harva, Tapani Raiko, Antti Honkela, Harri Valpola, Juha\n Karhunen",
"['Markus Harva' 'Tapani Raiko' 'Antti Honkela' 'Harri Valpola'\n 'Juha Karhunen']"
] |
cs.LG stat.ML | null | 1207.1382 | null | null | http://arxiv.org/pdf/1207.1382v1 | 2012-07-04T16:12:02Z | 2012-07-04T16:12:02Z | Maximum Margin Bayesian Networks | We consider the problem of learning Bayesian network classifiers that
maximize the margin over a set of classification variables. We find that this
problem is harder for Bayesian networks than for undirected graphical models
like maximum margin Markov networks. The main difficulty is that the parameters
in a Bayesian network must satisfy additional normalization constraints that an
undirected graphical model need not respect. These additional constraints
complicate the optimization task. Nevertheless, we derive an effective training
algorithm that solves the maximum margin training problem for a range of
Bayesian network topologies, and converges to an approximate solution for
arbitrary network topologies. Experimental results show that the method can
demonstrate improved generalization performance over Markov networks when the
directed graphical structure encodes relevant knowledge. In practice, the
training technique allows one to combine prior knowledge expressed as a
directed (causal) model with state-of-the-art discriminative learning methods.
| [
"['Yuhong Guo' 'Dana Wilkinson' 'Dale Schuurmans']",
"Yuhong Guo, Dana Wilkinson, Dale Schuurmans"
] |
cs.AI cs.LG stat.ML | null | 1207.1387 | null | null | http://arxiv.org/pdf/1207.1387v1 | 2012-07-04T16:13:39Z | 2012-07-04T16:13:39Z | Learning Bayesian Network Parameters with Prior Knowledge about
Context-Specific Qualitative Influences | We present a method for learning the parameters of a Bayesian network with
prior knowledge about the signs of influences between variables. Our method
accommodates not just the standard signs, but provides for context-specific
signs as well. We show how the various signs translate into order constraints
on the network parameters and how isotonic regression can be used to compute
order-constrained estimates from the available data. Our experimental results
show that taking prior knowledge about the signs of influences into account
leads to an improved fit of the true distribution, especially when only a small
sample of data is available. Moreover, the computed estimates are guaranteed to
be consistent with the specified signs, thereby resulting in a network that is
more likely to be accepted by experts in its domain of application.
| [
"['Ad Feelders' 'Linda C. van der Gaag']",
"Ad Feelders, Linda C. van der Gaag"
] |
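The key computational tool named above, isotonic regression, is available off the shelf; a minimal one-dimensional sketch with invented data follows (the paper applies the order constraints to network parameters, not to raw points like these).

```python
# Sketch of order-constrained estimation with isotonic regression.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(2)
x = np.arange(20)
# Noisy estimates of a quantity known a priori to be non-decreasing in x.
y_noisy = np.linspace(0.1, 0.9, 20) + 0.1 * rng.standard_normal(20)

iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
y_monotone = iso.fit_transform(x, y_noisy)  # estimates consistent with the sign
print(np.all(np.diff(y_monotone) >= 0))     # True: monotonicity is enforced
```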
cs.LG stat.ML | null | 1207.1393 | null | null | http://arxiv.org/pdf/1207.1393v1 | 2012-07-04T16:16:02Z | 2012-07-04T16:16:02Z | Learning about individuals from group statistics | We propose a new problem formulation which is similar to, but more
informative than, the binary multiple-instance learning problem. In this
setting, we are given groups of instances (described by feature vectors) along
with estimates of the fraction of positively-labeled instances per group. The
task is to learn an instance level classifier from this information. That is,
we are trying to estimate the unknown binary labels of individuals from
knowledge of group statistics. We propose a principled probabilistic model to
solve this problem that accounts for uncertainty in the parameters and in the
unknown individual labels. This model is trained with an efficient MCMC
algorithm. Its performance is demonstrated on both synthetic and real-world
data arising in general object recognition.
| [
"['Hendrik Kuck' 'Nando de Freitas']",
"Hendrik Kuck, Nando de Freitas"
] |
stat.CO cs.LG stat.ML | null | 1207.1396 | null | null | http://arxiv.org/pdf/1207.1396v1 | 2012-07-04T16:17:01Z | 2012-07-04T16:17:01Z | Toward Practical N^2 Monte Carlo: the Marginal Particle Filter | Sequential Monte Carlo techniques are useful for state estimation in
non-linear, non-Gaussian dynamic models. These methods allow us to approximate
the joint posterior distribution using sequential importance sampling. In this
framework, the dimension of the target distribution grows with each time step,
thus it is necessary to introduce some resampling steps to ensure that the
estimates provided by the algorithm have a reasonable variance. In many
applications, we are only interested in the marginal filtering distribution
which is defined on a space of fixed dimension. We present a Sequential Monte
Carlo algorithm called the Marginal Particle Filter which operates directly on
the marginal distribution, hence avoiding having to perform importance sampling
on a space of growing dimension. Using this idea, we also derive an improved
version of the auxiliary particle filter. We show theoretic and empirical
results which demonstrate a reduction in variance over conventional particle
filtering, and present techniques for reducing the cost of the marginal
particle filter with N particles from O(N^2) to O(N log N).
| [
"Mike Klaas, Nando de Freitas, Arnaud Doucet",
"['Mike Klaas' 'Nando de Freitas' 'Arnaud Doucet']"
] |
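For contrast with the marginal particle filter, here is the conventional bootstrap particle filter it improves upon, on a toy one-dimensional model; all model parameters below are assumptions chosen for illustration.

```python
# Conventional bootstrap particle filter on a toy AR(1) model (the baseline
# the marginal particle filter refines; not the paper's algorithm itself).
import numpy as np

rng = np.random.default_rng(3)
N, T = 500, 50

# Simulate a toy AR(1) state with noisy observations.
x_true, ys = 0.0, []
for _ in range(T):
    x_true = 0.9 * x_true + rng.normal(0.0, 1.0)
    ys.append(x_true + rng.normal(0.0, 0.5))

particles = rng.normal(0.0, 1.0, N)
means = []
for y in ys:
    particles = 0.9 * particles + rng.normal(0.0, 1.0, N)  # propagate
    w = np.exp(-0.5 * ((y - particles) / 0.5) ** 2)        # Gaussian likelihood
    w /= w.sum()
    means.append(float(w @ particles))                     # filtered mean
    particles = particles[rng.choice(N, size=N, p=w)]      # multinomial resample
print(means[-1], ys[-1])  # filtered mean tracks the latest observation
```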
cs.LG stat.ML | null | 1207.1403 | null | null | http://arxiv.org/pdf/1207.1403v1 | 2012-07-04T16:19:55Z | 2012-07-04T16:19:55Z | Obtaining Calibrated Probabilities from Boosting | Boosted decision trees typically yield good accuracy, precision, and ROC
area. However, because the outputs from boosting are not well calibrated
posterior probabilities, boosting yields poor squared error and cross-entropy.
We empirically demonstrate why AdaBoost predicts distorted probabilities and
examine three calibration methods for correcting this distortion: Platt
Scaling, Isotonic Regression, and Logistic Correction. We also experiment with
boosting using log-loss instead of the usual exponential loss. Experiments show
that Logistic Correction and boosting with log-loss work well when boosting
weak models such as decision stumps, but yield poor performance when boosting
more complex models such as full decision trees. Platt Scaling and Isotonic
Regression, however, significantly improve the probabilities predicted by both
boosted stumps and boosted trees.
| [
"Alexandru Niculescu-Mizil, Richard A. Caruana",
"['Alexandru Niculescu-Mizil' 'Richard A. Caruana']"
] |
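Both calibration methods compared above are exposed by scikit-learn; a minimal sketch on synthetic data follows. This is a generic stand-in for illustration, not the authors' experimental protocol.

```python
# Post-hoc calibration of AdaBoost with Platt scaling ("sigmoid") and
# isotonic regression, mirroring the methods compared in the abstract.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

boost = AdaBoostClassifier(n_estimators=100, random_state=0)
for method in ("sigmoid", "isotonic"):  # Platt scaling / isotonic regression
    calibrated = CalibratedClassifierCV(boost, method=method, cv=3)
    calibrated.fit(X_tr, y_tr)
    proba = calibrated.predict_proba(X_te)[:, 1]
    print(method, proba[:3])
```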
cs.LG cs.DS stat.ML | null | 1207.1404 | null | null | http://arxiv.org/pdf/1207.1404v1 | 2012-07-04T16:20:12Z | 2012-07-04T16:20:12Z | A submodular-supermodular procedure with applications to discriminative
structure learning | In this paper, we present an algorithm for minimizing the difference between
two submodular functions using a variational framework which is based on (an
extension of) the concave-convex procedure [17]. Because several commonly used
metrics in machine learning, like mutual information and conditional mutual
information, are submodular, the problem of minimizing the difference of two
submodular problems arises naturally in many machine learning applications. Two
such applications are learning discriminatively structured graphical models and
feature selection under computational complexity constraints. A commonly used
metric for measuring discriminative capacity is the EAR measure which is the
difference between two conditional mutual information terms. Feature selection
taking complexity considerations into account also falls into this framework
because both the information that a set of features provides and the cost of
computing and using the features can be modeled as submodular functions. This
problem is NP-hard, and we give a polynomial time heuristic for it. We also
present results on synthetic data to show that classifiers based on
discriminative graphical models using this algorithm can significantly
outperform classifiers based on generative graphical models.
| [
"['Mukund Narasimhan' 'Jeff A. Bilmes']",
"Mukund Narasimhan, Jeff A. Bilmes"
] |
cs.LG cs.AI | null | 1207.1406 | null | null | http://arxiv.org/pdf/1207.1406v1 | 2012-07-04T16:20:45Z | 2012-07-04T16:20:45Z | A Conditional Random Field for Discriminatively-trained Finite-state
String Edit Distance | The need to measure sequence similarity arises in information extraction,
object identity, data mining, biological sequence analysis, and other domains.
This paper presents discriminative string-edit CRFs, a finite-state conditional
random field model for edit sequences between strings. Conditional random
fields have advantages over generative approaches to this problem, such as pair
HMMs or the work of Ristad and Yianilos, because as conditionally-trained
methods, they enable the use of complex, arbitrary actions and features of the
input strings. As in generative models, the training data does not have to
specify the edit sequences between the given string pairs. Unlike generative
models, however, our model is trained on both positive and negative instances
of string pairs. We present positive experimental results on several data sets.
| [
"['Andrew McCallum' 'Kedar Bellare' 'Fernando Pereira']",
"Andrew McCallum, Kedar Bellare, Fernando Pereira"
] |
cs.LG stat.ML | null | 1207.1409 | null | null | http://arxiv.org/pdf/1207.1409v1 | 2012-07-04T16:22:14Z | 2012-07-04T16:22:14Z | Piecewise Training for Undirected Models | For many large undirected models that arise in real-world applications, exact
maximum-likelihood training is intractable, because it requires computing
marginal distributions of the model. Conditional training is even more
difficult, because the partition function depends not only on the parameters,
but also on the observed input, requiring repeated inference over each training
example. An appealing idea for such models is to independently train a local
undirected classifier over each clique, afterwards combining the learned
weights into a single global model. In this paper, we show that this piecewise
method can be justified as minimizing a new family of upper bounds on the log
partition function. On three natural-language data sets, piecewise training is
more accurate than pseudolikelihood, and often performs comparably to global
training using belief propagation.
| [
"Charles Sutton, Andrew McCallum",
"['Charles Sutton' 'Andrew McCallum']"
] |
cs.LG cs.MS stat.ML | null | 1207.1413 | null | null | http://arxiv.org/pdf/1207.1413v1 | 2012-07-04T16:23:35Z | 2012-07-04T16:23:35Z | Discovery of non-gaussian linear causal models using ICA | In recent years, several methods have been proposed for the discovery of
causal structure from non-experimental data (Spirtes et al. 2000; Pearl 2000).
Such methods make various assumptions on the data generating process to
facilitate its identification from purely observational data. Continuing this
line of research, we show how to discover the complete causal structure of
continuous-valued data, under the assumptions that (a) the data generating
process is linear, (b) there are no unobserved confounders, and (c) disturbance
variables have non-gaussian distributions of non-zero variances. The solution
relies on the use of the statistical method known as independent component
analysis (ICA), and does not require any pre-specified time-ordering of the
variables. We provide a complete Matlab package for performing this LiNGAM
analysis (short for Linear Non-Gaussian Acyclic Model), and demonstrate the
effectiveness of the method using artificially generated data.
| [
"Shohei Shimizu, Aapo Hyvarinen, Yutaka Kano, Patrik O. Hoyer",
"['Shohei Shimizu' 'Aapo Hyvarinen' 'Yutaka Kano' 'Patrik O. Hoyer']"
] |
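The ICA ingredient of the LiNGAM procedure above can be shown in a few lines; a toy sketch with uniform (non-Gaussian) disturbances follows. The full method's permutation and pruning steps, and its Matlab package, are omitted, and the 0.8 coefficient is an invented example.

```python
# Toy sketch of the ICA step behind LiNGAM: with linear, non-Gaussian data,
# FastICA's mixing matrix reveals the structure up to permutation/scaling.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
e1 = rng.uniform(-1, 1, 5000)   # non-Gaussian disturbance for x
e2 = rng.uniform(-1, 1, 5000)   # non-Gaussian disturbance for y
x = e1
y = 0.8 * x + e2                # true causal direction: x -> y
data = np.column_stack([x, y])

ica = FastICA(n_components=2, random_state=0)
ica.fit(data)
# Up to row permutation and scaling, mixing_ reflects [[1, 0], [0.8, 1]].
print(np.round(ica.mixing_, 2))
```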
cs.IR cs.LG stat.ML | null | 1207.1414 | null | null | http://arxiv.org/pdf/1207.1414v1 | 2012-07-04T16:23:52Z | 2012-07-04T16:23:52Z | Two-Way Latent Grouping Model for User Preference Prediction | We introduce a novel latent grouping model for predicting the relevance of a
new document to a user. The model assumes a latent group structure for both
users and documents. We compared the model against a state-of-the-art method,
the User Rating Profile model, where only users have a latent group structure.
We estimate both models by Gibbs sampling. The new method predicts relevance
more accurately for new documents that have few known ratings. The reason is
that generalization over documents then becomes necessary and hence the two-way
grouping is profitable.
| [
"Eerika Savia, Kai Puolamaki, Janne Sinkkonen, Samuel Kaski",
"['Eerika Savia' 'Kai Puolamaki' 'Janne Sinkkonen' 'Samuel Kaski']"
] |
cs.LG stat.ML | null | 1207.1417 | null | null | http://arxiv.org/pdf/1207.1417v1 | 2012-07-04T16:25:12Z | 2012-07-04T16:25:12Z | The DLR Hierarchy of Approximate Inference | We propose a hierarchy for approximate inference based on the Dobrushin,
Lanford, Ruelle (DLR) equations. This hierarchy includes existing algorithms,
such as belief propagation, and also motivates novel algorithms such as
factorized neighbors (FN) algorithms and variants of mean field (MF)
algorithms. In particular, we show that extrema of the Bethe free energy
correspond to approximate solutions of the DLR equations. In addition, we
demonstrate a close connection between these approximate algorithms and Gibbs
sampling. Finally, we compare and contrast various algorithms in the DLR
hierarchy on spin-glass problems. The experiments show that algorithms higher
up in the hierarchy give more accurate results when they converge but tend to
be less stable.
| [
"['Michal Rosen-Zvi' 'Michael I. Jordan' 'Alan Yuille']",
"Michal Rosen-Zvi, Michael I. Jordan, Alan Yuille"
] |
cs.LG stat.ML | null | 1207.1421 | null | null | http://arxiv.org/pdf/1207.1421v1 | 2012-07-04T16:28:10Z | 2012-07-04T16:28:10Z | A Function Approximation Approach to Estimation of Policy Gradient for
POMDP with Structured Policies | We consider the estimation of the policy gradient in partially observable
Markov decision processes (POMDP) with a special class of structured policies
that are finite-state controllers. We show that the gradient estimation can be
done in the Actor-Critic framework, by making the critic compute a "value"
function that does not depend on the states of POMDP. This function is the
conditional mean of the true value function that depends on the states. We show
that the critic can be implemented using temporal difference (TD) methods with
linear function approximations, and the analytical results on TD and
Actor-Critic can be transferred to this case. Although Actor-Critic algorithms
have been used extensively in Markov decision processes (MDP), up to now they
have not been proposed for POMDP as an alternative to the earlier proposal
GPOMDP algorithm, an actor-only method. Furthermore, we show that the same idea
applies to semi-Markov problems with a subset of finite-state controllers.
| [
"Huizhen Yu",
"['Huizhen Yu']"
] |
cs.LG cs.DB stat.ML | null | 1207.1423 | null | null | http://arxiv.org/pdf/1207.1423v1 | 2012-07-04T16:28:40Z | 2012-07-04T16:28:40Z | Mining Associated Text and Images with Dual-Wing Harmoniums | We propose a multi-wing harmonium model for mining multimedia data that
extends and improves on earlier models based on two-layer random fields, which
capture bidirectional dependencies between hidden topic aspects and observed
inputs. This model can be viewed as an undirected counterpart of the two-layer
directed models such as LDA for similar tasks, but bears significant differences
in inference/learning cost tradeoffs, latent topic representations, and topic
mixing mechanisms. In particular, our model facilitates efficient inference and
robust topic mixing, and potentially provides high flexibilities in modeling
the latent topic spaces. A contrastive divergence and a variational algorithm
are derived for learning. We specialize our model to a dual-wing harmonium for
captioned images, incorporating a multivariate Poisson for word-counts and a
multivariate Gaussian for color histogram. We present empirical results on the
applications of this model to classification, retrieval and image annotation on
news video collections, and we report an extensive comparison with various
extant models.
| [
"['Eric P. Xing' 'Rong Yan' 'Alexander G. Hauptmann']",
"Eric P. Xing, Rong Yan, Alexander G. Hauptmann"
] |
cs.LG cs.AI stat.ML | null | 1207.1429 | null | null | http://arxiv.org/pdf/1207.1429v1 | 2012-07-04T16:31:04Z | 2012-07-04T16:31:04Z | Ordering-Based Search: A Simple and Effective Algorithm for Learning
Bayesian Networks | One of the basic tasks for Bayesian networks (BNs) is that of learning a
network structure from data. The BN-learning problem is NP-hard, so the
standard solution is heuristic search. Many approaches have been proposed for
this task, but only a very small number outperform the baseline of greedy
hill-climbing with tabu lists; moreover, many of the proposed algorithms are
quite complex and hard to implement. In this paper, we propose a very simple
and easy-to-implement method for addressing this task. Our approach is based on
the well-known fact that the best network (of bounded in-degree) consistent
with a given node ordering can be found very efficiently. We therefore propose
a search not over the space of structures, but over the space of orderings,
selecting for each ordering the best network consistent with it. This search
space is much smaller, makes more global search steps, has a lower branching
factor, and avoids costly acyclicity checks. We present results for this
algorithm on both synthetic and real data sets, evaluating both the score of
the network found and the running time. We show that ordering-based search
outperforms the standard baseline, and is competitive with recent algorithms
that are much harder to implement.
| [
"['Marc Teyssier' 'Daphne Koller']",
"Marc Teyssier, Daphne Koller"
] |
quant-ph cs.LG | 10.1088/1367-2630/14/10/103013 | 1207.1655 | null | null | http://arxiv.org/abs/1207.1655v2 | 2012-09-18T02:07:11Z | 2012-07-06T15:17:55Z | Robust Online Hamiltonian Learning | In this work we combine two distinct machine learning methodologies,
sequential Monte Carlo and Bayesian experimental design, and apply them to the
problem of inferring the dynamical parameters of a quantum system. We design
the algorithm with practicality in mind by including parameters that control
trade-offs between the requirements on computational and experimental
resources. The algorithm can be implemented online (during experimental data
collection), avoiding the need for storage and post-processing. Most
importantly, our algorithm is capable of learning Hamiltonian parameters even
when the parameters change from experiment to experiment, and also when
additional noise processes are present and unknown. The algorithm also
numerically estimates the Cramer-Rao lower bound, certifying its own
performance.
| [
"['Christopher E. Granade' 'Christopher Ferrie' 'Nathan Wiebe' 'D. G. Cory']",
"Christopher E. Granade, Christopher Ferrie, Nathan Wiebe, D. G. Cory"
] |
stat.ML cs.LG stat.AP | null | 1207.1965 | null | null | http://arxiv.org/pdf/1207.1965v1 | 2012-07-09T06:47:39Z | 2012-07-09T06:47:39Z | Forecasting electricity consumption by aggregating specialized experts | We consider the setting of sequential prediction of arbitrary sequences based
on specialized experts. We first provide a review of the relevant literature
and present two theoretical contributions: a general analysis of the specialist
aggregation rule of Freund et al. (1997) and an adaptation of fixed-share rules
of Herbster and Warmuth (1998) in this setting. We then apply these rules to
the sequential short-term (one-day-ahead) forecasting of electricity
consumption; to do so, we consider two data sets, a Slovakian one and a French
one, respectively concerned with hourly and half-hourly predictions. We follow
a general methodology to perform the stated empirical studies and detail in
particular tuning issues of the learning parameters. The introduced aggregation
rules demonstrate an improved accuracy on the data sets at hand; the
improvements lie in a reduced mean squared error but also in a more robust
behavior with respect to large occasional errors.
| [
"Marie Devaine (DMA), Pierre Gaillard (DMA, INRIA Paris -\n Rocquencourt), Yannig Goude, Gilles Stoltz (DMA, INRIA Paris - Rocquencourt,\n GREGH)",
"['Marie Devaine' 'Pierre Gaillard' 'Yannig Goude' 'Gilles Stoltz']"
] |
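As a reference point for the aggregation rules analyzed above, the basic exponentially weighted average forecaster takes only a few lines; the toy signal and learning rate are assumptions, and the paper's specialist and fixed-share refinements are not shown.

```python
# Minimal exponentially weighted average forecaster over K experts.
import numpy as np

rng = np.random.default_rng(5)
T, K, eta = 200, 3, 0.5
truth = np.sin(np.arange(T) / 10.0)                      # series to forecast
experts = truth[:, None] + rng.normal(0.0, [0.1, 0.5, 1.0], (T, K))

w = np.ones(K) / K
total_loss = 0.0
for t in range(T):
    forecast = w @ experts[t]                            # aggregated prediction
    total_loss += (forecast - truth[t]) ** 2
    w = w * np.exp(-eta * (experts[t] - truth[t]) ** 2)  # reweight experts
    w /= w.sum()
print(total_loss / T, w)  # most weight ends up on the most accurate expert
```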
stat.ML cs.LG stat.ME | null | 1207.1977 | null | null | http://arxiv.org/pdf/1207.1977v1 | 2012-07-09T08:05:44Z | 2012-07-09T08:05:44Z | Estimating a Causal Order among Groups of Variables in Linear Models | The machine learning community has recently devoted much attention to the
problem of inferring causal relationships from statistical data. Most of this
work has focused on uncovering connections among scalar random variables. We
generalize existing methods to apply to collections of multi-dimensional random
vectors, focusing on techniques applicable to linear models. The performance of
the resulting algorithms is evaluated and compared in simulations, which show
that our methods can, in many cases, provide useful information on causal
relationships even for relatively small sample sizes.
| [
"Doris Entner, Patrik O. Hoyer",
"['Doris Entner' 'Patrik O. Hoyer']"
] |
null | null | 1207.2328 | null | null | http://arxiv.org/abs/1207.2328v2 | 2012-07-13T09:41:10Z | 2012-07-10T12:22:21Z | Comparative Study for Inference of Hidden Classes in Stochastic Block
Models | Inference of hidden classes in the stochastic block model is a classical problem with important applications. Most commonly used methods for this problem involve naïve mean field approaches or heuristic spectral methods. Recently, belief propagation was proposed for this problem. In this contribution we perform a comparative study between the three methods on synthetically created networks. We show that belief propagation gives much better performance when compared to naïve mean field and spectral approaches. This applies to accuracy, computational efficiency and the tendency to overfit the data. | [
"['Pan Zhang' 'Florent Krzakala' 'Jörg Reichardt' 'Lenka Zdeborová']"
] |
cs.SI cs.LG math.ST physics.soc-ph stat.ML stat.TH | 10.1214/13-AOS1138 | 1207.2340 | null | null | http://arxiv.org/abs/1207.2340v3 | 2013-11-05T15:49:54Z | 2012-07-10T13:28:32Z | Pseudo-likelihood methods for community detection in large sparse
networks | Many algorithms have been proposed for fitting network models with
communities, but most of them do not scale well to large networks, and often
fail on sparse networks. Here we propose a new fast pseudo-likelihood method
for fitting the stochastic block model for networks, as well as a variant that
allows for an arbitrary degree distribution by conditioning on degrees. We show
that the algorithms perform well under a range of settings, including on very
sparse networks, and illustrate on the example of a network of political blogs.
We also propose spectral clustering with perturbations, a method of independent
interest, which works well on sparse networks where regular spectral clustering
fails, and use it to provide an initial value for pseudo-likelihood. We prove
that pseudo-likelihood provides consistent estimates of the communities under a
mild condition on the starting value, for the case of a block model with two
communities.
| [
"['Arash A. Amini' 'Aiyou Chen' 'Peter J. Bickel' 'Elizaveta Levina']",
"Arash A. Amini, Aiyou Chen, Peter J. Bickel, Elizaveta Levina"
] |
cs.CV cs.LG | 10.1109/TSP.2013.2274276 | 1207.2488 | null | null | http://arxiv.org/abs/1207.2488v4 | 2013-11-26T17:29:04Z | 2012-07-10T20:52:46Z | Kernelized Supervised Dictionary Learning | In this paper, we propose supervised dictionary learning (SDL) by
incorporating information on class labels into the learning of the dictionary.
To this end, we propose to learn the dictionary in a space where the dependency
between the signals and their corresponding labels is maximized. To maximize
this dependency, the recently introduced Hilbert Schmidt independence criterion
(HSIC) is used. One of the main advantages of this novel approach for SDL is
that it can be easily kernelized by incorporating a kernel, particularly a
data-derived kernel such as normalized compression distance, into the
formulation. The learned dictionary is compact and the proposed approach is
fast. We show that it outperforms other unsupervised and supervised dictionary
learning approaches in the literature, using real-world data.
| [
"Mehrdad J. Gangeh, Ali Ghodsi, Mohamed S. Kamel",
"['Mehrdad J. Gangeh' 'Ali Ghodsi' 'Mohamed S. Kamel']"
] |
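The dependence measure at the heart of the approach above, the empirical HSIC, is short to write down; a NumPy sketch with assumed RBF and linear label kernels follows. This is the standard biased estimator, not the authors' dictionary-learning code.

```python
# NumPy sketch of the biased empirical HSIC estimator; kernel choices here
# are illustrative assumptions.
import numpy as np

def rbf_kernel(A, gamma=1.0):
    sq = ((A[:, None, :] - A[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def hsic(K, L):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(6)
X = rng.standard_normal((100, 5))
y = (X[:, :1] > 0).astype(float)          # labels depend on X
print(hsic(rbf_kernel(X), y @ y.T))       # dependent case: clearly positive

y_shuf = y[rng.permutation(100)]          # break the dependence
print(hsic(rbf_kernel(X), y_shuf @ y_shuf.T))  # near zero under independence
```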
cs.LG cs.RO stat.ML | null | 1207.2491 | null | null | http://arxiv.org/pdf/1207.2491v1 | 2012-07-10T21:19:33Z | 2012-07-10T21:19:33Z | A Spectral Learning Approach to Range-Only SLAM | We present a novel spectral learning algorithm for simultaneous localization
and mapping (SLAM) from range data with known correspondences. This algorithm
is an instance of a general spectral system identification framework, from
which it inherits several desirable properties, including statistical
consistency and no local optima. Compared with popular batch optimization or
multiple-hypothesis tracking (MHT) methods for range-only SLAM, our spectral
approach offers guaranteed low computational requirements and good tracking
performance. Compared with popular extended Kalman filter (EKF) or extended
information filter (EIF) approaches, and many MHT ones, our approach does not
need to linearize a transition or measurement model; such linearizations can
cause severe errors in EKFs and EIFs, and to a lesser extent MHT, particularly
for the highly non-Gaussian posteriors encountered in range-only SLAM. We
provide a theoretical analysis of our method, including finite-sample error
bounds. Finally, we demonstrate on a real-world robotic SLAM problem that our
algorithm is not only theoretically justified, but works well in practice: in a
comparison of multiple methods, the lowest errors come from a combination of
our algorithm with batch optimization, but our method alone produces nearly as
good a result at far lower computational cost.
| [
"['Byron Boots' 'Geoffrey J. Gordon']",
"Byron Boots and Geoffrey J. Gordon"
] |
stat.ML cs.CR cs.LG | null | 1207.2812 | null | null | http://arxiv.org/pdf/1207.2812v3 | 2013-08-07T21:48:35Z | 2012-07-12T00:05:02Z | Near-Optimal Algorithms for Differentially-Private Principal Components | Principal components analysis (PCA) is a standard tool for identifying good
low-dimensional approximations to data in high dimension. Many data sets of
interest contain private or sensitive information about individuals. Algorithms
which operate on such data should be sensitive to the privacy risks in
publishing their outputs. Differential privacy is a framework for developing
tradeoffs between privacy and the utility of these outputs. In this paper we
investigate the theory and empirical performance of differentially private
approximations to PCA and propose a new method which explicitly optimizes the
utility of the output. We show that the sample complexity of the proposed
method differs from the existing procedure in the scaling with the data
dimension, and that our method is nearly optimal in terms of this scaling. We
furthermore illustrate our results, showing that on real data there is a large
performance gap between the existing method and our method.
| [
"['Kamalika Chaudhuri' 'Anand D. Sarwate' 'Kaushik Sinha']",
"Kamalika Chaudhuri, Anand D. Sarwate, Kaushik Sinha"
] |
stat.ML cs.LG cs.SY | null | 1207.2940 | null | null | http://arxiv.org/pdf/1207.2940v5 | 2016-08-17T13:23:57Z | 2012-07-12T12:37:57Z | Expectation Propagation in Gaussian Process Dynamical Systems: Extended
Version | Rich and complex time-series data, such as those generated from engineering
systems, financial markets, videos or neural recordings, are now a common
feature of modern data analysis. Explaining the phenomena underlying these
diverse data sets requires flexible and accurate models. In this paper, we
promote Gaussian process dynamical systems (GPDS) as a rich model class that is
appropriate for such analysis. In particular, we present a message passing
algorithm for approximate inference in GPDSs based on expectation propagation.
By posing inference as a general message passing problem, we iterate
forward-backward smoothing. Thus, we obtain more accurate posterior
distributions over latent structures, resulting in improved predictive
performance compared to state-of-the-art GPDS smoothers, which are special
cases of our general message passing algorithm. Hence, we provide a unifying
approach within which to contextualize message passing in GPDSs.
| [
"Marc Peter Deisenroth and Shakir Mohamed",
"['Marc Peter Deisenroth' 'Shakir Mohamed']"
] |
cs.LG stat.ML | null | 1207.3012 | null | null | http://arxiv.org/pdf/1207.3012v2 | 2013-02-08T00:08:51Z | 2012-07-12T16:33:49Z | Optimal rates for first-order stochastic convex optimization under
Tsybakov noise condition | We focus on the problem of minimizing a convex function $f$ over a convex set
$S$ given $T$ queries to a stochastic first order oracle. We argue that the
complexity of convex minimization is only determined by the rate of growth of
the function around its minimizer $x^*_{f,S}$, as quantified by a Tsybakov-like
noise condition. Specifically, we prove that if $f$ grows at least as fast as
$\|x-x^*_{f,S}\|^\kappa$ around its minimum, for some $\kappa > 1$, then the
optimal rate of learning $f(x^*_{f,S})$ is
$\Theta(T^{-\frac{\kappa}{2\kappa-2}})$. The classic rate $\Theta(1/\sqrt T)$
for convex functions and $\Theta(1/T)$ for strongly convex functions are
special cases of our result for $\kappa \rightarrow \infty$ and $\kappa=2$, and
even faster rates are attained for $\kappa <2$. We also derive tight bounds for
the complexity of learning $x_{f,S}^*$, where the optimal rate is
$\Theta(T^{-\frac{1}{2\kappa-2}})$. Interestingly, these precise rates for
convex optimization also characterize the complexity of active learning and our
results further strengthen the connections between the two fields, both of
which rely on feedback-driven queries.
| [
"['Aaditya Ramdas' 'Aarti Singh']",
"Aaditya Ramdas and Aarti Singh"
] |
cs.DC cs.LG stat.ML | null | 1207.3031 | null | null | http://arxiv.org/pdf/1207.3031v2 | 2012-07-20T03:08:51Z | 2012-07-12T17:38:46Z | Distributed Strongly Convex Optimization | A lot of effort has been invested into characterizing the convergence rates
of gradient based algorithms for non-linear convex optimization. Recently,
motivated by large datasets and problems in machine learning, the interest has
shifted towards distributed optimization. In this work we present a distributed
algorithm for strongly convex constrained optimization. Each node in a network
of n computers converges to the optimum of a strongly convex, L-Lipschitz
continuous, separable objective at a rate O(log (sqrt(n) T) / T) where T is the
number of iterations. This rate is achieved in the online setting where the
data is revealed one at a time to the nodes, and in the batch setting where
each node has access to its full local dataset from the start. The same
convergence rate is achieved in expectation when the subgradients used at each
node are corrupted with additive zero-mean noise.
| [
"['Konstantinos I. Tsianos' 'Michael G. Rabbat']",
"Konstantinos I. Tsianos and Michael G. Rabbat"
] |
cs.CV cs.LG | null | 1207.3071 | null | null | http://arxiv.org/pdf/1207.3071v2 | 2013-11-26T17:32:56Z | 2012-07-12T19:37:13Z | Supervised Texture Classification Using a Novel Compression-Based
Similarity Measure | Supervised pixel-based texture classification is usually performed in the
feature space. We propose to perform this task in (dis)similarity space by
introducing a new compression-based (dis)similarity measure. The proposed
measure utilizes a two-dimensional MPEG-1 encoder, which takes into consideration
the spatial locality and connectivity of pixels in the images. The proposed
formulation has been carefully designed based on MPEG encoder functionality. To
this end, by design, it solely uses P-frame coding to find the (dis)similarity
among patches/images. We show that the proposed measure works properly on both
small and large patch sizes. Experimental results show that the proposed
approach significantly improves the performance of supervised pixel-based
texture classification on Brodatz and outdoor images compared to other
compression-based dissimilarity measures as well as approaches performed in
feature space. It also improves the computation speed by about 40% compared to
its rivals.
| [
"Mehrdad J. Gangeh, Ali Ghodsi, and Mohamed S. Kamel",
"['Mehrdad J. Gangeh' 'Ali Ghodsi' 'Mohamed S. Kamel']"
] |
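A well-known relative of the proposed measure is the normalized compression distance; a minimal zlib-based sketch follows. It is a generic stand-in using a general-purpose compressor, not the MPEG-1, P-frame-based measure the paper designs.

```python
# Normalized compression distance (NCD) with zlib: a generic
# compression-based dissimilarity between byte strings.
import zlib

def ncd(x: bytes, y: bytes) -> float:
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"stripe stripe stripe stripe stripe"
b2 = b"stripe stripe stripe stripe spots!"
c = b"completely different texture bytes 0123456789"
print(ncd(a, b2), ncd(a, c))  # the similar pair scores lower
```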
cs.CV cs.LG eess.IV q-bio.CB stat.ML | null | 1207.3127 | null | null | http://arxiv.org/pdf/1207.3127v1 | 2012-07-13T01:22:04Z | 2012-07-13T01:22:04Z | Tracking Tetrahymena Pyriformis Cells using Decision Trees | Matching cells over time has long been the most difficult step in cell
tracking. In this paper, we approach this problem by recasting it as a
classification problem. We construct a feature set for each cell, and compute a
feature difference vector between a cell in the current frame and a cell in a
previous frame. Then we determine whether the two cells represent the same cell
over time by training decision trees as our binary classifiers. With the output
of decision trees, we are able to formulate an assignment problem for our cell
association task and solve it using a modified version of the Hungarian
algorithm.
| [
"['Quan Wang' 'Yan Ou' 'A. Agung Julius' 'Kim L. Boyer' 'Min Jun Kim']",
"Quan Wang, Yan Ou, A. Agung Julius, Kim L. Boyer, Min Jun Kim"
] |
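The final association step described above maps directly onto scipy's Hungarian solver; a toy sketch follows, using plain Euclidean distances as costs where the paper instead derives costs from decision trees over feature differences.

```python
# Frame-to-frame cell association as an assignment problem (Hungarian method).
import numpy as np
from scipy.optimize import linear_sum_assignment

prev = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 1.0]])  # cells in frame t-1
curr = np.array([[5.2, 4.9], [0.3, 0.1], [8.8, 1.2]])  # cells in frame t

# Pairwise distances as a toy cost matrix (the paper uses classifier outputs).
cost = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=-1)
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)))  # [(0, 1), (1, 0), (2, 2)] for this toy data
```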
cs.LG cs.IT math.IT | null | 1207.3269 | null | null | http://arxiv.org/pdf/1207.3269v2 | 2014-10-27T22:16:18Z | 2012-07-13T14:56:38Z | The Price of Privacy in Untrusted Recommendation Engines | Recent increase in online privacy concerns prompts the following question:
can a recommender system be accurate if users do not entrust it with their
private data? To answer this, we study the problem of learning item-clusters
under local differential privacy, a powerful, formal notion of data privacy. We
develop bounds on the sample-complexity of learning item-clusters from
privatized user inputs. Significantly, our results identify a sample-complexity
separation between learning in an information-rich and an information-scarce
regime, thereby highlighting the interaction between privacy and the amount of
information (ratings) available to each user.
In the information-rich regime, where each user rates at least a constant
fraction of items, a spectral clustering approach is shown to achieve a
sample-complexity lower bound derived from a simple information-theoretic
argument based on Fano's inequality. However, the information-scarce regime,
where each user rates only a vanishing fraction of items, is found to require a
fundamentally different approach both for lower bounds and algorithms. To this
end, we develop new techniques for bounding mutual information under a notion
of channel-mismatch, and also propose a new algorithm, MaxSense, and show that
it achieves optimal sample-complexity in this setting.
The techniques we develop for bounding mutual information may be of broader
interest. To illustrate this, we show their applicability to $(i)$ learning
based on 1-bit sketches, and $(ii)$ adaptive learning, where queries can be
adapted based on answers to past queries.
| [
"Siddhartha Banerjee, Nidhi Hegde and Laurent Massouli\\'e",
"['Siddhartha Banerjee' 'Nidhi Hegde' 'Laurent Massoulié']"
] |
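For readers new to local differential privacy, the classic randomized-response mechanism conveys the setting above in a few lines; this is a toy sketch for one-bit ratings, not the paper's MaxSense algorithm.

```python
# Randomized response: a minimal local-differential-privacy mechanism.
import numpy as np

def randomized_response(bit: int, epsilon: float, rng) -> int:
    # Report truthfully with probability e^eps / (e^eps + 1), else flip.
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

rng = np.random.default_rng(7)
true_bits = rng.integers(0, 2, 10000)
eps = 1.0
noisy = np.array([randomized_response(int(b), eps, rng) for b in true_bits])

# Unbiased estimate of the population mean from privatized reports.
p = np.exp(eps) / (np.exp(eps) + 1.0)
estimate = (noisy.mean() - (1 - p)) / (2 * p - 1)
print(true_bits.mean(), estimate)  # estimate recovers the true mean
```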
cs.CV cs.LG | null | 1207.3389 | null | null | http://arxiv.org/pdf/1207.3389v2 | 2012-07-18T07:36:12Z | 2012-07-14T04:44:17Z | Incremental Learning of 3D-DCT Compact Representations for Robust Visual
Tracking | Visual tracking usually requires an object appearance model that is robust to
changing illumination, pose and other factors encountered in video. In this
paper, we construct an appearance model using the 3D discrete cosine transform
(3D-DCT). The 3D-DCT is based on a set of cosine basis functions, which are
determined by the dimensions of the 3D signal and thus independent of the input
video data. In addition, the 3D-DCT can generate a compact energy spectrum
whose high-frequency coefficients are sparse if the appearance samples are
similar. By discarding these high-frequency coefficients, we simultaneously
obtain a compact 3D-DCT based object representation and a signal
reconstruction-based similarity measure (reflecting the information loss from
signal reconstruction). To efficiently update the object representation, we
propose an incremental 3D-DCT algorithm, which decomposes the 3D-DCT into
successive operations of the 2D discrete cosine transform (2D-DCT) and 1D
discrete cosine transform (1D-DCT) on the input video data.
| [
"Xi Li and Anthony Dick and Chunhua Shen and Anton van den Hengel and\n Hanzi Wang",
"['Xi Li' 'Anthony Dick' 'Chunhua Shen' 'Anton van den Hengel' 'Hanzi Wang']"
] |
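The energy-compaction property the appearance model above relies on is easy to demonstrate; a sketch with an invented stack of similar frames follows (the paper's incremental 2D-DCT/1D-DCT decomposition is not shown).

```python
# 3-D DCT energy compaction on a stack of similar frames: keeping only a
# low-frequency block still reconstructs the cube accurately.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(8)
t = np.linspace(0, np.pi, 16)
base = np.outer(np.sin(t), np.cos(t))  # smooth, low-frequency pattern
cube = np.stack([base + 0.01 * rng.standard_normal((16, 16))
                 for _ in range(8)])   # 8 similar "frames"

coeffs = dctn(cube, norm="ortho")
mask = np.zeros_like(coeffs)
mask[:2, :8, :8] = 1.0                 # keep the low-frequency block only
recon = idctn(coeffs * mask, norm="ortho")
print(np.abs(cube - recon).mean())     # error stays near the noise level
```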
cs.LG cs.CV | null | 1207.3394 | null | null | http://arxiv.org/pdf/1207.3394v1 | 2012-07-14T06:13:48Z | 2012-07-14T06:13:48Z | Dimension Reduction by Mutual Information Feature Extraction | During the past decades, to study high-dimensional data in a large variety of
problems, researchers have proposed many Feature Extraction algorithms. One of
the most effective approaches for optimal feature extraction is based on mutual
information (MI). However it is not always easy to get an accurate estimation
for high dimensional MI. In terms of MI, the optimal feature extraction is
creating a feature set from the data which jointly have the largest dependency
on the target class and minimum redundancy. In this paper, a
component-by-component gradient ascent method is proposed for feature
extraction which is based on one-dimensional MI estimates. We will refer to
this algorithm as Mutual Information Feature Extraction (MIFX). The performance
of this proposed method is evaluated using UCI databases. The results indicate
that MIFX provides robust performance across different data sets, with results
that are almost always the best or comparable to the best.
| [
"Ali Shadvar",
"['Ali Shadvar']"
] |
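One-dimensional MI estimates of the kind MIFX builds on are available off the shelf; a minimal sketch on the classic Iris data using scikit-learn's estimator follows (not the authors' MIFX implementation).

```python
# Per-feature mutual information with the class label on Iris data.
from sklearn.datasets import load_iris
from sklearn.feature_selection import mutual_info_classif

X, y = load_iris(return_X_y=True)
scores = mutual_info_classif(X, y, random_state=0)
names = ["sepal len", "sepal wid", "petal len", "petal wid"]
for name, s in zip(names, scores):
    print(f"{name}: {s:.3f}")  # petal features carry the most information
```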
stat.ML cs.LG cs.NA | null | 1207.3438 | null | null | http://arxiv.org/pdf/1207.3438v1 | 2012-07-14T16:19:40Z | 2012-07-14T16:19:40Z | MahNMF: Manhattan Non-negative Matrix Factorization | Non-negative matrix factorization (NMF) approximates a non-negative matrix
$X$ by a product of two non-negative low-rank factor matrices $W$ and $H$. NMF
and its extensions minimize either the Kullback-Leibler divergence or the
Euclidean distance between $X$ and $W^T H$ to model the Poisson noise or the
Gaussian noise. In practice, when the noise distribution is heavy-tailed, they
cannot perform well. This paper presents Manhattan NMF (MahNMF), which minimizes
the Manhattan distance between $X$ and $W^T H$ for modeling the heavy-tailed
Laplacian noise. Similar to sparse and low-rank matrix decompositions, MahNMF
robustly estimates the low-rank part and the sparse part of a non-negative
matrix and thus performs effectively when data are contaminated by outliers. We
extend MahNMF for various practical applications by developing box-constrained
MahNMF, manifold regularized MahNMF, group sparse MahNMF, elastic net inducing
MahNMF, and symmetric MahNMF. The major contribution of this paper lies in two
fast optimization algorithms for MahNMF and its extensions: the rank-one
residual iteration (RRI) method and Nesterov's smoothing method. In particular,
by approximating the residual matrix by the outer product of one row of W and
one row of $H$ in MahNMF, we develop an RRI method to iteratively update each
variable of $W$ and $H$ in a closed form solution. Although RRI is efficient
for small scale MahNMF and some of its extensions, it is neither scalable to
large scale matrices nor flexible enough to optimize all MahNMF extensions.
Since the objective functions of MahNMF and its extensions are neither convex
nor smooth, we apply Nesterov's smoothing method to recursively optimize one
factor matrix with another matrix fixed. By setting the smoothing parameter
inversely proportional to the iteration number, we improve the approximation
accuracy iteratively for both MahNMF and its extensions.
| [
"Naiyang Guan, Dacheng Tao, Zhigang Luo, John Shawe-Taylor",
"['Naiyang Guan' 'Dacheng Tao' 'Zhigang Luo' 'John Shawe-Taylor']"
] |
cs.LG stat.ML | null | 1207.3520 | null | null | http://arxiv.org/pdf/1207.3520v1 | 2012-07-15T15:06:35Z | 2012-07-15T15:06:35Z | Improved brain pattern recovery through ranking approaches | Inferring the functional specificity of brain regions from functional
Magnetic Resonance Images (fMRI) data is a challenging statistical problem.
While the General Linear Model (GLM) remains the standard approach for brain
mapping, supervised learning techniques (a.k.a. decoding) have proven to be
useful to capture multivariate statistical effects distributed across voxels
and brain regions. Up to now, much effort has been made to improve decoding by
incorporating prior knowledge in the form of a particular regularization term.
In this paper we demonstrate that further improvement can be made by accounting
for non-linearities using a ranking approach rather than the commonly used
least-square regression. Through simulation, we compare the recovery properties
of our approach to linear models commonly used in fMRI based decoding. We
demonstrate the superiority of ranking with a real fMRI dataset.
| [
"Fabian Pedregosa (INRIA Paris - Rocquencourt), Alexandre Gramfort\n (LNAO, INRIA Saclay - Ile de France), Ga\\\"el Varoquaux (LNAO, INRIA Saclay -\n Ile de France), Bertrand Thirion (INRIA Saclay - Ile de France), Christophe\n Pallier (NEUROSPIN), Elodie Cauvet (NEUROSPIN)",
"['Fabian Pedregosa' 'Alexandre Gramfort' 'Gaël Varoquaux'\n 'Bertrand Thirion' 'Christophe Pallier' 'Elodie Cauvet']"
] |
cs.NI cs.AI cs.LG | 10.1109/IB2Com.2011.6217894 | 1207.3560 | null | null | http://arxiv.org/abs/1207.3560v1 | 2012-07-16T01:08:39Z | 2012-07-16T01:08:39Z | Diagnosing client faults using SVM-based intelligent inference from TCP
packet traces | We present the Intelligent Automated Client Diagnostic (IACD) system, which
only relies on inference from Transmission Control Protocol (TCP) packet traces
for rapid diagnosis of client device problems that cause network performance
issues. Using soft-margin Support Vector Machine (SVM) classifiers, the system
(i) distinguishes link problems from client problems, and (ii) identifies
characteristics unique to client faults to report the root cause of the client
device problem. Experimental evaluation demonstrated the capability of the IACD
system to distinguish between faulty and healthy links and to diagnose the
client faults with 98% accuracy in healthy links. The system can perform fault
diagnosis independent of the client's specific TCP implementation, enabling
diagnosis capability on diverse range of client computers.
| [
"['Chathuranga Widanapathirana' 'Y. Ahmet Sekercioglu'\n 'Paul G. Fitzpatrick' 'Milosh V. Ivanovich' 'Jonathan C. Li']",
"Chathuranga Widanapathirana, Y. Ahmet Sekercioglu, Paul G.\n Fitzpatrick, Milosh V. Ivanovich, Jonathan C. Li"
] |
cs.LG cs.CV | null | 1207.3598 | null | null | http://arxiv.org/pdf/1207.3598v2 | 2012-09-30T17:04:22Z | 2012-07-16T08:22:36Z | Learning to rank from medical imaging data | Medical images can be used to predict a clinical score coding for the
severity of a disease, a pain level or the complexity of a cognitive task. In
all these cases, the predicted variable has a natural order. While a standard
classifier discards this information, we would like to take it into account in
order to improve prediction performance. A standard linear regression does
model such information; however, the linearity assumption is likely not to be
satisfied when predicting from pixel intensities in an image. In this paper we
address these modeling challenges with a supervised learning procedure where
the model aims to order or rank images. We use a linear model for its
robustness in high dimension and its possible interpretation. We show on
simulations and two fMRI datasets that this approach is able to predict the
correct ordering on pairs of images, yielding higher prediction accuracy than
standard regression and multiclass classification techniques.
| [
"Fabian Pedregosa (INRIA Paris - Rocquencourt, INRIA Saclay - Ile de\n France), Alexandre Gramfort (INRIA Saclay - Ile de France, LNAO), Ga\\\"el\n Varoquaux (INRIA Saclay - Ile de France, LNAO), Elodie Cauvet (NEUROSPIN),\n Christophe Pallier (NEUROSPIN), Bertrand Thirion (INRIA Saclay - Ile de\n France)",
"['Fabian Pedregosa' 'Alexandre Gramfort' 'Gaël Varoquaux' 'Elodie Cauvet'\n 'Christophe Pallier' 'Bertrand Thirion']"
] |
cs.CV cs.LG | 10.1109/IVCNZ.2009.5378367 | 1207.3607 | null | null | http://arxiv.org/abs/1207.3607v1 | 2012-07-16T09:23:06Z | 2012-07-16T09:23:06Z | Fusing image representations for classification using support vector
machines | In order to improve classification accuracy different image representations
are usually combined. This can be done by using two different fusing schemes.
In feature level fusion schemes, image representations are combined before the
classification process. In classifier fusion, the decisions taken separately
based on individual representations are fused to make a decision. In this paper
the main methods derived for both strategies are evaluated. Our experimental
results show that classifier fusion performs better. Specifically Bayes belief
integration is the best performing strategy for image classification task.
| [
"Can Demirkesen (BIT Lab, LJK), Hocine Cherifi (BIT Lab, Le2i)",
"['Can Demirkesen' 'Hocine Cherifi']"
] |
cs.NE cs.AI cs.LG nlin.AO | null | 1207.3760 | null | null | http://arxiv.org/pdf/1207.3760v1 | 2012-07-16T18:41:32Z | 2012-07-16T18:41:32Z | Towards a Self-Organized Agent-Based Simulation Model for Exploration of
Human Synaptic Connections | In this paper, the early design of our self-organized agent-based simulation
model for exploration of synaptic connections that faithfully generates what is
observed in natural situation is given. While we take inspiration from
neuroscience, our intent is not to create a veridical model of processes in
neurodevelopmental biology, nor to represent a real biological system. Instead,
our goal is to design a simulation model that learns to act in the same way as
the human nervous system, using findings from reflex studies on human subjects
in order to estimate unknown connections.
| [
"['Önder Gürcan' 'Carole Bernon' 'Kemal S. Türker']",
"\\\"Onder G\\\"urcan, Carole Bernon, Kemal S. T\\\"urker"
] |
math.ST cs.LG stat.ML stat.TH | 10.1214/19-EJS1635 | 1207.3772 | null | null | http://arxiv.org/abs/1207.3772v4 | 2019-11-13T17:30:55Z | 2012-07-16T19:26:24Z | Surrogate Losses in Passive and Active Learning | Active learning is a type of sequential design for supervised machine
learning, in which the learning algorithm sequentially requests the labels of
selected instances from a large pool of unlabeled data points. The objective is
to produce a classifier of relatively low risk, as measured under the 0-1 loss,
ideally using fewer label requests than the number of random labeled data
points sufficient to achieve the same. This work investigates the potential
uses of surrogate loss functions in the context of active learning.
Specifically, it presents an active learning algorithm based on an arbitrary
classification-calibrated surrogate loss function, along with an analysis of
the number of label requests sufficient for the classifier returned by the
algorithm to achieve a given risk under the 0-1 loss. Interestingly, these
results cannot be obtained by simply optimizing the surrogate risk via active
learning to an extent sufficient to provide a guarantee on the 0-1 loss, as is
common practice in the analysis of surrogate losses for passive learning. Some
of the results have additional implications for the use of surrogate losses in
passive learning.
| [
"Steve Hanneke and Liu Yang",
"['Steve Hanneke' 'Liu Yang']"
] |
cs.LG | null | 1207.3790 | null | null | http://arxiv.org/pdf/1207.3790v1 | 2012-07-16T08:49:34Z | 2012-07-16T08:49:34Z | Accuracy Measures for the Comparison of Classifiers | The selection of the best classification algorithm for a given dataset is a
very widespread problem. It is also a complex one, in the sense that it requires
making several important methodological choices. Among them, in this work we
focus on the measure used to assess the classification performance and rank the
algorithms. We present the most popular measures and discuss their properties.
Despite the numerous measures proposed over the years, many of them turn out to
be equivalent in this specific case, to have interpretation problems, or to be
unsuitable for our purpose. Consequently, classic overall success rate or
marginal rates should be preferred for this specific task.
| [
"['Vincent Labatut' 'Hocine Cherifi']",
"Vincent Labatut (BIT Lab), Hocine Cherifi (Le2i)"
] |
cs.IT cs.LG math.IT | null | 1207.3859 | null | null | http://arxiv.org/pdf/1207.3859v3 | 2012-12-01T23:30:36Z | 2012-07-17T01:50:46Z | Approximate Message Passing with Consistent Parameter Estimation and
Applications to Sparse Learning | We consider the estimation of an i.i.d. (possibly non-Gaussian) vector $\xbf
\in \R^n$ from measurements $\ybf \in \R^m$ obtained by a general cascade model
consisting of a known linear transform followed by a probabilistic
componentwise (possibly nonlinear) measurement channel. A novel method, called
adaptive generalized approximate message passing (Adaptive GAMP), that enables
joint learning of the statistics of the prior and measurement channel along
with estimation of the unknown vector $\mathbf{x}$ is presented. The proposed
algorithm is a generalization of a recently-developed EM-GAMP that uses
expectation-maximization (EM) iterations where the posteriors in the E-steps
are computed via approximate message passing. The methodology can be applied to
a large class of learning problems including the learning of sparse priors in
compressed sensing or identification of linear-nonlinear cascade models in
dynamical systems and neural spiking processes. We prove that for large i.i.d.
Gaussian transform matrices the asymptotic componentwise behavior of the
adaptive GAMP algorithm is predicted by a simple set of scalar state evolution
equations. In addition, we show that when a certain maximum-likelihood
estimation can be performed in each step, the adaptive GAMP method can yield
asymptotically consistent parameter estimates, which implies that the algorithm
achieves a reconstruction quality equivalent to the oracle algorithm that knows
the correct parameter values. Remarkably, this result applies to essentially
arbitrary parametrizations of the unknown distributions, including ones that
are nonlinear and non-Gaussian. The adaptive GAMP methodology thus provides a
systematic, general and computationally efficient method applicable to a large
range of complex linear-nonlinear models with provable guarantees.
| [
"Ulugbek S. Kamilov, Sundeep Rangan, Alyson K. Fletcher, Michael Unser",
"['Ulugbek S. Kamilov' 'Sundeep Rangan' 'Alyson K. Fletcher'\n 'Michael Unser']"
] |
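As a hedged illustration of the message-passing machinery this entry builds on (not the Adaptive GAMP algorithm itself), the sketch below implements plain soft-thresholding AMP for sparse recovery from noisy linear measurements; the threshold rule and the `tau` constant are illustrative assumptions.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding operator (the denoiser for a sparsity-promoting prior)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp_sparse(A, y, iters=30, tau=1.5):
    """Basic AMP for y = A x + noise with a sparse x; tau scales the threshold."""
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(iters):
        theta = tau * np.sqrt(np.mean(z ** 2))        # threshold from residual energy
        x_new = soft(x + A.T @ z, theta)
        onsager = (np.count_nonzero(x_new) / m) * z   # Onsager correction term
        z = y - A @ x_new + onsager
        x = x_new
    return x
```

Adaptive GAMP, as the abstract describes, additionally learns the prior and channel parameters (e.g., via EM) alongside these iterations.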
stat.ML cs.LG | null | 1207.3961 | null | null | http://arxiv.org/pdf/1207.3961v3 | 2012-11-15T00:44:30Z | 2012-07-17T11:54:31Z | Ensemble Clustering with Logic Rules | In this article, the logic rule ensembles approach to supervised learning is
applied to unsupervised and semi-supervised clustering. Logic rules, which
are obtained by combining simple conjunctive rules, are used to partition the
input space, and an ensemble of these rules is used to define a similarity
matrix. Similarity partitioning is used to partition the data in a
hierarchical manner. We have used internal and external measures of cluster
validity to evaluate the quality of clusterings or to identify the number of
clusters.
| [
"Deniz Akdemir",
"['Deniz Akdemir']"
] |
cs.CV cs.LG | null | 1207.4089 | null | null | http://arxiv.org/pdf/1207.4089v1 | 2012-07-17T19:05:18Z | 2012-07-17T19:05:18Z | A Two-Stage Combined Classifier in Scale Space Texture Classification | Textures often show multiscale properties and hence multiscale techniques are
considered useful for texture analysis. Scale-space theory as a biologically
motivated approach may be used to construct multiscale textures. In this paper
various ways are studied to combine features on different scales for texture
classification of small image patches. We use the N-jet of derivatives up to
the second order at different scales to generate distinct pattern
representations (DPR) of feature subsets. Each feature subset in the DPR is
given to a base classifier (BC) of a two-stage combined classifier. The
decisions made by these BCs are combined in two stages over scales and
derivatives. Various combining systems and their significances and differences
are discussed. Learning curves are used to evaluate performance. We found
that, for small sample sizes, combining classifiers performs significantly
better than combining feature spaces (CFS). It is also shown that combining
classifiers performs better than the support vector machine on CFS in
multiscale texture classification.
| [
"Mehrdad J. Gangeh, Robert P. W. Duin, Bart M. ter Haar Romeny, Mohamed\n S. Kamel",
"['Mehrdad J. Gangeh' 'Robert P. W. Duin' 'Bart M. ter Haar Romeny'\n 'Mohamed S. Kamel']"
] |
cs.LG stat.ML | null | 1207.4110 | null | null | http://arxiv.org/pdf/1207.4110v1 | 2012-07-11T14:41:52Z | 2012-07-11T14:41:52Z | The Minimum Information Principle for Discriminative Learning | Exponential models of distributions are widely used in machine learning for
classification and modelling. It is well known that they can be interpreted as
maximum entropy models under empirical expectation constraints. In this work,
we argue that for classification tasks, mutual information is a more suitable
information theoretic measure to be optimized. We show how the principle of
minimum mutual information generalizes that of maximum entropy, and provides a
comprehensive framework for building discriminative classifiers. A game
theoretic interpretation of our approach is then given, and several
generalization bounds provided. We present iterative algorithms for solving the
minimum information problem and its convex dual, and demonstrate their
performance on various classification tasks. The results show that minimum
information classifiers outperform the corresponding maximum entropy models.
| [
"['Amir Globerson' 'Naftali Tishby']",
"Amir Globerson, Naftali Tishby"
] |
cs.LG stat.ML | null | 1207.4112 | null | null | http://arxiv.org/pdf/1207.4112v1 | 2012-07-11T14:42:26Z | 2012-07-11T14:42:26Z | Algebraic Statistics in Model Selection | We develop the necessary theory in computational algebraic geometry to place
Bayesian networks into the realm of algebraic statistics. We present an
algebra-statistics dictionary focused on statistical modeling. In particular,
we link the notion of effective dimension of a Bayesian network with the
notion of algebraic dimension of a variety. We also obtain the independence and
non-independence constraints on the distributions over the observable variables
implied by a Bayesian network with hidden variables, via a generating set of an
ideal of polynomials associated to the network. These results extend previous
work on the subject. Finally, the relevance of these results for model
selection is discussed.
| [
"Luis David Garcia",
"['Luis David Garcia']"
] |
cs.LG stat.ML | null | 1207.4113 | null | null | http://arxiv.org/pdf/1207.4113v1 | 2012-07-11T14:42:45Z | 2012-07-11T14:42:45Z | On-line Prediction with Kernels and the Complexity Approximation
Principle | The paper describes an application of the Aggregating Algorithm to the problem of
regression. It generalizes earlier results concerned with plain linear
regression to kernel techniques and presents an on-line algorithm which
performs nearly as well as any oblivious kernel predictor. The paper contains
the derivation of an estimate on the performance of this algorithm. The
estimate is then used to derive an application of the Complexity Approximation
Principle to kernel methods.
| [
"Alex Gammerman, Yuri Kalnishkan, Vladimir Vovk",
"['Alex Gammerman' 'Yuri Kalnishkan' 'Vladimir Vovk']"
] |
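For intuition, here is a minimal batch sketch of the kind of kernel ridge-style predictor this entry generalizes to the on-line setting; the Gaussian kernel and the parameter names `a` and `gamma` are assumptions, and the Aggregating Algorithm's on-line analysis is not reproduced here.

```python
import numpy as np

def krr_predict(X, y, x_new, a=1.0, gamma=1.0):
    """Kernel ridge-style prediction with k(x, x') = exp(-gamma * ||x - x'||^2)."""
    K = np.exp(-gamma * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
    k = np.exp(-gamma * np.sum((X - x_new) ** 2, axis=-1))
    # Regularized solve; the Aggregating Algorithm's guarantee concerns the
    # cumulative square loss of such predictors when run on-line.
    return k @ np.linalg.solve(K + a * np.eye(len(X)), y)
```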
stat.ME cs.LG stat.ML | null | 1207.4118 | null | null | http://arxiv.org/pdf/1207.4118v1 | 2012-07-11T14:44:26Z | 2012-07-11T14:44:26Z | Iterative Conditional Fitting for Gaussian Ancestral Graph Models | Ancestral graph models, introduced by Richardson and Spirtes (2002),
generalize both Markov random fields and Bayesian networks to a class of graphs
with a global Markov property that is closed under conditioning and
marginalization. By design, ancestral graphs encode precisely the conditional
independence structures that can arise from Bayesian networks with selection
and unobserved (hidden/latent) variables. Thus, ancestral graph models provide
a potentially very useful framework for exploratory model selection when
unobserved variables might be involved in the data-generating process but no
particular hidden structure can be specified. In this paper, we present the
Iterative Conditional Fitting (ICF) algorithm for maximum likelihood estimation
in Gaussian ancestral graph models. The name reflects that in each step of the
procedure a conditional distribution is estimated, subject to constraints,
while a marginal distribution is held fixed. This approach is in duality to the
well-known Iterative Proportional Fitting algorithm, in which marginal
distributions are fitted while conditional distributions are held fixed.
| [
"['Mathias Drton' 'Thomas S. Richardson']",
"Mathias Drton, Thomas S. Richardson"
] |
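For contrast with ICF, a minimal sketch of its dual, classical Iterative Proportional Fitting on a two-way contingency table follows; the function name and the fixed iteration count are assumptions.

```python
import numpy as np

def ipf(table, row_marg, col_marg, iters=100):
    """Rescale rows, then columns, until the (positive) table matches the target
    marginals. ICF, the paper's method, is the dual: it fits conditional
    distributions while marginal distributions are held fixed."""
    P = table.astype(float).copy()
    for _ in range(iters):
        P *= (row_marg / P.sum(axis=1))[:, None]   # match row marginals
        P *= (col_marg / P.sum(axis=0))[None, :]   # match column marginals
    return P
```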
cs.LG stat.ML | null | 1207.4125 | null | null | http://arxiv.org/pdf/1207.4125v1 | 2012-07-11T14:46:50Z | 2012-07-11T14:46:50Z | Applying Discrete PCA in Data Analysis | Methods for analysis of principal components in discrete data have existed
for some time under various names such as grade of membership modelling,
probabilistic latent semantic analysis, and genotype inference with admixture.
In this paper we explore a number of extensions to the common theory, and
present some applications of these methods to common statistical tasks. We
show that these methods can be interpreted as a discrete version of ICA. We
develop a hierarchical version yielding components at different levels of
detail, and additional techniques for Gibbs sampling. We compare the algorithms
on a text prediction task using support vector machines, and to information
retrieval.
| [
"['Wray L. Buntine' 'Aleks Jakulin']",
"Wray L. Buntine, Aleks Jakulin"
] |
cs.LG stat.ML | null | 1207.4131 | null | null | http://arxiv.org/pdf/1207.4131v1 | 2012-07-11T14:48:54Z | 2012-07-11T14:48:54Z | Exponential Families for Conditional Random Fields | In this paper we define conditional random fields in reproducing kernel Hilbert
spaces and show connections to Gaussian Process classification. More
specifically, we prove decomposition results for undirected graphical models
and we give constructions for kernels. Finally we present efficient means of
solving the optimization problem using reduced rank decompositions and we show
how stationarity can be exploited efficiently in the optimization process.
| [
"Yasemin Altun, Alex Smola, Thomas Hofmann",
"['Yasemin Altun' 'Alex Smola' 'Thomas Hofmann']"
] |
cs.LG cs.AI stat.ML | null | 1207.4132 | null | null | http://arxiv.org/pdf/1207.4132v1 | 2012-07-11T14:51:03Z | 2012-07-11T14:51:03Z | MOB-ESP and other Improvements in Probability Estimation | A key prerequisite to optimal reasoning under uncertainty in intelligent
systems is to start with good class probability estimates. This paper improves
on the current best probability estimation trees (Bagged-PETs) and also
presents a new ensemble-based algorithm (MOB-ESP). Comparisons are made using
several benchmark datasets and multiple metrics. These experiments show that
MOB-ESP outputs significantly more accurate class probabilities than either the
baseline B-PETs algorithm or the enhanced version presented here (EB-PETs).
These results are based on metrics closely associated with the average accuracy
of the predictions. MOB-ESP also provides much better probability rankings than
B-PETs. The paper further suggests how these estimation techniques can be
applied in concert with a broader category of classifiers.
| [
"['Rodney Nielsen']",
"Rodney Nielsen"
] |
cs.LG stat.ML | null | 1207.4133 | null | null | http://arxiv.org/pdf/1207.4133v1 | 2012-07-11T14:51:23Z | 2012-07-11T14:51:23Z | "Ideal Parent" Structure Learning for Continuous Variable Networks | In recent years, there has been growing interest in learning Bayesian networks
with continuous variables. Learning the structure of such networks is a
computationally expensive procedure, which limits most applications to
parameter learning. This problem is even more acute when learning networks with
hidden variables. We present a general method for significantly speeding the
structure search algorithm for continuous variable networks with common
parametric distributions. Importantly, our method facilitates the addition of
new hidden variables into the network structure efficiently. We demonstrate the
method on several data sets, both for learning structure on fully observable
data, and for introducing new hidden variables during structure search.
| [
"['Iftach Nachman' 'Gal Elidan' 'Nir Friedman']",
"Iftach Nachman, Gal Elidan, Nir Friedman"
] |
cs.LG stat.ML | null | 1207.4134 | null | null | http://arxiv.org/pdf/1207.4134v1 | 2012-07-11T14:51:41Z | 2012-07-11T14:51:41Z | Bayesian Learning in Undirected Graphical Models: Approximate MCMC
algorithms | Bayesian learning in undirected graphical models - computing posterior
distributions over parameters and predictive quantities - is exceptionally
difficult. We conjecture that for general undirected models, there are no
tractable MCMC (Markov Chain Monte Carlo) schemes giving the correct
equilibrium distribution over parameters. While this intractability, due to the
partition function, is familiar to those performing parameter optimisation,
Bayesian learning of posterior distributions over undirected model parameters
has been unexplored and poses novel challenges. We propose several approximate
MCMC schemes and test them on fully observed binary models (Boltzmann machines) for
a small coronary heart disease data set and larger artificial systems. While
approximations must perform well on the model, their interaction with the
sampling scheme is also important. Samplers based on variational mean-field
approximations generally performed poorly; more advanced methods using loopy
propagation, brief sampling and stochastic dynamics led to acceptable
parameter posteriors. Finally, we demonstrate these techniques on a Markov
random field with hidden variables.
| [
"['Iain Murray' 'Zoubin Ghahramani']",
"Iain Murray, Zoubin Ghahramani"
] |
cs.LG stat.ML | null | 1207.4138 | null | null | http://arxiv.org/pdf/1207.4138v1 | 2012-07-11T14:52:51Z | 2012-07-11T14:52:51Z | Active Model Selection | Classical learning assumes the learner is given a labeled data sample, from
which it learns a model. The field of Active Learning deals with the situation
where the learner begins not with a training sample, but instead with resources
that it can use to obtain information to help identify the optimal model. To
better understand this task, this paper presents and analyses the simplified
"(budgeted) active model selection" version, which captures the pure
exploration aspect of many active learning problems in a clean and simple
problem formulation. Here the learner can use a fixed budget of "model probes"
(where each probe evaluates the specified model on a random indistinguishable
instance) to identify which of a given set of possible models has the highest
expected accuracy. Our goal is a policy that sequentially determines which
model to probe next, based on the information observed so far. We present a
formal description of this task, and show that it is NP-hard in general. We then
investigate a number of algorithms for this task, including several existing
ones (e.g., "Round-Robin", "Interval Estimation", "Gittins") as well as some
novel ones (e.g., "Biased-Robin"), describing first their approximation
properties and then their empirical performance on various problem instances.
We observe empirically that the simple biased-robin algorithm significantly
outperforms the other algorithms in the case of identical costs and priors.
| [
"['Omid Madani' 'Daniel J. Lizotte' 'Russell Greiner']",
"Omid Madani, Daniel J. Lizotte, Russell Greiner"
] |
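As a hedged sketch of the budgeted probing setting above, here is one plausible reading of the "Biased-Robin" policy (keep probing the current model while it succeeds, advance round-robin on a failure); the exact policy details are my assumptions, not the paper's specification.

```python
import numpy as np

def biased_robin(probe, K, budget, seed=0):
    """probe(i) -> 1 if model i succeeds on a fresh random instance, else 0."""
    wins, pulls = np.zeros(K), np.zeros(K)
    i = 0
    for _ in range(budget):
        r = probe(i)
        wins[i] += r; pulls[i] += 1
        if r == 0:
            i = (i + 1) % K        # advance round-robin after a failure
    means = wins / np.maximum(pulls, 1)
    return int(np.argmax(means))   # report the model with best empirical accuracy
```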
cs.LG stat.ML | null | 1207.4139 | null | null | http://arxiv.org/pdf/1207.4139v1 | 2012-07-11T14:53:33Z | 2012-07-11T14:53:33Z | An Extended Cencov-Campbell Characterization of Conditional Information
Geometry | We formulate and prove an axiomatic characterization of conditional
information geometry, for both the normalized and the nonnormalized cases. This
characterization extends the axiomatic derivation of the Fisher geometry by
Cencov and Campbell to the cone of positive conditional models, and as a
special case to the manifold of conditional distributions. Due to the close
connection between the conditional I-divergence and the product Fisher
information metric the characterization provides a new axiomatic interpretation
of the primal problems underlying logistic regression and AdaBoost.
| [
"Guy Lebanon",
"['Guy Lebanon']"
] |
cs.LG stat.ML | null | 1207.4142 | null | null | http://arxiv.org/pdf/1207.4142v1 | 2012-07-11T14:54:25Z | 2012-07-11T14:54:25Z | Conditional Chow-Liu Tree Structures for Modeling Discrete-Valued Vector
Time Series | We consider the problem of modeling discrete-valued vector time series data
using extensions of Chow-Liu tree models to capture both dependencies across
time and dependencies across variables. Conditional Chow-Liu tree models are
introduced, as an extension to standard Chow-Liu trees, for modeling
conditional rather than joint densities. We describe learning algorithms for
such models and show how they can be used to learn parsimonious representations
for the output distributions in hidden Markov models. These models are applied
to the important problem of simulating and forecasting daily precipitation
occurrence for networks of rain stations. To demonstrate the effectiveness of
the models, we compare their performance versus a number of alternatives using
historical precipitation data from Southwestern Australia and the Western
United States. We illustrate how the structure and parameters of the models can
be used to provide an improved meteorological interpretation of such data.
| [
"['Sergey Kirshner' 'Padhraic Smyth' 'Andrew Robertson']",
"Sergey Kirshner, Padhraic Smyth, Andrew Robertson"
] |
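A minimal sketch of the standard (unconditional) Chow-Liu step that this entry extends: compute pairwise empirical mutual information, then take a maximum-weight spanning tree. The binary-data assumption and the SciPy MST shortcut are mine.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mutual_info(x, y):
    """Empirical mutual information between two discrete data columns."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def chow_liu_edges(X):
    """Return the edge list of the Chow-Liu tree for data matrix X (n x d)."""
    d = X.shape[1]
    W = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            W[i, j] = mutual_info(X[:, i], X[:, j])
    # Max-weight spanning tree via MST on negated weights; pairs with exactly
    # zero MI are treated as absent edges, an acceptable caveat for a sketch.
    T = minimum_spanning_tree(-W)
    return list(zip(*T.nonzero()))
```

The conditional variant in the paper applies the same spanning-tree idea to conditional rather than joint densities.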
cs.LG stat.ML | null | 1207.4144 | null | null | http://arxiv.org/pdf/1207.4144v1 | 2012-07-11T14:54:55Z | 2012-07-11T14:54:55Z | A Generative Bayesian Model for Aggregating Experts' Probabilities | In order to improve forecasts, a decisionmaker often combines probabilities
given by various sources, such as human experts and machine learning
classifiers. When few training data are available, aggregation can be improved
by incorporating prior knowledge about the event being forecasted and about
salient properties of the experts. To this end, we develop a generative
Bayesian aggregation model for probabilistic classification. The model includes
an event-specific prior, measures of individual experts' bias, calibration,
accuracy, and a measure of dependence between experts. Rather than require
absolute measures, we show that aggregation may be expressed in terms of
relative accuracy between experts. The model results in a weighted logarithmic
opinion pool (LogOps) that satisfies consistency criteria such as the external
Bayesian property. We derive analytic solutions for independent and for
exchangeable experts. Empirical tests demonstrate the model's use, comparing
its accuracy with other aggregation methods.
| [
"['Joseph Kahn']",
"Joseph Kahn"
] |
cs.LG cs.IR stat.ML | null | 1207.4146 | null | null | http://arxiv.org/pdf/1207.4146v1 | 2012-07-11T14:55:41Z | 2012-07-11T14:55:41Z | A Bayesian Approach toward Active Learning for Collaborative Filtering | Collaborative filtering is a useful technique for exploiting the preference
patterns of a group of users to predict the utility of items for the active
user. In general, the performance of collaborative filtering depends on the
number of rated examples given by the active user: the more rated examples the
active user provides, the more accurate the predicted ratings will
be. Active learning provides an effective way to acquire the most informative
rated examples from active users. Previous work on active learning for
collaborative filtering only considers the expected loss function based on the
estimated model, which can be misleading when the estimated model is
inaccurate. This paper takes one step further by taking into account the
posterior distribution of the estimated model, which results in a more robust
active learning algorithm. Empirical studies with datasets of movie ratings
show that when the number of ratings from the active user is restricted to be
small, active learning methods based only on the estimated model do not perform
well, while the active learning method using the model distribution achieves
substantially better performance.
| [
"['Rong Jin' 'Luo Si']",
"Rong Jin, Luo Si"
] |
cs.LG stat.ML | null | 1207.4148 | null | null | http://arxiv.org/pdf/1207.4148v1 | 2012-07-11T14:56:09Z | 2012-07-11T14:56:09Z | Dynamical Systems Trees | We propose dynamical systems trees (DSTs) as a flexible class of models for
describing multiple processes that interact via a hierarchy of aggregating
parent chains. DSTs extend Kalman filters, hidden Markov models and nonlinear
dynamical systems to an interactive group scenario. Various individual
processes interact as communities and sub-communities in a tree structure that
is unrolled in time. To accommodate nonlinear temporal activity, each
individual leaf process is modeled as a dynamical system containing discrete
and/or continuous hidden states with discrete and/or Gaussian emissions.
Subsequent higher level parent processes act like hidden Markov models and
mediate the interaction between leaf processes or between other parent
processes in the hierarchy. Aggregator chains are parents of child processes
that they combine and mediate, yielding a compact overall parameterization. We
provide tractable inference and learning algorithms for arbitrary DST
topologies via an efficient structured mean-field algorithm. The diverse
applicability of DSTs is demonstrated by experiments on gene expression data
and by modeling group behavior in the setting of an American football game.
| [
"['Andrew Howard' 'Tony S. Jebara']",
"Andrew Howard, Tony S. Jebara"
] |
stat.CO cs.LG | null | 1207.4149 | null | null | http://arxiv.org/pdf/1207.4149v1 | 2012-07-11T14:56:43Z | 2012-07-11T14:56:43Z | From Fields to Trees | We present new MCMC algorithms for computing the posterior distributions and
expectations of the unknown variables in undirected graphical models with
regular structure. For demonstration purposes, we focus on Markov Random Fields
(MRFs). By partitioning the MRFs into non-overlapping trees, it is possible to
compute the posterior distribution of a particular tree exactly by conditioning
on the remaining tree. These exact solutions allow us to construct efficient
blocked and Rao-Blackwellised MCMC algorithms. We show empirically that tree
sampling is considerably more efficient than other partitioned sampling schemes
and the naive Gibbs sampler, even in cases where loopy belief propagation fails
to converge. We prove that tree sampling exhibits lower variance than the naive
Gibbs sampler and other naive partitioning schemes using the theoretical
measure of maximal correlation. We also construct new information theory tools
for comparing different MCMC schemes and show that, under these, tree sampling
is more efficient.
| [
"['Firas Hamze' 'Nando de Freitas']",
"Firas Hamze, Nando de Freitas"
] |
cs.LG cs.DS stat.ML | null | 1207.4151 | null | null | http://arxiv.org/pdf/1207.4151v1 | 2012-07-11T14:57:38Z | 2012-07-11T14:57:38Z | PAC-learning bounded tree-width Graphical Models | We show that the class of strongly connected graphical models with treewidth
at most k can be properly efficiently PAC-learnt with respect to the
Kullback-Leibler Divergence. Previous approaches to this problem, such as those
of Chow ([1]), and Höffgen ([7]) have shown that this class is PAC-learnable by
reducing it to a combinatorial optimization problem. However, for k > 1, this
problem is NP-complete ([15]), and so unless P=NP, these approaches will take
exponential amounts of time. Our approach differs significantly from these, in
that it first attempts to find approximate conditional independencies by
solving (polynomially many) submodular optimization problems, and then using a
dynamic programming formulation to combine the approximate conditional
independence information to derive a graphical model with underlying graph of
the tree-width specified. This gives us an efficient (polynomial time in the
number of random variables) PAC-learning algorithm which requires only a
polynomial number of samples of the true distribution, and only polynomial
running time.
| [
"['Mukund Narasimhan' 'Jeff A. Bilmes']",
"Mukund Narasimhan, Jeff A. Bilmes"
] |
cs.IR cs.LG | null | 1207.4152 | null | null | http://arxiv.org/pdf/1207.4152v1 | 2012-07-11T14:59:15Z | 2012-07-11T14:59:15Z | Maximum Entropy for Collaborative Filtering | Within the task of collaborative filtering two challenges for computing
conditional probabilities exist. First, the amount of training data available
is typically sparse with respect to the size of the domain. Thus, support for
higher-order interactions is generally not present. Second, the variables that
we are conditioning upon vary for each query. That is, users label different
variables during each query. For this reason, there is no consistent input to
output mapping. To address these problems we propose a maximum entropy approach
using a non-standard measure of entropy. This approach reduces to solving a set
of linear equations, which can be done efficiently.
| [
"Lawrence Zitnick, Takeo Kanade",
"['Lawrence Zitnick' 'Takeo Kanade']"
] |
cs.LG stat.ML | null | 1207.4155 | null | null | http://arxiv.org/pdf/1207.4155v1 | 2012-07-11T14:59:55Z | 2012-07-11T14:59:55Z | Similarity-Driven Cluster Merging Method for Unsupervised Fuzzy
Clustering | In this paper, a similarity-driven cluster merging method is proposed for
unsupervised fuzzy clustering. The cluster merging method is used to resolve
the problem of cluster validation. Starting with an overspecified number of
clusters in the data, pairs of similar clusters are merged based on the
proposed similarity-driven cluster merging criterion. The similarity between
clusters is calculated by a fuzzy cluster similarity matrix, while an adaptive
threshold is used for merging. In addition, a modified generalized objective
function is used for prototype-based fuzzy clustering. The function includes
the p-norm distance measure as well as principal components of the clusters.
The number of the principal components is determined automatically from the
data being clustered. The properties of this unsupervised fuzzy clustering
algorithm are illustrated by several experiments.
| [
"['Xuejian Xiong' 'Kap Chan' 'Kian Lee Tan']",
"Xuejian Xiong, Kap Chan, Kian Lee Tan"
] |
cs.LG stat.ML | null | 1207.4156 | null | null | http://arxiv.org/pdf/1207.4156v1 | 2012-07-11T15:00:11Z | 2012-07-11T15:00:11Z | Graph partition strategies for generalized mean field inference | An autonomous variational inference algorithm for arbitrary graphical models
requires the ability to optimize variational approximations over the space of
model parameters as well as over the choice of tractable families used for the
variational approximation. In this paper, we present a novel combination of
graph partitioning algorithms with a generalized mean field (GMF) inference
algorithm. This combination optimizes over disjoint clustering of variables and
performs inference using those clusters. We provide a formal analysis of the
relationship between the graph cut and the GMF approximation, and explore
several graph partition strategies empirically. Our empirical results provide
rather clear support for a weighted version of MinCut as a useful clustering
algorithm for GMF inference, which is consistent with the implications from the
formal analysis.
| [
"['Eric P. Xing' 'Michael I. Jordan' 'Stuart Russell']",
"Eric P. Xing, Michael I. Jordan, Stuart Russell"
] |
cs.LG cs.DL cs.IR stat.ML | null | 1207.4157 | null | null | http://arxiv.org/pdf/1207.4157v1 | 2012-07-11T15:00:28Z | 2012-07-11T15:00:28Z | An Integrated, Conditional Model of Information Extraction and
Coreference with Applications to Citation Matching | Although information extraction and coreference resolution appear together in
many applications, most current systems perform them as independent steps. This
paper describes an approach to integrated inference for extraction and
coreference based on conditionally-trained undirected graphical models. We
discuss the advantages of conditional probability training, and of a
coreference model structure based on graph partitioning. On a data set of
research paper citations, we show significant reduction in error by using
extraction uncertainty to improve coreference citation matching accuracy, and
using coreference to improve the accuracy of the extracted fields.
| [
"['Ben Wellner' 'Andrew McCallum' 'Fuchun Peng' 'Michael Hay']",
"Ben Wellner, Andrew McCallum, Fuchun Peng, Michael Hay"
] |
cs.AI cs.LG | null | 1207.4158 | null | null | http://arxiv.org/pdf/1207.4158v1 | 2012-07-11T15:01:36Z | 2012-07-11T15:01:36Z | On the Choice of Regions for Generalized Belief Propagation | Generalized belief propagation (GBP) has proven to be a promising technique
for approximate inference tasks in AI and machine learning. However, the choice
of a good set of clusters to be used in GBP has remained more of an art than a
science to this day. This paper proposes a sequential approach to adding new
clusters of nodes and their interactions (i.e. "regions") to the approximation.
We first review and analyze the recently introduced region graphs and find that
three kinds of operations ("split", "merge" and "death") leave the free energy
and (under some conditions) the fixed points of GBP invariant. This leads to
the notion of "weakly irreducible" regions as the natural candidates to be
added to the approximation. Computational complexity of the GBP algorithm is
controlled by restricting attention to regions with small "region-width".
Combining the above with an efficient (i.e. local in the graph) measure to
predict the improved accuracy of GBP leads to the sequential "region pursuit"
algorithm for adding new regions bottom-up to the region graph. Experiments
show that this algorithm can indeed perform close to optimally.
| [
"Max Welling",
"['Max Welling']"
] |
stat.AP cs.LG stat.ME | null | 1207.4162 | null | null | http://arxiv.org/pdf/1207.4162v2 | 2012-08-08T20:45:28Z | 2012-07-11T15:03:00Z | ARMA Time-Series Modeling with Graphical Models | We express the classic ARMA time-series model as a directed graphical model.
In doing so, we find that the deterministic relationships in the model make it
effectively impossible to use the EM algorithm for learning model parameters.
To remedy this problem, we replace the deterministic relationships with
Gaussian distributions having a small variance, yielding the stochastic ARMA
(σARMA) model. This modification allows us to use the EM algorithm to learn
parameters and to forecast, even in situations where some data is missing. This
modification, in conjunction with the graphical-model approach, also allows us
to include cross predictors in situations where there are multiple time series
and/or additional nontemporal covariates. More surprisingly, experiments suggest
that the move to stochastic ARMA yields improved accuracy through better
smoothing. We demonstrate improvements afforded by cross prediction and better
smoothing on real data.
| [
"['Bo Thiesson' 'David Maxwell Chickering' 'David Heckerman'\n 'Christopher Meek']",
"Bo Thiesson, David Maxwell Chickering, David Heckerman, Christopher\n Meek"
] |
cs.LG stat.ML | null | 1207.4164 | null | null | http://arxiv.org/pdf/1207.4164v1 | 2012-07-11T15:03:34Z | 2012-07-11T15:03:34Z | Factored Latent Analysis for far-field tracking data | This paper uses Factored Latent Analysis (FLA) to learn a factorized,
segmental representation for observations of tracked objects over time.
Factored Latent Analysis is latent class analysis in which the observation
space is subdivided and each aspect of the original space is represented by a
separate latent class model. One could simply treat these factors as completely
independent and ignore their interdependencies or one could concatenate them
together and attempt to learn latent class structure for the complete
observation space. Alternatively, FLA allows the interdependencies to be
exploited in estimating an effective model, which is also capable of
representing a factored latent state. In this paper, FLA is used to learn a set
of factored latent classes to represent different modalities of observations of
tracked objects. Different characteristics of the state of tracked objects are
each represented by separate latent class models, including normalized size,
normalized speed, normalized direction, and position. This model also enables
effective temporal segmentation of these sequences. This method is data-driven,
unsupervised, using only pairwise observation statistics. This data-driven and
unsupervised activity classification technique exhibits good performance in
multiple challenging environments.
| [
"Chris Stauffer",
"['Chris Stauffer']"
] |
cs.AI cs.LG | null | 1207.4167 | null | null | http://arxiv.org/pdf/1207.4167v1 | 2012-07-11T15:05:10Z | 2012-07-11T15:05:10Z | Predictive State Representations: A New Theory for Modeling Dynamical
Systems | Modeling dynamical systems, both for control purposes and to make predictions
about their behavior, is ubiquitous in science and engineering. Predictive
state representations (PSRs) are a recently introduced class of models for
discrete-time dynamical systems. The key idea behind PSRs and the closely
related OOMs (Jaeger's observable operator models) is to represent the state of
the system as a set of predictions of observable outcomes of experiments one
can do in the system. This makes PSRs rather different from history-based
models such as nth-order Markov models and hidden-state-based models such as
HMMs and POMDPs. We introduce an interesting construct, the system-dynamics
matrix, and show how PSRs can be derived simply from it. We also use this
construct to show formally that PSRs are more general than both nth-order
Markov models and HMMs/POMDPs. Finally, we discuss the main difference between
PSRs and OOMs and conclude with directions for future work.
| [
"['Satinder Singh' 'Michael James' 'Matthew Rudary']",
"Satinder Singh, Michael James, Matthew Rudary"
] |
cs.IR cs.LG stat.ML | null | 1207.4169 | null | null | http://arxiv.org/pdf/1207.4169v1 | 2012-07-11T15:05:53Z | 2012-07-11T15:05:53Z | The Author-Topic Model for Authors and Documents | We introduce the author-topic model, a generative model for documents that
extends Latent Dirichlet Allocation (LDA; Blei, Ng, & Jordan, 2003) to include
authorship information. Each author is associated with a multinomial
distribution over topics and each topic is associated with a multinomial
distribution over words. A document with multiple authors is modeled as a
distribution over topics that is a mixture of the distributions associated with
the authors. We apply the model to a collection of 1,700 NIPS conference papers
and 160,000 CiteSeer abstracts. Exact inference is intractable for these
datasets and we use Gibbs sampling to estimate the topic and author
distributions. We compare the performance with two other generative models for
documents, which are special cases of the author-topic model: LDA (a topic
model) and a simple author model in which each author is associated with a
distribution over words rather than a distribution over topics. We show topics
recovered by the author-topic model, and demonstrate applications to computing
similarity between authors and entropy of author output.
| [
"['Michal Rosen-Zvi' 'Thomas Griffiths' 'Mark Steyvers' 'Padhraic Smyth']",
"Michal Rosen-Zvi, Thomas Griffiths, Mark Steyvers, Padhraic Smyth"
] |
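A compact collapsed Gibbs sampler in the spirit of the author-topic model described above; the hyperparameters, variable names, and joint (author, topic) resampling step form a hedged sketch, not the authors' code.

```python
import numpy as np

def author_topic_gibbs(docs, doc_authors, V, A, T, iters=200,
                       alpha=0.1, beta=0.01, seed=0):
    """docs: list of lists of word ids; doc_authors: list of lists of author ids."""
    rng = np.random.default_rng(seed)
    n_at = np.zeros((A, T))            # author-topic counts
    n_tw = np.zeros((T, V))            # topic-word counts
    n_t = np.zeros(T)                  # topic totals
    assign = []                        # (author, topic) per token
    for d, words in enumerate(docs):   # random initialization
        za = []
        for w in words:
            a, t = rng.choice(doc_authors[d]), rng.integers(T)
            n_at[a, t] += 1; n_tw[t, w] += 1; n_t[t] += 1
            za.append((a, t))
        assign.append(za)
    for _ in range(iters):
        for d, words in enumerate(docs):
            aud = np.asarray(doc_authors[d])
            for i, w in enumerate(words):
                a, t = assign[d][i]    # remove the current assignment
                n_at[a, t] -= 1; n_tw[t, w] -= 1; n_t[t] -= 1
                # Joint posterior over (author, topic) for this token
                theta = (n_at[aud] + alpha) / (n_at[aud].sum(1, keepdims=True) + T * alpha)
                phi_w = (n_tw[:, w] + beta) / (n_t + V * beta)
                p = (theta * phi_w).ravel()
                k = rng.choice(p.size, p=p / p.sum())
                a, t = int(aud[k // T]), int(k % T)
                n_at[a, t] += 1; n_tw[t, w] += 1; n_t[t] += 1
                assign[d][i] = (a, t)
    return n_at, n_tw
```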
cs.LG stat.ML | null | 1207.4172 | null | null | http://arxiv.org/pdf/1207.4172v1 | 2012-07-11T15:06:59Z | 2012-07-11T15:06:59Z | Variational Chernoff Bounds for Graphical Models | Recent research has made significant progress on the problem of bounding log
partition functions for exponential family graphical models. Such bounds have
associated dual parameters that are often used as heuristic estimates of the
marginal probabilities required in inference and learning. However these
variational estimates do not give rigorous bounds on marginal probabilities,
nor do they give estimates for probabilities of more general events than simple
marginals. In this paper we build on this recent work by deriving rigorous
upper and lower bounds on event probabilities for graphical models. Our
approach is based on the use of generalized Chernoff bounds to express bounds
on event probabilities in terms of convex optimization problems; these
optimization problems, in turn, require estimates of generalized log partition
functions. Simulations indicate that this technique can result in useful,
rigorous bounds to complement the heuristic variational estimates, with
comparable computational cost.
| [
"['Pradeep Ravikumar' 'John Lafferty']",
"Pradeep Ravikumar, John Lafferty"
] |
cs.LG cs.IR stat.ML | null | 1207.4180 | null | null | http://arxiv.org/pdf/1207.4180v1 | 2012-07-12T19:48:03Z | 2012-07-12T19:48:03Z | A Hierarchical Graphical Model for Record Linkage | The task of matching co-referent records is known among other names as record
linkage. For large record-linkage problems, often there is little or no labeled
data available, but unlabeled data shows a reasonably clear structure. For such
problems, unsupervised or semi-supervised methods are preferable to supervised
methods. In this paper, we describe a hierarchical graphical model framework
for the linkage problem in an unsupervised setting. In addition to proposing
new methods, we also cast existing unsupervised probabilistic record-linkage
methods in this framework. Some of the techniques we propose to minimize
overfitting in the above model are of interest in the general graphical model
setting. We describe a method for incorporating monotonicity constraints in a
graphical model. We also outline a bootstrapping approach of using
"single-field" classifiers to noisily label latent variables in a hierarchical
model. Experimental results show that our proposed unsupervised methods perform
quite competitively even with fully supervised record-linkage methods.
| [
"['Pradeep Ravikumar' 'William Cohen']",
"Pradeep Ravikumar, William Cohen"
] |
cs.LG stat.ML | null | 1207.4255 | null | null | http://arxiv.org/pdf/1207.4255v2 | 2015-10-24T08:11:13Z | 2012-07-18T02:53:02Z | On the Statistical Efficiency of $\ell_{1,p}$ Multi-Task Learning of
Gaussian Graphical Models | In this paper, we present $\ell_{1,p}$ multi-task structure learning for
Gaussian graphical models. We analyze the sufficient number of samples for the
correct recovery of the support union and edge signs. We also analyze the
necessary number of samples for any conceivable method by providing
information-theoretic lower bounds. We compare the statistical efficiency of
multi-task learning versus that of single-task learning. For experiments, we
use a block coordinate descent method that is provably convergent and generates
a sequence of positive definite solutions. We provide experimental validation
on synthetic data as well as on two publicly available real-world data sets,
including functional magnetic resonance imaging and gene expression data.
| [
"['Jean Honorio' 'Tommi Jaakkola' 'Dimitris Samaras']",
"Jean Honorio, Tommi Jaakkola and Dimitris Samaras"
] |
cs.LG | null | 1207.4404 | null | null | http://arxiv.org/pdf/1207.4404v1 | 2012-07-18T16:07:36Z | 2012-07-18T16:07:36Z | Better Mixing via Deep Representations | It has previously been hypothesized, and supported with some experimental
evidence, that deeper representations, when well trained, tend to do a better
job at disentangling the underlying factors of variation. We study the
following related conjecture: better representations, in the sense of better
disentangling, can be exploited to produce faster-mixing Markov chains.
Consequently, mixing would be more efficient at higher levels of
representation. To better understand why and how this is happening, we propose
a secondary conjecture: the higher-level samples fill more uniformly the space
they occupy and the high-density manifolds tend to unfold when represented at
higher levels. The paper discusses these hypotheses and tests them
experimentally through visualization and measurements of mixing and
interpolating between samples.
| [
"Yoshua Bengio, Gr\\'egoire Mesnil, Yann Dauphin and Salah Rifai",
"['Yoshua Bengio' 'Grégoire Mesnil' 'Yann Dauphin' 'Salah Rifai']"
] |
null | null | 1207.4421 | null | null | http://arxiv.org/pdf/1207.4421v1 | 2012-07-18T17:40:11Z | 2012-07-18T17:40:11Z | Stochastic optimization and sparse statistical recovery: An optimal
algorithm for high dimensions | We develop and analyze stochastic optimization algorithms for problems in which the expected loss is strongly convex, and the optimum is (approximately) sparse. Previous approaches are able to exploit only one of these two structures, yielding an $\mathcal{O}(d/T)$ convergence rate for strongly convex objectives in $d$ dimensions, and an $\mathcal{O}(\sqrt{(s \log d)/T})$ convergence rate when the optimum is $s$-sparse. Our algorithm is based on successively solving a series of $\ell_1$-regularized optimization problems using Nesterov's dual averaging algorithm. We establish that the error of our solution after $T$ iterations is at most $\mathcal{O}((s \log d)/T)$, with natural extensions to approximate sparsity. Our results apply to locally Lipschitz losses including the logistic, exponential, hinge and least-squares losses. By recourse to statistical minimax results, we show that our convergence rates are optimal up to multiplicative constant factors. The effectiveness of our approach is also confirmed in numerical simulations, in which we compare to several baselines on a least-squares regression problem. | [
"['Alekh Agarwal' 'Sahand Negahban' 'Martin J. Wainwright']"
] |
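A minimal sketch of one $\ell_1$-regularized dual-averaging update of the flavor the abstract describes (closest to the well-known $\ell_1$-RDA scheme); the step-size constants and the interface of `grad_fn` are assumptions.

```python
import numpy as np

def l1_rda(grad_fn, dim, T=1000, lam=0.1, gamma=1.0):
    """l1-regularized dual averaging: average the gradients, then soft-threshold."""
    w = np.zeros(dim)
    gbar = np.zeros(dim)
    for t in range(1, T + 1):
        g = grad_fn(w)                       # stochastic gradient at w_t
        gbar += (g - gbar) / t               # running average of all gradients
        shrunk = np.sign(gbar) * np.maximum(np.abs(gbar) - lam, 0.0)
        w = -(np.sqrt(t) / gamma) * shrunk   # closed-form RDA step
    return w
```

The paper's algorithm wraps such $\ell_1$-regularized solves in a multi-stage scheme to get the faster $\mathcal{O}((s \log d)/T)$ rate.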
q-bio.QM cs.LG q-bio.MN | null | 1207.4463 | null | null | http://arxiv.org/pdf/1207.4463v1 | 2012-07-18T19:45:28Z | 2012-07-18T19:45:28Z | Protein Function Prediction Based on Kernel Logistic Regression with
2-order Graphic Neighbor Information | To enhance the accuracy of protein-protein interaction function prediction, a
2-order graphic neighbor information feature extraction method based on
undirected simple graph is proposed in this paper, which extends the 1-order
graphic neighbor feature extraction method. The chi-square test statistical
method is also involved in feature combination. To demonstrate the
effectiveness of our 2-order graphic neighbor feature, four logistic regression
models (logistic regression (abbrev. LR), diffusion kernel logistic regression
(abbrev. DKLR), polynomial kernel logistic regression (abbrev. PKLR), and
radial basis function (RBF) based kernel logistic regression (abbrev. RBF KLR))
are investigated on the two feature sets. The experimental results of protein
function prediction of the Yeast Proteome Database (YPD) using the
protein-protein interaction data of Munich Information Center for Protein
Sequences (MIPS) show that 2-order graphic neighbor information of proteins can
significantly improve the average overall percentage of protein function
prediction, especially with RBF KLR. With a new 5-top chi-square feature
combination method, RBF KLR can achieve 99.05% average overall percentage on
2-order neighbor feature combination set.
| [
"Jingwei Liu",
"['Jingwei Liu']"
] |
stat.ML cs.LG | 10.3233/978-1-61499-096-3-180 | 1207.4597 | null | null | http://arxiv.org/abs/1207.4597v1 | 2012-07-19T09:49:54Z | 2012-07-19T09:49:54Z | Local stability of Belief Propagation algorithm with multiple fixed
points | A number of problems in statistical physics and computer science can be
expressed as the computation of marginal probabilities over a Markov random
field. Belief propagation, an iterative message-passing algorithm, computes
exactly such marginals when the underlying graph is a tree. But it has gained
its popularity as an efficient way to approximate them in the more general
case, even though it can exhibit multiple fixed points and is not guaranteed to
converge. In this paper, we express a new sufficient condition for local
stability of a belief propagation fixed point in terms of the graph structure
and the beliefs values at the fixed point. This gives credence to the usual
understanding that Belief Propagation performs better on sparse graphs.
| [
"Victorin Martin, Jean-Marc Lasgouttes and Cyril Furtlehner",
"['Victorin Martin' 'Jean-Marc Lasgouttes' 'Cyril Furtlehner']"
] |
cs.LG stat.ML | null | 1207.4676 | null | null | http://arxiv.org/pdf/1207.4676v2 | 2012-09-16T11:24:54Z | 2012-07-19T14:08:22Z | Proceedings of the 29th International Conference on Machine Learning
(ICML-12) | This is an index to the papers that appear in the Proceedings of the 29th
International Conference on Machine Learning (ICML-12). The conference was held
in Edinburgh, Scotland, June 27th - July 3rd, 2012.
| [
"['John Langford' 'Joelle Pineau']",
"John Langford and Joelle Pineau (Editors)"
] |
cs.LG math.OC stat.ML | null | 1207.4747 | null | null | http://arxiv.org/pdf/1207.4747v4 | 2013-01-14T13:26:51Z | 2012-07-19T18:02:41Z | Block-Coordinate Frank-Wolfe Optimization for Structural SVMs | We propose a randomized block-coordinate variant of the classic Frank-Wolfe
algorithm for convex optimization with block-separable constraints. Despite its
lower iteration cost, we show that it achieves a similar convergence rate in
duality gap as the full Frank-Wolfe algorithm. We also show that, when applied
to the dual structural support vector machine (SVM) objective, this yields an
online algorithm that has the same low iteration complexity as primal
stochastic subgradient methods. However, unlike stochastic subgradient methods,
the block-coordinate Frank-Wolfe algorithm allows us to compute the optimal
step-size and yields a computable duality gap guarantee. Our experiments
indicate that this simple algorithm outperforms competing structural SVM
solvers.
| [
"['Simon Lacoste-Julien' 'Martin Jaggi' 'Mark Schmidt' 'Patrick Pletscher']",
"Simon Lacoste-Julien, Martin Jaggi, Mark Schmidt, Patrick Pletscher"
] |
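A minimal sketch of the Frank-Wolfe step over a probability simplex, the building block of the block-coordinate variant above; the quadratic objective and the step size 2/(k+2) are the textbook choices, and the block-coordinate version would apply the same step to one randomly chosen block per iteration.

```python
import numpy as np

def frank_wolfe_simplex(grad, dim, iters=200):
    """Minimize a convex f over the simplex using only its gradient oracle."""
    x = np.full(dim, 1.0 / dim)
    for k in range(iters):
        g = grad(x)
        s = np.zeros(dim); s[np.argmin(g)] = 1.0   # LMO: the best simplex vertex
        x += (2.0 / (k + 2.0)) * (s - x)           # standard FW step size
    return x

# Usage: projecting b onto the simplex, f(x) = 0.5 * ||x - b||^2
b = np.array([0.2, -0.1, 0.9])
x_star = frank_wolfe_simplex(lambda x: x - b, dim=3)
```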
stat.ML cs.IT cs.LG math.IT | null | 1207.4748 | null | null | http://arxiv.org/pdf/1207.4748v1 | 2012-07-19T18:06:37Z | 2012-07-19T18:06:37Z | Hierarchical Clustering using Randomly Selected Similarities | The problem of hierarchical clustering items from pairwise similarities is
found across various scientific disciplines, from biology to networking. Often,
applications of clustering techniques are limited by the cost of obtaining
similarities between pairs of items. While prior work has been developed to
reconstruct clustering using a significantly reduced set of pairwise
similarities via adaptive measurements, these techniques are only applicable
when choice of similarities are available to the user. In this paper, we
examine reconstructing hierarchical clustering under similarity observations
at-random. We derive precise bounds which show that a significant fraction of
the hierarchical clustering can be recovered using fewer than all the pairwise
similarities. We find that the correct hierarchical clustering down to a
constant fraction of the total number of items (i.e., clusters sized O(N)) can
be found using only O(N log N) randomly selected pairwise similarities in
expectation.
| [
"Brian Eriksson",
"['Brian Eriksson']"
] |
cs.AI cs.LG math.CO stat.CO stat.ML | null | 1207.4814 | null | null | http://arxiv.org/pdf/1207.4814v1 | 2012-07-19T21:30:42Z | 2012-07-19T21:30:42Z | Automorphism Groups of Graphical Models and Lifted Variational Inference | Using the theory of group action, we first introduce the concept of the
automorphism group of an exponential family or a graphical model, thus
formalizing the general notion of symmetry of a probabilistic model. This
automorphism group provides a precise mathematical framework for lifted
inference in the general exponential family. Its group action partitions the
set of random variables and feature functions into equivalence classes (called
orbits) having identical marginals and expectations. Then the inference problem
is effectively reduced to that of computing marginals or expectations for each
class, thus avoiding the need to deal with each individual variable or feature.
We demonstrate the usefulness of this general framework in lifting two classes
of variational approximation for MAP inference: local LP relaxation and local
LP relaxation with cycle constraints; the latter yields the first lifted
inference method that operates on a bound tighter than local constraints. Initial
experimental results demonstrate that lifted MAP inference with cycle
constraints achieves state-of-the-art performance, obtaining much better
objective function values than local approximation while remaining relatively
efficient.
| [
"['Hung Hai Bui' 'Tuyen N. Huynh' 'Sebastian Riedel']",
"Hung Hai Bui and Tuyen N. Huynh and Sebastian Riedel"
] |
cs.RO cs.AI cs.LG cs.NE | null | 1207.4931 | null | null | http://arxiv.org/pdf/1207.4931v1 | 2012-07-20T12:15:12Z | 2012-07-20T12:15:12Z | Motion Planning Of an Autonomous Mobile Robot Using Artificial Neural
Network | The paper presents the electronic design and motion planning of a robot based
on decision making regarding its straight motion and precise turn using
Artificial Neural Network (ANN). The ANN helps the robot learn so that it
moves autonomously. The calculated weights are implemented in a
microcontroller. Testing showed excellent performance.
| [
"G. N. Tripathi and V. Rihani",
"['G. N. Tripathi' 'V. Rihani']"
] |
stat.ML cs.LG | null | 1207.4992 | null | null | http://arxiv.org/pdf/1207.4992v2 | 2012-12-18T00:10:49Z | 2012-07-20T16:28:57Z | Fast nonparametric classification based on data depth | A new procedure, called the DDα-procedure, is developed to solve the problem of
classifying d-dimensional objects into q >= 2 classes. The procedure is
completely nonparametric; it uses q-dimensional depth plots and a very
efficient algorithm for discrimination analysis in the depth space [0,1]^q.
Specifically, the depth is the zonoid depth, and the algorithm is the
α-procedure. In the case of more than two classes, several binary
classifications are performed and a majority rule is applied. Special
treatments are discussed for 'outsiders', that is, data having zero depth
vector. The DDα-classifier is applied to simulated as well as real data, and
the results are compared with those of similar procedures that have been
recently proposed. In most cases the new procedure has comparable error rates,
but is much faster than other classification approaches, including the SVM.
| [
"Tatjana Lange, Karl Mosler and Pavlo Mozharovskyi",
"['Tatjana Lange' 'Karl Mosler' 'Pavlo Mozharovskyi']"
] |
cs.LO cs.LG | 10.1109/LICS.2012.54 | 1207.5091 | null | null | http://arxiv.org/abs/1207.5091v1 | 2012-07-21T02:34:25Z | 2012-07-21T02:34:25Z | Learning Probabilistic Systems from Tree Samples | We consider the problem of learning a non-deterministic probabilistic system
consistent with a given finite set of positive and negative tree samples.
Consistency is defined with respect to strong simulation conformance. We
propose learning algorithms that use traditional and a new "stochastic"
state-space partitioning, the latter resulting in the minimum number of states.
We then use them to solve the problem of "active learning", that uses a
knowledgeable teacher to generate samples as counterexamples to simulation
equivalence queries. We show that the problem is undecidable in general, but
that it becomes decidable under a suitable condition on the teacher which comes
naturally from the way samples are generated from failed simulation checks. The
latter problem is shown to be undecidable if we impose an additional condition
on the learner to always conjecture a "minimum state" hypothesis. We therefore
propose a semi-algorithm using stochastic partitions. Finally, we apply the
proposed (semi-) algorithms to infer intermediate assumptions in an automated
assume-guarantee verification framework for probabilistic systems.
| [
"['Anvesh Komuravelli' 'Corina S. Pasareanu' 'Edmund M. Clarke']",
"Anvesh Komuravelli, Corina S. Pasareanu and Edmund M. Clarke"
] |
stat.ML cs.LG stat.ME | null | 1207.5136 | null | null | http://arxiv.org/pdf/1207.5136v1 | 2012-07-21T13:31:56Z | 2012-07-21T13:31:56Z | Causal Inference on Time Series using Structural Equation Models | Causal inference uses observations to infer the causal structure of the data
generating system. We study a class of functional models that we call Time
Series Models with Independent Noise (TiMINo). These models require independent
residual time series, whereas traditional methods like Granger causality
exploit the variance of residuals. There are two main contributions: (1)
Theoretical: By restricting the model class (e.g. to additive noise) we can
provide a more general identifiability result than existing ones. This result
incorporates lagged and instantaneous effects that can be nonlinear and do not
need to be faithful, and non-instantaneous feedbacks between the time series.
(2) Practical: If there are no feedback loops between time series, we propose
an algorithm based on non-linear independence tests of time series. When the
data are causally insufficient, or the data generating process does not satisfy
the model assumptions, this algorithm may still give partial results, but
mostly avoids incorrect answers. An extension to (non-instantaneous) feedbacks
is possible, but not discussed. It outperforms existing methods on artificial
and real data. Code can be provided upon request.
| [
"['Jonas Peters' 'Dominik Janzing' 'Bernhard Schölkopf']",
"Jonas Peters, Dominik Janzing and Bernhard Sch\\\"olkopf"
] |
cs.AI cs.LG stat.ML | null | 1207.5208 | null | null | http://arxiv.org/pdf/1207.5208v1 | 2012-07-22T09:34:49Z | 2012-07-22T09:34:49Z | Meta-Learning of Exploration/Exploitation Strategies: The Multi-Armed
Bandit Case | The exploration/exploitation (E/E) dilemma arises naturally in many subfields
of Science. Multi-armed bandit problems formalize this dilemma in its canonical
form. Most current research in this field focuses on generic solutions that can
be applied to a wide range of problems. However, in practice, it is often the
case that a form of prior information is available about the specific class of
target problems. Prior knowledge is rarely used in current solutions due to the
lack of a systematic approach to incorporate it into the E/E strategy.
To address a specific class of E/E problems, we propose to proceed in three
steps: (i) model prior knowledge in the form of a probability distribution over
the target class of E/E problems; (ii) choose a large hypothesis space of
candidate E/E strategies; and (iii), solve an optimization problem to find a
candidate E/E strategy of maximal average performance over a sample of problems
drawn from the prior distribution.
We illustrate this meta-learning approach with two different hypothesis
spaces: one where E/E strategies are numerically parameterized and another
where E/E strategies are represented as small symbolic formulas. We propose
appropriate optimization algorithms for both cases. Our experiments, with
two-armed Bernoulli bandit problems and various playing budgets, show that the
meta-learnt E/E strategies outperform generic strategies of the literature
(UCB1, UCB1-Tuned, UCB-v, KL-UCB and epsilon greedy); they also evaluate the
robustness of the learnt E/E strategies, by tests carried out on arms whose
rewards follow a truncated Gaussian distribution.
| [
"Francis Maes and Damien Ernst and Louis Wehenkel",
"['Francis Maes' 'Damien Ernst' 'Louis Wehenkel']"
] |
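A minimal sketch of the numeric branch of the recipe above: score a parameterized UCB-style index by its average reward over Bernoulli bandit problems drawn from a prior, then pick the best parameter by grid search. All names and constants are illustrative assumptions.

```python
import numpy as np

def run_policy(c, means, budget, rng):
    """Play a UCB-style index with exploration constant c on one bandit problem."""
    K = len(means)
    n, s, total = np.zeros(K), np.zeros(K), 0.0
    for t in range(budget):
        if t < K:
            i = t                                   # play each arm once first
        else:
            i = int(np.argmax(s / n + c * np.sqrt(np.log(t) / n)))
        r = float(rng.random() < means[i])
        n[i] += 1; s[i] += r; total += r
    return total

def meta_learn_c(grid, n_problems=500, K=2, budget=100, seed=0):
    """Grid-search the exploration constant against a uniform prior over problems."""
    rng = np.random.default_rng(seed)
    problems = rng.random((n_problems, K))           # Bernoulli means ~ U[0, 1]
    scores = [np.mean([run_policy(c, p, budget, rng) for p in problems])
              for c in grid]
    return grid[int(np.argmax(scores))]
```

The paper's symbolic-formula hypothesis space replaces the scalar `c` with a search over small index expressions, optimized by the same average-performance criterion.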
cs.LG stat.ML | null | 1207.5259 | null | null | http://arxiv.org/pdf/1207.5259v3 | 2013-03-29T21:47:06Z | 2012-07-22T21:01:09Z | Optimal discovery with probabilistic expert advice: finite time analysis
and macroscopic optimality | We consider an original problem that arises from the issue of security
analysis of a power system and that we name optimal discovery with
probabilistic expert advice. We address it with an algorithm based on the
optimistic paradigm and on the Good-Turing missing mass estimator. We prove two
different regret bounds on the performance of this algorithm under weak
assumptions on the probabilistic experts. Under more restrictive hypotheses, we
also prove a macroscopic optimality result, comparing the algorithm both with
an oracle strategy and with uniform sampling. Finally, we provide numerical
experiments illustrating these theoretical findings.
| [
"['Sebastien Bubeck' 'Damien Ernst' 'Aurelien Garivier']",
"Sebastien Bubeck and Damien Ernst and Aurelien Garivier"
] |
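As a pointer to the key statistical ingredient named above, here is a minimal Good-Turing missing-mass estimate plus the kind of optimistic index one might build from it; the exploration bonus is an illustrative assumption, not the paper's exact algorithm.

```python
from collections import Counter
from math import log, sqrt

def missing_mass(samples):
    """Good-Turing estimate of the probability of seeing a brand-new item."""
    counts = Counter(samples)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / max(len(samples), 1)

def optimistic_expert(histories, t):
    """Query the expert whose (estimate + exploration bonus) is largest."""
    def index(h):
        return missing_mass(h) + sqrt(log(t + 1) / max(len(h), 1))
    return max(range(len(histories)), key=lambda i: index(histories[i]))
```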
cs.IT cs.LG cs.NI math.IT | null | 1207.5342 | null | null | http://arxiv.org/pdf/1207.5342v1 | 2012-07-23T10:22:56Z | 2012-07-23T10:22:56Z | A Robust Signal Classification Scheme for Cognitive Radio | This paper presents a robust signal classification scheme for achieving
comprehensive spectrum sensing of multiple coexisting wireless systems. It is
built upon a group of feature-based signal detection algorithms enhanced by the
proposed dimension cancelation (DIC) method for mitigating the noise
uncertainty problem. The classification scheme is implemented on our testbed
consisting of real-world wireless devices. The simulation and experimental
results agree well with each other and show the effectiveness and
robustness of the proposed scheme.
| [
"['Hanwen Cao' 'Jürgen Peissig']",
"Hanwen Cao and J\\\"urgen Peissig"
] |
cs.LG stat.ML | null | 1207.5437 | null | null | http://arxiv.org/pdf/1207.5437v2 | 2013-03-17T10:56:35Z | 2012-07-23T16:20:05Z | Generalization Bounds for Metric and Similarity Learning | Recently, metric learning and similarity learning have attracted a large
amount of interest. Many models and optimisation algorithms have been proposed.
However, there is relatively little work on the generalization analysis of such
methods. In this paper, we derive novel generalization bounds of metric and
similarity learning. In particular, we first show that the generalization
analysis reduces to the estimation of the Rademacher average over
"sums-of-i.i.d." sample-blocks related to the specific matrix norm. Then, we
derive generalization bounds for metric/similarity learning with different
matrix-norm regularisers by estimating their specific Rademacher complexities.
Our analysis indicates that sparse metric/similarity learning with $L^1$-norm
regularisation could lead to significantly better bounds than those with
Frobenius-norm regularisation. Our novel generalization analysis develops and
refines the techniques of U-statistics and Rademacher complexity analysis.
| [
"['Qiong Cao' 'Zheng-Chu Guo' 'Yiming Ying']",
"Qiong Cao, Zheng-Chu Guo and Yiming Ying"
] |
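For readers unfamiliar with results of this kind, the following shows the generic Rademacher-complexity template that such bounds instantiate. It is the standard textbook form, not the paper's exact statement or constants:

```latex
% Generic template: for a class F of Mahalanobis metrics and a loss bounded
% in [0,1], with probability at least 1 - \delta over n sample pairs,
\mathcal{E}(d_M) \;\le\; \widehat{\mathcal{E}}_n(d_M)
   \;+\; 2\,\mathfrak{R}_n(\mathcal{F})
   \;+\; \sqrt{\frac{\ln(1/\delta)}{2n}},
\qquad
d_M(x,x') \;=\; \sqrt{(x-x')^{\top} M\,(x-x')},\quad M \succeq 0 .
% The paper's contribution lies in bounding \mathfrak{R}_n(\mathcal{F}) for
% Frobenius- vs. L^1-regularized classes; the sparse (L^1) case yields the
% smaller complexity term.
```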
cs.AI cs.LG | null | 1207.5536 | null | null | http://arxiv.org/pdf/1207.5536v1 | 2012-07-23T21:13:40Z | 2012-07-23T21:13:40Z | MCTS Based on Simple Regret | UCT, a state-of-the-art algorithm for Monte Carlo tree search (MCTS) in games
and Markov decision processes, is based on UCB, a sampling policy for the
Multi-armed Bandit problem (MAB) that minimizes the cumulative regret. However,
search differs from MAB in that in MCTS it is usually only the final "arm pull"
(the actual move selection) that collects a reward, rather than all "arm
pulls". Therefore, it makes more sense to minimize the simple regret, as
opposed to the cumulative regret. We begin by introducing policies for
multi-armed bandits with lower finite-time and asymptotic simple regret than
UCB, and use them to develop a two-stage scheme (SR+CR) for MCTS that
outperforms UCT empirically.
Optimizing the sampling process is itself a metareasoning problem, a solution
of which can use value of information (VOI) techniques. Although the theory of
VOI for search exists, applying it to MCTS is non-trivial, as typical myopic
assumptions fail. Lacking a complete working VOI theory for MCTS, we
nevertheless propose a sampling scheme that is "aware" of VOI, achieving an
algorithm that in empirical evaluation outperforms both UCT and the other
proposed algorithms.
| [
"['David Tolpin' 'Solomon Eyal Shimony']",
"David Tolpin and Solomon Eyal Shimony"
] |
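The distinction between the two regret notions is easy to demonstrate numerically. The sketch below contrasts UCB1 with plain uniform sampling when only the final recommendation matters; the arm means and budget are illustrative, and the paper's actual SR+CR policies are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

def run(policy, means, budget):
    k = len(means)
    n = np.zeros(k); s = np.zeros(k)
    for t in range(budget):
        if t < k:
            a = t                                  # pull each arm once first
        elif policy == "ucb1":                     # minimizes cumulative regret
            a = int(np.argmax(s / n + np.sqrt(2 * np.log(t) / n)))
        else:                                      # uniform: pure exploration
            a = t % k
        s[a] += rng.random() < means[a]; n[a] += 1
    rec = int(np.argmax(s / n))                    # final "arm pull" = recommendation
    return max(means) - means[rec]                 # simple regret of the recommendation

means = [0.5, 0.45, 0.4]
for pol in ("ucb1", "uniform"):
    sr = np.mean([run(pol, means, 300) for _ in range(2000)])
    print(pol, "mean simple regret:", round(sr, 4))
```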
cs.LG stat.ML | null | 1207.5554 | null | null | http://arxiv.org/pdf/1207.5554v3 | 2012-09-21T22:51:40Z | 2012-07-23T22:39:51Z | Bellman Error Based Feature Generation using Random Projections on
Sparse Spaces | We address the problem of automatic generation of features for value function
approximation. Bellman Error Basis Functions (BEBFs) have been shown to improve
the error of policy evaluation with function approximation, with a convergence
rate similar to that of value iteration. We propose a simple, fast and robust
algorithm based on random projections to generate BEBFs for sparse feature
spaces. We provide a finite sample analysis of the proposed method, and prove
that projections logarithmic in the dimension of the original space are enough
to guarantee contraction in the error. Empirical results demonstrate the
strength of this method.
| [
"['Mahdi Milani Fard' 'Yuri Grinberg' 'Amir-massoud Farahmand'\n 'Joelle Pineau' 'Doina Precup']",
"Mahdi Milani Fard, Yuri Grinberg, Amir-massoud Farahmand, Joelle\n Pineau, Doina Precup"
] |
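A minimal sketch of one BEBF iteration under these ideas: regress the empirical Bellman residual of the current value estimate on a random low-dimensional projection of the sparse features. The toy dimensions, the Gaussian projection and the least-squares fit are assumptions for illustration, not the paper's exact construction (whose analysis prescribes a projection size logarithmic in the original dimension):

```python
import numpy as np

rng = np.random.default_rng(2)

def next_bebf(phi, r, phi_next, w, gamma=0.95, d_proj=50):
    """One BEBF iteration with random projections: regress the empirical
    Bellman residual on a random low-dim projection of the sparse features."""
    residual = r + gamma * (phi_next @ w) - phi @ w   # TD error of current V
    P = rng.normal(size=(phi.shape[1], d_proj)) / np.sqrt(d_proj)
    z = phi @ P                                       # projected features
    coef, *_ = np.linalg.lstsq(z, residual, rcond=None)
    return lambda phi_s: (phi_s @ P) @ coef           # the new basis function

# toy sparse data: 1000 transitions, 5000 binary features, ~10 active each
n, D = 1000, 5000
phi = (rng.random((n, D)) < 10 / D).astype(float)
phi_next = (rng.random((n, D)) < 10 / D).astype(float)
r = rng.random(n)
w = np.zeros(D)                                       # start from V = 0
f = next_bebf(phi, r, phi_next, w)
print("new feature on first 3 samples:", f(phi[:3]))
```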
cs.AI cs.LG | null | 1207.5589 | null | null | http://arxiv.org/pdf/1207.5589v1 | 2012-07-24T04:55:02Z | 2012-07-24T04:55:02Z | VOI-aware MCTS | UCT, a state-of-the-art algorithm for Monte Carlo tree search (MCTS) in games
and Markov decision processes, is based on UCB1, a sampling policy for the
Multi-armed Bandit problem (MAB) that minimizes the cumulative regret. However,
search differs from MAB in that in MCTS it is usually only the final "arm pull"
(the actual move selection) that collects a reward, rather than all "arm
pulls". In this paper, an MCTS sampling policy based on Value of Information
(VOI) estimates of rollouts is suggested. Empirical evaluation of the policy
and comparison to UCB1 and UCT is performed on random MAB instances as well as
on Computer Go.
| [
"['David Tolpin' 'Solomon Eyal Shimony']",
"David Tolpin and Solomon Eyal Shimony"
] |
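As a rough illustration of what a VOI estimate for bandit sampling can look like, the sketch below uses a Hoeffding-style surrogate: the value of pulling an arm once more is the potential change in the final recommendation, weighted by an upper bound on the probability that the pull triggers it. This is an illustrative stand-in, not the estimates derived in the paper:

```python
import numpy as np

def voi_upper_bounds(means, counts):
    """Crude Hoeffding-style upper bounds on the myopic VOI of one more
    pull of each arm, given empirical means and pull counts. Illustrative
    surrogate only; the paper derives its own estimates."""
    means = np.asarray(means, float); counts = np.asarray(counts, float)
    best = int(np.argmax(means))
    second = np.max(np.delete(means, best))
    voi = np.empty_like(means)
    for i, (m, n) in enumerate(zip(means, counts)):
        if i == best:
            # value of discovering the current best is actually worse
            gap = m - second
            voi[i] = (1 - second) / (n + 1) * np.exp(-2 * n * gap ** 2)
        else:
            # value of discovering arm i is actually better than the best
            gap = means[best] - m
            voi[i] = means[best] / (n + 1) * np.exp(-2 * n * gap ** 2)
    return voi

# sample the arm whose estimated VOI is highest
print(voi_upper_bounds([0.6, 0.55, 0.3], [40, 35, 30]))
```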
cs.CV cs.LG cs.NE | null | 1207.5774 | null | null | http://arxiv.org/pdf/1207.5774v3 | 2012-07-27T08:24:51Z | 2012-07-22T16:30:07Z | A New Training Algorithm for Kanerva's Sparse Distributed Memory | The Sparse Distributed Memory (SDM) proposed by Pentti Kanerva was
conceived as a model of human long-term memory. The architecture of the SDM
permits storing binary patterns and retrieving them using partially matching
patterns. However, Kanerva's model is efficient only in handling random data.
The purpose of this article is to introduce a new approach to training
Kanerva's SDM that can handle non-random data efficiently, and to provide it
with the capability to recognize inverted patterns. This approach uses a
signal model different from the one proposed, for different purposes, by
Hely, Willshaw and Hayes in [4]. The article additionally suggests a different
way of creating hard locations in the memory, in contrast to Kanerva's static
model.
| [
"['Lou Marvin Caraig']",
"Lou Marvin Caraig"
] |
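For context, a minimal sketch of the baseline (static) Kanerva SDM that the article modifies: random hard locations, activation within a Hamming radius, counter-based writes and majority-vote reads. The sizes and activation radius below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

class SDM:
    """Minimal Kanerva-style SDM with static random hard locations
    (the baseline the article modifies); n-bit addresses and data."""
    def __init__(self, n_bits=256, n_locations=2000, radius=112):
        self.addresses = rng.integers(0, 2, (n_locations, n_bits))
        self.counters = np.zeros((n_locations, n_bits), dtype=int)
        self.radius = radius

    def _active(self, addr):
        # activate all hard locations within Hamming distance `radius`
        return (self.addresses != addr).sum(axis=1) <= self.radius

    def write(self, addr, data):
        self.counters[self._active(addr)] += 2 * data - 1  # +1 for 1, -1 for 0

    def read(self, addr):
        # majority vote over the counters of the activated locations
        return (self.counters[self._active(addr)].sum(axis=0) > 0).astype(int)

m = SDM()
pattern = rng.integers(0, 2, 256)
m.write(pattern, pattern)                 # autoassociative store
noisy = pattern.copy(); noisy[:20] ^= 1   # flip 20 bits of the cue
print("fraction of bits recovered:", (m.read(noisy) == pattern).mean())
```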
stat.ME cs.LG math.ST stat.ML stat.TH | 10.1214/13-AOS1140 | 1207.6076 | null | null | http://arxiv.org/abs/1207.6076v3 | 2013-11-12T12:22:53Z | 2012-07-25T18:17:20Z | Equivalence of distance-based and RKHS-based statistics in hypothesis
testing | We provide a unifying framework linking two classes of statistics used in
two-sample and independence testing: on the one hand, the energy distances and
distance covariances from the statistics literature; on the other, maximum mean
discrepancies (MMD), that is, distances between embeddings of distributions into
reproducing kernel Hilbert spaces (RKHS), as established in machine learning.
In the case where the energy distance is computed with a semimetric of negative
type, a positive definite kernel, termed distance kernel, may be defined such
that the MMD corresponds exactly to the energy distance. Conversely, for any
positive definite kernel, we can interpret the MMD as energy distance with
respect to some negative-type semimetric. This equivalence readily extends to
distance covariance using kernels on the product space. We determine the class
of probability distributions for which the test statistics are consistent
against all alternatives. Finally, we investigate the performance of the family
of distance kernels in two-sample and independence tests: we show in particular
that the energy distance most commonly employed in statistics is just one
member of a parametric family of kernels, and that other choices from this
family can yield more powerful tests.
| [
"['Dino Sejdinovic' 'Bharath Sriperumbudur' 'Arthur Gretton'\n 'Kenji Fukumizu']",
"Dino Sejdinovic, Bharath Sriperumbudur, Arthur Gretton, Kenji Fukumizu"
] |
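The core equivalence is easy to verify numerically. With the distance kernel k(x,y) = (d(x,z0) + d(y,z0) - d(x,y))/2 built from the Euclidean distance and any fixed point z0, the biased (all-pairs) MMD^2 equals half the corresponding energy distance exactly; the sample sizes and distributions below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(0.0, 1.0, (100, 3))
Y = rng.normal(0.5, 1.0, (120, 3))

def dist(A, B):                      # pairwise Euclidean distances
    return np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

# energy distance (V-statistic, all pairs)
energy = 2 * dist(X, Y).mean() - dist(X, X).mean() - dist(Y, Y).mean()

# distance-induced kernel centred at an arbitrary point z0
z0 = np.zeros((1, 3))
def k(A, B):
    return 0.5 * (dist(A, z0) + dist(B, z0).T - dist(A, B))

# MMD^2 with this kernel equals half the energy distance, exactly
mmd2 = k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
print(mmd2, energy / 2)              # identical up to floating point
```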
stat.ML cs.IR cs.LG | 10.1561/2200000044 | 1207.6083 | null | null | http://arxiv.org/abs/1207.6083v4 | 2013-01-10T20:43:53Z | 2012-07-25T18:45:43Z | Determinantal point processes for machine learning | Determinantal point processes (DPPs) are elegant probabilistic models of
repulsion that arise in quantum physics and random matrix theory. In contrast
to traditional structured models like Markov random fields, which become
intractable and hard to approximate in the presence of negative correlations,
DPPs offer efficient and exact algorithms for sampling, marginalization,
conditioning, and other inference tasks. We provide a gentle introduction to
DPPs, focusing on the intuitions, algorithms, and extensions that are most
relevant to the machine learning community, and show how DPPs can be applied to
real-world applications like finding diverse sets of high-quality search
results, building informative summaries by selecting diverse sentences from
documents, modeling non-overlapping human poses in images or video, and
automatically building timelines of important news stories.
| [
"['Alex Kulesza' 'Ben Taskar']",
"Alex Kulesza, Ben Taskar"
] |
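A minimal sketch of the L-ensemble formulation that makes these models tractable: P(S) is proportional to det(L_S), with normalizer det(L + I), so subset probabilities can be computed exactly. The tiny ground set and random item features are illustrative:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)

# L-ensemble DPP on a ground set of 5 items: P(S) = det(L_S) / det(L + I)
B = rng.normal(size=(5, 3))
L = B @ B.T + 1e-6 * np.eye(5)        # PSD kernel; rows of B are item features
Z = np.linalg.det(L + np.eye(5))      # normalizer over all subsets

def prob(S):
    S = list(S)
    return np.linalg.det(L[np.ix_(S, S)]) / Z if S else 1.0 / Z

total = sum(prob(S) for r in range(6) for S in combinations(range(5), r))
print("sum over all 2^5 subsets:", round(total, 6))   # 1.0 up to rounding
# the determinant discounts subsets whose feature rows are similar (repulsion)
print("P({0,1}) =", prob([0, 1]))
```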
cs.CR cs.LG | 10.1109/TIFS.2012.2225048 | 1207.6231 | null | null | http://arxiv.org/abs/1207.6231v2 | 2012-10-08T21:32:42Z | 2012-07-26T10:34:19Z | Touchalytics: On the Applicability of Touchscreen Input as a Behavioral
Biometric for Continuous Authentication | We investigate whether a classifier can continuously authenticate users based
on the way they interact with the touchscreen of a smart phone. We propose a
set of 30 behavioral touch features that can be extracted from raw touchscreen
logs and demonstrate that different users populate distinct subspaces of this
feature space. In a systematic experiment designed to test how this behavioral
pattern exhibits consistency over time, we collected touch data from users
interacting with a smart phone using basic navigation maneuvers, i.e., up-down
and left-right scrolling. We propose a classification framework that learns the
touch behavior of a user during an enrollment phase and is able to accept or
reject the current user by monitoring interaction with the touchscreen. The
classifier achieves a median equal error rate of 0% for intra-session
authentication, 2%-3% for inter-session authentication and below 4% when the
authentication test was carried out one week after the enrollment phase. While
our experimental findings disqualify this method as a standalone authentication
mechanism for long-term authentication, it could be implemented as a means to
extend screen-lock time or as a part of a multi-modal biometric authentication
system.
| [
"['Mario Frank' 'Ralf Biedert' 'Eugene Ma' 'Ivan Martinovic' 'Dawn Song']",
"Mario Frank, Ralf Biedert, Eugene Ma, Ivan Martinovic, Dawn Song"
] |
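As a side note on the reported metric, here is a minimal sketch of how an equal error rate is computed from classifier scores; the score distributions below are synthetic placeholders, not the study's data:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER: the error rate at the threshold where the false-rejection rate
    (genuine scores below threshold) equals the false-acceptance rate
    (impostor scores at or above threshold)."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    frr = np.array([(genuine < t).mean() for t in thresholds])
    far = np.array([(impostor >= t).mean() for t in thresholds])
    i = int(np.argmin(np.abs(frr - far)))
    return (frr[i] + far[i]) / 2

rng = np.random.default_rng(6)
genuine = rng.normal(1.0, 0.5, 500)    # classifier scores for the enrolled user
impostor = rng.normal(0.0, 0.5, 500)   # scores for other users
print("EER:", equal_error_rate(genuine, impostor))
```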
cs.AI cs.DB cs.LG | null | 1207.6253 | null | null | http://arxiv.org/pdf/1207.6253v1 | 2012-07-26T12:33:46Z | 2012-07-26T12:33:46Z | On When and How to use SAT to Mine Frequent Itemsets | A new stream of research was born in the last decade with the goal of mining
itemsets of interest using Constraint Programming (CP). This has provided a
natural way to combine complex constraints in a highly flexible manner.
Although CP state-of-the-art solutions formulate the task using Boolean
variables, the few attempts to adopt propositional Satisfiability (SAT)
provided an unsatisfactory performance. This work deepens the study on when and
how to use SAT for the frequent itemset mining (FIM) problem by defining
different encodings with multiple task-driven enumeration options and search
strategies. Although for the majority of the scenarios SAT-based solutions
appear to be non-competitive with CP peers, results show a variety of
interesting cases where SAT encodings are the best option.
| [
"['Rui Henriques' 'Inês Lynce' 'Vasco Manquinho']",
"Rui Henriques and In\\^es Lynce and Vasco Manquinho"
] |
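A minimal sketch of the kind of SAT encoding involved, assuming the python-sat (pysat) package is available: item variables, coverage variables linked to each transaction, and a cardinality constraint enforcing minimum support, with models enumerated via blocking clauses. The tiny dataset and encoding choices are illustrative, not the paper's encodings:

```python
from pysat.solvers import Glucose3
from pysat.card import CardEnc
from pysat.formula import IDPool

transactions = [{1, 2, 3}, {1, 2}, {2, 3}, {1, 3}]
items = sorted(set().union(*transactions))
minsup = 2

pool = IDPool()
x = {i: pool.id(('x', i)) for i in items}          # item i is in the itemset
y = {t: pool.id(('y', t)) for t in range(len(transactions))}

solver = Glucose3()
for t, T in enumerate(transactions):
    absent = [i for i in items if i not in T]
    for i in absent:                                # y_t -> no absent item chosen
        solver.add_clause([-y[t], -x[i]])
    solver.add_clause([y[t]] + [x[i] for i in absent])  # all absent off -> y_t
for cl in CardEnc.atleast(lits=list(y.values()), bound=minsup,
                          vpool=pool).clauses:      # support >= minsup
    solver.add_clause(cl)

# enumerate all frequent itemsets (the empty itemset is trivially frequent)
while solver.solve():
    model = solver.get_model()
    itemset = [i for i in items if model[x[i] - 1] > 0]
    print(itemset)
    solver.add_clause([-x[i] if i in itemset else x[i] for i in items])
```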