categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence)
---|---|---|---|---|---|---|---|---|---|---
cs.LG | null | 1209.3056 | null | null | http://arxiv.org/pdf/1209.3056v1 | 2012-09-13T22:47:07Z | 2012-09-13T22:47:07Z | Parametric Local Metric Learning for Nearest Neighbor Classification | We study the problem of learning local metrics for nearest neighbor
classification. Most previous works on local metric learning learn a number of
unrelated local metrics. While this "independence" approach delivers
increased flexibility, its downside is a considerable risk of overfitting. We
present a new parametric local metric learning method in which we learn a
smooth metric matrix function over the data manifold. Using an approximation
error bound of the metric matrix function we learn local metrics as linear
combinations of basis metrics defined on anchor points over different regions
of the instance space. We constrain the metric matrix function by imposing on
the linear combinations manifold regularization which makes the learned metric
matrix function vary smoothly along the geodesics of the data manifold. Our
metric learning method has excellent performance both in terms of predictive
power and scalability. We evaluated it on several large-scale classification
problems with tens of thousands of instances, comparing it against several
state-of-the-art metric learning methods, both global and local, as well as
against SVM with automatic kernel selection, all of which it outperforms by a
significant margin.
| [
"['Jun Wang' 'Adam Woznica' 'Alexandros Kalousis']",
"Jun Wang, Adam Woznica, Alexandros Kalousis"
] |
cs.ET cs.LG cs.NE physics.optics | null | 1209.3129 | null | null | http://arxiv.org/pdf/1209.3129v1 | 2012-09-14T08:56:19Z | 2012-09-14T08:56:19Z | Analog readout for optical reservoir computers | Reservoir computing is a new, powerful and flexible machine learning
technique that is easily implemented in hardware. Recently, by using a
time-multiplexed architecture, hardware reservoir computers have reached
performance comparable to digital implementations. Operating speeds allowing
for real-time information processing have been reached using optoelectronic
systems. At present, the main performance bottleneck is the readout layer, which
uses slow, digital postprocessing. We have designed an analog readout suitable
for time-multiplexed optoelectronic reservoir computers, capable of working in
real time. The readout has been built and tested experimentally on a standard
benchmark task. Its performance is better than that of non-reservoir methods, with
ample room for further improvement. The present work thereby overcomes one of
the major limitations for the future development of hardware reservoir
computers.
| [
"['Anteo Smerieri' 'François Duport' 'Yvan Paquot' 'Benjamin Schrauwen'\n 'Marc Haelterman' 'Serge Massar']",
"Anteo Smerieri, Fran\\c{c}ois Duport, Yvan Paquot, Benjamin Schrauwen,\n Marc Haelterman, Serge Massar"
] |
cs.LG cs.DS stat.ML | null | 1209.3352 | null | null | http://arxiv.org/pdf/1209.3352v4 | 2014-02-03T07:09:03Z | 2012-09-15T03:27:11Z | Thompson Sampling for Contextual Bandits with Linear Payoffs | Thompson Sampling is one of the oldest heuristics for multi-armed bandit
problems. It is a randomized algorithm based on Bayesian ideas, and has
recently generated significant interest after several studies demonstrated it
to have better empirical performance compared to the state-of-the-art methods.
However, many questions regarding its theoretical performance remained open. In
this paper, we design and analyze a generalization of Thompson Sampling
algorithm for the stochastic contextual multi-armed bandit problem with linear
payoff functions, when the contexts are provided by an adaptive adversary. This
is among the most important and widely studied versions of the contextual
bandits problem. We provide the first theoretical guarantees for the contextual
version of Thompson Sampling. We prove a high probability regret bound of
$\tilde{O}(d^{3/2}\sqrt{T})$ (or $\tilde{O}(d\sqrt{T \log(N)})$), which is the
best regret bound achieved by any computationally efficient algorithm available
for this problem in the current literature, and is within a factor of
$\sqrt{d}$ (or $\sqrt{\log(N)}$) of the information-theoretic lower bound for
this problem.
| [
"Shipra Agrawal, Navin Goyal",
"['Shipra Agrawal' 'Navin Goyal']"
] |
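As an illustration of the algorithm family analyzed above, here is a minimal sketch of Thompson Sampling for linear contextual bandits: maintain a Gaussian posterior over the payoff parameter, sample from it each round, and play the arm the sample favors. The function names, the prior scale `v`, and the simulation interface are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def linear_thompson_sampling(contexts_fn, reward_fn, d, T, v=1.0):
    """Thompson Sampling for linear contextual bandits (sketch).

    contexts_fn(t) -> (K, d) array of per-arm feature vectors at round t.
    reward_fn(t, x) -> observed reward for playing feature vector x.
    """
    B = np.eye(d)          # posterior precision
    f = np.zeros(d)        # running sum of reward-weighted features
    for t in range(T):
        X = contexts_fn(t)
        mu = np.linalg.solve(B, f)                       # posterior mean
        cov = v ** 2 * np.linalg.inv(B)                  # posterior covariance
        theta = np.random.multivariate_normal(mu, cov)   # posterior sample
        arm = int(np.argmax(X @ theta))                  # optimism via sampling
        x = X[arm]
        r = reward_fn(t, x)
        B += np.outer(x, x)                              # Bayesian linear-regression update
        f += r * x
    return np.linalg.solve(B, f)
```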
cs.LG cs.DS stat.ML | null | 1209.3353 | null | null | http://arxiv.org/pdf/1209.3353v1 | 2012-09-15T03:41:18Z | 2012-09-15T03:41:18Z | Further Optimal Regret Bounds for Thompson Sampling | Thompson Sampling is one of the oldest heuristics for multi-armed bandit
problems. It is a randomized algorithm based on Bayesian ideas, and has
recently generated significant interest after several studies demonstrated it
to have better empirical performance compared to the state of the art methods.
In this paper, we provide a novel regret analysis for Thompson Sampling that
simultaneously proves both the optimal problem-dependent bound of
$(1+\epsilon)\sum_i \frac{\ln T}{\Delta_i}+O(\frac{N}{\epsilon^2})$ and the
first near-optimal problem-independent bound of $O(\sqrt{NT\ln T})$ on the
expected regret of this algorithm. Our near-optimal problem-independent bound
solves a COLT 2012 open problem of Chapelle and Li. The optimal
problem-dependent regret bound for this problem was first proven recently by
Kaufmann et al. [ALT 2012]. Our novel martingale-based analysis techniques are
conceptually simple, easily extend to distributions other than the Beta
distribution, and also extend to the more general contextual bandits setting
[Manuscript, Agrawal and Goyal, 2012].
| [
"Shipra Agrawal, Navin Goyal",
"['Shipra Agrawal' 'Navin Goyal']"
] |
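For the classical (non-contextual) bandit setting analyzed in this abstract, Thompson Sampling with Beta priors reduces to a few lines. The following is a generic sketch under a Bernoulli-reward assumption; the paper's analysis also covers distributions other than the Beta.

```python
import numpy as np

def beta_bernoulli_ts(arms, T, seed=0):
    """Thompson Sampling with Beta priors for Bernoulli arms (sketch).

    arms: list of true success probabilities (for simulation only).
    """
    rng = np.random.default_rng(seed)
    N = len(arms)
    successes = np.ones(N)   # Beta(1, 1) uniform prior
    failures = np.ones(N)
    total_reward = 0
    for _ in range(T):
        samples = rng.beta(successes, failures)   # one posterior draw per arm
        i = int(np.argmax(samples))               # play the arm with the best draw
        r = rng.random() < arms[i]                # simulated Bernoulli feedback
        successes[i] += r
        failures[i] += 1 - r
        total_reward += r
    return total_reward
```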
cs.CV cs.CY cs.LG | null | 1209.3433 | null | null | http://arxiv.org/pdf/1209.3433v1 | 2012-09-15T20:57:51Z | 2012-09-15T20:57:51Z | A Hajj And Umrah Location Classification System For Video Crowded Scenes | In this paper, a new automatic system for classifying ritual locations in
diverse Hajj and Umrah video scenes is investigated. This challenging subject
has mostly been ignored in the past due to several problems, one of which is
the lack of realistic annotated video datasets. The HUER dataset is defined to
model six different Hajj and Umrah ritual locations [26].
The proposed Hajj and Umrah ritual location classification system consists of
four main phases: preprocessing, segmentation, feature extraction, and
location classification. Shot boundary detection and background/foreground
segmentation algorithms are applied to prepare the input video scenes for the
KNN, ANN, and SVM classifiers. The system improves on the state-of-the-art
results for Hajj and Umrah location classification, and successfully
recognizes the six Hajj rituals with more than 90% accuracy. The experiments
demonstrate promising results.
| [
"Hossam M. Zawbaa, Salah A. Aly, Adnan A. Gutub",
"['Hossam M. Zawbaa' 'Salah A. Aly' 'Adnan A. Gutub']"
] |
cs.LG cs.DB | null | 1209.3686 | null | null | http://arxiv.org/pdf/1209.3686v4 | 2014-12-20T08:56:15Z | 2012-09-17T15:21:06Z | Active Learning for Crowd-Sourced Databases | Crowd-sourcing has become a popular means of acquiring labeled data for a
wide variety of tasks where humans are more accurate than computers, e.g.,
labeling images, matching objects, or analyzing sentiment. However, relying
solely on the crowd is often impractical even for data sets with thousands of
items, due to the time and cost constraints of acquiring human input (which
costs pennies and takes minutes per label). In this paper, we propose algorithms for
integrating machine learning into crowd-sourced databases, with the goal of
allowing crowd-sourcing applications to scale, i.e., to handle larger datasets
at lower costs. The key observation is that, in many of the above tasks, humans
and machine learning algorithms can be complementary, as humans are often more
accurate but slow and expensive, while algorithms are usually less accurate,
but faster and cheaper.
Based on this observation, we present two new active learning algorithms to
combine humans and algorithms together in a crowd-sourced database. Our
algorithms are based on the theory of non-parametric bootstrap, which makes our
results applicable to a broad class of machine learning models. Our results, on
three real-life datasets collected with Amazon's Mechanical Turk, and on 15
well-known UCI data sets, show that our methods on average ask humans to label
one to two orders of magnitude fewer items to achieve the same accuracy as a
baseline that labels random images, and two to eight times fewer questions than
previous active learning schemes.
| [
"Barzan Mozafari, Purnamrita Sarkar, Michael J. Franklin, Michael I.\n Jordan, Samuel Madden",
"['Barzan Mozafari' 'Purnamrita Sarkar' 'Michael J. Franklin'\n 'Michael I. Jordan' 'Samuel Madden']"
] |
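A rough sketch of the nonparametric-bootstrap idea behind such active learning: train several models on bootstrap resamples of the labeled data and route the pool items with the highest disagreement to human labelers. The logistic-regression base learner and the disagreement score below are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def bootstrap_uncertainty(X_labeled, y_labeled, X_pool, k=10, seed=0):
    """Rank pool items by bootstrap disagreement (sketch).

    Items in X_pool where the k bootstrap models disagree most are the
    best candidates to send to human labelers. Assumes each resample
    still contains both classes.
    """
    rng = np.random.default_rng(seed)
    n = len(X_labeled)
    votes = np.zeros((k, len(X_pool)))
    for b in range(k):
        idx = rng.integers(0, n, size=n)           # resample with replacement
        model = LogisticRegression(max_iter=1000)
        model.fit(X_labeled[idx], y_labeled[idx])
        votes[b] = model.predict(X_pool)
    disagreement = votes.std(axis=0)               # spread across bootstrap models
    return np.argsort(-disagreement)               # most uncertain items first
```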
cs.LG cs.AI cs.DS | null | 1209.3694 | null | null | http://arxiv.org/pdf/1209.3694v1 | 2012-09-17T15:43:11Z | 2012-09-17T15:43:11Z | Submodularity in Batch Active Learning and Survey Problems on Gaussian
Random Fields | Many real-world datasets can be represented in the form of a graph whose edge
weights designate similarities between instances. A discrete Gaussian random
field (GRF) model is a finite-dimensional Gaussian process (GP) whose prior
covariance is the inverse of a graph Laplacian. Minimizing the trace of the
predictive covariance $\Sigma$ (V-optimality) on GRFs has proven successful in
batch active learning classification problems with budget constraints. However,
a worst-case guarantee has been missing. We show that the V-optimality on GRFs
as a function of the batch query set is submodular, and hence its greedy
selection algorithm guarantees a $(1-1/e)$ approximation ratio. Moreover, GRF
models have the absence-of-suppressor (AofS) condition. For active survey
problems, we propose a similar survey criterion which minimizes
$\mathbf{1}^\top \Sigma \mathbf{1}$. In practice, the V-optimality criterion
performs better than GPs with mutual information gain criteria and allows
nonuniform costs for different nodes.
| [
"['Yifei Ma' 'Roman Garnett' 'Jeff Schneider']",
"Yifei Ma, Roman Garnett, Jeff Schneider"
] |
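A minimal sketch of greedy V-optimal selection on a GRF, assuming the prior covariance is a regularized inverse Laplacian: each step picks the node whose observation most reduces the total predictive variance, which submodularity makes a (1-1/e)-approximate strategy. The ridge term and this exact interface are assumptions, not the paper's code.

```python
import numpy as np

def greedy_v_optimal(L, budget, ridge=1e-6):
    """Greedy batch selection under the V-optimality criterion (sketch).

    L: graph Laplacian (n x n); the GRF prior covariance is its
    regularized inverse. Conditioning on node i reduces the trace of
    the predictive covariance by ||Sigma[:, i]||^2 / Sigma[i, i].
    """
    n = L.shape[0]
    Sigma = np.linalg.inv(L + ridge * np.eye(n))
    chosen = []
    for _ in range(budget):
        d = np.diag(Sigma).copy()
        d[chosen] = np.inf                       # exclude already-chosen nodes
        gains = (Sigma ** 2).sum(axis=0) / d     # per-node variance reduction
        i = int(np.argmax(gains))
        chosen.append(i)
        # rank-one conditioning update after observing node i
        Sigma -= np.outer(Sigma[:, i], Sigma[i, :]) / Sigma[i, i]
    return chosen
```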
stat.ML cs.LG | null | 1209.3761 | null | null | http://arxiv.org/pdf/1209.3761v1 | 2012-09-17T19:52:38Z | 2012-09-17T19:52:38Z | Generalized Canonical Correlation Analysis for Disparate Data Fusion | Manifold matching works to identify embeddings of multiple disparate data
spaces into the same low-dimensional space, where joint inference can be
pursued. It is an enabling methodology for fusion and inference from multiple
and massive disparate data sources. In this paper we focus on a method called
Canonical Correlation Analysis (CCA) and its generalization Generalized
Canonical Correlation Analysis (GCCA), which belong to the more general Reduced
Rank Regression (RRR) framework. We present an efficiency investigation of CCA
and GCCA under different training conditions for a particular text document
classification task.
| [
"Ming Sun, Carey E. Priebe, Minh Tang",
"['Ming Sun' 'Carey E. Priebe' 'Minh Tang']"
] |
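For reference, CCA itself can be computed via an SVD of the whitened cross-covariance. The sketch below shows that standard construction (with a small ridge for numerical stability); it is not the paper's GCCA implementation.

```python
import numpy as np

def cca(X, Y, k):
    """Canonical Correlation Analysis via SVD of the whitened
    cross-covariance (standard construction, sketch)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + 1e-8 * np.eye(X.shape[1])   # ridge for stability
    Cyy = Y.T @ Y / n + 1e-8 * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # whiten each block, then take the top-k singular directions
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T
    U, s, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    A = Wx @ U[:, :k]      # projection for the X view
    B = Wy @ Vt[:k].T      # projection for the Y view
    return A, B, s[:k]     # s[:k] are the canonical correlations
```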
cs.AI cs.LG | null | 1209.3818 | null | null | http://arxiv.org/pdf/1209.3818v4 | 2013-04-01T23:58:47Z | 2012-09-18T00:13:53Z | Evolution and the structure of learning agents | This paper presents the thesis that all learning agents of finite information
size are limited by their informational structure in what goals they can
efficiently learn to achieve in a complex environment. Evolutionary change is
critical for creating the required structure for all learning agents in any
complex environment. The thesis implies that there is no efficient universal
learning algorithm. An agent can surpass the learning limits imposed by its
structure only through slow evolutionary change or blind search, which in a
very complex environment can yield at best an inefficient universal learning
capability, one that works only on evolutionary timescales or by improbable
luck.
| [
"Alok Raj",
"['Alok Raj']"
] |
stat.ML cs.HC cs.LG | 10.1109/TBME.2013.2253608 | 1209.4115 | null | null | http://arxiv.org/abs/1209.4115v2 | 2013-04-03T17:26:03Z | 2012-09-18T22:37:10Z | Transferring Subspaces Between Subjects in Brain-Computer Interfacing | Compensating changes between a subjects' training and testing session in
Brain Computer Interfacing (BCI) is challenging but of great importance for a
robust BCI operation. We show that such changes are very similar between
subjects, thus can be reliably estimated using data from other users and
utilized to construct an invariant feature space. This novel approach to
learning from other subjects aims to reduce the adverse effects of common
non-stationarities, but does not transfer discriminative information. This is
an important conceptual difference to standard multi-subject methods that e.g.
improve the covariance matrix estimation by shrinking it towards the average of
other users or construct a global feature space. These methods do not reduce
the shift between training and test data and may produce poor results when
subjects have very different signal characteristics. In this paper we compare
our approach to two state-of-the-art multi-subject methods on toy data and two
data sets of EEG recordings from subjects performing motor imagery. We show
that it can not only achieve a significant increase in performance, but also
that the extracted change patterns allow for a neurophysiologically meaningful
interpretation.
| [
"Wojciech Samek, Frank C. Meinecke, Klaus-Robert M\\\"uller",
"['Wojciech Samek' 'Frank C. Meinecke' 'Klaus-Robert Müller']"
] |
stat.ML cs.LG stat.CO | null | 1209.4129 | null | null | http://arxiv.org/pdf/1209.4129v3 | 2013-10-11T19:23:38Z | 2012-09-19T01:27:40Z | Communication-Efficient Algorithms for Statistical Optimization | We analyze two communication-efficient algorithms for distributed statistical
optimization on large-scale data sets. The first algorithm is a standard
averaging method that distributes the $N$ data samples evenly to $m$
machines, performs separate minimization on each subset, and then averages the
estimates. We provide a sharp analysis of this average mixture algorithm,
showing that under a reasonable set of conditions, the combined parameter
achieves mean-squared error that decays as $O(N^{-1}+(N/m)^{-2})$.
Whenever $m \le \sqrt{N}$, this guarantee matches the best possible rate
achievable by a centralized algorithm having access to all $N$
samples. The second algorithm is a novel method, based on an appropriate form
of bootstrap subsampling. Requiring only a single round of communication, it
has mean-squared error that decays as $O(N^{-1} + (N/m)^{-3})$, and so is
more robust to the amount of parallelization. In addition, we show that a
stochastic gradient-based method attains mean-squared error decaying as
$O(N^{-1} + (N/ m)^{-3/2})$, easing computation at the expense of penalties in
the rate of convergence. We also provide experimental evaluation of our
methods, investigating their performance both on simulated data and on a
large-scale regression problem from the internet search domain. In particular,
we show that our methods can be used to efficiently solve an advertisement
prediction problem from the Chinese SoSo Search Engine, which involves logistic
regression with $N \approx 2.4 \times 10^8$ samples and $d \approx 740,000$
covariates.
| [
"['Yuchen Zhang' 'John C. Duchi' 'Martin Wainwright']",
"Yuchen Zhang and John C. Duchi and Martin Wainwright"
] |
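A minimal sketch of the first (averaging) algorithm's structure: split the $N$ samples evenly over $m$ simulated machines, solve each local problem, and average the estimates. Logistic regression stands in here for a generic empirical-risk minimizer; the sharding and model choice are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def one_shot_average(X, y, m, seed=0):
    """Average-mixture estimator (sketch): m local fits, one average.

    Assumes each of the m shards contains examples of every class.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(X))
    shards = np.array_split(perm, m)             # even split of the N samples
    coefs = []
    for idx in shards:
        model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        coefs.append(np.concatenate([model.coef_.ravel(), model.intercept_]))
    return np.mean(coefs, axis=0)                # combined parameter estimate
```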
cs.LG stat.ML | null | 1209.4825 | null | null | http://arxiv.org/pdf/1209.4825v2 | 2013-06-08T14:23:04Z | 2012-09-21T14:14:35Z | Efficient Regularized Least-Squares Algorithms for Conditional Ranking
on Relational Data | In domains like bioinformatics, information retrieval and social network
analysis, one can find learning tasks where the goal consists of inferring a
ranking of objects, conditioned on a particular target object. We present a
general kernel framework for learning conditional rankings from various types
of relational data, where rankings can be conditioned on unseen data objects.
We propose efficient algorithms for conditional ranking by optimizing squared
regression and ranking loss functions. We show theoretically that learning
with the ranking loss is likely to generalize better than with the regression
loss. Further, we prove that symmetry or reciprocity properties of relations
can be efficiently enforced in the learned models. Experiments on synthetic and
real-world data illustrate that the proposed methods deliver state-of-the-art
performance in terms of predictive power and computational efficiency.
Moreover, we also show empirically that incorporating symmetry or reciprocity
properties can improve the generalization performance.
| [
"['Tapio Pahikkala' 'Antti Airola' 'Michiel Stock' 'Bernard De Baets'\n 'Willem Waegeman']",
"Tapio Pahikkala, Antti Airola, Michiel Stock, Bernard De Baets, Willem\n Waegeman"
] |
cs.CG cs.LG | null | 1209.4893 | null | null | http://arxiv.org/pdf/1209.4893v2 | 2012-10-10T22:22:46Z | 2012-09-21T19:55:53Z | On the Sensitivity of Shape Fitting Problems | In this article, we study shape fitting problems, $\epsilon$-coresets, and
total sensitivity. We focus on the $(j,k)$-projective clustering problems,
including $k$-median/$k$-means, $k$-line clustering, $j$-subspace
approximation, and the integer $(j,k)$-projective clustering problem. We derive
upper bounds of total sensitivities for these problems, and obtain
$\epsilon$-coresets using these upper bounds. Using a dimension-reduction type
argument, we are able to greatly simplify earlier results on total sensitivity
for the $k$-median/$k$-means clustering problems, and obtain
positively-weighted $\epsilon$-coresets for several variants of the
$(j,k)$-projective clustering problem. We also extend an earlier result on
$\epsilon$-coresets for the integer $(j,k)$-projective clustering problem in
fixed dimension to the case of high dimension.
| [
"['Kasturi Varadarajan' 'Xin Xiao']",
"Kasturi Varadarajan and Xin Xiao"
] |
stat.ML cs.LG stat.ME | null | 1209.4951 | null | null | http://arxiv.org/pdf/1209.4951v3 | 2013-08-01T17:44:53Z | 2012-09-22T01:50:43Z | An efficient model-free estimation of multiclass conditional probability | Conventional multiclass conditional probability estimation methods, such as
Fisher's discriminant analysis and logistic regression, often require
restrictive distributional model assumptions. In this paper, a model-free
estimation method is proposed to estimate multiclass conditional probability
through a series of conditional quantile regression functions. Specifically,
the conditional class probability is formulated as the difference of corresponding
cumulative distribution functions, where the cumulative distribution functions
can be converted from the estimated conditional quantile regression functions.
The proposed estimation method is also efficient as its computation cost does
not increase exponentially with the number of classes. The theoretical and
numerical studies demonstrate that the proposed estimation method is highly
competitive with existing methods, especially when the number of
classes is relatively large.
| [
"Tu Xu and Junhui Wang",
"['Tu Xu' 'Junhui Wang']"
] |
cs.LG stat.ML | null | 1209.5019 | null | null | http://arxiv.org/pdf/1209.5019v1 | 2012-09-22T21:01:06Z | 2012-09-22T21:01:06Z | A Bayesian Nonparametric Approach to Image Super-resolution | Super-resolution methods form high-resolution images from low-resolution
images. In this paper, we develop a new Bayesian nonparametric model for
super-resolution. Our method uses a beta-Bernoulli process to learn a set of
recurring visual patterns, called dictionary elements, from the data. Because
it is nonparametric, the number of elements found is also determined from the
data. We test the results on both benchmark and natural images, comparing with
several other models from the research literature. We perform large-scale human
evaluation experiments to assess the visual quality of the results. In a first
implementation, we use Gibbs sampling to approximate the posterior. However,
this algorithm is not feasible for large-scale data. To circumvent this, we
then develop an online variational Bayes (VB) algorithm. This algorithm finds
high quality dictionaries in a fraction of the time needed by the Gibbs
sampler.
| [
"['Gungor Polatkan' 'Mingyuan Zhou' 'Lawrence Carin' 'David Blei'\n 'Ingrid Daubechies']",
"Gungor Polatkan and Mingyuan Zhou and Lawrence Carin and David Blei\n and Ingrid Daubechies"
] |
cs.LG | null | 1209.5038 | null | null | http://arxiv.org/pdf/1209.5038v1 | 2012-09-23T07:50:42Z | 2012-09-23T07:50:42Z | Fast Randomized Model Generation for Shapelet-Based Time Series
Classification | Time series classification is a field which has drawn much attention over the
past decade. A new approach for classification of time series uses
classification trees based on shapelets. A shapelet is a subsequence extracted
from one of the time series in the dataset. A disadvantage of this approach is
the time required for building the shapelet-based classification tree. The
search for the best shapelet requires examining all subsequences of all lengths
from all time series in the training set.
A key goal of this work was to find an evaluation order of the shapelets
space which enables fast convergence to an accurate model. The comparative
analysis we conducted clearly indicates that a random evaluation order yields
the best results. Our empirical analysis of the distribution of high-quality
shapelets within the shapelets space provides insights into why randomized
shapelet sampling is superior to alternative evaluation orders.
We present an algorithm for randomized model generation for shapelet-based
classification that converges extremely quickly to a model with surprisingly
high accuracy after evaluating only an exceedingly small fraction of the
shapelets space.
| [
"['Daniel Gordon' 'Danny Hendler' 'Lior Rokach']",
"Daniel Gordon, Danny Hendler, Lior Rokach"
] |
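A rough sketch of randomized shapelet evaluation for a two-class problem: rather than scanning every subsequence of every length, sample candidates uniformly at random and score each by how well its minimum matching distances separate the classes. The mean-gap score below is a simple stand-in for the information gain used in shapelet trees; the whole interface is an illustrative assumption.

```python
import numpy as np

def min_dist(series, shapelet):
    """Distance from a shapelet to its best-matching window in a series."""
    series = np.asarray(series)
    L = len(shapelet)
    windows = np.lib.stride_tricks.sliding_window_view(series, L)
    return np.sqrt(((windows - shapelet) ** 2).sum(axis=1)).min()

def random_shapelet_search(X, y, n_candidates=200, seed=0):
    """Randomized shapelet search (sketch). X: list of 1-D arrays,
    y: binary labels. Returns the best-separating candidate."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(n_candidates):
        i = rng.integers(len(X))                          # random series
        L = rng.integers(3, len(X[i]) + 1)                # random length
        s = rng.integers(len(X[i]) - L + 1)               # random start
        cand = np.asarray(X[i][s:s + L])
        d = np.array([min_dist(ts, cand) for ts in X])
        score = abs(d[y == 0].mean() - d[y == 1].mean())  # class separation
        if score > best_score:
            best, best_score = cand, score
    return best, best_score
```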
cs.AI cs.LG | null | 1209.5251 | null | null | http://arxiv.org/pdf/1209.5251v1 | 2012-09-24T12:54:18Z | 2012-09-24T12:54:18Z | On Move Pattern Trends in a Large Go Games Corpus | We process a large corpus of game records of the board game of Go and propose
a way of extracting summary information on played moves. We then apply several
basic data-mining methods on the summary information to identify the most
differentiating features within the summary information, and discuss their
correspondence with traditional Go knowledge. We show statistically significant
mappings of the features to player attributes such as playing strength or
informally perceived "playing style" (e.g. territoriality or aggressivity),
describe accurate classifiers for these attributes, and propose applications
including seeding real-world ranks of internet players, aiding in Go study and
tuning of Go-playing programs, or contributing to the Go-theoretical discussion on
the scope of "playing style".
| [
"Petr Baudi\\v{s}, Josef Moud\\v{r}\\'ik",
"['Petr Baudiš' 'Josef Moudřík']"
] |
cs.LG | null | 1209.5260 | null | null | http://arxiv.org/pdf/1209.5260v2 | 2019-12-15T15:57:22Z | 2012-09-24T13:23:39Z | Towards Ultrahigh Dimensional Feature Selection for Big Data | In this paper, we present a new adaptive feature scaling scheme for
ultrahigh-dimensional feature selection on Big Data. To solve this problem
effectively, we first reformulate it as a convex semi-infinite programming
(SIP) problem and then propose an efficient \emph{feature generating paradigm}.
In contrast with traditional gradient-based approaches that conduct
optimization on all input features, the proposed method iteratively activates a
group of features and solves a sequence of multiple kernel learning (MKL)
subproblems of much reduced scale. To further speed up the training, we propose
to solve the MKL subproblems in their primal forms through a modified
accelerated proximal gradient approach. Due to such an optimization scheme,
some efficient cache techniques are also developed. The feature generating
paradigm can guarantee that the solution converges globally under mild
conditions and achieve lower feature selection bias. Moreover, the proposed
method can tackle two challenging tasks in feature selection: 1) group-based
feature selection with complex structures and 2) nonlinear feature selection
with explicit feature mappings. Comprehensive experiments on a wide range of
synthetic and real-world datasets containing tens of millions of data points with
$O(10^{14})$ features demonstrate the competitive performance of the proposed
method over state-of-the-art feature selection methods in terms of
generalization performance and training efficiency.
| [
"['Mingkui Tan' 'Ivor W. Tsang' 'Li Wang']",
"Mingkui Tan and Ivor W. Tsang and Li Wang"
] |
cs.LG | null | 1209.5335 | null | null | http://arxiv.org/pdf/1209.5335v1 | 2012-09-24T16:59:12Z | 2012-09-24T16:59:12Z | BPRS: Belief Propagation Based Iterative Recommender System | In this paper we introduce the first application of the Belief Propagation
(BP) algorithm in the design of recommender systems. We formulate the
recommendation problem as an inference problem and aim to compute the marginal
probability distributions of the variables which represent the ratings to be
predicted. However, computing these marginal probability functions is
computationally prohibitive for large-scale systems. Therefore, we utilize the
BP algorithm to efficiently compute these functions. Recommendations for each
active user are then iteratively computed by probabilistic message passing. As
opposed to the previous recommender algorithms, BPRS does not require solving
the recommendation problem for all the users if it wishes to update the
recommendations for only a single active user. Further, BPRS computes the
recommendations for each user with linear complexity and without requiring a
training period. Via computer simulations (using the 100K MovieLens dataset),
we verify that BPRS iteratively reduces the error in the predicted ratings of
the users until it converges. Finally, we confirm that BPRS is comparable to
the state-of-the-art methods such as the Correlation-based neighborhood model (CorNgbr)
and Singular Value Decomposition (SVD) in terms of rating and precision
accuracy. Therefore, we believe that the BP-based recommendation algorithm is a
promising new approach which offers a significant advantage in scalability
while providing competitive accuracy for recommender systems.
| [
"['Erman Ayday' 'Arash Einolghozati' 'Faramarz Fekri']",
"Erman Ayday, Arash Einolghozati, Faramarz Fekri"
] |
stat.ML cs.LG stat.AP | null | 1209.5350 | null | null | http://arxiv.org/pdf/1209.5350v3 | 2013-05-24T18:25:32Z | 2012-09-24T18:11:02Z | Learning Topic Models and Latent Bayesian Networks Under Expansion
Constraints | Unsupervised estimation of latent variable models is a fundamental problem
central to numerous applications of machine learning and statistics. This work
presents a principled approach for estimating broad classes of such models,
including probabilistic topic models and latent linear Bayesian networks, using
only second-order observed moments. The sufficient conditions for
identifiability of these models are primarily based on weak expansion
constraints on the topic-word matrix, for topic models, and on the directed
acyclic graph, for Bayesian networks. Because no assumptions are made on the
distribution among the latent variables, the approach can handle arbitrary
correlations among the topics or latent factors. In addition, a tractable
learning method via $\ell_1$ optimization is proposed and studied in numerical
experiments.
| [
"Animashree Anandkumar, Daniel Hsu, Adel Javanmard, Sham M. Kakade",
"['Animashree Anandkumar' 'Daniel Hsu' 'Adel Javanmard' 'Sham M. Kakade']"
] |
stat.ML cs.LG | null | 1209.5467 | null | null | http://arxiv.org/pdf/1209.5467v4 | 2013-05-08T00:54:19Z | 2012-09-25T01:33:01Z | Minimizing inter-subject variability in fNIRS based Brain Computer
Interfaces via multiple-kernel support vector learning | Brain signal variability in the measurements obtained from different subjects
during different sessions significantly deteriorates the accuracy of most
brain-computer interface (BCI) systems. Moreover, these variabilities, also
known as inter-subject or inter-session variabilities, require lengthy
calibration sessions before the BCI system can be used. Furthermore, the
calibration session has to be repeated for each subject independently and
before use of the BCI due to the inter-session variability. In this study, we
present an algorithm to minimize the above-mentioned variabilities and
to avoid the time-consuming and usually error-prone calibration sessions. Our
algorithm is based on linear programming support-vector machines and their
extensions to a multiple kernel learning framework. We tackle the inter-subject
or -session variability in the feature spaces of the classifiers. This is done
by incorporating each subject- or session-specific feature spaces into much
richer feature spaces with a set of optimal decision boundaries. Each decision
boundary represents the subject- or session-specific spatio-temporal
variabilities of neural signals. Consequently, a single classifier with
multiple feature spaces will generalize well to new unseen test patterns even
without the calibration steps. We demonstrate that classifiers maintain good
performance even in the presence of a large degree of BCI variability. The
present study analyzes BCI variability related to oxy-hemoglobin neural signals
measured using functional near-infrared spectroscopy (fNIRS).
| [
"['Berdakh Abibullaev' 'Jinung An' 'Seung-Hyun Lee' 'Sang-Hyeon Jin'\n 'Jeon-Il Moon']",
"Berdakh Abibullaev, Jinung An, Seung-Hyun Lee, Sang-Hyeon Jin, Jeon-Il\n Moon"
] |
stat.ML cs.LG | null | 1209.5477 | null | null | http://arxiv.org/pdf/1209.5477v2 | 2012-09-26T05:15:07Z | 2012-09-25T02:54:49Z | Optimal Weighting of Multi-View Data with Low Dimensional Hidden States | In Natural Language Processing (NLP) tasks, data often has the following two
properties: First, the data can be split into multiple views, which has been
used successfully for dimension reduction purposes. For example, in topic
classification, every paper can be split into the title, the main text and
the references. However, it is common that some of the views are noisier
than others for supervised learning problems. Second, unlabeled data are
easy to obtain while labeled data are relatively rare. For example, articles
published in The New York Times over the past 10 years are easy to grab, but
having them classified as 'Politics', 'Finance' or 'Sports' requires human
labor. Hence less
noisy features are preferred before running supervised learning methods. In
this paper we propose an unsupervised algorithm which optimally weights
features from different views when these views are generated from a low
dimensional hidden state, which occurs in widely used models like Mixture
Gaussian Model, Hidden Markov Model (HMM) and Latent Dirichlet Allocation
(LDA).
| [
"Yichao Lu and Dean P. Foster",
"['Yichao Lu' 'Dean P. Foster']"
] |
q-bio.NC cs.LG stat.ML | null | 1209.5549 | null | null | http://arxiv.org/pdf/1209.5549v1 | 2012-09-25T09:23:41Z | 2012-09-25T09:23:41Z | Towards a learning-theoretic analysis of spike-timing dependent
plasticity | This paper suggests a learning-theoretic perspective on how synaptic
plasticity benefits global brain functioning. We introduce a model, the
selectron, that (i) arises as the fast time constant limit of leaky
integrate-and-fire neurons equipped with spike-timing dependent plasticity
(STDP) and (ii) is amenable to theoretical analysis. We show that the selectron
encodes reward estimates into spikes and that an error bound on spikes is
controlled by a spiking margin and the sum of synaptic weights. Moreover, the
efficacy of spikes (their usefulness to other reward maximizing selectrons)
also depends on total synaptic strength. Finally, based on our analysis, we
propose a regularized version of STDP, and show that the regularization improves the
robustness of neuronal learning when faced with multiple stimuli.
| [
"David Balduzzi and Michel Besserve",
"['David Balduzzi' 'Michel Besserve']"
] |
cs.LG cs.SI stat.ML | null | 1209.5561 | null | null | http://arxiv.org/pdf/1209.5561v1 | 2012-09-25T09:59:56Z | 2012-09-25T09:59:56Z | Supervised Blockmodelling | Collective classification models attempt to improve classification
performance by taking into account the class labels of related instances.
However, they tend not to learn patterns of interactions between classes and/or
make the assumption that instances of the same class link to each other
(assortativity assumption). Blockmodels provide a solution to these issues,
being capable of modelling assortative and disassortative interactions, and
learning the pattern of interactions in the form of a summary network. The
Supervised Blockmodel provides good classification performance using link
structure alone, whilst simultaneously providing an interpretable summary of
network interactions to allow a better understanding of the data. This work
explores three variants of supervised blockmodels of varying complexity and
tests them on four structurally different real world networks.
| [
"Leto Peel",
"['Leto Peel']"
] |
cs.AI cs.LG | 10.1016/j.ijar.2013.04.003 | 1209.5601 | null | null | http://arxiv.org/abs/1209.5601v1 | 2012-09-25T13:21:40Z | 2012-09-25T13:21:40Z | Feature selection with test cost constraint | Feature selection is an important preprocessing step in machine learning and
data mining. In real-world applications, costs, including money, time and other
resources, are required to acquire the features. In some cases, there is a test
cost constraint due to limited resources, so we must deliberately select an
informative and cheap feature subset for classification. This paper formulates
this issue as the feature selection with test cost constraint problem. The new
problem has a simple form when described as a constraint satisfaction problem
(CSP). Backtracking is a general algorithm for CSP, and it is efficient in
solving the new problem on medium-sized data. As the backtracking algorithm is
not scalable to large datasets, a heuristic algorithm is also developed.
Experimental results show that the heuristic algorithm can find the optimal
solution in most cases. We also redefine some existing feature selection
problems in rough sets, especially in decision-theoretic rough sets, from the
viewpoint of CSP. These new definitions provide insight to some new research
directions.
| [
"Fan Min, Qinghua Hu, William Zhu",
"['Fan Min' 'Qinghua Hu' 'William Zhu']"
] |
cs.LG cs.IR | null | 1209.5833 | null | null | http://arxiv.org/pdf/1209.5833v2 | 2012-10-11T06:21:09Z | 2012-09-26T05:26:58Z | Locality-Sensitive Hashing with Margin Based Feature Selection | We propose a learning method with feature selection for Locality-Sensitive
Hashing. Locality-Sensitive Hashing converts feature vectors into bit arrays.
These bit arrays can be used to perform similarity searches and personal
authentication. The proposed method starts with bit arrays longer than those
ultimately used for similarity and other searches, and learns which bits to
keep. We demonstrate that this method can effectively perform optimization
for cases such as fingerprint images with a large number of labels and
extremely few data points per label, and verify that it is
also effective for natural images, handwritten digits, and speech features.
| [
"Makiko Konoshima and Yui Noma",
"['Makiko Konoshima' 'Yui Noma']"
] |
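For context, a minimal sketch of the random-hyperplane LSH that produces such bit arrays; the margin-based bit selection described above would then keep only an informative subset of the columns. Dimensions and seed are illustrative.

```python
import numpy as np

def lsh_bits(X, n_bits, seed=0):
    """Random-hyperplane LSH (sketch): project feature vectors onto
    random directions and keep the signs, yielding bit arrays whose
    Hamming distance approximates angular similarity."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((X.shape[1], n_bits))  # random hyperplanes
    return (X @ planes > 0).astype(np.uint8)            # one bit per hyperplane

def hamming(a, b):
    """Similarity search compares bit arrays by Hamming distance."""
    return int(np.count_nonzero(a != b))
```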
cs.LG stat.ML | null | 1209.5991 | null | null | http://arxiv.org/pdf/1209.5991v1 | 2012-09-26T16:31:32Z | 2012-09-26T16:31:32Z | Subset Selection for Gaussian Markov Random Fields | Given a Gaussian Markov random field, we consider the problem of selecting a
subset of variables to observe which minimizes the total expected squared
prediction error of the unobserved variables. We first show that finding an
exact solution is NP-hard even for a restricted class of Gaussian Markov random
fields, called Gaussian free fields, which arise in semi-supervised learning
and computer vision. We then give a simple greedy approximation algorithm for
Gaussian free fields on arbitrary graphs. Finally, we give a message passing
algorithm for general Gaussian Markov random fields on bounded tree-width
graphs.
| [
"['Satyaki Mahalanabis' 'Daniel Stefankovic']",
"Satyaki Mahalanabis, Daniel Stefankovic"
] |
cs.LG cs.IR stat.ML | null | 1209.6001 | null | null | http://arxiv.org/pdf/1209.6001v1 | 2012-09-26T16:41:59Z | 2012-09-26T16:41:59Z | Bayesian Mixture Models for Frequent Itemset Discovery | In binary-transaction data-mining, traditional frequent itemset mining often
produces results which are not straightforward to interpret. To overcome this
problem, probability models are often used to produce more compact and
conclusive results, albeit with some loss of accuracy. Bayesian statistics have
been widely used in the development of probability models in machine learning
in recent years and these methods have many advantages, including their
ability to avoid overfitting. In this paper, we develop two Bayesian mixture
models with the Dirichlet distribution prior and the Dirichlet process (DP)
prior to improve the previous non-Bayesian mixture model developed for
transaction dataset mining. We implement the inference of both mixture models
using two methods: a collapsed Gibbs sampling scheme and a variational
approximation algorithm. Experiments in several benchmark problems have shown
that both mixture models achieve better performance than a non-Bayesian mixture
model. The variational algorithm is the faster of the two approaches while the
Gibbs sampling method achieves more accurate results. The Dirichlet process
mixture model can automatically grow to a proper complexity for a better
approximation. Once the model is built, it can be very fast to query and run
analysis on (typically 10 times faster than Eclat, as we will show in the
experiment section). However, these approaches also show that mixture models
underestimate the probabilities of frequent itemsets. Consequently, these
models have a higher sensitivity but a lower specificity.
| [
"['Ruefei He' 'Jonathan Shapiro']",
"Ruefei He and Jonathan Shapiro"
] |
stat.ML cs.LG stat.AP | null | 1209.6004 | null | null | http://arxiv.org/pdf/1209.6004v1 | 2012-09-26T17:00:21Z | 2012-09-26T17:00:21Z | The Issue-Adjusted Ideal Point Model | We develop a model of issue-specific voting behavior. This model can be used
to explore lawmakers' personal patterns of voting by issue area,
providing an exploratory window into how the language of the law is correlated
with political support. We derive approximate posterior inference algorithms
based on variational methods. Across 12 years of legislative data, we
demonstrate both improvement in heldout prediction performance and the model's
utility in interpreting an inherently multi-dimensional space.
| [
"['Sean M. Gerrish' 'David M. Blei']",
"Sean M. Gerrish and David M. Blei"
] |
cs.LG cs.DB cs.IR | null | 1209.6070 | null | null | http://arxiv.org/pdf/1209.6070v1 | 2012-09-26T20:30:02Z | 2012-09-26T20:30:02Z | Movie Popularity Classification based on Inherent Movie Attributes using
C4.5,PART and Correlation Coefficient | Abundance of movie data across the internet makes it an obvious candidate for
machine learning and knowledge discovery. But most research is directed
towards bi-polar classification of movies or the generation of movie
recommendation systems based on reviews given by viewers on various internet
sites. Classification of movie popularity based solely on attributes of a movie
(i.e. actor, actress, director rating, language, country, budget, etc.) has
received less attention due to the large number of attributes associated
with each movie and their differing dimensions. In this paper, we propose a
classification scheme of pre-release movie popularity based on inherent
attributes using the C4.5 and PART classifier algorithms, and define the relation
between attributes of post-release movies using the correlation coefficient.
| [
"['Khalid Ibnal Asad' 'Tanvir Ahmed' 'Md. Saiedur Rahman']",
"Khalid Ibnal Asad, Tanvir Ahmed, Md. Saiedur Rahman"
] |
cs.LG | null | 1209.6329 | null | null | http://arxiv.org/pdf/1209.6329v1 | 2012-09-27T18:57:26Z | 2012-09-27T18:57:26Z | More Is Better: Large Scale Partially-supervised Sentiment
Classification - Appendix | We describe a bootstrapping algorithm to learn from partially labeled data,
and the results of an empirical study for using it to improve performance of
sentiment classification using up to 15 million unlabeled Amazon product
reviews. Our experiments cover semi-supervised learning, domain adaptation and
weakly supervised learning. In some cases our methods were able to reduce test
error by more than half using such a large amount of data.
NOTICE: This is only the supplementary material.
| [
"Yoav Haimovitch, Koby Crammer, Shie Mannor",
"['Yoav Haimovitch' 'Koby Crammer' 'Shie Mannor']"
] |
stat.ML cs.LG | null | 1209.6342 | null | null | http://arxiv.org/pdf/1209.6342v1 | 2012-09-27T19:43:44Z | 2012-09-27T19:43:44Z | Sparse Ising Models with Covariates | There has been a lot of work fitting Ising models to multivariate binary data
in order to understand the conditional dependency relationships between the
variables. However, additional covariates are frequently recorded together with
the binary data, and may influence the dependence relationships. Motivated by
such a dataset on genomic instability collected from tumor samples of several
types, we propose a sparse covariate dependent Ising model to study both the
conditional dependency within the binary data and its relationship with the
additional covariates. This results in subject-specific Ising models, where the
subject's covariates influence the strength of association between the genes.
As in all exploratory data analysis, interpretability of results is important,
and we use $\ell_1$ penalties to induce sparsity in the fitted graphs and in the
number of selected covariates. Two algorithms to fit the model are proposed and
compared on a set of simulated data, and asymptotic results are established.
The results on the tumor dataset and their biological significance are
discussed in detail.
| [
"['Jie Cheng' 'Elizaveta Levina' 'Pei Wang' 'Ji Zhu']",
"Jie Cheng, Elizaveta Levina, Pei Wang and Ji Zhu"
] |
cs.LG math.OC | null | 1209.6393 | null | null | http://arxiv.org/pdf/1209.6393v1 | 2012-09-27T23:03:53Z | 2012-09-27T23:03:53Z | Learning Robust Low-Rank Representations | In this paper we present a comprehensive framework for learning robust
low-rank representations by combining and extending recent ideas for learning
fast sparse coding regressors with structured non-convex optimization
techniques. This approach connects robust principal component analysis (RPCA)
with dictionary learning techniques and allows its approximation via trainable
encoders. We propose an efficient feed-forward architecture derived from an
optimization algorithm designed to exactly solve robust low dimensional
projections. This architecture, in combination with different training
objective functions, allows the regressors to be used as online approximants of
the exact offline RPCA problem or as RPCA-based neural networks. Simple
modifications of these encoders can handle challenging extensions, such as the
inclusion of geometric data transformations. We present several examples with
real data from image, audio, and video processing. When used to approximate
RPCA, our basic implementation shows several orders of magnitude speedup
compared to the exact solvers with almost no performance degradation. We show
the strength of the inclusion of learning to the RPCA approach on a music
source separation application, where the encoders outperform the exact RPCA
algorithms, which are already reported to produce state-of-the-art results on a
benchmark database. Our preliminary implementation on an iPad shows
faster-than-real-time performance with minimal latency.
| [
"['Pablo Sprechmann' 'Alex M. Bronstein' 'Guillermo Sapiro']",
"Pablo Sprechmann, Alex M. Bronstein, Guillermo Sapiro"
] |
cs.LG | null | 1209.6409 | null | null | http://arxiv.org/pdf/1209.6409v1 | 2012-09-28T01:46:47Z | 2012-09-28T01:46:47Z | A Deterministic Analysis of an Online Convex Mixture of Expert
Algorithms | We analyze an online learning algorithm that adaptively combines outputs of
two constituent algorithms (or the experts) running in parallel to model an
unknown desired signal. This online learning algorithm is shown to achieve (and
in some cases outperform) the mean-square error (MSE) performance of the best
constituent algorithm in the mixture in the steady-state. However, the MSE
analysis of this algorithm in the literature uses approximations and relies on
statistical models on the underlying signals and systems. Hence, such an
analysis may not be useful or valid for signals generated by various real life
systems that show high degrees of nonstationarity, limit cycles and, in many
cases, that are even chaotic. In this paper, we produce results in an
individual sequence manner. In particular, we relate the time-accumulated
squared estimation error of this online algorithm at any time over any interval
to the time-accumulated squared estimation error of the optimal convex mixture
of the constituent algorithms directly tuned to the underlying signal in a
deterministic sense without any statistical assumptions. In this sense, our
analysis provides the transient, steady-state and tracking behavior of this
algorithm in a strong sense without any approximations in the derivations or
statistical assumptions on the underlying signals such that our results are
guaranteed to hold. We illustrate the introduced results through examples.
| [
"['Mehmet A. Donmez' 'Sait Tunc' 'Suleyman S. Kozat']",
"Mehmet A. Donmez, Sait Tunc, Suleyman S. Kozat"
] |
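A generic stochastic-gradient sketch of the two-expert convex mixture being analyzed: the mixing weight is updated from the prediction error and projected back to [0, 1]. The specific update rule and step size here are illustrative assumptions, not the exact algorithm the paper studies.

```python
import numpy as np

def convex_mixture(expert_a, expert_b, target, mu=0.1):
    """Adaptively combine two expert prediction streams (sketch).

    expert_a, expert_b, target: equal-length 1-D arrays of the two
    experts' outputs and the desired signal.
    """
    lam = 0.5                           # initial mixing weight
    out = np.empty(len(target))
    for t in range(len(target)):
        yhat = lam * expert_a[t] + (1 - lam) * expert_b[t]
        out[t] = yhat
        e = target[t] - yhat
        # gradient step on the squared error w.r.t. lam, kept in [0, 1]
        lam = np.clip(lam + mu * e * (expert_a[t] - expert_b[t]), 0.0, 1.0)
    return out
```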
cs.LG cs.IT math.IT stat.ML | null | 1209.6419 | null | null | http://arxiv.org/pdf/1209.6419v1 | 2012-09-28T04:12:14Z | 2012-09-28T04:12:14Z | Partial Gaussian Graphical Model Estimation | This paper studies the partial estimation of Gaussian graphical models from
high-dimensional empirical observations. We derive a convex formulation for
this problem using $\ell_1$-regularized maximum-likelihood estimation, which
can be solved via a block coordinate descent algorithm. Statistical estimation
guarantees can be established for our method. The proposed approach has
competitive empirical performance compared to existing methods, as demonstrated
by various experiments on synthetic and real datasets.
| [
"Xiao-Tong Yuan and Tong Zhang",
"['Xiao-Tong Yuan' 'Tong Zhang']"
] |
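For context, a sketch of the full (non-partial) $\ell_1$-regularized maximum-likelihood estimate that the partial formulation builds on, using scikit-learn's graphical lasso; the simulated data and regularization strength are illustrative.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Simulate observations with one induced conditional dependency and
# recover the sparse precision (inverse covariance) matrix.
rng = np.random.default_rng(0)
n, p = 500, 10
X = rng.standard_normal((n, p))
X[:, 1] += 0.8 * X[:, 0]          # induce an edge between variables 0 and 1

model = GraphicalLasso(alpha=0.05).fit(X)
precision = model.precision_      # nonzeros indicate edges in the graph
print(np.round(precision, 2))
```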
cs.LG cs.CE | null | 1209.6425 | null | null | http://arxiv.org/pdf/1209.6425v3 | 2013-06-20T05:41:39Z | 2012-09-28T04:59:33Z | Gene selection with guided regularized random forest | The regularized random forest (RRF) was recently proposed for feature
selection by building only one ensemble. In RRF the features are evaluated on a
part of the training data at each tree node. We derive an upper bound for the
number of distinct Gini information gain values in a node, and show that many
features can share the same information gain at a node with a small number of
instances and a large number of features. Therefore, in a node with a small
number of instances, RRF is likely to select a feature not strongly relevant.
Here an enhanced RRF, referred to as the guided RRF (GRRF), is proposed. In
GRRF, the importance scores from an ordinary random forest (RF) are used to
guide the feature selection process in RRF. Experiments on 10 gene data sets
show that the accuracy performance of GRRF is, in general, more robust than that of RRF
when their parameters change. GRRF is computationally efficient, can select
compact feature subsets, and has competitive accuracy performance, compared to
RRF, varSelRF and LASSO logistic regression (with evaluations from an RF
classifier). Also, RF applied to the features selected by RRF with the minimal
regularization outperforms RF applied to all the features for most of the data
sets considered here. Therefore, if accuracy is considered more important than
the size of the feature subset, RRF with the minimal regularization may be
considered. We use the accuracy performance of RF, a strong classifier, to
evaluate feature selection methods, and illustrate that weak classifiers are
less capable of capturing the information contained in a feature subset. Both
RRF and GRRF were implemented in the "RRF" R package available at CRAN, the
official R package archive.
| [
"Houtao Deng and George Runger",
"['Houtao Deng' 'George Runger']"
] |
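A rough sketch of the guiding idea in GRRF: importance scores from an ordinary random forest steer the choice of a compact feature subset. The code below ranks and thresholds directly rather than reproducing the regularized tree growing implemented in the "RRF" R package; the cutoff fraction is an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def importance_guided_selection(X, y, keep_frac=0.1, seed=0):
    """Select a compact feature subset guided by RF importances (sketch)."""
    rf = RandomForestClassifier(n_estimators=500, random_state=seed)
    rf.fit(X, y)
    order = np.argsort(-rf.feature_importances_)   # most important first
    k = max(1, int(keep_frac * X.shape[1]))
    return order[:k]                               # indices of kept features
```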
cs.CV cs.LG | null | 1209.6525 | null | null | http://arxiv.org/pdf/1209.6525v1 | 2012-09-28T14:09:30Z | 2012-09-28T14:09:30Z | A Complete System for Candidate Polyps Detection in Virtual Colonoscopy | Computer tomographic colonography, combined with computer-aided detection, is
a promising emerging technique for colonic polyp analysis. We present a
complete pipeline for polyp detection, starting with a simple colon
segmentation technique that enhances polyps, followed by an adaptive-scale
candidate polyp delineation and classification based on new texture and
geometric features that consider both the information in the candidate polyp
location and its immediate surrounding area. The proposed system is tested with
ground truth data, including flat and small polyps which are hard to detect
even with optical colonoscopy. For polyps larger than 6 mm in size we achieve
100% sensitivity with just 0.9 false positives per case, and for polyps larger
than 3 mm in size we achieve 93% sensitivity with 2.8 false positives per case.
| [
"Marcelo Fiori, Pablo Mus\\'e, Guillermo Sapiro",
"['Marcelo Fiori' 'Pablo Musé' 'Guillermo Sapiro']"
] |
cs.AI cs.LG stat.ML | null | 1209.6561 | null | null | http://arxiv.org/pdf/1209.6561v2 | 2013-07-31T19:57:02Z | 2012-09-28T16:06:09Z | Scoring and Searching over Bayesian Networks with Causal and Associative
Priors | A significant theoretical advantage of search-and-score methods for learning
Bayesian Networks is that they can accept informative prior beliefs for each
possible network, thus complementing the data. In this paper, a method is
presented for assigning priors based on beliefs on the presence or absence of
certain paths in the true network. Such beliefs correspond to knowledge about
the possible causal and associative relations between pairs of variables. This
type of knowledge naturally arises from prior experimental and observational
data, among others. In addition, a novel search-operator is proposed to take
advantage of such prior knowledge. Experiments show that using path beliefs
improves the learning of the skeleton, as well as the edge directions in the
network.
| [
"Giorgos Borboudakis and Ioannis Tsamardinos",
"['Giorgos Borboudakis' 'Ioannis Tsamardinos']"
] |
math.OC cs.LG stat.CO stat.ML | null | 1210.0066 | null | null | http://arxiv.org/pdf/1210.0066v1 | 2012-09-29T01:42:57Z | 2012-09-29T01:42:57Z | Iterative Reweighted Minimization Methods for $l_p$ Regularized
Unconstrained Nonlinear Programming | In this paper we study general $l_p$ regularized unconstrained minimization
problems. In particular, we derive lower bounds for nonzero entries of first-
and second-order stationary points, and hence also of local minimizers of the
$l_p$ minimization problems. We extend some existing iterative reweighted $l_1$
(IRL1) and $l_2$ (IRL2) minimization methods to solve these problems and
propose new variants for them in which each subproblem has a closed-form
solution. Also, we provide a unified convergence analysis for these methods. In
addition, we propose a novel Lipschitz continuous $\epsilon$-approximation to
$\|x\|^p_p$. Using this result, we develop new IRL1 methods for the $l_p$
minimization problems and show that any accumulation point of the sequence
generated by these methods is a first-order stationary point, provided that the
approximation parameter $\epsilon$ is below a computable threshold value. This
is a remarkable result since all existing iterative reweighted minimization
methods require that $\epsilon$ be dynamically updated and approach zero. Our
computational results demonstrate that the new IRL1 method is generally more
stable than the existing IRL1 methods [21,18] in terms of objective function
value and CPU time.
| [
"Zhaosong Lu",
"['Zhaosong Lu']"
] |
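A minimal sketch of an IRL1 scheme for $l_p$-regularized least squares with a fixed $\epsilon$-approximation, in the spirit of the methods above: each outer step builds a weighted $\ell_1$ surrogate from the current iterate and solves it by proximal gradient. Step sizes and iteration counts are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def irl1_lp(A, b, lam, p=0.5, eps=1e-2, outer=20, inner=200):
    """IRL1 for min 0.5||Ax - b||^2 + lam * sum |x_i|^p (sketch).

    Each outer step replaces the nonconvex penalty with a weighted l1
    surrogate (weights ~ lam * p * (|x_i| + eps)^(p-1)) built from the
    current iterate, with eps held fixed.
    """
    n = A.shape[1]
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant
    for _ in range(outer):
        w = lam * p * (np.abs(x) + eps) ** (p - 1)   # reweighting
        for _ in range(inner):                        # ISTA on the surrogate
            g = A.T @ (A @ x - b)
            x = soft_threshold(x - step * g, step * w)
    return x
```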
cs.AI cs.LG | null | 1210.0077 | null | null | http://arxiv.org/pdf/1210.0077v1 | 2012-09-29T04:58:22Z | 2012-09-29T04:58:22Z | Optimistic Agents are Asymptotically Optimal | We use optimism to introduce generic asymptotically optimal reinforcement
learning agents. They achieve, with an arbitrary finite or compact class of
environments, asymptotically optimal behavior. Furthermore, in the finite
deterministic case we provide finite error bounds.
| [
"Peter Sunehag and Marcus Hutter",
"['Peter Sunehag' 'Marcus Hutter']"
] |
cs.LG | null | 1210.0473 | null | null | http://arxiv.org/pdf/1210.0473v1 | 2012-10-01T17:08:25Z | 2012-10-01T17:08:25Z | Memory Constraint Online Multitask Classification | We investigate online kernel algorithms which simultaneously process multiple
classification tasks while a fixed constraint is imposed on the size of their
active sets. We focus in particular on the design of algorithms that can
efficiently deal with problems where the number of tasks is extremely high and
the task data are large scale. Two new projection-based algorithms are
introduced to efficiently tackle those issues while presenting different
trade-offs in how the available memory is managed with respect to the prior
information about the learning tasks. Theoretically sound budget algorithms are
devised by coupling the Randomized Budget Perceptron and the Forgetron
algorithms with the multitask kernel. We show how the two seemingly contrasting
properties of learning from multiple tasks and keeping a constant memory
footprint can be balanced, and how the sharing of the available space among
different tasks is automatically taken care of. We propose and discuss new
insights on the multitask kernel. Experiments show that online kernel multitask
algorithms running on a budget can efficiently tackle real world learning
problems involving multiple tasks.
| [
"Giovanni Cavallanti, Nicol\\`o Cesa-Bianchi",
"['Giovanni Cavallanti' 'Nicolò Cesa-Bianchi']"
] |
cs.LG cs.DS | 10.1007/s00453-015-0017-7 | 1210.0508 | null | null | http://arxiv.org/abs/1210.0508v5 | 2017-01-20T08:00:44Z | 2012-10-01T19:13:59Z | Inference algorithms for pattern-based CRFs on sequence data | We consider Conditional Random Fields (CRFs) with pattern-based potentials
defined on a chain. In this model the energy of a string (labeling) $x_1...x_n$
is the sum of terms over intervals $[i,j]$ where each term is non-zero only if
the substring $x_i...x_j$ equals a prespecified pattern $\alpha$. Such CRFs can
be naturally applied to many sequence tagging problems.
We present efficient algorithms for the three standard inference tasks in a
CRF, namely computing (i) the partition function, (ii) marginals, and (iii)
the MAP. Their complexities are respectively $O(n L)$, $O(n L
\ell_{max})$ and $O(n L \min\{|D|,\log (\ell_{max}+1)\})$ where $L$ is the
combined length of input patterns, $\ell_{max}$ is the maximum length of a
pattern, and $D$ is the input alphabet. This improves on the previous
algorithms of (Ye et al., 2009) whose complexities are respectively $O(n L
|D|)$, $O(n |\Gamma| L^2 \ell_{max}^2)$ and $O(n L |D|)$, where $|\Gamma|$ is
the number of input patterns.
In addition, we give an efficient algorithm for sampling. Finally, we
consider the case of non-positive weights. (Komodakis & Paragios, 2009) gave an
$O(n L)$ algorithm for computing the MAP. We present a modification that has
the same worst-case complexity but can beat it in the best case.
| [
"['Rustem Takhanov' 'Vladimir Kolmogorov']",
"Rustem Takhanov and Vladimir Kolmogorov"
] |
cs.IT cs.LG math.IT stat.ML | null | 1210.0563 | null | null | http://arxiv.org/pdf/1210.0563v1 | 2012-10-01T20:28:09Z | 2012-10-01T20:28:09Z | Sparse LMS via Online Linearized Bregman Iteration | We propose a version of the least-mean-square (LMS) algorithm for sparse system
identification. Our algorithm called online linearized Bregman iteration (OLBI)
is derived from minimizing the cumulative prediction error squared along with
an l1-l2 norm regularizer. By systematically treating the non-differentiable
regularizer we arrive at a simple two-step iteration. We demonstrate that OLBI
is bias free and compare its operation with existing sparse LMS algorithms by
rederiving them in the online convex optimization framework. We perform
convergence analysis of OLBI for white input signals and derive theoretical
expressions for both the steady state and instantaneous mean square deviations
(MSD). We demonstrate numerically that OLBI improves the performance of LMS
type algorithms for signals generated from sparse tap weights.
| [
"['Tao Hu' 'Dmitri B. Chklovskii']",
"Tao Hu and Dmitri B. Chklovskii"
] |
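A minimal sketch of the two-step iteration described above, assuming the l1-l2 regularizer reduces to a soft-thresholding (shrinkage) step as in standard linearized Bregman methods; the step size `mu`, threshold `lam`, and function names are illustrative, not the authors' reference implementation:

```python
import numpy as np

def soft_threshold(v, lam):
    # Component-wise soft-thresholding, the proximal map of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def olbi(xs, ds, n_taps, mu=0.01, lam=1.0):
    """Two-step online linearized Bregman iteration (sketch):
    (1) LMS-style gradient step on an auxiliary variable v,
    (2) shrinkage of v to obtain the sparse weight estimate w."""
    v = np.zeros(n_taps)  # auxiliary (pre-shrinkage) variable
    w = np.zeros(n_taps)  # sparse estimate of the system taps
    for x, d in zip(xs, ds):
        e = d - x @ w               # instantaneous prediction error
        v += mu * e * x             # gradient step
        w = soft_threshold(v, lam)  # shrinkage step enforcing sparsity
    return w
```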
cs.LG stat.ML | null | 1210.0645 | null | null | http://arxiv.org/pdf/1210.0645v5 | 2013-05-20T22:11:10Z | 2012-10-02T04:22:50Z | Nonparametric Unsupervised Classification | Unsupervised classification methods learn a discriminative classifier from
unlabeled data, which has been proven to be an effective way of simultaneously
clustering the data and training a classifier from the data. Various
unsupervised classification methods obtain appealing results with the
classifiers learned in an unsupervised manner. However, existing methods, with
the exception of unsupervised SVM, do not consider the misclassification error
of the unsupervised classifiers, so their performance is not fully evaluated.
In this work, we study the misclassification error of two popular classifiers,
i.e. the nearest neighbor classifier (NN) and the plug-in classifier, in the
setting of unsupervised classification.
| [
"['Yingzhen Yang' 'Thomas S. Huang']",
"Yingzhen Yang, Thomas S. Huang"
] |
stat.ML cs.LG | null | 1210.0685 | null | null | http://arxiv.org/pdf/1210.0685v1 | 2012-10-02T07:48:08Z | 2012-10-02T07:48:08Z | Local stability and robustness of sparse dictionary learning in the
presence of noise | A popular approach within the signal processing and machine learning
communities consists in modelling signals as sparse linear combinations of
atoms selected from a learned dictionary. While this paradigm has led to
numerous empirical successes in various fields ranging from image to audio
processing, there have only been a few theoretical arguments supporting this
empirical evidence. In particular, sparse coding, or sparse dictionary learning, relies
on a non-convex procedure whose local minima have not been fully analyzed yet.
In this paper, we consider a probabilistic model of sparse signals, and show
that, with high probability, sparse coding admits a local minimum around the
reference dictionary generating the signals. Our study takes into account the
case of over-complete dictionaries and noisy signals, thus extending previous
work limited to noiseless settings and/or under-complete dictionaries. The
analysis we conduct is non-asymptotic and makes it possible to understand how
the key quantities of the problem, such as the coherence or the level of noise,
can scale with respect to the dimension of the signals, the number of atoms,
the sparsity and the number of observations.
| [
"Rodolphe Jenatton (CMAP), R\\'emi Gribonval (INRIA - IRISA), Francis\n Bach (LIENS, INRIA Paris - Rocquencourt)",
"['Rodolphe Jenatton' 'Rémi Gribonval' 'Francis Bach']"
] |
q-bio.QM cs.AI cs.CE cs.LG | 10.1007/978-3-642-33636-2_20 | 1210.0690 | null | null | http://arxiv.org/abs/1210.0690v2 | 2012-12-22T07:39:43Z | 2012-10-02T07:52:52Z | Revisiting the Training of Logic Models of Protein Signaling Networks
with a Formal Approach based on Answer Set Programming | A fundamental question in systems biology is the construction of mathematical
models and their training to data. Logic formalisms have become very popular to model
signaling networks because their simplicity allows us to model large systems
encompassing hundreds of proteins. An approach to train (Boolean) logic models
to high-throughput phospho-proteomics data was recently introduced and solved
using optimization heuristics based on stochastic methods. Here we demonstrate
how this problem can be solved using Answer Set Programming (ASP), a
declarative problem solving paradigm, in which a problem is encoded as a
logical program such that its answer sets represent solutions to the problem.
ASP offers significant improvements over heuristic methods in terms of
efficiency and scalability: it guarantees global optimality of solutions and
provides a complete set of solutions. We illustrate the application of ASP with
in silico cases based on realistic networks and data.
| [
"Santiago Videla (INRIA - IRISA), Carito Guziolowski (IRCCyN), Federica\n Eduati (DEI, EBI), Sven Thiele (INRIA - IRISA), Niels Grabe, Julio\n Saez-Rodriguez (EBI), Anne Siegel (INRIA - IRISA)",
"['Santiago Videla' 'Carito Guziolowski' 'Federica Eduati' 'Sven Thiele'\n 'Niels Grabe' 'Julio Saez-Rodriguez' 'Anne Siegel']"
] |
cs.LG | null | 1210.0699 | null | null | http://arxiv.org/pdf/1210.0699v1 | 2012-10-02T08:40:46Z | 2012-10-02T08:40:46Z | TV-SVM: Total Variation Support Vector Machine for Semi-Supervised Data
Classification | We introduce semi-supervised data classification algorithms based on total
variation (TV), Reproducing Kernel Hilbert Space (RKHS), support vector machine
(SVM), Cheeger cut, labeled and unlabeled data points. We design binary and
multi-class semi-supervised classification algorithms. We compare the TV-based
classification algorithms with the related Laplacian-based algorithms, and show
that TV classification performs significantly better when the number of labeled
data points is small.
| [
"Xavier Bresson and Ruiliang Zhang",
"['Xavier Bresson' 'Ruiliang Zhang']"
] |
stat.ML cs.LG q-bio.QM | null | 1210.0734 | null | null | http://arxiv.org/pdf/1210.0734v1 | 2012-10-02T11:34:57Z | 2012-10-02T11:34:57Z | Evaluation of linear classifiers on articles containing pharmacokinetic
evidence of drug-drug interactions | Background. Drug-drug interaction (DDI) is a major cause of morbidity and
mortality. [...] Biomedical literature mining can aid DDI research by
extracting relevant DDI signals from either the published literature or large
clinical databases. However, though drug interaction is an ideal area for
translational research, the inclusion of literature mining methodologies in DDI
workflows is still very preliminary. One area that can benefit from literature
mining is the automatic identification of a large number of potential DDIs,
whose pharmacological mechanisms and clinical significance can then be studied
via in vitro pharmacology and in populo pharmaco-epidemiology. Experiments. We
implemented a set of classifiers for identifying published articles relevant to
experimental pharmacokinetic DDI evidence. These documents are important for
identifying causal mechanisms behind putative drug-drug interactions, an
important step in the extraction of large numbers of potential DDIs. We
evaluate the performance of several linear classifiers on PubMed abstracts, under
different feature transformation and dimensionality reduction methods. In
addition, we investigate the performance benefits of including various
publicly-available named entity recognition features, as well as a set of
internally-developed pharmacokinetic dictionaries. Results. We found that
several classifiers performed well in distinguishing relevant and irrelevant
abstracts. We found that the combination of unigram and bigram textual features
gave better performance than unigram features alone, and also that
normalization transforms that adjusted for feature frequency and document
length improved classification. For some classifiers, such as linear
discriminant analysis (LDA), proper dimensionality reduction had a large impact
on performance. Finally, the inclusion of NER features and dictionaries was
found not to help classification.
| [
"Artemy Kolchinsky, An\\'alia Louren\\c{c}o, Lang Li, Luis M. Rocha",
"['Artemy Kolchinsky' 'Anália Lourenço' 'Lang Li' 'Luis M. Rocha']"
] |
stat.ML cs.IR cs.LG | 10.1016/j.jvcir.2011.10.009 | 1210.0758 | null | null | http://arxiv.org/abs/1210.0758v1 | 2012-10-02T13:04:49Z | 2012-10-02T13:04:49Z | A fast compression-based similarity measure with applications to
content-based image retrieval | Compression-based similarity measures are effectively employed in
applications on diverse data types with a basically parameter-free approach.
Nevertheless, there are problems in applying these techniques to
medium-to-large datasets which have been seldom addressed. This paper proposes
a similarity measure based on compression with dictionaries, the Fast
Compression Distance (FCD), which reduces the complexity of these methods,
without degradations in performance. On its basis a content-based color image
retrieval system is defined, which can be compared to state-of-the-art methods
based on invariant color features. Through the FCD a better understanding of
compression-based techniques is achieved, by performing experiments on datasets
which are larger than the ones analyzed so far in the literature.
| [
"Daniele Cerra and Mihai Datcu",
"['Daniele Cerra' 'Mihai Datcu']"
] |
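As a hedged illustration of the dictionary-based idea behind the FCD (the paper's exact dictionary extraction and normalization may differ), the sketch below collects an LZW-style pattern dictionary from each string and scores dissimilarity by the fraction of one dictionary not shared with the other:

```python
def lzw_patterns(s):
    """Collect the multi-character patterns an LZW-style pass adds to
    its dictionary while scanning s (single characters count as known)."""
    patterns, w = set(), ""
    for c in s:
        wc = w + c
        if len(wc) == 1 or wc in patterns:
            w = wc
        else:
            patterns.add(wc)
            w = c
    return patterns

def fcd(x, y):
    """Dissimilarity of x from y: share of x's patterns absent from y."""
    dx, dy = lzw_patterns(x), lzw_patterns(y)
    return (len(dx) - len(dx & dy)) / max(len(dx), 1)

# Near-identical strings score close to 0, unrelated ones closer to 1.
print(fcd("abcabcabcabc", "abcabcabcabd"))
```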
cs.LG stat.ML | null | 1210.0762 | null | null | http://arxiv.org/pdf/1210.0762v1 | 2012-10-02T13:17:33Z | 2012-10-02T13:17:33Z | Graph-Based Approaches to Clustering Network-Constrained Trajectory Data | Even though clustering trajectory data attracted considerable attention in
the last few years, most prior work assumed that moving objects can move
freely in a Euclidean space and did not consider the possible presence of an
underlying road network and its influence on evaluating the similarity between
trajectories. In this paper, we present two approaches to clustering
network-constrained trajectory data. The first approach discovers clusters of
trajectories that traveled along the same parts of the road network. The second
approach is segment-oriented and aims to group together road segments based on
trajectories that they have in common. Both approaches use a graph model to
depict the interactions between observations w.r.t. their similarity and
cluster this similarity graph using a community detection algorithm. We also
present experimental results obtained on synthetic data to showcase our
propositions.
| [
"['Mohamed Khalil El Mahrsi' 'Fabrice Rossi']",
"Mohamed Khalil El Mahrsi (LTCI), Fabrice Rossi (SAMM)"
] |
cs.IT cs.LG math.IT stat.ML | 10.1016/j.dsp.2012.04.018 | 1210.0824 | null | null | http://arxiv.org/abs/1210.0824v1 | 2012-10-02T16:08:53Z | 2012-10-02T16:08:53Z | Distributed High Dimensional Information Theoretical Image Registration
via Random Projections | Information theoretical measures, such as entropy, mutual information, and
various divergences, exhibit robust characteristics in image registration
applications. However, the estimation of these quantities is computationally
intensive in high dimensions. On the other hand, consistent estimation from
pairwise distances of the sample points is possible, which suits random
projection (RP) based low dimensional embeddings. We adapt the RP technique to
this task by means of a simple ensemble method. To the best of our knowledge,
this is the first distributed, RP based information theoretical image
registration approach. The efficiency of the method is demonstrated through
numerical examples.
| [
"Zoltan Szabo, Andras Lorincz",
"['Zoltan Szabo' 'Andras Lorincz']"
] |
cs.LG cs.DS math.ST stat.TH | null | 1210.0864 | null | null | http://arxiv.org/pdf/1210.0864v1 | 2012-10-02T18:07:13Z | 2012-10-02T18:07:13Z | Learning mixtures of structured distributions over discrete domains | Let $\mathfrak{C}$ be a class of probability distributions over the discrete
domain $[n] = \{1,...,n\}.$ We show that if $\mathfrak{C}$ satisfies a rather
general condition -- essentially, that each distribution in $\mathfrak{C}$ can
be well-approximated by a variable-width histogram with few bins -- then there
is a highly efficient (both in terms of running time and sample complexity)
algorithm that can learn any mixture of $k$ unknown distributions from
$\mathfrak{C}.$
We analyze several natural types of distributions over $[n]$, including
log-concave, monotone hazard rate and unimodal distributions, and show that
they have the required structural property of being well-approximated by a
histogram with few bins. Applying our general algorithm, we obtain
near-optimally efficient algorithms for all these mixture learning problems.
| [
"Siu-on Chan, Ilias Diakonikolas, Rocco A. Servedio, Xiaorui Sun",
"['Siu-on Chan' 'Ilias Diakonikolas' 'Rocco A. Servedio' 'Xiaorui Sun']"
] |
cs.SI cs.LG | null | 1210.0954 | null | null | http://arxiv.org/pdf/1210.0954v1 | 2012-10-03T01:34:00Z | 2012-10-03T01:34:00Z | Learning from Collective Intelligence in Groups | Collective intelligence, which aggregates the shared information from large
crowds, is often negatively impacted by unreliable information sources with
low-quality data. This becomes a barrier to the effective use of collective
intelligence in a variety of applications. In order to address this issue, we
propose a probabilistic model to jointly assess the reliability of sources and
find the true data. We observe that different sources are often not independent
of each other. Instead, sources are prone to be mutually influenced, which
makes them dependent when sharing information with each other. High dependency
between sources makes collective intelligence vulnerable to the overuse of
redundant (and possibly incorrect) information from the dependent sources.
Thus, we reveal the latent group structure among dependent sources, and
aggregate the information at the group level rather than from individual
sources directly. This can prevent the collective intelligence from being
inappropriately dominated by dependent sources. We also explicitly reveal
the reliability of groups, and minimize the negative impacts of unreliable
groups. Experimental results on real-world data sets show the effectiveness of
the proposed approach with respect to existing algorithms.
| [
"Guo-Jun Qi, Charu Aggarwal, Pierre Moulin, Thomas Huang",
"['Guo-Jun Qi' 'Charu Aggarwal' 'Pierre Moulin' 'Thomas Huang']"
] |
cs.RO cs.LG | null | 1210.1104 | null | null | http://arxiv.org/pdf/1210.1104v1 | 2012-10-03T13:36:32Z | 2012-10-03T13:36:32Z | Sensory Anticipation of Optical Flow in Mobile Robotics | In order to anticipate dangerous events, like a collision, an agent needs to
make long-term predictions. However, those are challenging due to uncertainties
in internal and external variables and environment dynamics. A sensorimotor
model is acquired online by the mobile robot using a state-of-the-art method
that learns the optical flow distribution in images, both in space and time.
The learnt model is used to anticipate the optical flow up to a given time
horizon and to predict an imminent collision by using reinforcement learning.
We demonstrate that multi-modal predictions reduce to simpler distributions
once actions are taken into account.
| [
"Arturo Ribes, Jes\\'us Cerquides, Yiannis Demiris and Ram\\'on L\\'opez\n de M\\'antaras",
"['Arturo Ribes' 'Jesús Cerquides' 'Yiannis Demiris'\n 'Ramón López de Mántaras']"
] |
stat.ML cs.LG | null | 1210.1121 | null | null | http://arxiv.org/pdf/1210.1121v1 | 2012-10-03T14:26:59Z | 2012-10-03T14:26:59Z | Smooth Sparse Coding via Marginal Regression for Learning Sparse
Representations | We propose and analyze a novel framework for learning sparse representations,
based on two statistical techniques: kernel smoothing and marginal regression.
The proposed approach provides a flexible framework for incorporating feature
similarity or temporal information present in data sets, via non-parametric
kernel smoothing. We provide generalization bounds for dictionary learning
using smooth sparse coding and show how the sample complexity depends on the
L1 norm of the kernel function used. Furthermore, we propose using marginal regression
for obtaining sparse codes, which significantly improves the speed and allows
one to scale to large dictionary sizes easily. We demonstrate the advantages of
the proposed approach, both in terms of accuracy and speed by extensive
experimentation on several real data sets. In addition, we demonstrate how the
proposed approach could be used for improving semi-supervised sparse coding.
| [
"['Krishnakumar Balasubramanian' 'Kai Yu' 'Guy Lebanon']",
"Krishnakumar Balasubramanian, Kai Yu, Guy Lebanon"
] |
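The marginal-regression step for obtaining sparse codes admits a compact illustration: instead of solving a lasso per signal, correlate each signal with every dictionary atom and keep only the largest responses. A minimal sketch, assuming unit-norm atoms and a fixed sparsity level k (both illustrative choices; the kernel-smoothing component of the method is omitted here):

```python
import numpy as np

def marginal_regression_codes(D, X, k):
    """Sparse codes via marginal regression: score atoms by the
    correlations D.T @ X, keep the k largest-magnitude responses."""
    C = D.T @ X                                # (n_atoms, n_signals)
    keep = np.argsort(-np.abs(C), axis=0)[:k]  # top-k atoms per signal
    codes = np.zeros_like(C)
    for j in range(C.shape[1]):
        codes[keep[:, j], j] = C[keep[:, j], j]
    return codes

# Random dictionary with unit-norm atoms, 3-sparse codes for 10 signals.
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128)); D /= np.linalg.norm(D, axis=0)
X = rng.normal(size=(64, 10))
print(marginal_regression_codes(D, X, k=3).shape)  # (128, 10)
```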
null | null | 1210.1161 | null | null | http://arxiv.org/pdf/1210.1161v1 | 2012-10-03T16:12:07Z | 2012-10-03T16:12:07Z | Feature Subset Selection for Software Cost Modelling and Estimation | Feature selection has been recently used in the area of software engineering
for improving the accuracy and robustness of software cost models. The idea
behind selecting the most informative subset of features from a pool of
available cost drivers stems from the hypothesis that reducing the
dimensionality of datasets will significantly minimise the complexity and time
required to reach an estimation using a particular modelling technique. This
work investigates the appropriateness of attributes obtained from empirical
project databases, and aims to reduce the cost drivers used while preserving
performance. Finding suitable subset selections that may cater for improved
predictions may be considered as a pre-processing step of a particular
technique employed for cost estimation (filter or wrapper) or an internal
(embedded) step to minimise the fitting error. This paper compares nine
relatively popular feature selection methods and uses the empirical values of
selected attributes recorded in the ISBSG and Desharnais datasets to estimate
software development effort.
| [
"['Efi Papatheocharous' 'Harris Papadopoulos' 'Andreas S. Andreou']"
] |
stat.ML cs.LG | null | 1210.1190 | null | null | http://arxiv.org/pdf/1210.1190v1 | 2012-10-03T18:37:47Z | 2012-10-03T18:37:47Z | Fast Conical Hull Algorithms for Near-separable Non-negative Matrix
Factorization | The separability assumption (Donoho & Stodden, 2003; Arora et al., 2012)
turns non-negative matrix factorization (NMF) into a tractable problem.
Recently, a new class of provably-correct NMF algorithms has emerged under
this assumption. In this paper, we reformulate the separable NMF problem as
that of finding the extreme rays of the conical hull of a finite set of
vectors. From this geometric perspective, we derive new separable NMF
algorithms that are highly scalable and empirically noise robust, and have
several other favorable properties in relation to existing methods. A parallel
implementation of our algorithm demonstrates high scalability on shared- and
distributed-memory machines.
| [
"['Abhishek Kumar' 'Vikas Sindhwani' 'Prabhanjan Kambadur']",
"Abhishek Kumar, Vikas Sindhwani, Prabhanjan Kambadur"
] |
cs.LG stat.ML | null | 1210.1258 | null | null | http://arxiv.org/pdf/1210.1258v1 | 2012-10-03T23:30:24Z | 2012-10-03T23:30:24Z | Unfolding Latent Tree Structures using 4th Order Tensors | Discovering the latent structure from many observed variables is an important
yet challenging learning task. Existing approaches for discovering latent
structures often require the unknown number of hidden states as an input. In
this paper, we propose a quartet based approach which is \emph{agnostic} to
this number. The key contribution is a novel rank characterization of the
tensor associated with the marginal distribution of a quartet. This
characterization allows us to design a \emph{nuclear norm} based test for
resolving quartet relations. We then use the quartet test as a subroutine in a
divide-and-conquer algorithm for recovering the latent tree structure. Under
mild conditions, the algorithm is consistent and its error probability decays
exponentially with increasing sample size. We demonstrate that the proposed
approach compares favorably to alternatives. In a real world stock dataset, it
also discovers meaningful groupings of variables, and produces a model that
fits the data better.
| [
"Mariya Ishteva, Haesun Park, Le Song",
"['Mariya Ishteva' 'Haesun Park' 'Le Song']"
] |
cs.LG cs.AI | null | 1210.1317 | null | null | http://arxiv.org/pdf/1210.1317v1 | 2012-10-04T07:17:37Z | 2012-10-04T07:17:37Z | Learning Heterogeneous Similarity Measures for Hybrid-Recommendations in
Meta-Mining | The notion of meta-mining has appeared recently and extends the traditional
meta-learning in two ways. First it does not learn meta-models that provide
support only for the learning algorithm selection task but ones that support
the whole data-mining process. In addition it abandons the so called black-box
approach to algorithm description followed in meta-learning. Now, in addition
to the datasets, the algorithms and workflows also have descriptors. For the
latter two these descriptions are semantic, describing properties of the
algorithms. With the availability of descriptors both for datasets and data
mining workflows the traditional modelling techniques followed in
meta-learning, typically based on classification and regression algorithms, are
no longer appropriate. Instead we are faced with a problem the nature of which
is much more similar to the problems that appear in recommendation systems. The
most important meta-mining requirements are that suggestions should use only
dataset and workflow descriptors and that the cold-start problem should be
addressed, e.g. providing workflow suggestions for new datasets.
In this paper we take a different view on the meta-mining modelling problem
and treat it as a recommender problem. In order to account for the meta-mining
specificities we derive a novel metric-based-learning recommender approach. Our
method learns two homogeneous metrics, one in the dataset and one in the
workflow space, and a heterogeneous one in the dataset-workflow space. All
learned metrics reflect similarities established from the dataset-workflow
preference matrix. We demonstrate our method on meta-mining over biological
(microarray datasets) problems. The application of our method is not limited to
the meta-mining problem; its formulation is general enough that it can be
applied to problems with similar requirements.
| [
"Phong Nguyen, Jun Wang, Melanie Hilario and Alexandros Kalousis",
"['Phong Nguyen' 'Jun Wang' 'Melanie Hilario' 'Alexandros Kalousis']"
] |
cs.LG cs.DM stat.ML | null | 1210.1461 | null | null | http://arxiv.org/pdf/1210.1461v1 | 2012-10-04T14:23:34Z | 2012-10-04T14:23:34Z | A Scalable CUR Matrix Decomposition Algorithm: Lower Time Complexity and
Tighter Bound | The CUR matrix decomposition is an important extension of Nystr\"{o}m
approximation to a general matrix. It approximates any data matrix in terms of
a small number of its columns and rows. In this paper we propose a novel
randomized CUR algorithm with an expected relative-error bound. The proposed
algorithm has the advantages over the existing relative-error CUR algorithms
that it possesses a tighter theoretical bound and lower time complexity, and that
it can avoid maintaining the whole data matrix in main memory. Finally,
experiments on several real-world datasets demonstrate significant improvement
over the existing relative-error algorithms.
| [
"['Shusen Wang' 'Zhihua Zhang' 'Jian Li']",
"Shusen Wang, Zhihua Zhang, Jian Li"
] |
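For intuition, here is a hedged sketch of a randomized CUR factorization: sample columns and rows with probability proportional to their squared norms and fit the middle factor by least squares. The middle-factor computation below touches the full matrix, so it does not reproduce the paper's memory-avoiding construction or its relative-error bound; it only illustrates the C-U-R shape of the decomposition:

```python
import numpy as np

def simple_cur(A, c, r, seed=0):
    """Toy randomized CUR: pick c columns and r rows of A with
    probability proportional to their squared Euclidean norms, then
    choose U so that C @ U @ R is a least-squares fit to A."""
    rng = np.random.default_rng(seed)
    pc = (A**2).sum(axis=0); pc /= pc.sum()
    pr = (A**2).sum(axis=1); pr /= pr.sum()
    cols = rng.choice(A.shape[1], size=c, replace=False, p=pc)
    rows = rng.choice(A.shape[0], size=r, replace=False, p=pr)
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)  # optimal middle factor
    return C, U, R

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 10)) @ rng.normal(size=(10, 60))  # rank-10
C, U, R = simple_cur(A, c=20, r=20)
print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))  # near zero
```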
cs.LG cs.AI stat.ME stat.ML | null | 1210.1766 | null | null | http://arxiv.org/pdf/1210.1766v3 | 2014-02-12T06:31:12Z | 2012-10-05T14:10:20Z | Bayesian Inference with Posterior Regularization and applications to
Infinite Latent SVMs | Existing Bayesian models, especially nonparametric Bayesian methods, rely on
specially conceived priors to incorporate domain knowledge for discovering
improved latent representations. While priors can affect posterior
distributions through Bayes' rule, imposing posterior regularization is
arguably more direct and in some cases more natural and general. In this paper,
we present regularized Bayesian inference (RegBayes), a novel computational
framework that performs posterior inference with a regularization term on the
desired post-data posterior distribution under an information theoretical
formulation. RegBayes is more flexible than the procedure that elicits expert
knowledge via priors, and it covers both directed Bayesian networks and
undirected Markov networks whose Bayesian formulation results in hybrid chain
graph models. When the regularization is induced from a linear operator on the
posterior distributions, such as the expectation operator, we present a general
convex-analysis theorem to characterize the solution of RegBayes. Furthermore,
we present two concrete examples of RegBayes, infinite latent support vector
machines (iLSVM) and multi-task infinite latent support vector machines
(MT-iLSVM), which explore the large-margin idea in combination with a
nonparametric Bayesian model for discovering predictive latent features for
classification and multi-task learning, respectively. We present efficient
inference methods and report empirical studies on several benchmark datasets,
which appear to demonstrate the merits inherited from both large-margin
learning and Bayesian nonparametrics. Such results were not available until
now, and contribute to pushing forward the interface between these two important
subfields, which have been largely treated as isolated in the community.
| [
"Jun Zhu, Ning Chen, and Eric P. Xing",
"['Jun Zhu' 'Ning Chen' 'Eric P. Xing']"
] |
stat.ML cs.AI cs.LG | null | 1210.1928 | null | null | http://arxiv.org/pdf/1210.1928v3 | 2013-09-05T03:42:50Z | 2012-10-06T08:11:01Z | Information fusion in multi-task Gaussian processes | This paper evaluates heterogeneous information fusion using multi-task
Gaussian processes in the context of geological resource modeling.
Specifically, it empirically demonstrates that information integration across
heterogeneous information sources leads to superior estimates of all the
quantities being modeled, compared to modeling them individually. Multi-task
Gaussian processes provide a powerful approach for simultaneous modeling of
multiple quantities of interest while taking correlations between these
quantities into consideration. Experiments are performed on large scale real
sensor data.
| [
"['Shrihari Vasudevan' 'Arman Melkumyan' 'Steven Scheding']",
"Shrihari Vasudevan and Arman Melkumyan and Steven Scheding"
] |
stat.ML cs.LG | 10.1587/transinf.E96.D.1513 | 1210.1960 | null | null | http://arxiv.org/abs/1210.1960v1 | 2012-10-06T14:16:33Z | 2012-10-06T14:16:33Z | Feature Selection via L1-Penalized Squared-Loss Mutual Information | Feature selection is a technique to screen out less important features. Many
existing supervised feature selection algorithms use redundancy and relevancy
as the main criteria to select features. However, feature interaction,
potentially a key characteristic in real-world problems, has not received much
attention. As an attempt to take feature interaction into account, we propose
L1-LSMI, an L1-regularization based algorithm that maximizes a squared-loss
variant of mutual information between selected features and outputs. Numerical
results show that L1-LSMI performs well in handling redundancy, detecting
non-linear dependency, and considering feature interaction.
| [
"Wittawat Jitkrittum, Hirotaka Hachiya, Masashi Sugiyama",
"['Wittawat Jitkrittum' 'Hirotaka Hachiya' 'Masashi Sugiyama']"
] |
math.LO cs.LG cs.LO | null | 1210.2051 | null | null | http://arxiv.org/pdf/1210.2051v2 | 2013-02-09T23:01:08Z | 2012-10-07T13:21:17Z | Anomalous Vacillatory Learning | In 1986, Osherson, Stob and Weinstein asked whether two variants of anomalous
vacillatory learning, TxtFex^*_* and TxtFext^*_*, could be distinguished. In
both, a machine is permitted to vacillate between a finite number of hypotheses
and to make a finite number of errors. TxtFext^*_*-learning requires that
hypotheses output infinitely often must describe the same finite variant of the
correct set, while TxtFex^*_*-learning permits the learner to vacillate between
finitely many different finite variants of the correct set. In this paper we
show that TxtFex^*_* \neq TxtFext^*_*, thereby answering the question posed by
Osherson, \textit{et al}. We prove this in a strong way by exhibiting a family
in TxtFex^*_2 \setminus {TxtFext}^*_*.
| [
"['Achilles Beros']",
"Achilles Beros"
] |
stat.ML cs.IT cs.LG math.IT | null | 1210.2085 | null | null | http://arxiv.org/pdf/1210.2085v2 | 2013-10-10T17:53:36Z | 2012-10-07T18:27:03Z | Privacy Aware Learning | We study statistical risk minimization problems under a privacy model in
which the data is kept confidential even from the learner. In this local
privacy framework, we establish sharp upper and lower bounds on the convergence
rates of statistical estimation procedures. As a consequence, we exhibit a
precise tradeoff between the amount of privacy the data preserves and the
utility, as measured by convergence rate, of any statistical estimator or
learning procedure.
| [
"['John C. Duchi' 'Michael I. Jordan' 'Martin J. Wainwright']",
"John C. Duchi and Michael I. Jordan and Martin J. Wainwright"
] |
cs.LG cs.CV | null | 1210.2162 | null | null | http://arxiv.org/pdf/1210.2162v1 | 2012-10-08T07:15:57Z | 2012-10-08T07:15:57Z | Semisupervised Classifier Evaluation and Recalibration | How many labeled examples are needed to estimate a classifier's performance
on a new dataset? We study the case where data is plentiful, but labels are
expensive. We show that by making a few reasonable assumptions on the structure
of the data, it is possible to estimate performance curves, with confidence
bounds, using a small number of ground truth labels. Our approach, which we
call Semisupervised Performance Evaluation (SPE), is based on a generative
model for the classifier's confidence scores. In addition to estimating the
performance of classifiers on new datasets, SPE can be used to recalibrate a
classifier by re-estimating the class-conditional confidence distributions.
| [
"Peter Welinder and Max Welling and Pietro Perona",
"['Peter Welinder' 'Max Welling' 'Pietro Perona']"
] |
cs.LG cs.AI cs.SI physics.soc-ph | null | 1210.2164 | null | null | http://arxiv.org/pdf/1210.2164v3 | 2012-12-21T05:48:55Z | 2012-10-08T07:24:38Z | ET-LDA: Joint Topic Modeling For Aligning, Analyzing and Sensemaking of
Public Events and Their Twitter Feeds | Social media channels such as Twitter have emerged as popular platforms for
crowds to respond to public events such as speeches, sports and debates. While
this promises tremendous opportunities to understand and make sense of the
reception of an event from the social media, the promises come entwined with
significant technical challenges. In particular, given an event and an
associated large scale collection of tweets, we need approaches to effectively
align tweets and the parts of the event they refer to. This in turn raises
questions about how to segment the event into smaller yet meaningful parts, and
how to figure out whether a tweet is a general one about the entire event or a
specific one aimed at a particular segment of the event. In this work, we
present ET-LDA, an effective method for aligning an event and its tweets
through joint statistical modeling of topical influences from the events and
their associated tweets. The model enables the automatic segmentation of the
events and the characterization of tweets into two categories: (1) episodic
tweets that respond specifically to the content in the segments of the events,
and (2) steady tweets that respond generally about the events. We present an
efficient inference method for this model, and a comprehensive evaluation of
its effectiveness over existing methods. In particular, through a user study,
we demonstrate that users find the topics, the segments, the alignment, and the
episodic tweets discovered by ET-LDA to be of higher quality and more
interesting as compared to the state-of-the-art, with improvements in the range
of 18-41%.
| [
"Yuheng Hu, Ajita John, Fei Wang, Doree Duncan Seligmann, Subbarao\n Kambhampati",
"['Yuheng Hu' 'Ajita John' 'Fei Wang' 'Doree Duncan Seligmann'\n 'Subbarao Kambhampati']"
] |
cs.LG | 10.1109/TKDE.2015.2492565 | 1210.2179 | null | null | http://arxiv.org/abs/1210.2179v3 | 2015-12-07T13:49:04Z | 2012-10-08T08:17:18Z | Fast Online EM for Big Topic Modeling | The expectation-maximization (EM) algorithm can compute the
maximum-likelihood (ML) or maximum a posteriori (MAP) point estimate of the
mixture models or latent variable models such as latent Dirichlet allocation
(LDA), which has been one of the most popular probabilistic topic modeling
methods in the past decade. However, batch EM has high time and space
complexities to learn big LDA models from big data streams. In this paper, we
present a fast online EM (FOEM) algorithm that infers the topic distribution
from the previously unseen documents incrementally with constant memory
requirements. Within the stochastic approximation framework, we show that FOEM
can converge to the local stationary point of the LDA's likelihood function. By
dynamic scheduling for the fast speed and parameter streaming for the low
memory usage, FOEM is more efficient than the state-of-the-art online LDA
algorithms for some lifelong topic modeling tasks, handling both big data and
big models (aka big topic modeling) on just a PC.
| [
"Jia Zeng, Zhi-Qiang Liu and Xiao-Qin Cao",
"['Jia Zeng' 'Zhi-Qiang Liu' 'Xiao-Qin Cao']"
] |
cs.DC cs.LG stat.ML | null | 1210.2289 | null | null | http://arxiv.org/pdf/1210.2289v1 | 2012-10-08T14:14:13Z | 2012-10-08T14:14:13Z | A Fast Distributed Proximal-Gradient Method | We present a distributed proximal-gradient method for optimizing the average
of convex functions, each of which is the private local objective of an agent
in a network with time-varying topology. The local objectives have distinct
differentiable components, but they share a common nondifferentiable component,
which has a favorable structure suitable for effective computation of the
proximal operator. In our method, each agent iteratively updates its estimate
of the global minimum by optimizing its local objective function, and
exchanging estimates with others via communication in the network. Using
Nesterov-type acceleration techniques and multiple communication steps per
iteration, we show that this method converges at the rate 1/k (where k is the
number of communication rounds between the agents), which is faster than the
convergence rate of the existing distributed methods for solving this problem.
The superior convergence rate of our method is also verified by numerical
experiments.
| [
"Annie I. Chen and Asuman Ozdaglar",
"['Annie I. Chen' 'Asuman Ozdaglar']"
] |
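A toy sketch of the method's ingredients (without the Nesterov acceleration and multiple communication steps per iteration that give the stated 1/k rate): each agent takes a gradient step on its private smooth term, averages estimates with its neighbors through a doubly stochastic mixing matrix W, and applies the shared proximal operator, here soft-thresholding for an assumed common l1 term. The fixed W is a simplification; the paper allows time-varying topologies.

```python
import numpy as np

def soft_threshold(X, t):
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def distributed_prox_grad(grads, W, dim, step=0.1, lam=0.05, rounds=200):
    """grads[i] is agent i's gradient oracle for its private smooth loss;
    W is a doubly stochastic mixing matrix of the communication network."""
    X = np.zeros((len(grads), dim))        # one estimate per agent
    for _ in range(rounds):
        G = np.array([g(x) for g, x in zip(grads, X)])
        X = W @ (X - step * G)             # local step + consensus averaging
        X = soft_threshold(X, step * lam)  # shared nonsmooth term via its prox
    return X.mean(axis=0)

# Three agents, each with a private quadratic 0.5*||x - b_i||^2.
bs = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
grads = [lambda x, b=b: x - b for b in bs]
W = np.full((3, 3), 1.0 / 3)               # complete-graph averaging
print(distributed_prox_grad(grads, W, dim=2))
```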
cs.LG | null | 1210.2346 | null | null | http://arxiv.org/pdf/1210.2346v2 | 2013-08-30T16:07:47Z | 2012-10-08T17:19:43Z | Blending Learning and Inference in Structured Prediction | In this paper we derive an efficient algorithm to learn the parameters of
structured predictors in general graphical models. This algorithm blends the
learning and inference tasks, which results in a significant speedup over
traditional approaches, such as conditional random fields and structured
support vector machines. For this purpose we utilize the structures of the
predictors to describe a low dimensional structured prediction task which
encourages local consistencies within the different structures while learning
the parameters of the model. Convexity of the learning task provides the means
to enforce the consistencies between the different parts. The
inference-learning blending algorithm that we propose is guaranteed to converge
to the optimum of the low dimensional primal and dual programs. Unlike many of
the existing approaches, the inference-learning blending allows us to
efficiently learn high-order graphical models, over regions of any size, and
with very large numbers of parameters. We demonstrate the effectiveness of our approach,
while presenting state-of-the-art results in stereo estimation, semantic
segmentation, shape reconstruction, and indoor scene understanding.
| [
"['Tamir Hazan' 'Alexander Schwing' 'David McAllester' 'Raquel Urtasun']",
"Tamir Hazan, Alexander Schwing, David McAllester and Raquel Urtasun"
] |
cs.DS cs.CR cs.LG math.PR | null | 1210.2381 | null | null | http://arxiv.org/pdf/1210.2381v1 | 2012-10-08T19:01:53Z | 2012-10-08T19:01:53Z | The Power of Linear Reconstruction Attacks | We consider the power of linear reconstruction attacks in statistical data
privacy, showing that they can be applied to a much wider range of settings
than previously understood. Linear attacks have been studied before (Dinur and
Nissim PODS'03, Dwork, McSherry and Talwar STOC'07, Kasiviswanathan, Rudelson,
Smith and Ullman STOC'10, De TCC'12, Muthukrishnan and Nikolov STOC'12) but
have so far been applied only in settings with releases that are obviously
linear.
Consider a database curator who manages a database of sensitive information
but wants to release statistics about how a sensitive attribute (say, disease)
in the database relates to some nonsensitive attributes (e.g., postal code,
age, gender, etc). We show one can mount linear reconstruction attacks based on
any release that gives: a) the fraction of records that satisfy a given
non-degenerate boolean function. Such releases include contingency tables
(previously studied by Kasiviswanathan et al., STOC'10) as well as more complex
outputs like the error rate of classifiers such as decision trees; b) any one
of a large class of M-estimators (that is, the output of empirical risk
minimization algorithms), including the standard estimators for linear and
logistic regression.
We make two contributions: first, we show how these types of releases can be
transformed into a linear format, making them amenable to existing
polynomial-time reconstruction algorithms. This is already perhaps surprising,
since many of the above releases (like M-estimators) are obtained by solving
highly nonlinear formulations. Second, we show how to analyze the resulting
attacks under various distributional assumptions on the data. Specifically, we
consider a setting in which the same statistic (either a) or b) above) is
released about how the sensitive attribute relates to all subsets of size k
(out of a total of d) nonsensitive boolean attributes.
| [
"['Shiva Prasad Kasiviswanathan' 'Mark Rudelson' 'Adam Smith']",
"Shiva Prasad Kasiviswanathan, Mark Rudelson, Adam Smith"
] |
cs.IT cs.LG math.IT math.PR | 10.1109/LSP.2012.2235830 | 1210.2613 | null | null | http://arxiv.org/abs/1210.2613v2 | 2012-12-18T16:48:18Z | 2012-10-09T14:30:51Z | Measuring the Influence of Observations in HMMs through the
Kullback-Leibler Distance | We measure the influence of individual observations on the sequence of the
hidden states of the Hidden Markov Model (HMM) by means of the Kullback-Leibler
distance (KLD). Namely, we consider the KLD between the conditional
distribution of the hidden states' chain given the complete sequence of
observations and the conditional distribution of the hidden chain given all the
observations but the one under consideration. We introduce a linear complexity
algorithm for computing the influence of all the observations. As an
illustration, we investigate the application of our algorithm to the problem of
detecting outliers in HMM data series.
| [
"Vittorio Perduca, Gregory Nuel",
"['Vittorio Perduca' 'Gregory Nuel']"
] |
cs.LG cs.AI | 10.1007/s10115-012-0577-7 | 1210.2640 | null | null | http://arxiv.org/abs/1210.2640v1 | 2012-10-09T15:25:01Z | 2012-10-09T15:25:01Z | Multi-view constrained clustering with an incomplete mapping between
views | Multi-view learning algorithms typically assume a complete bipartite mapping
between the different views in order to exchange information during the
learning process. However, many applications provide only a partial mapping
between the views, creating a challenge for current methods. To address this
problem, we propose a multi-view algorithm based on constrained clustering that
can operate with an incomplete mapping. Given a set of pairwise constraints in
each view, our approach propagates these constraints using a local similarity
measure to those instances that can be mapped to the other views, allowing the
propagated constraints to be transferred across views via the partial mapping.
It uses co-EM to iteratively estimate the propagation within each view based on
the current clustering model, transfer the constraints across views, and then
update the clustering model. By alternating the learning process between views,
this approach produces a unified clustering model that is consistent with all
views. We show that this approach significantly improves clustering performance
over several other methods for transferring constraints and allows multi-view
clustering to be reliably applied when given a limited mapping between the
views. Our evaluation reveals that the propagated constraints have high
precision with respect to the true clusters in the data, explaining their
benefit to clustering performance in both single- and multi-view learning
scenarios.
| [
"Eric Eaton, Marie desJardins, Sara Jacob",
"['Eric Eaton' 'Marie desJardins' 'Sara Jacob']"
] |
stat.ML cs.LG | null | 1210.2771 | null | null | http://arxiv.org/pdf/1210.2771v3 | 2013-04-22T17:56:54Z | 2012-10-09T22:17:42Z | Cost-Sensitive Tree of Classifiers | Recently, machine learning algorithms have successfully entered large-scale
real-world industrial applications (e.g. search engines and email spam
filters). Here, the CPU cost during test time must be budgeted and accounted
for. In this paper, we address the challenge of balancing the test-time cost
and the classifier accuracy in a principled fashion. The test-time cost of a
classifier is often dominated by the computation required for feature
extraction, which can vary drastically across features. We decrease this
extraction time by constructing a tree of classifiers, through which test
inputs traverse along individual paths. Each path extracts different features
and is optimized for a specific sub-partition of the input space. By only
computing features for inputs that benefit from them the most, our cost
sensitive tree of classifiers can match the high accuracies of the current
state-of-the-art at a small fraction of the computational cost.
| [
"Zhixiang Xu, Matt J. Kusner, Kilian Q. Weinberger, Minmin Chen",
"['Zhixiang Xu' 'Matt J. Kusner' 'Kilian Q. Weinberger' 'Minmin Chen']"
] |
cs.AI cs.DB cs.LG cs.LO | null | 1210.2984 | null | null | http://arxiv.org/pdf/1210.2984v2 | 2012-10-29T18:25:34Z | 2012-10-10T16:56:41Z | Learning Onto-Relational Rules with Inductive Logic Programming | Rules complement and extend ontologies on the Semantic Web. We refer to these
rules as onto-relational since they combine DL-based ontology languages and
Knowledge Representation formalisms supporting the relational data model within
the tradition of Logic Programming and Deductive Databases. Rule authoring is a
very demanding Knowledge Engineering task which can be automated, though only
partially, by applying Machine Learning algorithms. In this chapter we show how
Inductive Logic Programming (ILP), born at the intersection of Machine Learning
and Logic Programming and considered as a major approach to Relational
Learning, can be adapted to Onto-Relational Learning. For the sake of
illustration, we provide details of a specific Onto-Relational Learning
solution to the problem of learning rule-based definitions of DL concepts and
roles with ILP.
| [
"['Francesca A. Lisi']",
"Francesca A. Lisi"
] |
cs.DB cs.LG | 10.5121/ijdkp.2012.2503 | 1210.3139 | null | null | http://arxiv.org/abs/1210.3139v1 | 2012-10-11T06:43:56Z | 2012-10-11T06:43:56Z | A Benchmark to Select Data Mining Based Classification Algorithms For
Business Intelligence And Decision Support Systems | DSS serve the management, operations, and planning levels of an organization
and help to make decisions, which may be rapidly changing and not easily
specified in advance. Data mining has a vital role to extract important
information to help in decision making of a decision support system.
Integration of data mining and decision support systems (DSS) can lead to the
improved performance and can enable the tackling of new types of problems.
Artificial Intelligence methods are improving the quality of decision support,
and have become embedded in many applications, ranging from anti-lock
automobile brakes to today's interactive search engines. AI provides various
machine learning techniques to support data mining. Classification is one of
the main and most valuable tasks of data mining. Several types of classification
algorithms have been suggested, tested and compared to determine the future
trends based on unseen data. No single algorithm has been found to be superior
to all others for all data sets. The objective of this paper is to
compare various classification algorithms that have been frequently used in
data mining for decision support systems. Three decision trees based
algorithms, one artificial neural network, one statistical, one support vector
machines with and without ada boost and one clustering algorithm are tested and
compared on four data sets from different domains in terms of predictive
accuracy, error rate, classification index, comprehensibility and training
time. Experimental results demonstrate that Genetic Algorithm (GA) and support
vector machines based algorithms are better in terms of predictive accuracy.
SVM without adaboost shall be the first choice in context of speed and
predictive accuracy. Adaboost improves the accuracy of SVM but at the cost of
a large training time.
| [
"Pardeep Kumar, Nitin, Vivek Kumar Sehgal and Durg Singh Chauhan",
"['Pardeep Kumar' 'Nitin' 'Vivek Kumar Sehgal' 'Durg Singh Chauhan']"
] |
stat.ML cs.CV cs.LG | null | 1210.3288 | null | null | http://arxiv.org/pdf/1210.3288v1 | 2012-10-11T16:30:15Z | 2012-10-11T16:30:15Z | Unsupervised Detection and Tracking of Arbitrary Objects with Dependent
Dirichlet Process Mixtures | This paper proposes a technique for the unsupervised detection and tracking
of arbitrary objects in videos. It is intended to reduce the need for detection
and localization methods tailored to specific object types and serve as a
general framework applicable to videos with varied objects, backgrounds, and
image qualities. The technique uses a dependent Dirichlet process mixture
(DDPM) known as the Generalized Polya Urn (GPUDDPM) to model image pixel data
that can be easily and efficiently extracted from the regions in a video that
represent objects. This paper describes a specific implementation of the model
using spatial and color pixel data extracted via frame differencing and gives
two algorithms for performing inference in the model to accomplish detection
and tracking. This technique is demonstrated on multiple synthetic and
benchmark video datasets that illustrate its ability to, without modification,
detect and track objects with diverse physical characteristics moving over
non-uniform backgrounds and through occlusion.
| [
"['Willie Neiswanger' 'Frank Wood']",
"Willie Neiswanger, Frank Wood"
] |
cs.LG q-bio.PE q-bio.QM stat.ML | null | 1210.3384 | null | null | http://arxiv.org/pdf/1210.3384v4 | 2013-11-02T21:38:34Z | 2012-10-11T22:20:33Z | Inferring clonal evolution of tumors from single nucleotide somatic
mutations | High-throughput sequencing allows the detection and quantification of
frequencies of somatic single nucleotide variants (SNV) in heterogeneous tumor
cell populations. In some cases, the evolutionary history and population
frequency of the subclonal lineages of tumor cells present in the sample can be
reconstructed from these SNV frequency measurements. However, automated methods
to do this reconstruction are not available and the conditions under which
reconstruction is possible have not been described.
We describe the conditions under which the evolutionary history can be
uniquely reconstructed from SNV frequencies from single or multiple samples
from the tumor population and we introduce a new statistical model, PhyloSub,
that infers the phylogeny and genotype of the major subclonal lineages
represented in the population of cancer cells. It uses a Bayesian nonparametric
prior over trees that groups SNVs into major subclonal lineages and
automatically estimates the number of lineages and their ancestry. We sample
from the joint posterior distribution over trees to identify evolutionary
histories and cell population frequencies that have the highest probability of
generating the observed SNV frequency data. When multiple phylogenies are
consistent with a given set of SNV frequencies, PhyloSub represents the
uncertainty in the tumor phylogeny using a partial order plot. Experiments on a
simulated dataset and two real datasets comprising tumor samples from acute
myeloid leukemia and chronic lymphocytic leukemia patients demonstrate that
PhyloSub can infer both linear (or chain) and branching lineages and its
inferences are in good agreement with ground truth, where it is available.
| [
"Wei Jiao, Shankar Vembu, Amit G. Deshwar, Lincoln Stein, Quaid Morris",
"['Wei Jiao' 'Shankar Vembu' 'Amit G. Deshwar' 'Lincoln Stein'\n 'Quaid Morris']"
] |
stat.AP cs.LG q-bio.GN q-bio.MN stat.ML | null | 1210.3456 | null | null | http://arxiv.org/pdf/1210.3456v2 | 2014-06-30T10:16:51Z | 2012-10-12T09:03:14Z | Bayesian Analysis for miRNA and mRNA Interactions Using Expression Data | MicroRNAs (miRNAs) are small RNA molecules composed of 19-22 nt, which play
important regulatory roles in post-transcriptional gene regulation by
inhibiting the translation of the mRNA into proteins or otherwise cleaving the
target mRNA. Inferring miRNA targets provides useful information for
understanding the roles of miRNA in biological processes that are potentially
involved in complex diseases. Statistical methodologies for point estimation,
such as the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm,
have been proposed to identify the interactions of miRNA and mRNA based on
sequence and expression data. In this paper, we propose using the Bayesian
LASSO (BLASSO) and the non-negative Bayesian LASSO (nBLASSO) to analyse the
interactions between miRNA and mRNA using expression data. The proposed
Bayesian methods explore the posterior distributions for those parameters
required to model the miRNA-mRNA interactions. These approaches can be used to
observe the inferred effects of the miRNAs on the targets by plotting the
posterior distributions of those parameters. For comparison purposes, the Least
Squares Regression (LSR), Ridge Regression (RR), LASSO, non-negative LASSO
(nLASSO), and the proposed Bayesian approaches were applied to four public
datasets. We concluded that nLASSO and nBLASSO perform best in terms of
sensitivity and specificity. Compared to the point estimate algorithms, which
only provide single estimates for those parameters, the Bayesian methods are
more meaningful and provide credible intervals, which take into account the
uncertainty of the inferred interactions of the miRNA and mRNA. Furthermore,
Bayesian methods naturally provide statistical significance to select
convincing inferred interactions, while point estimate algorithms require a
manually chosen threshold, which is less meaningful, to choose the possible
interactions.
| [
"['Mingjun Zhong' 'Rong Liu' 'Bo Liu']",
"Mingjun Zhong, Rong Liu, Bo Liu"
] |
cs.CL cs.IR cs.LG | null | 1210.3926 | null | null | http://arxiv.org/pdf/1210.3926v2 | 2012-10-31T16:14:35Z | 2012-10-15T07:36:57Z | Learning Attitudes and Attributes from Multi-Aspect Reviews | The majority of online reviews consist of plain-text feedback together with a
single numeric score. However, there are multiple dimensions to products and
opinions, and understanding the `aspects' that contribute to users' ratings may
help us to better understand their individual preferences. For example, a
user's impression of an audiobook presumably depends on aspects such as the
story and the narrator, and knowing their opinions on these aspects may help us
to recommend better products. In this paper, we build models for rating systems
in which such dimensions are explicit, in the sense that users leave separate
ratings for each aspect of a product. By introducing new corpora consisting of
five million reviews, rated with between three and six aspects, we evaluate our
models on three prediction tasks: First, we use our model to uncover which
parts of a review discuss which of the rated aspects. Second, we use our model
to summarize reviews, which for us means finding the sentences that best
explain a user's rating. Finally, since aspect ratings are optional in many of
the datasets we consider, we use our model to recover those ratings that are
missing from a user's evaluation. Our model matches state-of-the-art approaches
on existing small-scale datasets, while scaling to the real-world datasets we
introduce. Moreover, our model is able to `disentangle' content and sentiment
words: we automatically learn content words that are indicative of a particular
aspect as well as the aspect-specific sentiment words that are indicative of a
particular rating.
| [
"Julian McAuley, Jure Leskovec, Dan Jurafsky",
"['Julian McAuley' 'Jure Leskovec' 'Dan Jurafsky']"
] |
cs.LG stat.ML | null | 1210.4006 | null | null | http://arxiv.org/pdf/1210.4006v1 | 2012-10-15T12:43:03Z | 2012-10-15T12:43:03Z | The Perturbed Variation | We introduce a new discrepancy score between two distributions that gives an
indication on their similarity. While much research has been done to determine
if two samples come from exactly the same distribution, much less research
considered the problem of determining if two finite samples come from similar
distributions. The new score gives an intuitive interpretation of similarity;
it optimally perturbs the distributions so that they best fit each other. The
score is defined between distributions, and can be efficiently estimated from
samples. We provide convergence bounds of the estimated score, and develop
hypothesis testing procedures that test if two data sets come from similar
distributions. The statistical power of these procedures is presented in
simulations. We also compare the score's capacity to detect similarity with
that of other known measures on real data.
| [
"Maayan Harel and Shie Mannor",
"['Maayan Harel' 'Shie Mannor']"
] |
cs.NA cs.CV cs.DS cs.LG math.OC | null | 1210.4081 | null | null | http://arxiv.org/pdf/1210.4081v1 | 2012-10-15T15:55:34Z | 2012-10-15T15:55:34Z | Getting Feasible Variable Estimates From Infeasible Ones: MRF Local
Polytope Study | This paper proposes a method for construction of approximate feasible primal
solutions from dual ones for large-scale optimization problems possessing
certain separability properties. Whereas infeasible primal estimates can
typically be produced from (sub-)gradients of the dual function, it is often
not easy to project them to the primal feasible set, since the projection
itself has a complexity comparable to the complexity of the initial problem. We
propose an alternative efficient method to obtain feasibility and show that its
properties influencing the convergence to the optimum are similar to the
properties of the Euclidean projection. We apply our method to the local
polytope relaxation of inference problems for Markov Random Fields and
demonstrate its superiority over existing methods.
| [
"Bogdan Savchynskyy and Stefan Schmidt",
"['Bogdan Savchynskyy' 'Stefan Schmidt']"
] |
cs.LG cs.AI stat.ML | null | 1210.4184 | null | null | http://arxiv.org/pdf/1210.4184v1 | 2012-10-15T20:14:23Z | 2012-10-15T20:14:23Z | The Kernel Pitman-Yor Process | In this work, we propose the kernel Pitman-Yor process (KPYP) for
nonparametric clustering of data with general spatial or temporal
interdependencies. The KPYP is constructed by first introducing an infinite
sequence of random locations. Then, based on the stick-breaking construction of
the Pitman-Yor process, we define a predictor-dependent random probability
measure by considering that the discount hyperparameters of the
Beta-distributed random weights (stick variables) of the process are not
uniform among the weights, but controlled by a kernel function expressing the
proximity between the location assigned to each weight and the given
predictors.
| [
"Sotirios P. Chatzis and Dimitrios Korkinof and Yiannis Demiris",
"['Sotirios P. Chatzis' 'Dimitrios Korkinof' 'Yiannis Demiris']"
] |
stat.ML cs.LG | null | 1210.4276 | null | null | http://arxiv.org/pdf/1210.4276v1 | 2012-10-16T07:31:59Z | 2012-10-16T07:31:59Z | Semi-Supervised Classification Through the Bag-of-Paths Group
Betweenness | This paper introduces a novel, well-founded, betweenness measure, called the
Bag-of-Paths (BoP) betweenness, as well as its extension, the BoP group
betweenness, to tackle semi-supervised classification problems on weighted
directed graphs. The objective of semi-supervised classification is to assign a
label to unlabeled nodes using the whole topology of the graph and the labeled
nodes at our disposal. The BoP betweenness relies on a bag-of-paths framework
assigning a Boltzmann distribution on the set of all possible paths through the
network such that long (high-cost) paths have a low probability of being picked
from the bag, while short (low-cost) paths have a high probability of being
picked. Within that context, the BoP betweenness of node j is defined as the
sum of the a posteriori probabilities that node j lies in-between two arbitrary
nodes i, k, when picking a path starting in i and ending in k. Intuitively, a
node typically receives a high betweenness if it has a large probability of
appearing on paths connecting two arbitrary nodes of the network. This quantity
can be computed in closed form by inverting an n x n matrix, where n is the
number of nodes. For the group betweenness, the paths are constrained to start
and end in nodes within the same class, therefore defining a group betweenness
for each class. Unlabeled nodes are then classified according to the class
showing the highest group betweenness. Experiments on various real-world data
sets show that BoP group betweenness outperforms all the tested
state-of-the-art methods. The benefit of the BoP betweenness is particularly
noticeable when only a few labeled nodes are available.
| [
"Bertrand Lebichot, Ilkka Kivim\\\"aki, Kevin Fran\\c{c}oisse and Marco\n Saerens",
"['Bertrand Lebichot' 'Ilkka Kivimäki' 'Kevin Françoisse' 'Marco Saerens']"
] |
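
A small sketch of the closed-form computation hinted at above: path weights follow a Boltzmann distribution over costs, Z = (I - W)^{-1} sums them over all paths, and a node's betweenness accumulates a posteriori probabilities of lying on i-to-k paths. The normalization and the omission of reference transition probabilities are simplifying assumptions.

```python
import numpy as np

def bop_betweenness(C, theta=2.0):
    """Bag-of-Paths-style betweenness from a cost matrix C (np.inf marks
    missing edges). Assumes theta is large enough that I - W is invertible."""
    n = C.shape[0]
    W = np.exp(-theta * C)                    # exp(-inf) = 0 kills non-edges
    Z = np.linalg.inv(np.eye(n) - W)          # z_ik sums weights of all i->k paths
    bet = np.zeros(n)
    for j in range(n):
        for i in range(n):
            for k in range(n):
                if len({i, j, k}) == 3 and Z[i, k] > 0:
                    # a posteriori probability that a path i->k visits j
                    bet[j] += Z[i, j] * Z[j, k] / (Z[j, j] * Z[i, k])
    return bet

# Toy 4-node path graph with unit edge costs: middle nodes score highest.
inf = np.inf
C = np.array([[inf, 1, inf, inf],
              [1, inf, 1, inf],
              [inf, 1, inf, 1],
              [inf, inf, 1, inf]], dtype=float)
print(bop_betweenness(C).round(3))
```
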
stat.ML cs.LG | null | 1210.4347 | null | null | http://arxiv.org/pdf/1210.4347v1 | 2012-10-16T10:26:29Z | 2012-10-16T10:26:29Z | Hilbert Space Embedding for Dirichlet Process Mixtures | This paper proposes a Hilbert space embedding for Dirichlet Process mixture
models via the stick-breaking construction of Sethuraman. Although Bayesian
nonparametrics offers a powerful approach to construct a prior that avoids the
need to specify the model size/complexity explicitly, an exact inference is
often intractable. On the other hand, frequentist approaches such as kernel
machines, which suffer from the model selection/comparison problems, often
benefit from efficient learning algorithms. This paper discusses the
possibility to combine the best of both worlds by using the Dirichlet Process
mixture model as a case study.
| [
"Krikamol Muandet",
"['Krikamol Muandet']"
] |
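
A sketch combining the two ingredients named above: Sethuraman stick-breaking weights for a truncated DP, and a kernel mean embedding of the resulting discrete random measure. The Gaussian kernel and the truncation handling are illustrative choices, not the paper's full construction.

```python
import numpy as np

def dp_stick_weights(alpha, T, rng):
    """Truncated Sethuraman stick-breaking weights of a DP(alpha)."""
    v = rng.beta(1.0, alpha, size=T)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    w[-1] += 1.0 - w.sum()                    # fold remainder into last stick
    return w

def embed_random_measure(atoms, w, grid, h=0.5):
    """RKHS mean embedding mu(x) = sum_k w_k K(theta_k, x) of the discrete
    random measure G = sum_k w_k delta_{theta_k}, with a Gaussian kernel."""
    sq = (grid[:, None] - atoms[None, :]) ** 2
    return (np.exp(-sq / (2 * h ** 2)) * w[None, :]).sum(axis=1)

rng = np.random.default_rng(0)
T = 100
w = dp_stick_weights(alpha=2.0, T=T, rng=rng)
atoms = rng.normal(0.0, 1.0, size=T)          # draws from the base measure
grid = np.linspace(-3, 3, 7)
print(embed_random_measure(atoms, w, grid).round(3))
```
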
stat.ML cs.LG | null | 1210.4460 | null | null | http://arxiv.org/pdf/1210.4460v4 | 2014-05-11T11:47:07Z | 2012-10-16T15:54:36Z | Fast SVM-based Feature Elimination Utilizing Data Radius, Hard-Margin,
Soft-Margin | Margin maximization in the hard-margin sense, proposed as a feature
elimination criterion by the MFE-LO method, is combined here with data-radius
utilization to further lower generalization error, since several published
bounds and bound-related formulations for misclassification risk (or error)
involve the radius, e.g. the product of the squared radius and the squared norm
of the weight vector. We additionally propose novel feature elimination
criteria that, although formulated in the soft-margin sense, can also utilize
the data radius, building on previously published bound-related formulations
that approach the radius in the soft-margin setting; a guiding principle there
is that "finding a bound whose minima are in a region with small leave-one-out
values may be more important than its tightness". These criteria combine radius
utilization with QP1, a novel, computationally low-cost light-retraining
approach we devise for soft-margin classifiers; QP1 is the soft-margin
counterpart of the hard-margin LO. We correct an error in the MFE-LO
description, find that MFE-LO achieves the highest generalization accuracy
among the previously published margin-based feature elimination (MFE) methods,
discuss some limitations of MFE-LO, and show that our new methods outperform
MFE-LO, attaining lower test-set classification error rates. Our methods give
promising results both on several datasets in the `large features, few samples'
category and on datasets with low-to-intermediate numbers of features. In
particular, those of our methods that are tunable (i.e., that do not employ the
non-tunable LO approach) can in future work be tuned more aggressively than
here, potentially demonstrating even higher performance.
| [
"['Yaman Aksu']",
"Yaman Aksu"
] |
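
For flavor, a generic margin-based recursive elimination baseline in the SVM-RFE spirit, dropping at each step the feature that contributes least to the squared margin term ||w||^2; this is a standard baseline, not the paper's MFE-LO or QP1 procedures.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

def margin_based_elimination(X, y, n_keep, C=1.0):
    """Retrain a soft-margin linear SVM and repeatedly drop the feature with
    the smallest squared weight (smallest contribution to ||w||^2)."""
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        svm = LinearSVC(C=C, dual=False).fit(X[:, active], y)
        w = svm.coef_.ravel()
        drop = int(np.argmin(w ** 2))          # smallest margin contribution
        del active[drop]
    return active

X, y = make_classification(n_samples=200, n_features=30, n_informative=5,
                           random_state=0)
print(margin_based_elimination(X, y, n_keep=5))
```
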
cs.CV cs.LG cs.MM | null | 1210.4481 | null | null | http://arxiv.org/pdf/1210.4481v1 | 2012-10-08T06:35:04Z | 2012-10-08T06:35:04Z | Epitome for Automatic Image Colorization | Image colorization adds color to grayscale images. It not only increases the
visual appeal of grayscale images, but also enriches the information contained
in scientific images that lack color information. Most existing methods of
colorization require laborious user interaction for scribbles or image
segmentation. To eliminate the need for human labor, we develop an automatic
image colorization method using epitome. Built upon a generative graphical
model, epitome is a condensed image appearance and shape model which also
proves to be an effective summary of color information for the colorization
task. We train the epitome from the reference images and perform inference in
the epitome to colorize grayscale images, rendering better colorization results
than previous methods in our experiments.
| [
"['Yingzhen Yang' 'Xinqi Chu' 'Tian-Tsong Ng' 'Alex Yong-Sang Chia'\n 'Shuicheng Yan' 'Thomas S. Huang']",
"Yingzhen Yang, Xinqi Chu, Tian-Tsong Ng, Alex Yong-Sang Chia,\n Shuicheng Yan, Thomas S. Huang"
] |
cs.LG | null | 1210.4601 | null | null | http://arxiv.org/pdf/1210.4601v1 | 2012-10-17T00:22:31Z | 2012-10-17T00:22:31Z | A Direct Approach to Multi-class Boosting and Extensions | Boosting methods combine a set of moderately accurate weak learners to form a
highly accurate predictor. Despite the practical importance of multi-class
boosting, it has received far less attention than its binary counterpart. In
this work, we propose a fully-corrective multi-class boosting formulation which
directly solves the multi-class problem without dividing it into multiple
binary classification problems. In contrast, most previous multi-class boosting
algorithms decompose the multi-class problem into multiple binary boosting
problems. By explicitly deriving the Lagrange dual of the primal optimization
problem, we are able to construct a column generation-based fully-corrective
approach to boosting which directly optimizes multi-class classification
performance. The new approach not only updates all weak learners' coefficients
at every iteration, but does so in a manner flexible enough to accommodate
various loss functions and regularizations. For example, it enables us to
introduce structural sparsity through mixed-norm regularization to promote
group sparsity and feature sharing. Boosting with shared features is
particularly beneficial in complex prediction problems where features can be
expensive to compute. Our experiments on various data sets demonstrate that our
direct multi-class boosting generalizes as well as, or better than, a range of
competing multi-class boosting methods. The end result is a highly effective
and compact ensemble classifier which can be trained in a distributed fashion.
| [
"Chunhua Shen, Sakrapee Paisitkriangkrai, Anton van den Hengel",
"['Chunhua Shen' 'Sakrapee Paisitkriangkrai' 'Anton van den Hengel']"
] |
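
A sketch of the fully-corrective idea described above: each round adds a weak learner and then refits all weak-learner coefficients jointly. Here a multinomial logistic regression stands in for the paper's column-generation dual; the stumps and the error-weighting scheme are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

def fully_corrective_boost(X, y, rounds=10):
    """Each round: fit a depth-1 tree with extra weight on currently
    misclassified points, then jointly refit ALL coefficients over the
    stacked weak-learner outputs (the fully-corrective step)."""
    learners, H = [], None
    pred = np.zeros_like(y)
    for _ in range(rounds):
        w = (pred != y).astype(float) + 1e-3          # emphasize current errors
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        learners.append(stump)
        h = stump.predict_proba(X)
        H = h if H is None else np.hstack([H, h])
        corrector = LogisticRegression(max_iter=1000).fit(H, y)  # refit all
        pred = corrector.predict(H)
    return learners, corrector

X, y = load_iris(return_X_y=True)
learners, corrector = fully_corrective_boost(X, y)
H = np.hstack([s.predict_proba(X) for s in learners])
print("train accuracy:", round(float((corrector.predict(H) == y).mean()), 3))
```
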
cs.LG cs.GT cs.MA math.DS stat.ML | null | 1210.4657 | null | null | http://arxiv.org/pdf/1210.4657v1 | 2012-10-17T07:51:56Z | 2012-10-17T07:51:56Z | Mean-Field Learning: a Survey | In this paper we study iterative procedures for stationary equilibria in
games with a large number of players. Most learning algorithms for games with
continuous action spaces are limited to strict contraction best reply maps in
which the Banach-Picard iteration converges at a geometric rate.
When the best reply map is not a contraction, Ishikawa-based learning is
proposed. The algorithm is shown to behave well for Lipschitz continuous and
pseudo-contractive maps. However, the convergence rate is still unsatisfactory.
Several acceleration techniques are presented. We explain how cognitive users
can improve the convergence rate based only on a small number of measurements.
The methodology has desirable properties in mean-field games where the payoff
function depends only on a player's own action and the mean of the mean field
(first-moment mean-field games). A learning framework that exploits the
structure of such games, called mean-field learning, is proposed. The proposed mean-field
learning framework is suitable not only for games but also for non-convex
global optimization problems. Then, we introduce mean-field learning without
feedback and examine the convergence to equilibria in beauty contest games,
which have interesting applications in financial markets. Finally, we provide a
fully distributed mean-field learning algorithm and its speedup versions for
computing satisfactory solutions in wireless networks. We illustrate the
convergence rate improvement
with numerical examples.
| [
"['Hamidou Tembine' 'Raul Tempone' 'Pedro Vilanova']",
"Hamidou Tembine, Raul Tempone and Pedro Vilanova"
] |
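
A worked contrast between the two iterations discussed above, on a toy best-reply map that is nonexpansive but not a contraction, so the Banach-Picard iteration oscillates while Ishikawa averaging converges; the map and step sizes are illustrative.

```python
def picard(f, x0, n=50):
    """Banach-Picard iteration x_{t+1} = f(x_t); geometric convergence
    when f is a strict contraction, possible oscillation otherwise."""
    x = x0
    for _ in range(n):
        x = f(x)
    return x

def ishikawa(f, x0, n=50, a=0.5, b=0.5):
    """Ishikawa iteration, which can still converge for Lipschitz
    pseudo-contractive best-reply maps where Picard fails:
      y_t     = (1 - b) x_t + b f(x_t)
      x_{t+1} = (1 - a) x_t + a f(y_t)"""
    x = x0
    for _ in range(n):
        y = (1 - b) * x + b * f(x)
        x = (1 - a) * x + a * f(y)
    return x

# Toy best-reply map with unique fixed point x = 0.75; |f'| = 1, so it is
# nonexpansive but not a contraction.
f = lambda x: 1.5 - x
print(round(picard(f, 0.2), 4), round(ishikawa(f, 0.2), 4))
```
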
q-bio.NC cs.IT cs.LG math.IT | null | 1210.4695 | null | null | http://arxiv.org/pdf/1210.4695v1 | 2012-10-17T11:12:02Z | 2012-10-17T11:12:02Z | Regulating the information in spikes: a useful bias | The bias/variance tradeoff is fundamental to learning: increasing a model's
complexity can improve its fit on training data, but potentially worsens
performance on future samples. Remarkably, however, the human brain
effortlessly handles a wide range of complex pattern recognition tasks. On the
basis of these conflicting observations, it has been argued that useful biases
in the form of "generic mechanisms for representation" must be hardwired into
cortex (Geman et al.).
This note describes a useful bias that encourages cooperative learning which
is both biologically plausible and rigorously justified.
| [
"['David Balduzzi']",
"David Balduzzi"
] |
stat.ML cs.LG | null | 1210.4792 | null | null | http://arxiv.org/pdf/1210.4792v2 | 2013-03-08T02:19:07Z | 2012-10-17T16:57:48Z | Scalable Matrix-valued Kernel Learning for High-dimensional Nonlinear
Multivariate Regression and Granger Causality | We propose a general matrix-valued multiple kernel learning framework for
high-dimensional nonlinear multivariate regression problems. This framework
allows a broad class of mixed norm regularizers, including those that induce
sparsity, to be imposed on a dictionary of vector-valued Reproducing Kernel
Hilbert Spaces. We develop a highly scalable and eigendecomposition-free
algorithm that orchestrates two inexact solvers for simultaneously learning
both the input and output components of separable matrix-valued kernels. As a
key application enabled by our framework, we show how high-dimensional causal
inference tasks can be naturally cast as sparse function estimation problems,
leading to novel nonlinear extensions of a class of Graphical Granger Causality
techniques. Our algorithmic developments and extensive empirical studies are
complemented by theoretical analyses in terms of Rademacher generalization
bounds.
| [
"Vikas Sindhwani and Minh Ha Quang and Aurelie C. Lozano",
"['Vikas Sindhwani' 'Minh Ha Quang' 'Aurelie C. Lozano']"
] |
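
A naive baseline sketch of regression with a separable matrix-valued kernel K(x, x') = k(x, x') B, where B couples the outputs, solving the Kronecker-structured ridge system directly; the paper's scalable, eigendecomposition-free algorithm, which also learns k and B, is precisely what avoids this brute-force solve. All names and parameters below are illustrative.

```python
import numpy as np

def separable_okr_fit(X, Y, B, lam=0.1, gamma=1.0):
    """Vector-valued ridge regression with K(x, x') = k(x, x') * B,
    solved via the (n*p) x (n*p) Kronecker-structured linear system."""
    n, p = Y.shape
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF Gram
    A = np.kron(K, B) + lam * np.eye(n * p)
    c = np.linalg.solve(A, Y.reshape(-1))      # vec of dual coefficients
    return c.reshape(n, p)

def separable_okr_predict(Xtr, Xte, C, B, gamma=1.0):
    k = np.exp(-gamma * ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1))
    return k @ C @ B                           # f(x) = sum_i k(x, x_i) * B c_i

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
Y = np.hstack([np.sin(X[:, :1]), np.cos(X[:, :1])])   # two related outputs
B = np.array([[1.0, 0.3], [0.3, 1.0]])                # assumed output coupling
C = separable_okr_fit(X, Y, B)
print(np.abs(separable_okr_predict(X, X, C, B) - Y).mean().round(3))
```
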
cs.LG stat.ML | null | 1210.4839 | null | null | http://arxiv.org/pdf/1210.4839v1 | 2012-10-16T17:32:09Z | 2012-10-16T17:32:09Z | Leveraging Side Observations in Stochastic Bandits | This paper considers stochastic bandits with side observations, a model that
accounts for both the exploration/exploitation dilemma and relationships
between arms. In this setting, after pulling an arm i, the decision maker also
observes the rewards for some other actions related to i. We will see that this
model is suited to content recommendation in social networks, where users'
reactions may be endorsed or not by their friends. We provide efficient
algorithms based on upper confidence bounds (UCBs) to leverage this additional
information and derive new bounds improving on standard regret guarantees. We
also evaluate these policies in the context of movie recommendation in social
networks: experiments on real datasets show substantial learning rate speedups
ranging from 2.2x to 14x on dense networks.
| [
"Stephane Caron, Branislav Kveton, Marc Lelarge, Smriti Bhagat",
"['Stephane Caron' 'Branislav Kveton' 'Marc Lelarge' 'Smriti Bhagat']"
] |
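
A UCB1-style sketch of the side-observation mechanism described above: pulling arm i earns its reward and additionally reveals, without earning, the rewards of arms related to i, so their estimates tighten for free. The index form and constants are textbook UCB1, not the paper's refined policies or bounds.

```python
import numpy as np

def ucb_side_observations(means, neighbors, T=5000, rng=None):
    """UCB1 over Bernoulli arms where each pull of arm i also updates the
    statistics of its neighbors via free side observations."""
    rng = np.random.default_rng() if rng is None else rng
    K = len(means)
    n = np.zeros(K)                            # observation counts
    s = np.zeros(K)                            # observed reward sums
    regret = 0.0
    for t in range(1, T + 1):
        mean = s / np.maximum(n, 1)
        bonus = np.sqrt(2.0 * np.log(t) / np.maximum(n, 1))
        ucb = np.where(n > 0, mean + bonus, np.inf)
        i = int(np.argmax(ucb))
        regret += max(means) - means[i]
        for j in [i] + list(neighbors[i]):     # i itself plus free observations
            s[j] += float(rng.random() < means[j])   # Bernoulli reward
            n[j] += 1
    return regret

neighbors = {0: [1], 1: [0, 2], 2: [1]}        # a small observation graph
print(ucb_side_observations([0.3, 0.5, 0.7], neighbors))
```
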
cs.AI cs.LG stat.ML | null | 1210.4841 | null | null | http://arxiv.org/pdf/1210.4841v1 | 2012-10-16T17:32:34Z | 2012-10-16T17:32:34Z | An Efficient Message-Passing Algorithm for the M-Best MAP Problem | Much effort has been directed at algorithms for obtaining the highest
probability configuration in a probabilistic random field model known as the
maximum a posteriori (MAP) inference problem. In many situations, one could
benefit from having not just a single solution, but the top M most probable
solutions, known as the M-Best MAP problem. In this paper, we propose an
efficient message-passing based algorithm for solving the M-Best MAP problem.
Specifically, our algorithm solves the recently proposed Linear Programming
(LP) formulation of M-Best MAP [7], while being orders of magnitude faster than
a generic LP-solver. Our approach relies on studying a particular partial
Lagrangian relaxation of the M-Best MAP LP which exposes a natural
combinatorial structure of the problem that we exploit.
| [
"Dhruv Batra",
"['Dhruv Batra']"
] |
cs.GT cs.LG | null | 1210.4843 | null | null | http://arxiv.org/pdf/1210.4843v1 | 2012-10-16T17:34:04Z | 2012-10-16T17:34:04Z | Deterministic MDPs with Adversarial Rewards and Bandit Feedback | We consider a Markov decision process with deterministic state transition
dynamics, adversarially generated rewards that change arbitrarily from round to
round, and a bandit feedback model in which the decision maker only observes
the rewards it receives. In this setting, we present a novel and efficient
online decision making algorithm named MarcoPolo. Under mild assumptions on the
structure of the transition dynamics, we prove that MarcoPolo enjoys a regret
of O(T^(3/4)sqrt(log(T))) against the best deterministic policy in hindsight.
Notably, our analysis does not rely on the stringent unichain assumption,
which dominates much of the previous work on this topic.
| [
"Raman Arora, Ofer Dekel, Ambuj Tewari",
"['Raman Arora' 'Ofer Dekel' 'Ambuj Tewari']"
] |
cs.LG stat.ML | null | 1210.4846 | null | null | http://arxiv.org/pdf/1210.4846v1 | 2012-10-16T17:34:45Z | 2012-10-16T17:34:45Z | Variational Dual-Tree Framework for Large-Scale Transition Matrix
Approximation | In recent years, non-parametric methods utilizing random walks on graphs have
been used to solve a wide range of machine learning problems, but in their
simplest form they do not scale well due to their quadratic complexity. In this
paper, a new dual-tree based variational approach for approximating the
transition matrix and efficiently performing the random walk is proposed. The
approach exploits a connection between kernel density estimation, mixture
modeling, and random walk on graphs in an optimization of the transition matrix
for the data graph that ties together edge transition probabilities that are
similar. Compared to the de facto standard approximation method based on
k-nearest neighbors, we demonstrate orders-of-magnitude speedups without
sacrificing accuracy for Label Propagation tasks on benchmark data sets in
semi-supervised learning.
| [
"Saeed Amizadeh, Bo Thiesson, Milos Hauskrecht",
"['Saeed Amizadeh' 'Bo Thiesson' 'Milos Hauskrecht']"
] |
cs.LG cs.IR stat.ML | null | 1210.4850 | null | null | http://arxiv.org/pdf/1210.4850v1 | 2012-10-16T17:35:39Z | 2012-10-16T17:35:39Z | Markov Determinantal Point Processes | A determinantal point process (DPP) is a random process useful for modeling
the combinatorial problem of subset selection. In particular, DPPs encourage a
random subset Y to contain a diverse set of items selected from a base set Y.
For example, we might use a DPP to display a set of news headlines that are
relevant to a user's interests while covering a variety of topics. Suppose,
however, that we are asked to sequentially select multiple diverse sets of
items, for example, displaying new headlines day-by-day. We might want these
sets to be diverse not just individually but also through time, offering
headlines today that are unlike the ones shown yesterday. In this paper, we
construct a Markov DPP (M-DPP) that models a sequence of random sets {Yt}. The
proposed M-DPP defines a stationary process that maintains DPP margins.
Crucially, the induced union process Zt = Yt ∪ Yt-1 is also marginally
DPP-distributed. Jointly, these properties imply that the sequence of random
sets are encouraged to be diverse both at a given time step as well as across
time steps. We describe an exact, efficient sampling procedure, and a method
for incrementally learning a quality measure over items in the base set Y based
on external preferences. We apply the M-DPP to the task of sequentially
displaying diverse and relevant news articles to a user with topic preferences.
| [
"['Raja Hafiz Affandi' 'Alex Kulesza' 'Emily B. Fox']",
"Raja Hafiz Affandi, Alex Kulesza, Emily B. Fox"
] |
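
For background, a standard exact sampler for a finite L-ensemble DPP (the spectral algorithm of Hough et al.); the M-DPP above chains such diverse draws through time, which this standalone sketch does not attempt.

```python
import numpy as np

def sample_dpp(L, rng=None):
    """Exact sampling from an L-ensemble DPP: select eigenvectors with
    probability lambda/(1+lambda), then pick items one at a time with
    probability proportional to squared row norms, conditioning as we go."""
    rng = np.random.default_rng() if rng is None else rng
    vals, vecs = np.linalg.eigh(L)
    keep = rng.random(len(vals)) < vals / (vals + 1.0)
    V = vecs[:, keep]
    items = []
    while V.shape[1] > 0:
        p = np.sum(V ** 2, axis=1)
        p /= p.sum()
        i = int(rng.choice(len(p), p=p))       # P(i) proportional to row norm^2
        items.append(i)
        # Condition on i: zero out row i using one column, drop that column,
        # then re-orthonormalize the remaining columns.
        j = int(np.argmax(np.abs(V[i, :])))
        V = V - np.outer(V[:, j] / V[i, j], V[i, :])
        V = np.delete(V, j, axis=1)
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)
    return sorted(items)

rng = np.random.default_rng(0)
pts = rng.uniform(size=(30, 2))
L = np.exp(-((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1) / 0.05)
print(sample_dpp(L, rng))                      # a diverse subset of indices
```
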
cs.LG stat.ML | null | 1210.4851 | null | null | http://arxiv.org/pdf/1210.4851v1 | 2012-10-16T17:35:52Z | 2012-10-16T17:35:52Z | Learning to Rank With Bregman Divergences and Monotone Retargeting | This paper introduces a novel approach for learning to rank (LETOR) based on
the notion of monotone retargeting. It involves minimizing a divergence between
all monotonically increasing transformations of the training scores and a
parameterized prediction function. The minimization is both over the
transformations as well as over the parameters. It is applied to Bregman
divergences, a large class of "distance like" functions that were recently
shown to be the unique class that is statistically consistent with the
normalized discounted cumulative gain (NDCG) criterion [19]. The algorithm uses
alternating projection style updates, in which one set of simultaneous
projections can be computed independent of the Bregman divergence and the other
reduces to parameter estimation of a generalized linear model. This results in
an easily implemented, efficiently parallelizable algorithm for the LETOR task
that enjoys global optimum guarantees under mild conditions. We present
empirical results on benchmark datasets showing that this approach can
outperform state-of-the-art NDCG-consistent techniques.
| [
"['Sreangsu Acharyya' 'Oluwasanmi Koyejo' 'Joydeep Ghosh']",
"Sreangsu Acharyya, Oluwasanmi Koyejo, Joydeep Ghosh"
] |
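
An alternating sketch of the monotone retargeting loop under squared loss (one member of the Bregman family): fit the model to the current targets, then replace the targets with the best monotone-in-the-original-scores transform of the predictions, via isotonic regression. The normalization that prevents a degenerate constant solution is an assumption standing in for the paper's constraint set.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LinearRegression

def monotone_retarget(X, scores, iters=20):
    """Alternate between (1) a parameter step fitting a linear model to the
    current targets and (2) a retargeting step projecting the predictions
    onto transforms monotone in the original scores."""
    order = np.argsort(scores)
    t = scores.astype(float).copy()
    model = LinearRegression()
    for _ in range(iters):
        model.fit(X, t)                              # parameter step
        pred = model.predict(X)
        t = np.empty_like(pred)                      # retargeting step
        t[order] = IsotonicRegression().fit_transform(
            np.arange(len(order)), pred[order])
        t = (t - t.mean()) / (t.std() + 1e-12)       # avoid collapse
    return model

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
scores = np.exp(X @ np.array([1.0, -0.5, 0.2, 0.0]))   # monotone in a latent score
model = monotone_retarget(X, scores)
frac = np.mean(np.diff(model.predict(X)[np.argsort(scores)]) > 0)
print(round(float(frac), 2))   # fraction of correctly ordered neighbor pairs
```
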
cs.LG cs.CV stat.ML | null | 1210.4855 | null | null | http://arxiv.org/pdf/1210.4855v1 | 2012-10-16T17:37:29Z | 2012-10-16T17:37:29Z | A Slice Sampler for Restricted Hierarchical Beta Process with
Applications to Shared Subspace Learning | The hierarchical beta process has found interesting applications in recent years.
In this paper we present a modified hierarchical beta process prior with
applications to hierarchical modeling of multiple data sources. The novel use
of the prior over a hierarchical factor model allows factors to be shared
across different sources. We derive a slice sampler for this model, enabling
tractable inference even when the likelihood and the prior over parameters are
non-conjugate. This allows the application of the model in much wider contexts
without restrictions. We present two different data generative models a linear
GaussianGaussian model for real valued data and a linear Poisson-gamma model
for count data. Encouraging transfer learning results are shown for two real
world applications text modeling and content based image retrieval.
| [
"['Sunil Kumar Gupta' 'Dinh Q. Phung' 'Svetha Venkatesh']",
"Sunil Kumar Gupta, Dinh Q. Phung, Svetha Venkatesh"
] |
cs.LG stat.ML | null | 1210.4856 | null | null | http://arxiv.org/pdf/1210.4856v1 | 2012-10-16T17:37:41Z | 2012-10-16T17:37:41Z | Exploiting compositionality to explore a large space of model structures | The recent proliferation of richly structured probabilistic models raises the
question of how to automatically determine an appropriate model for a dataset.
We investigate this question for a space of matrix decomposition models which
can express a variety of widely used models from unsupervised learning. To
enable model selection, we organize these models into a context-free grammar
which generates a wide variety of structures through the compositional
application of a few simple rules. We use our grammar to generically and
efficiently infer latent components and estimate predictive likelihood for
nearly 2500 structures using a small toolbox of reusable algorithms. Using a
greedy search over our grammar, we automatically choose the decomposition
structure from raw data by evaluating only a small fraction of all models. The
proposed method typically finds the correct structure for synthetic data and
backs off gracefully to simpler models under heavy noise. It learns sensible
structures for datasets as diverse as image patches, motion capture, 20
Questions, and U.S. Senate votes, all using exactly the same code.
| [
"['Roger Grosse' 'Ruslan R Salakhutdinov' 'William T. Freeman'\n 'Joshua B. Tenenbaum']",
"Roger Grosse, Ruslan R Salakhutdinov, William T. Freeman, Joshua B.\n Tenenbaum"
] |
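
A toy illustration of generating model structures by the compositional application of a few rewrite rules, as described above; the production rules below are assumptions loosely in the paper's spirit, not its actual grammar.

```python
# Hypothetical productions over matrix-decomposition structures: any
# component G can be refined into a low-rank, clustered, or nonlinear form.
RULES = ["MG + G", "GM + G", "exp(G) * G"]

def derivations(start="G", steps=2):
    """Enumerate structures reachable from `start` by rewriting one
    occurrence of the nonterminal G per step."""
    frontier = {start}
    for _ in range(steps):
        new = set()
        for s in frontier:
            i = s.find("G")
            if i < 0:
                continue
            for rhs in RULES:
                new.add(s[:i] + "(" + rhs + ")" + s[i + 1:])
        frontier |= new
    return frontier

structures = derivations(steps=2)
print(len(structures))
for s in sorted(structures)[:5]:
    print(s)
```
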
cs.LG cs.GT stat.ML | null | 1210.4859 | null | null | http://arxiv.org/pdf/1210.4859v1 | 2012-10-16T17:38:13Z | 2012-10-16T17:38:13Z | Mechanism Design for Cost Optimal PAC Learning in the Presence of
Strategic Noisy Annotators | We consider the problem of Probably Approximately Correct (PAC) learning of a
binary classifier from noisy labeled examples acquired from multiple annotators
(each characterized by a respective classification noise rate). First, we
consider the complete information scenario, where the learner knows the noise
rates of all the annotators. For this scenario, we derive a sample complexity
bound for the Minimum Disagreement Algorithm (MDA) on the number of labeled
examples to be obtained from each annotator. Next, we consider the incomplete
information scenario, where each annotator is strategic and holds the
respective noise rate as private information. For this scenario, we design a
cost optimal procurement auction mechanism along the lines of Myerson's optimal
auction design framework in a non-trivial manner. This mechanism satisfies the
incentive compatibility property, thereby enabling the learner to elicit the
true noise rates of all the annotators.
| [
"Dinesh Garg, Sourangshu Bhattacharya, S. Sundararajan, Shirish Shevade",
"['Dinesh Garg' 'Sourangshu Bhattacharya' 'S. Sundararajan'\n 'Shirish Shevade']"
] |