text | label
---|---|
Despite recent attempts to solve the person re-identification problem, it
remains a challenging task, since a person's appearance can vary significantly
under large changes in view angle, human pose, and illumination.
In this paper, we propose a novel approach based on a gradient-based
attention mechanism in a deep convolutional neural network for solving the person
re-identification problem. Our model learns to focus selectively on the parts of
the input image to which the network's output is most sensitive,
processing them at high resolution while perceiving the surrounding image at
low resolution. Extensive comparative evaluations demonstrate that the proposed
method outperforms state-of-the-art approaches on the challenging CUHK01,
CUHK03, and Market-1501 datasets. | [
"cs.CV"
]
|
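A minimal sketch of the gradient-based attention idea, in PyTorch: rank pixels by how sensitive the network's output is to them, then take a high-resolution crop around the peak. The embedding model, the scalar proxy (embedding norm), and the crop size are illustrative assumptions, not the paper's architecture.

```python
import torch

def gradient_attention_map(model, image):
    """Saliency map: |d output / d input|, summed over colour channels.

    image: (1, 3, H, W) tensor; model is assumed to return an embedding.
    """
    image = image.detach().clone().requires_grad_(True)
    embedding = model(image)
    # Use the embedding norm as a scalar proxy for output sensitivity.
    embedding.norm().backward()
    return image.grad.abs().sum(dim=1, keepdim=True)  # (1, 1, H, W)

def crop_most_salient(image, saliency, crop=64):
    """Return a high-resolution crop centred on the saliency peak."""
    _, _, h, w = image.shape
    idx = saliency.view(-1).argmax().item()
    cy, cx = divmod(idx, w)
    y0 = min(max(cy - crop // 2, 0), h - crop)
    x0 = min(max(cx - crop // 2, 0), w - crop)
    return image[:, :, y0:y0 + crop, x0:x0 + crop]
```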
The difficulty of annotating training data is a major obstacle to using CNNs
for low-level tasks in video. Synthetic data often does not generalize to real
videos, while unsupervised methods require heuristic losses. Proxy tasks can
overcome these issues: one starts by training a network for a task for which
annotation is easier, or which can be trained without supervision. The trained network
is then fine-tuned for the original task using small amounts of ground truth
data. Here, we investigate frame interpolation as a proxy task for optical
flow. Using real movies, we train a CNN unsupervised for temporal
interpolation. Such a network implicitly estimates motion, but cannot handle
untextured regions. By fine-tuning on small amounts of ground truth flow, the
network can learn to fill in homogeneous regions and compute full optical flow
fields. Using this unsupervised pre-training, our network outperforms similar
architectures that were trained supervised using synthetic optical flow. | [
"cs.CV"
]
|
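A rough sketch of the proxy-task training signal described above, assuming a toy CNN interpolator; the paper's actual network and loss details may differ.

```python
import torch
import torch.nn as nn

class InterpolationNet(nn.Module):
    """Toy CNN predicting the middle frame from its two neighbours
    (a stand-in for the paper's unspecified architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, frame_prev, frame_next):
        return self.net(torch.cat([frame_prev, frame_next], dim=1))

def interpolation_loss(model, frames):
    """Unsupervised proxy loss: reconstruct frame t from frames t-1, t+1.

    frames: tuple of three (N, 3, H, W) tensors taken from raw video."""
    f_prev, f_mid, f_next = frames
    return (model(f_prev, f_next) - f_mid).abs().mean()  # L1 reconstruction
```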
We present Accel, a novel semantic video segmentation system that achieves
high accuracy at low inference cost by combining the predictions of two network
branches: (1) a reference branch that extracts high-detail features on a
reference keyframe, and warps these features forward using frame-to-frame
optical flow estimates, and (2) an update branch that computes features of
adjustable quality on the current frame, performing a temporal update at each
video frame. The modularity of the update branch, where feature subnetworks of
varying layer depth can be inserted (e.g. ResNet-18 to ResNet-101), enables
operation over a new, state-of-the-art accuracy-throughput trade-off spectrum.
Over this curve, Accel models achieve both higher accuracy and faster inference
times than the closest comparable single-frame segmentation networks. In
general, Accel significantly outperforms previous work on efficient semantic
video segmentation, correcting warping-related error that compounds on datasets
with complex dynamics. Accel is end-to-end trainable and highly modular: the
reference network, the optical flow network, and the update network can each be
selected independently, depending on application requirements, and then jointly
fine-tuned. The result is a robust, general system for fast, high-accuracy
semantic segmentation on video. | [
"cs.CV",
"cs.LG"
]
|
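The reference branch's warping step can be sketched with a standard backward warp; the fusion shown is schematic only (`score_net` is a hypothetical per-pixel scoring module, not Accel's published fusion layer).

```python
import torch
import torch.nn.functional as F

def warp_features(features, flow):
    """Backward-warp keyframe features with a dense optical flow field.

    features: (N, C, H, W); flow: (N, 2, H, W) in pixels, giving for each
    current-frame location its source location in the keyframe.
    """
    n, _, h, w = features.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack([xs, ys], dim=0).to(features)      # (2, H, W)
    src = base.unsqueeze(0) + flow                        # (N, 2, H, W)
    # Normalise coordinates to [-1, 1] as grid_sample expects.
    gx = 2.0 * src[:, 0] / (w - 1) - 1.0
    gy = 2.0 * src[:, 1] / (h - 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1)                  # (N, H, W, 2)
    return F.grid_sample(features, grid, align_corners=True)

def fuse_branches(warped_ref, update_feats, score_net):
    """Schematic fusion: per-pixel gate between the warped reference
    features and the update branch's features."""
    gate = torch.sigmoid(score_net(torch.cat([warped_ref, update_feats], 1)))
    return gate * warped_ref + (1.0 - gate) * update_feats
```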
We propose three improvements to vision transformers (ViT) to reduce the
number of trainable parameters without compromising classification accuracy. We
address two shortcomings of the early ViT architectures -- quadratic bottleneck
of the attention mechanism and the lack of an inductive bias in their
architectures that rely on unrolling the two-dimensional image structure.
Linear attention mechanisms overcome the bottleneck of quadratic complexity,
which restricts the application of transformer models to vision tasks. We modify
the ViT architecture to work on longer sequence data by replacing the quadratic
attention with efficient transformers of linear complexity, such as Performer,
Linformer and Nystr\"omformer, creating Vision X-formers (ViX). We show
that all three versions of ViX may be more accurate than ViT for image
classification while using far fewer parameters and computational resources. We
also compare their performance with FNet and multi-layer perceptron (MLP)
mixer. We further show that replacing the initial linear embedding layer by
convolutional layers in ViX further increases their performance. Furthermore,
our tests on recent vision transformer models, such as LeViT, Convolutional
vision Transformer (CvT), Compact Convolutional Transformer (CCT) and
Pooling-based Vision Transformer (PiT) show that replacing the attention with
Nystr\"omformer or Performer saves GPU usage and memory without deteriorating
the classification accuracy. We also show that replacing the standard learnable
1D position embeddings in ViT with Rotary Position Embedding (RoPE) gives
further improvements in accuracy. Incorporating these changes can democratize
transformers by making them accessible to those with limited data and computing
resources. | [
"cs.CV",
"cs.AI",
"cs.CC",
"cs.LG",
"I.4.0; I.4.1; I.4.7; I.4.8; I.4.9; I.4.10; I.2.10; I.5.1; I.5.2;\n I.5.4"
]
|
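For illustration, kernelized linear attention in the style of the "linear transformer" (elu(x) + 1 feature map) shows how the quadratic softmax bottleneck is avoided; Performer, Linformer and Nystr\"omformer use different approximations but target the same linear complexity, so this is a sketch of the idea rather than the specific mechanisms benchmarked above.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Kernelized attention with cost linear in sequence length.

    q, k, v: (batch, heads, seq, dim). Uses the elu(x) + 1 feature map of
    the "linear transformer"; other efficient transformers substitute
    different feature maps or low-rank approximations here.
    """
    q = F.elu(q) + 1.0
    k = F.elu(k) + 1.0
    kv = torch.einsum("bhnd,bhne->bhde", k, v)   # key-value summary, O(N)
    z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + eps)
    return torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)
```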
We show that viewing graphs as sets of node features and incorporating
structural and positional information into a transformer architecture can
outperform representations learned with classical graph neural networks
(GNNs). Our model, GraphiT, encodes such information by (i) leveraging relative
positional encoding strategies in self-attention scores based on positive
definite kernels on graphs, and (ii) enumerating and encoding local
sub-structures such as paths of short length. We thoroughly evaluate these two
ideas on many classification and regression tasks, demonstrating the
effectiveness of each of them independently, as well as their combination. In
addition to performing well on standard benchmarks, our model also admits
natural visualization mechanisms for interpreting graph motifs explaining the
predictions, making it a potentially strong candidate for scientific
applications where interpretation is important. Code available at
https://github.com/inria-thoth/GraphiT. | [
"cs.LG"
]
|
Gradient-based algorithms are effective for many machine learning tasks, but
despite ample recent effort and some progress, it often remains unclear why
they work in practice in optimising high-dimensional non-convex functions and
why they find good minima instead of being trapped in spurious ones.
Here we present a quantitative theory explaining this behaviour in a spiked
matrix-tensor model.
Our framework is based on the Kac-Rice analysis of stationary points and a
closed-form analysis of gradient-flow originating from statistical physics. We
show that there is a well-defined region of parameters where the gradient-flow
algorithm finds a good global minimum despite the presence of exponentially
many spurious local minima.
We show that this is achieved by surfing on saddles that have a strong negative
direction towards the global minima, a phenomenon that is connected to a
BBP-type threshold in the Hessian describing the critical points of the
landscapes. | [
"cs.LG",
"cond-mat.dis-nn",
"math.ST",
"stat.ML",
"stat.TH"
]
|
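For context, a common parameterization of the spiked matrix-tensor model observes a planted signal $x^* \in \mathbb{R}^N$ with $\|x^*\|^2 = N$ through a noisy rank-one matrix and a noisy order-$p$ tensor (scaling conventions vary across papers; this is one standard form, not necessarily the exact one used above):

$$Y_{ij} = \frac{x^*_i x^*_j}{\sqrt{N}} + \sqrt{\Delta_2}\,\xi_{ij}, \qquad T_{i_1 \dots i_p} = \frac{\sqrt{(p-1)!}}{N^{(p-1)/2}}\, x^*_{i_1} \cdots x^*_{i_p} + \sqrt{\Delta_p}\,\xi_{i_1 \dots i_p},$$

with i.i.d. Gaussian noise $\xi$; gradient flow is then analyzed on the landscape of recovering $x^*$ from $(Y, T)$.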
We propose a new learning paradigm called Deep Memory. It has the potential
to completely revolutionize the Machine Learning field. Surprisingly, this
paradigm has not been reinvented yet, unlike Deep Learning. At the core of this
approach is the \textit{Learning By Heart} principle, well studied in primary
schools all over the world.
Inspired by poem recitation, or by $\pi$ decimal memorization, we propose a
concrete algorithm that mimics human behavior. We implement this paradigm on
the task of generative modeling, and apply to images, natural language and even
the $\pi$ decimals as long as one can print them as text. The proposed
algorithm even generated this paper, in a one-shot learning setting. In
carefully designed experiments, we show that the generated samples are
indistinguishable from the training examples, as measured by any statistical
tests or metrics. | [
"cs.LG"
]
|
Graph Attention Networks (GATs) are the state-of-the-art neural architecture
for representation learning with graphs. GATs learn attention functions that
assign weights to nodes so that different nodes have different influences in
the feature aggregation steps. In practice, however, induced attention
functions are prone to over-fitting due to the increasing number of parameters
and the lack of direct supervision on attention weights. GATs also suffer from
over-smoothing at the decision boundary of nodes. Here we propose a framework
to address their weaknesses via margin-based constraints on attention during
training. We first theoretically demonstrate the over-smoothing behavior of
GATs and then develop an approach that constrains the attention weights
according to the class boundary and feature aggregation pattern. Furthermore,
to alleviate the over-fitting problem, we propose additional constraints on the
graph structure. Extensive experiments and ablation studies on common benchmark
datasets demonstrate the effectiveness of our method, which leads to
significant improvements over the previous state-of-the-art graph attention
methods on all datasets. | [
"cs.LG",
"stat.ML"
]
|
Differentially private stochastic gradient descent (DPSGD) is a variation of
stochastic gradient descent based on the Differential Privacy (DP) paradigm
which can mitigate privacy threats arising from the presence of sensitive
information in training data. One major drawback of training deep neural
networks with DPSGD is a reduction in the model's accuracy. In this paper, we
propose an alternative method for preserving data privacy based on introducing
noise through learnable probability distributions, which leads to a significant
improvement in the utility of the resulting private models. We also demonstrate
that normalization layers have a large beneficial impact on the performance of
deep neural networks with noisy parameters. In particular, we show that
contrary to general belief, a large amount of random noise can be added to the
weights of neural networks without harming the performance, once the networks
are augmented with normalization layers. We hypothesize that this robustness is
a consequence of the scale invariance property of normalization operators.
Building on these observations, we propose a new algorithmic technique for
training deep neural networks under very low privacy budgets by sampling
weights from Gaussian distributions and utilizing batch or layer normalization
techniques to prevent performance degradation. Our method outperforms previous
approaches, including DPSGD, by a substantial margin on a comprehensive set of
experiments on Computer Vision and Natural Language Processing tasks. In
particular, we obtain a 20 percent accuracy improvement over DPSGD on the MNIST
and CIFAR10 datasets with DP privacy budgets of $\varepsilon = 0.05$ and
$\varepsilon = 2.0$, respectively. Our code is available online:
https://github.com/uds-lsv/SIDP. | [
"cs.LG",
"cs.CR",
"stat.ML"
]
|
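A minimal sketch of the noisy-weights idea: a linear layer samples its weights from a learnable Gaussian via the reparameterization trick and is followed by a normalization layer, whose scale invariance is hypothesized above to absorb the weight noise. The paper's privacy accounting is omitted and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Linear layer whose weights are drawn from a learnable Gaussian."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(d_out, d_in) * 0.02)
        self.log_sigma = nn.Parameter(torch.full((d_out, d_in), -3.0))

    def forward(self, x):
        # Reparameterization: w = mu + sigma * eps keeps sampling differentiable.
        w = self.mu + self.log_sigma.exp() * torch.randn_like(self.mu)
        return x @ w.t()

# Normalization after the noisy layer: its scale invariance is what is
# hypothesized to make the network robust to large weight noise.
block = nn.Sequential(NoisyLinear(128, 256), nn.LayerNorm(256), nn.ReLU())
```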
Dynamical systems comprised of autonomous agents arise in many relevant
problems such as multi-agent robotics, smart grids, or smart cities.
Controlling these systems is of paramount importance to guarantee a successful
deployment. Optimal centralized controllers are readily available but face
limitations in terms of scalability and practical implementation. Optimal
decentralized controllers, on the other hand, are difficult to find. In this
paper, we propose a framework using graph neural networks (GNNs) to learn
decentralized controllers from data. While GNNs are naturally distributed
architectures, making them perfectly suited for the task, we adapt them to
handle delayed communications as well. Furthermore, they are equivariant and
stable, leading to good scalability and transferability properties. The problem
of flocking is explored to illustrate the potential of GNNs in learning
decentralized controllers. | [
"cs.LG",
"cs.SY",
"eess.SY",
"stat.ML"
]
|
Recently, prediction markets have shown considerable promise for developing
flexible mechanisms for machine learning. In this paper, agents with isoelastic
utilities are considered. It is shown that the costs associated with
homogeneous markets of agents with isoelastic utilities produce equilibrium
prices corresponding to alpha-mixtures, with a particular form of mixing
component relating to each agent's wealth. We also demonstrate that wealth
accumulation for logarithmic and other isoelastic agents (through payoffs on
prediction of training targets) can implement both Bayesian model updates and
mixture weight updates by imposing different market payoff structures. An
iterative algorithm is given for market equilibrium computation. We demonstrate
that inhomogeneous markets of agents with isoelastic utilities outperform
state-of-the-art aggregate classifiers such as random forests, as well as single
classifiers (neural networks, decision trees) on a number of machine learning
benchmarks, and show that isoelastic combination methods are generally better
than their logarithmic counterparts. | [
"cs.LG",
"cs.GT",
"stat.ML"
]
|
Convolutional Neural Networks (CNNs) are successfully used for important
automotive visual perception tasks, including object recognition, motion and
depth estimation, visual SLAM, etc. However, these tasks are typically
independently explored and modeled. In this paper, we propose a joint
multi-task network design for learning several tasks simultaneously. Our main
motivation is the computational efficiency achieved by sharing the expensive
initial convolutional layers between all tasks. Indeed, the main bottleneck in
automated driving systems is the limited processing power available on
deployment hardware. There is also some evidence of other benefits, such as
improved accuracy for some tasks and reduced development effort. A joint model
also offers scalability: more tasks can be added by leveraging existing features,
achieving better generalization. We survey various CNN-based solutions for visual perception
tasks in automated driving. Then we propose a unified CNN model for the
important tasks and discuss several advanced optimization and architecture
design techniques to improve the baseline model. The paper is partly review and
partly positional with demonstration of several preliminary results promising
for future research. We first demonstrate results of multi-stream learning and
auxiliary learning which are important ingredients to scale to a large
multi-task model. Finally, we implement a two-stream three-task network that
in many cases performs better than the corresponding single-task
models, while maintaining network size. | [
"cs.CV",
"cs.LG",
"cs.RO",
"stat.ML"
]
|
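A schematic of the shared-encoder multi-task design, assuming illustrative layer sizes and three hypothetical heads; the expensive early convolutions are computed once per frame and reused by every task.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared encoder + per-task heads: the costly early convolutions are
    computed once and reused, the efficiency argument made above."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(          # shared, expensive layers
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, n_classes, 1)   # segmentation logits
        self.depth_head = nn.Conv2d(64, 1, 1)         # per-pixel depth
        self.det_head = nn.Conv2d(64, 5, 1)           # box + objectness

    def forward(self, x):
        shared = self.encoder(x)
        return {
            "segmentation": self.seg_head(shared),
            "depth": self.depth_head(shared),
            "detection": self.det_head(shared),
        }
```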
This paper presents regional attraction of line segment maps, and hereby
poses the problem of line segment detection (LSD) as a problem of region
coloring. Given a line segment map, the proposed regional attraction first
establishes the relationship between line segments and regions in the image
lattice. Based on this, the line segment map is equivalently transformed to an
attraction field map (AFM), which can be remapped to a set of line segments
without loss of information. Accordingly, we develop an end-to-end framework to
learn attraction field maps for raw input images, followed by a squeeze module
to detect line segments. In contrast to existing works, the proposed detector
properly handles local ambiguity and does not rely on the accurate
identification of edge pixels. Comprehensive experiments on the Wireframe
dataset and the YorkUrban dataset demonstrate the superiority of our method. In
particular, we achieve an F-measure of 0.831 on the Wireframe dataset,
advancing the state-of-the-art performance by 10.3 percent. | [
"cs.CV"
]
|
We present a novel approach for the prediction of anticancer compound
sensitivity by means of multi-modal attention-based neural networks (PaccMann).
In our approach, we integrate three key pillars of drug sensitivity, namely,
the molecular structure of compounds, transcriptomic profiles of cancer cells
as well as prior knowledge about interactions among proteins within cells. Our
models ingest a drug-cell pair consisting of the SMILES encoding of a compound and
the gene expression profile of a cancer cell, and predict an IC50 sensitivity
value. Gene expression profiles are encoded using an attention-based encoding
mechanism that assigns high weights to the most informative genes. We present
and study three encoders for the SMILES strings of compounds: 1) bidirectional
recurrent, 2) convolutional, and 3) attention-based encoders. We compare our devised
models against a baseline model that ingests engineered fingerprints to
represent the molecular structure. We demonstrate that using our
attention-based encoders, we can surpass the baseline model. The use of
attention-based encoders enhances interpretability and enables us to identify
the genes, bonds and atoms that the network used to make a prediction. | [
"cs.LG",
"q-bio.MN",
"q-bio.QM"
]
|
We propose a novel approach to multimodal sentiment analysis using deep
neural networks combining visual analysis and natural language processing. Our
goal is different from the standard sentiment analysis goal of predicting
whether a sentence expresses positive or negative sentiment; instead, we aim to
infer the latent emotional state of the user. Thus, we focus on predicting the
emotion word tags attached by users to their Tumblr posts, treating these as
"self-reported emotions." We demonstrate that our multimodal model combining
both text and image features outperforms separate models based solely on either
images or text. Our model's results are interpretable, automatically yielding
sensible word lists associated with emotions. We explore the structure of
emotions implied by our model and compare it to what has been posited in the
psychology literature, and validate our model on a set of images that have been
used in psychology studies. Finally, our work also provides a useful tool for
the growing academic study of images - both photographs and memes - on social
networks. | [
"stat.ML",
"cs.LG",
"stat.AP"
]
|
In this paper, we propose a general construction formula for shape-color
primitives using partial differentials of each color channel. With these
shape-color primitives, shape-color differential moment invariants (SCDMIs) can
be constructed very easily, which are invariant to shape affine and color
affine transforms; 50 instances of SCDMIs are obtained in total. In experiments,
several commonly used color descriptors and SCDMIs are used in image
classification and retrieval of color images, respectively. Comparing the
experimental results, we find that SCDMIs perform better. | [
"cs.CV"
]
|
Arbitrary style transfer is the task of synthesizing an image that has never
been seen before from two given images: a content image and a style image. The
content image defines the structure, the basic geometric lines and shapes of the
resulting image, while the style image sets its color and texture. The word
"arbitrary" in this context means the absence of any single pre-learned style.
So, for example, convolutional neural networks capable of transferring a new
style only after training or retraining on new data are not considered to solve
such a problem, while networks based on the attention mechanism that are capable
of performing such a transformation without retraining are. An original image
can be, for example, a photograph, and a style image a painting by a famous
artist. The resulting image in this case will be the scene depicted in the
original photograph, rendered in the style of that painting. Recent arbitrary
style transfer algorithms make it possible to achieve good results on this task;
however, when processing portrait images of people, the result of such
algorithms is either unacceptable, due to excessive distortion of facial
features, or weakly expressed, not bearing the characteristic features of the
style image. In this paper, we consider an approach to solving this problem
using a combined architecture of deep neural networks with an attention
mechanism that transfers style based on the content of a particular image
segment: with a clear predominance of style over form for the background part
of the image, and with the prevalence of content over form in the part
containing the image of a person. | [
"cs.CV",
"cs.LG",
"eess.IV"
]
|
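For reference, the classical AdaIN operation (Huang & Belongie, 2017) is a common building block for arbitrary style transfer without retraining, and the mask-based blending below mirrors the background/person split described above. This is an illustrative sketch, not the paper's combined attention architecture.

```python
import torch

def adain(content_feats, style_feats, eps=1e-5):
    """Adaptive instance normalization: align the channel-wise mean/std
    of content features to those of the style features."""
    c_mean = content_feats.mean(dim=(2, 3), keepdim=True)
    c_std = content_feats.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feats.mean(dim=(2, 3), keepdim=True)
    s_std = style_feats.std(dim=(2, 3), keepdim=True)
    return s_std * (content_feats - c_mean) / c_std + s_mean

def blend_by_mask(stylized, content, person_mask,
                  style_weight_bg=1.0, style_weight_fg=0.3):
    """Segment-aware blending: strong style on the background, weak style
    on the person region (weights are illustrative)."""
    w = style_weight_fg * person_mask + style_weight_bg * (1 - person_mask)
    return w * stylized + (1 - w) * content
```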
Driver vigilance estimation is an important task for transportation safety.
Wearable and portable brain-computer interface devices provide a powerful means
for real-time monitoring of the vigilance level of drivers to help with
avoiding distracted or impaired driving. In this paper, we propose a novel
multimodal architecture for in-vehicle vigilance estimation from
Electroencephalogram and Electrooculogram. To enable the system to focus on the
most salient parts of the learned multimodal representations, we propose an
architecture composed of a capsule attention mechanism following a deep Long
Short-Term Memory (LSTM) network. Our model learns hierarchical dependencies in
the data through the LSTM and capsule feature representation layers. To better
explore the discriminative ability of the learned representations, we study the
effect of the proposed capsule attention mechanism including the number of
dynamic routing iterations as well as other parameters. Experiments show the
robustness of our method by outperforming other solutions and baseline
techniques, setting a new state-of-the-art. We then provide an analysis on
different frequency bands and brain regions to evaluate their suitability for
driver vigilance estimation. Lastly, an analysis on the role of capsule
attention, multimodality, and robustness to noise is performed, highlighting
the advantages of our approach. | [
"cs.LG",
"cs.CV",
"eess.SP",
"stat.ML"
]
|
Typical active learning strategies are designed for tasks, such as
classification, with the assumption that the output space is mutually
exclusive. The assumption that these tasks always have exactly one correct
answer has resulted in the creation of numerous uncertainty-based measurements,
such as entropy and least confidence, which operate over a model's outputs.
Unfortunately, many real-world vision tasks, like visual question answering and
image captioning, have multiple correct answers, causing these measurements to
overestimate uncertainty and sometimes perform worse than a random sampling
baseline. In this paper, we propose a new paradigm that estimates uncertainty
in the model's internal hidden space instead of the model's output space. We
specifically study a manifestation of this problem for visual question answer
generation (VQA), where the aim is not to classify the correct answer but to
produce a natural language answer, given an image and a question. Our method
overcomes the paraphrastic nature of language. It requires a semantic space
that structures the model's output concepts and that enables the usage of
techniques like dropout-based Bayesian uncertainty. We build a visual-semantic
space that embeds paraphrases close together for any existing VQA model. We
empirically show state-of-the-art active learning results on the task of VQA on two
datasets, being 5 times more cost-efficient on Visual Genome and 3 times more
cost-efficient on VQA 2.0. | [
"cs.CV",
"cs.CL"
]
|
Extreme events are occurrences whose magnitude and impact can cause extensive
damage to people, infrastructure, and the environment. Motivated by the extreme
nature of the current global health landscape, which is plagued by the
coronavirus pandemic, we seek to better understand and model extreme events.
Modeling extreme events is common in practice and plays an important role in
time-series prediction applications. Our goal is to (i) compare and investigate
the effect of some common extreme events modeling methods to explore which
method can be practical in reality and (ii) accelerate the deep learning
training process, which commonly uses deep recurrent neural network (RNN), by
implementing the asynchronous local Stochastic Gradient Descent (SGD) framework
among multiple compute nodes. In order to verify our distributed extreme events
modeling, we evaluate our proposed framework on a stock data set S\&P500, with
a standard recurrent neural network. Our intuition is to explore the (best)
extreme events modeling method which could work well under the distributed deep
learning setting. Moreover, by using asynchronous distributed learning, we aim
to significantly reduce the communication cost among the compute nodes and
central server, which is the main bottleneck of almost all distributed learning
frameworks.
We implement our proposed work and evaluate its performance on representative
data sets, such as S\&P500 stock prices over a $5$-year period. The experimental
results validate the correctness of the design principle and show a significant
reduction in training duration of up to $8$x compared to the baseline single
compute node. Our results also show that our proposed work achieves the same
level of test accuracy as the baseline setting. | [
"cs.LG",
"cs.DC",
"stat.ML"
]
|
For semantic segmentation of remote sensing images (RSI), trade-off between
representation power and location accuracy is quite important. How to get the
trade-off effectively is an open question, where current approaches of
utilizing attention schemes or very deep models result in complex models with
large memory consumption. Compared with the popularly-used convolutional neural
network (CNN) with fixed square kernels, graph convolutional network (GCN) can
explicitly utilize correlations between adjacent land covers and conduct
flexible convolution on arbitrarily irregular image regions. However, the
problems of large variations of target scales and blurred boundary cannot be
easily solved by GCN, while densely connected atrous convolution network
(DenseAtrousCNet) with multi-scale atrous convolution can expand the receptive
fields and obtain global image information. Inspired by the advantages of both
GCN and Atrous CNN, a two-stream deep neural network for semantic segmentation
of RSI (RSI-Net) is proposed in this paper; it obtains improved performance
by effectively modeling and propagating spatial contextual structure and by a
novel decoding scheme that combines image-level and graph-level features. Extensive
experiments are implemented on the Vaihingen, Potsdam and Gaofen RSI datasets,
where the comparison results demonstrate the superior performance of RSI-Net in
terms of overall accuracy, F1 score and kappa coefficient when compared with
six state-of-the-art RSI semantic segmentation methods. | [
"cs.CV",
"cs.AI"
]
|
This thesis investigates unsupervised time series representation learning for
sequence prediction problems, i.e. generating nice-looking input samples given
a previous history, for high-dimensional input sequences, by decoupling the
static input representation from the recurrent sequence representation. We
introduce three models based on Generative Stochastic Networks (GSN) for
unsupervised sequence learning and prediction. Experimental results for these
three models are presented on pixels of sequential handwritten digit (MNIST)
data, videos of low-resolution bouncing balls, and motion capture data. The
main contribution of this thesis is to provide evidence that GSNs are a viable
framework to learn useful representations of complex sequential input data, and
to suggest a new framework for deep generative models to learn complex
sequences by decoupling static input representations from dynamic time
dependency representations. | [
"cs.LG",
"stat.ML"
]
|
When formulated as an unsupervised learning problem, anomaly detection often
requires a model to learn the distribution of normal data. Previous works apply
Generative Adversarial Networks (GANs) to anomaly detection tasks and show good
performance from these models. Motivated by the observation that GAN ensembles
often outperform single GANs in generation tasks, we propose to construct GAN
ensembles for anomaly detection. In the proposed method, a group of generators
and a group of discriminators are trained together, so every generator gets
feedback from multiple discriminators, and vice versa. Compared to a single
GAN, a GAN ensemble can better model the distribution of normal data and thus
better detect anomalies. Our theoretical analysis of GANs and GAN ensembles
explains the role of a GAN discriminator in anomaly detection. In the empirical
study, we evaluate ensembles constructed from four types of base models, and
the results show that these ensembles clearly outperform single models in a
series of tasks of anomaly detection. | [
"cs.LG",
"cs.AI"
]
|
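A schematic of the ensemble training loop: every generator is scored by every discriminator and vice versa. Model classes, optimizers and the anomaly-scoring step are assumed and omitted; discriminators are taken to output one logit per sample.

```python
import torch

def ensemble_gan_step(generators, discriminators, real_batch, z_dim,
                      g_opts, d_opts):
    """One training step for a GAN ensemble: each generator receives
    feedback from every discriminator, and vice versa (a sketch; loss
    weighting and anomaly scoring follow the paper, omitted here)."""
    bce = torch.nn.functional.binary_cross_entropy_with_logits
    n = len(real_batch)
    for d, d_opt in zip(discriminators, d_opts):
        d_opt.zero_grad()
        loss_real = bce(d(real_batch), torch.ones(n, 1))
        loss_fake = 0.0
        for g in generators:
            fake = g(torch.randn(n, z_dim)).detach()
            loss_fake = loss_fake + bce(d(fake), torch.zeros(n, 1))
        (loss_real + loss_fake / len(generators)).backward()
        d_opt.step()
    for g, g_opt in zip(generators, g_opts):
        g_opt.zero_grad()
        fake = g(torch.randn(n, z_dim))
        loss = sum(bce(d(fake), torch.ones(n, 1))
                   for d in discriminators) / len(discriminators)
        loss.backward()
        g_opt.step()
```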
In the analysis of sequential data, the detection of abrupt changes is
important in predicting future changes. In this paper, we propose statistical
hypothesis tests for detecting covariance structure changes in locally smooth
time series modeled by Gaussian Processes (GPs). We provide theoretically
justified thresholds for the tests, and use them to improve Bayesian Online
Change Point Detection (BOCPD) by confirming statistically significant changes
and non-changes. Our Confirmatory BOCPD (CBOCPD) algorithm finds multiple
structural breaks in GPs even when hyperparameters are not tuned precisely. We
also provide conditions under which CBOCPD provides lower prediction error
than BOCPD. Experimental results on synthetic and real-world datasets
show that our new tests correctly detect changes in the covariance structure in
GPs. The proposed algorithm also outperforms existing methods for the
prediction of nonstationarity in terms of both regression error and log
likelihood. | [
"cs.LG",
"stat.ML"
]
|
Deep learning has revolutionized the performance of classification, but
meanwhile demands sufficient labeled data for training. Given insufficient
data, and while many techniques have been developed to combat overfitting, the
challenge remains when training deep networks, especially in the
ill-posed extremely low data regime: only a small set of labeled data is
available, and nothing else, not even unlabeled data. Such regimes arise
from practical situations where not only data labeling but also data collection
itself is expensive. We propose a deep adversarial data augmentation (DADA)
technique to address the problem, in which we elaborately formulate data
augmentation as a problem of training a class-conditional and supervised
generative adversarial network (GAN). Specifically, a new discriminator loss is
proposed to fit the goal of data augmentation, through which both real and
augmented samples are enforced to contribute to and be consistent in finding
the decision boundaries. Tailored training techniques are developed
accordingly. To quantitatively validate its effectiveness, we first perform
extensive simulations to show that DADA substantially outperforms both
traditional data augmentation and a few GAN-based options. We then extend
experiments to three real-world small labeled datasets where existing data
augmentation and/or transfer learning strategies are either less effective or
infeasible. All results endorse the superior capability of DADA in enhancing
the generalization ability of deep networks trained in practical extremely low
data regimes. Source code is available at
https://github.com/SchafferZhang/DADA. | [
"cs.CV"
]
|
We show that adding differential privacy to Explainable Boosting Machines
(EBMs), a recent method for training interpretable ML models, yields
state-of-the-art accuracy while protecting privacy. Our experiments on multiple
classification and regression datasets show that DP-EBM models suffer
surprisingly little accuracy loss even with strong differential privacy
guarantees. In addition to high accuracy, two other benefits of applying DP to
EBMs are: a) trained models provide exact global and local interpretability,
which is often important in settings where differential privacy is needed; and
b) the models can be edited after training without loss of privacy to correct
errors which DP noise may have introduced. | [
"cs.LG",
"cs.CR"
]
|
Two-stage methods have dominated Human-Object Interaction (HOI) detection for
several years. Recently, one-stage HOI detection methods have become popular.
In this paper, we aim to explore the essential pros and cons of two-stage and
one-stage methods. With this as the goal, we find that conventional two-stage
methods mainly suffer from positioning positive interactive human-object pairs,
while one-stage methods are challenging to make an appropriate trade-off on
multi-task learning, i.e., object detection, and interaction classification.
Therefore, a core problem is how to take the essence and discard the dregs from
the conventional two types of methods. To this end, we propose a novel
one-stage framework with disentangling human-object detection and interaction
classification in a cascade manner. In detail, we first design a human-object
pair generator based on a state-of-the-art one-stage HOI detector by removing
the interaction classification module or head and then design a relatively
isolated interaction classifier to classify each human-object pair. Two cascade
decoders in our proposed framework can focus on one specific task, detection or
interaction classification. In terms of the specific implementation, we adopt a
transformer-based HOI detector as our base model. The newly introduced
disentangling paradigm outperforms existing methods by a large margin, with a
significant relative mAP gain of 9.32% on HICO-Det. | [
"cs.CV"
]
|
Generative Adversarial Networks (GANs) have achieved remarkable results in
the task of generating realistic natural images. In most successful
applications, GAN models share two common aspects: solving a challenging saddle
point optimization problem, interpreted as an adversarial game between a
generator and a discriminator function; and parameterizing the generator and
the discriminator as deep convolutional neural networks. The goal of this paper
is to disentangle the contribution of these two factors to the success of GANs.
In particular, we introduce Generative Latent Optimization (GLO), a framework
to train deep convolutional generators using simple reconstruction losses.
Throughout a variety of experiments, we show that GLO enjoys many of the
desirable properties of GANs: synthesizing visually-appealing samples,
interpolating meaningfully between samples, and performing linear arithmetic
with noise vectors; all of this without the adversarial optimization scheme. | [
"stat.ML",
"cs.CV",
"cs.LG"
]
|
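A compact sketch of GLO's training loop: one learnable latent per image, optimized jointly with the decoder under a reconstruction loss and with no discriminator. The paper projects latents onto the unit ball and uses a Laplacian-pyramid loss; plain L1 is substituted here, and `decoder` stands for any image decoder whose output shape matches `images`.

```python
import torch
import torch.nn as nn

def train_glo(decoder, images, z_dim=64, steps=1000, lr=1e-2):
    """Generative Latent Optimization: jointly optimize per-image latents
    and decoder weights under a simple reconstruction loss."""
    z = nn.Parameter(torch.randn(len(images), z_dim))
    opt = torch.optim.Adam([{"params": decoder.parameters()},
                            {"params": [z]}], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (decoder(z) - images).abs().mean()  # L1 stand-in for Lap1
        loss.backward()
        opt.step()
        with torch.no_grad():  # keep each latent inside the unit ball
            norms = z.norm(dim=1, keepdim=True).clamp(min=1.0)
            z.div_(norms)
    return z
```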
We propose Geo-PIFu, a method to recover a 3D mesh from a monocular color
image of a clothed person. Our method is based on a deep implicit function
representation and learns latent voxel features using a
structure-aware 3D U-Net, which constrain the model in two ways: first, to resolve
feature ambiguities in query point encoding; second, to serve as a coarse human
shape proxy that regularizes the high-resolution mesh and encourages global shape
regularity. We show that, by both encoding query points and constraining global
shape using latent voxel features, the reconstruction we obtain for clothed
human meshes exhibits less shape distortion and improved surface details
compared to competing methods. We evaluate Geo-PIFu on a recent human mesh
public dataset that is $10 \times$ larger than the private commercial dataset
used in PIFu and previous derivative work. On average, we exceed the state of
the art with a $42.7\%$ reduction in Chamfer and Point-to-Surface distances and
a $19.4\%$ reduction in normal estimation errors. | [
"cs.CV",
"cs.GR",
"cs.LG"
]
|
We evaluate the distribution learning capabilities of generative adversarial
networks by testing them on synthetic datasets. The datasets include common
distributions of points in $R^n$ space and images containing polygons of
various shapes and sizes. We find that, by and large, GANs fail to faithfully
recreate point datasets which contain discontinuous support or sharp bends with
noise. Additionally, on image datasets, we find that GANs do not seem to learn
to count the number of objects of the same kind in an image. We also highlight
the apparent tension between generalization and learning in GANs. | [
"cs.LG",
"cs.CV",
"stat.ML"
]
|
Graph convolution networks have recently garnered a lot of attention for
representation learning on non-Euclidean feature spaces. Recent research has
focused on stacking multiple layers like in convolutional neural networks for
the increased expressive power of graph convolution networks. However, simply
stacking multiple graph convolution layers leads to issues like vanishing
gradients, over-fitting and over-smoothing. Such problems are much less pronounced when
using shallower networks, even though the shallow networks have lower
expressive power. In this work, we propose a novel Multipath Graph
convolutional neural network that aggregates the output of multiple different
shallow networks. We train and test our model on various benchmark datasets
for the task of node property prediction. Results show that the proposed method
not only attains increased test accuracy but also requires fewer training
epochs to converge. The full implementation is available at
https://github.com/rangan2510/MultiPathGCN | [
"cs.LG",
"cs.CV"
]
|
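A minimal sketch of the multipath idea using a dense normalized adjacency matrix: several independent two-layer GCNs run in parallel and their outputs are averaged before classification. Hyperparameters are illustrative; the repository linked above holds the authoritative implementation.

```python
import torch
import torch.nn as nn

class DenseGCNLayer(nn.Module):
    """One graph convolution over a dense, normalized adjacency matrix."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, x, adj_norm):
        return torch.relu(self.lin(adj_norm @ x))

class MultiPathGCN(nn.Module):
    """Aggregate several shallow GCNs instead of one deep stack."""
    def __init__(self, d_in, d_hidden, n_classes, n_paths=4):
        super().__init__()
        self.paths = nn.ModuleList(
            nn.ModuleList([DenseGCNLayer(d_in, d_hidden),
                           DenseGCNLayer(d_hidden, d_hidden)])
            for _ in range(n_paths))
        self.classifier = nn.Linear(d_hidden, n_classes)

    def forward(self, x, adj_norm):
        outs = []
        for path in self.paths:
            h = x
            for layer in path:          # each path stays shallow
                h = layer(h, adj_norm)
            outs.append(h)
        return self.classifier(torch.stack(outs).mean(dim=0))
```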
Human behavior expression and experience are inherently multi-modal, and
characterized by vast individual and contextual heterogeneity. To achieve
meaningful human-computer and human-robot interactions, multi-modal models of
the users' states (e.g., engagement) are therefore needed. Most of the existing
works that try to build classifiers for the users' states assume that the data
to train the models are fully labeled. Nevertheless, data labeling is costly
and tedious, and also prone to subjective interpretations by the human coders.
This is even more pronounced when the data are multi-modal (e.g., some users
are more expressive with their facial expressions, some with their voice).
Thus, building models that can accurately estimate the users' states during an
interaction is challenging. To tackle this, we propose a novel multi-modal
active learning (AL) approach that uses the notion of deep reinforcement
learning (RL) to find an optimal policy for active selection of the users' data,
needed to train the target (modality-specific) models. We investigate different
strategies for multi-modal data fusion, and show that the proposed model-level
fusion coupled with RL outperforms the feature-level and modality-specific
models, and the naive AL strategies such as random sampling, and the standard
heuristics such as uncertainty sampling. We show the benefits of this approach
on the task of engagement estimation from real-world child-robot interactions
during autism therapy. Importantly, we show that the proposed multi-modal AL
approach can be used to efficiently personalize the engagement classifiers to
the target user using a small amount of actively selected user data. | [
"cs.LG",
"cs.AI",
"cs.HC",
"cs.RO",
"stat.ML"
]
|
Grounding language queries in videos aims at identifying the time interval
(or moment) semantically relevant to a language query. The solution to this
challenging task demands understanding the semantic content of videos and
queries, and fine-grained reasoning about their multi-modal interactions. Our key
idea is to recast this challenge into an algorithmic graph matching problem.
Fueled by recent advances in Graph Neural Networks, we propose to leverage
Graph Convolutional Networks to model video and textual information as well as
their semantic alignment. To enable the mutual exchange of information across
the modalities, we design a novel Video-Language Graph Matching Network
(VLG-Net) to match video and query graphs. Core ingredients include
representation graphs built atop video snippets and query tokens separately and
used to model intra-modality relationships. A Graph Matching layer is adopted
for cross-modal context modeling and multi-modal fusion. Finally, moment
candidates are created using masked moment attention pooling by fusing the
moment's enriched snippet features. We demonstrate superior performance over
state-of-the-art grounding methods on three widely used datasets for temporal
localization of moments in videos with language queries: ActivityNet-Captions,
TACoS, and DiDeMo. | [
"cs.CV",
"cs.CL"
]
|
The problem of anomaly detection has been studied for a long time. In short,
anomalies are abnormal or unlikely things. In financial networks, thieves and
illegal activities are often anomalous in nature. Members of a network want to
detect anomalies as soon as possible to prevent them from harming the network's
community and integrity. Many Machine Learning techniques have been proposed to
deal with this problem; some results appear to be quite promising but there is
no obvious superior method. In this paper, we consider anomaly detection
particular to the Bitcoin transaction network. Our goal is to detect which
users and transactions are the most suspicious; in this case, anomalous
behavior is a proxy for suspicious behavior. To this end, we use three
unsupervised learning methods: k-means clustering, Mahalanobis
distance, and unsupervised Support Vector Machines (SVM), on two graphs generated
by the Bitcoin transaction network: one graph has users as nodes, and the other
has transactions as nodes. | [
"cs.LG",
"cs.CR"
]
|
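One of the three methods, Mahalanobis-distance scoring, is simple enough to sketch directly; the per-node feature vectors (e.g. degree, transaction volume) are hypothetical placeholders for features extracted from the transaction graph.

```python
import numpy as np

def mahalanobis_scores(features):
    """Anomaly score = Mahalanobis distance from the feature mean.

    features: (n_samples, n_features) array of per-user or
    per-transaction graph features.
    """
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    cov_inv = np.linalg.pinv(cov)  # pseudo-inverse guards against singularity
    centered = features - mu
    # d_i = sqrt((x_i - mu)^T Sigma^{-1} (x_i - mu))
    return np.sqrt(np.einsum("ij,jk,ik->i", centered, cov_inv, centered))

# The top-scoring nodes are flagged as the most suspicious.
scores = mahalanobis_scores(np.random.rand(1000, 8))
suspects = np.argsort(scores)[::-1][:10]
```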
In this paper, we present a simple approach to training Generative Adversarial
Networks (GANs) that avoids the \textit{mode collapse} issue. Implicit
models such as GANs tend to generate better samples than explicit models
that are trained on tractable data likelihoods. However, GANs overlook the
explicit data density characteristics, which leads to undesirable quantitative
evaluations and mode collapse. To bridge this gap, we propose a hybrid
generative adversarial network (HGAN) in which we enforce data density
estimation via an autoregressive model and support both the adversarial and
likelihood frameworks in a joint training manner, which diversifies the estimated
density in order to cover different modes. We propose to use an adversarial
network to \textit {transfer knowledge} from an autoregressive model (teacher)
to the generator (student) of a GAN model. A novel deep architecture within the
GAN formulation is developed to adversarially distill the autoregressive model
information, in addition to the standard GAN training approach. We conduct extensive
experiments on real-world datasets (i.e., MNIST, CIFAR-10, STL-10) to
demonstrate the effectiveness of the proposed HGAN under qualitative and
quantitative evaluations. The experimental results show the superiority and
competitiveness of our method compared to the baselines. | [
"cs.CV",
"cs.LG"
]
|
While several approaches to the face emotion recognition task have been proposed in
the literature, none of them reports the power consumption or inference time
required to run the system in an embedded environment. Without adequate
knowledge about these factors it is not clear whether we are actually able to
provide accurate face emotion recognition in the embedded environment or not,
and if not, how far we are from making it feasible and what are the biggest
bottlenecks we face.
The main goal of this paper is to answer these questions and to convey the
message that, instead of reporting only detection accuracy, power
consumption and inference time should also be reported, since the real usability
of the proposed systems and their adoption in human-computer interaction strongly
depend on them. In this paper, we identify the state-of-the-art face emotion
recognition methods that are potentially suitable for embedded environment and
the most frequently used datasets for this task. Our study shows that most of
the performed experiments use datasets with posed expressions or in a
particular experimental setup with special conditions for image collection.
Since our goal is to evaluate the performance of the identified promising
methods in the realistic scenario, we collect a new dataset with
non-exaggerated emotions and we use it, in addition to the publicly available
datasets, for the evaluation of detection accuracy, power consumption and
inference time on three frequently used embedded devices with different
computational capabilities. Our results show that grayscale images are still more
suitable for the embedded environment than color ones, and that for most of the
analyzed systems either inference time or energy consumption, or both, are
limiting factors for their adoption in real-life embedded applications. | [
"cs.CV",
"cs.LG",
"stat.ML"
]
|
We introduce a self-supervised approach for learning node and graph level
representations by contrasting structural views of graphs. We show that unlike
visual representation learning, increasing the number of views to more than two
or contrasting multi-scale encodings does not improve performance, and the best
performance is achieved by contrasting encodings from first-order neighbors and
a graph diffusion. We achieve new state-of-the-art results in self-supervised
learning on 8 out of 8 node and graph classification benchmarks under the
linear evaluation protocol. For example, on Cora (node) and Reddit-Binary
(graph) classification benchmarks, we achieve 86.8% and 84.5% accuracy, which
are 5.5% and 2.4% relative improvements over previous state-of-the-art. When
compared to supervised baselines, our approach outperforms them in 4 out of 8
benchmarks. Source code is released at: https://github.com/kavehhassani/mvgrl | [
"cs.LG",
"stat.ML"
]
|
In this paper, we design a multimodal framework for object detection,
recognition and mapping based on the fusion of stereo camera frames, point
cloud Velodyne Lidar scans, and Vehicle-to-Vehicle (V2V) Basic Safety Messages
(BSMs) exchanged using Dedicated Short Range Communication (DSRC). We merge the
key features of rich texture descriptions of objects from 2D images, depth and
distance between objects provided by 3D point cloud and awareness of hidden
vehicles from BSMs' 3D information. We present joint pixel-to-point-cloud and
pixel-to-V2V correspondences of objects in frames from the KITTI Vision
Benchmark Suite by using a semi-supervised manifold alignment approach to
achieve camera-Lidar and camera-V2V mapping of their recognized objects that
have the same underlying manifold. | [
"cs.CV"
]
|
In interactive medical image segmentation, anatomical structures are
extracted from reconstructed volumetric images. The first iterations of user
interaction traditionally consist of drawing pictorial hints as an initial
estimate of the object to extract. Only after this time-consuming first phase
does the efficient selective refinement of current segmentation results begin.
Erroneously labeled seeds, especially near the border of the object, are
challenging to detect and replace for a human and may substantially impact the
overall segmentation quality. We propose an automatic seeding pipeline as well
as a configuration based on saliency recognition, in order to skip the
time-consuming initial interaction phase during segmentation. A median Dice
score of 68.22% is reached before the first user interaction on the test data
set with an error rate in seeding of only 0.088%. | [
"cs.CV"
]
|
Recent advancements in deep neural networks have made remarkable
leaps forward in dense image prediction. However, the issue of feature
alignment remains neglected by most existing approaches, for simplicity.
Direct pixel addition between upsampled and local features leads to feature
maps with misaligned contexts that, in turn, translate to mis-classifications
in prediction, especially on object boundaries. In this paper, we propose a
feature alignment module that learns transformation offsets of pixels to
contextually align upsampled higher-level features; and another feature
selection module to emphasize the lower-level features with rich spatial
details. We then integrate these two modules in a top-down pyramidal
architecture and present the Feature-aligned Pyramid Network (FaPN). Extensive
experimental evaluations on four dense prediction tasks and four datasets have
demonstrated the efficacy of FaPN, yielding an overall improvement of 1.2 - 2.6
points in AP / mIoU over FPN when paired with Faster / Mask R-CNN. In
particular, our FaPN achieves a state-of-the-art 56.7% mIoU on ADE20K when
integrated within Mask-Former. The code is available from
https://github.com/EMI-Group/FaPN. | [
"cs.CV"
]
|
Over the past few years, we have seen fundamental breakthroughs in core
problems in machine learning, largely driven by advances in deep neural
networks. At the same time, the amount of data collected in a wide array of
scientific domains is dramatically increasing in both size and complexity.
Taken together, this suggests many exciting opportunities for deep learning
applications in scientific settings. But a significant challenge to this is
simply knowing where to start. The sheer breadth and diversity of different
deep learning techniques makes it difficult to determine what scientific
problems might be most amenable to these methods, or which specific combination
of methods might offer the most promising first approach. In this survey, we
focus on addressing this central issue, providing an overview of many widely
used deep learning models, spanning visual, sequential and graph structured
data, associated tasks and different training methods, along with techniques to
use deep learning with less data and better interpret these complex models,
two central considerations for many scientific use cases. We also include
overviews of the full design process, implementation tips, and links to a
plethora of tutorials, research summaries and open-sourced deep learning
pipelines and pretrained models, developed by the community. We hope that this
survey will help accelerate the use of deep learning across different
scientific domains. | [
"cs.LG",
"stat.ML"
]
|
Recent work has shown great promise in explaining neural network behavior. In
particular, feature attribution methods explain which features were most
important to a model's prediction on a given input. However, for many tasks,
simply knowing which features were important to a model's prediction may not
provide enough insight to understand model behavior. The interactions between
features within the model may better help us understand not only the model, but
also why certain features are more important than others. In this work, we
present Integrated Hessians, an extension of Integrated Gradients that explains
pairwise feature interactions in neural networks. Integrated Hessians overcomes
several theoretical limitations of previous methods to explain interactions,
and unlike such previous methods is not limited to a specific architecture or
class of neural network. Additionally, we find that our method is faster than
existing methods when the number of features is large, and outperforms previous
methods on existing quantitative benchmarks. Code available at
https://github.com/suinleelab/path_explain | [
"cs.LG",
"stat.ML"
]
|
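Integrated Hessians builds on Integrated Gradients, which is compact enough to sketch; the second-order interaction extension itself is omitted here. `x` is assumed to be a single unbatched input and `model` to accept a batch and return one value per row.

```python
import torch

def integrated_gradients(model, x, baseline=None, steps=50):
    """Integrated Gradients (Sundararajan et al., 2017): attribute the
    change in output from a baseline to each input feature by averaging
    gradients along the straight-line path from baseline to input."""
    if baseline is None:
        baseline = torch.zeros_like(x)
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = baseline + alphas * (x - baseline)   # (steps, *x.shape)
    path.requires_grad_(True)
    out = model(path).sum()
    grads = torch.autograd.grad(out, path)[0]
    avg_grad = grads.mean(dim=0)                # Riemann approximation
    return (x - baseline) * avg_grad            # completeness: sums to f(x)-f(baseline)
```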
Deep learning based methods have achieved surprising progress in Scene Text
Recognition (STR), one of the classic problems in computer vision. In this paper,
we propose a feasible framework for multi-lingual arbitrary-shaped STR,
including instance segmentation based text detection and language model based
attention mechanism for text recognition. Our STR algorithm not only recognizes
Latin and Non-Latin characters, but also supports arbitrary-shaped text
recognition. Our method won the championship in the Scene Text Spotting Task
(Latin Only, Latin and Chinese) of the ICDAR2019 Robust Reading Challenge on
Arbitrary-Shaped Text Competition. Code is available at
https://github.com/zhang0jhon/AttentionOCR. | [
"cs.CV"
]
|
Human motion prediction aims to forecast future human poses given a sequence
of past 3D skeletons. While this problem has recently received increasing
attention, it has mostly been tackled for single humans in isolation. In this
paper we explore this problem from a novel perspective, involving humans
performing collaborative tasks. We assume that the input of our system are two
sequences of past skeletons for two interacting persons, and we aim to predict
the future motion for each of them. For this purpose, we devise a novel cross
interaction attention mechanism that exploits historical information of both
persons and learns to predict cross dependencies between self poses and the
poses of the other person in spite of their spatial or temporal distance. Since
no dataset is available for training on such interactive situations, we have captured
ExPI (Extreme Pose Interaction), a new lab-based person interaction dataset of
professional dancers performing acrobatics. ExPI contains 115 sequences with
30k frames and 60k instances with annotated 3D body poses and shapes. We
thoroughly evaluate our cross-interaction network on this dataset and show that
both in short-term and long-term predictions, it consistently outperforms
baselines that independently reason for each person. We plan to release our
code jointly with the dataset and the train/test splits to spur future research
on the topic. | [
"cs.CV"
]
|
There has been an increasing interest in the area of emergent communication
between agents which learn to play referential signalling games with realistic
images. In this work, we consider the signalling game setting of Havrylov and
Titov and investigate the effect of the feature extractor's weights and of the
task being solved on the visual semantics learned or captured by the models. We
apply various augmentations to the input images and add additional tasks to the
game, with the aim of inducing visual representations which capture conceptual
properties of images. Through our set of experiments, we demonstrate that
communication systems which capture visual semantics can be learned in a
completely self-supervised manner by playing the right types of games. | [
"cs.LG",
"cs.CV",
"stat.ML"
]
|
Accurate detection of objects in 3D point clouds is a central problem in many
applications, such as autonomous navigation, housekeeping robots, and
augmented/virtual reality. To interface a highly sparse LiDAR point cloud with
a region proposal network (RPN), most existing efforts have focused on
hand-crafted feature representations, for example, a bird's eye view
projection. In this work, we remove the need of manual feature engineering for
3D point clouds and propose VoxelNet, a generic 3D detection network that
unifies feature extraction and bounding box prediction into a single stage,
end-to-end trainable deep network. Specifically, VoxelNet divides a point cloud
into equally spaced 3D voxels and transforms a group of points within each
voxel into a unified feature representation through the newly introduced voxel
feature encoding (VFE) layer. In this way, the point cloud is encoded as a
descriptive volumetric representation, which is then connected to an RPN to
generate detections. Experiments on the KITTI car detection benchmark show that
VoxelNet outperforms the state-of-the-art LiDAR based 3D detection methods by a
large margin. Furthermore, our network learns an effective discriminative
representation of objects with various geometries, leading to encouraging
results in 3D detection of pedestrians and cyclists, based on only LiDAR. | [
"cs.CV"
]
|
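The grouping step that precedes the learned VFE layers can be sketched in plain NumPy; voxel sizes and the per-voxel point cap are illustrative (the paper caps voxels by random sampling, simplified to first-come here), and the VFE layers themselves are learned modules not shown.

```python
import numpy as np

def voxelize(points, voxel_size=(0.2, 0.2, 0.4), max_points=35):
    """Group a LiDAR point cloud into equally spaced 3D voxels.

    points: (N, 4) array of x, y, z, reflectance. Returns a dict mapping
    integer voxel coordinates to at most `max_points` member points.
    """
    coords = np.floor(points[:, :3] / np.asarray(voxel_size)).astype(np.int64)
    voxels = {}
    for point, coord in zip(points, map(tuple, coords)):
        bucket = voxels.setdefault(coord, [])
        if len(bucket) < max_points:  # cap occupancy per voxel
            bucket.append(point)
    return voxels

cloud = np.random.rand(10000, 4).astype(np.float32)
voxels = voxelize(cloud)  # sparse voxel grid fed to the learned encoder
```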
Adversarial attacks on convolutional neural networks (CNN) have gained
significant attention and there have been active research efforts on defense
mechanisms. Stochastic input transformation methods have been proposed, where
the idea is to recover the image from adversarial attack by random
transformation, and to take the majority vote as consensus among the random
samples. However, the transformation improves the accuracy on adversarial
images at the expense of the accuracy on clean images. While it is intuitive
that the accuracy on clean images would deteriorate, the exact mechanism by
which this occurs is unclear. In this paper, we study the distribution of
softmax induced by stochastic transformations. We observe that with random
transformations on the clean images, although the mass of the softmax
distribution could shift to the wrong class, the resulting distribution of
softmax could be used to correct the prediction. Furthermore, on the
adversarial counterparts, with the image transformation, the resulting shapes
of the distribution of softmax are similar to the distributions from the clean
images. With these observations, we propose a method to improve existing
transformation-based defenses. We train a separate lightweight distribution
classifier to recognize distinct features in the distributions of softmax
outputs of transformed images. Our empirical studies show that our distribution
classifier, by training on distributions obtained from clean images only,
outperforms majority voting for both clean and adversarial images. Our method
is generic and can be integrated with existing transformation-based defenses. | [
"cs.LG",
"cs.CV"
]
|
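The defense above has a simple skeleton: transform the input several times at
random, collect the softmax outputs, and classify the resulting distribution
instead of majority-voting it. The sketch below is a minimal illustration; the
transformation, sample count, and classifier architecture are all our own
assumptions rather than the paper's choices.

```python
import torch
import torch.nn as nn
import torchvision.transforms as T

N_SAMPLES = 20  # number of random transformations per image (assumed)

random_transform = T.Compose([
    T.RandomResizedCrop(32, scale=(0.8, 1.0)),  # illustrative stochastic transform
    T.RandomHorizontalFlip(),
])

@torch.no_grad()
def softmax_samples(model, image):
    """Collect softmax outputs of N randomly transformed copies of one image."""
    batch = torch.stack([random_transform(image) for _ in range(N_SAMPLES)])
    return model(batch).softmax(dim=1)          # (N_SAMPLES, num_classes)

def majority_vote(probs):
    """Baseline consensus: the most frequent argmax over the random samples."""
    return probs.argmax(dim=1).mode().values

class DistributionClassifier(nn.Module):
    """Lightweight classifier over the distribution of softmax samples; it can
    exploit shape features of the distribution that a majority vote discards."""
    def __init__(self, num_classes: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(N_SAMPLES * num_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, probs):                   # probs: (B, N_SAMPLES, classes)
        return self.net(probs)
```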
In this paper we consider the binary similarity problem that consists in
determining if two binary functions are similar only considering their compiled
form. This problem is known to be crucial in several application scenarios, such
as copyright disputes, malware analysis, vulnerability detection, etc. The
current state-of-the-art solutions in this field work by creating an embedding
model that maps binary functions into vectors in $\mathbb{R}^{n}$. Such
embedding model captures syntactic and semantic similarity between binaries,
i.e., similar binary functions are mapped to points that are close in the
vector space. This strategy has many advantages, one of them is the possibility
to precompute embeddings of several binary functions, and then compare them
with simple geometric operations (e.g., dot product). In [32], functions are
first transformed into Annotated Control Flow Graphs (ACFGs) built from
manually engineered features, and then the graphs are embedded into vectors
using a deep neural network architecture. In this paper we propose and test several
ways to compute annotated control flow graphs that use unsupervised approaches
for feature learning, without incurring human bias. Our methods are inspired
by techniques used in the natural language processing community (e.g., we
use word2vec to encode assembly instructions). We show that our approach is
indeed successful, and it leads to better performance than previous
state-of-the-art solutions. Furthermore, we report on a qualitative analysis of
function embeddings. We found interesting cases in which embeddings are
clustered according to the semantics of the original binary functions. | [
"cs.LG",
"cs.DC"
]
|
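The instruction-embedding step mentioned above (word2vec over assembly) can be
sketched with gensim. The tokenization scheme, hyperparameters, and toy
instruction streams below are illustrative assumptions, not the paper's setup.

```python
from gensim.models import Word2Vec

# Each "sentence" is the instruction stream of one basic block or function,
# with operands coarsely normalized so the vocabulary stays small.
corpus = [
    ["push_ebp", "mov_ebp_esp", "sub_esp_IMM", "mov_REG_IMM", "call_FUNC"],
    ["push_ebp", "mov_ebp_esp", "xor_REG_REG", "cmp_REG_IMM", "jne_ADDR"],
    ["mov_REG_MEM", "add_REG_REG", "mov_MEM_REG", "ret"],
]

# Skip-gram word2vec over assembly tokens (gensim >= 4.0 API).
model = Word2Vec(
    sentences=corpus,
    vector_size=64,   # embedding dimension (assumed)
    window=4,         # context window over the instruction stream
    min_count=1,
    sg=1,             # skip-gram
    epochs=50,
)

vec = model.wv["mov_ebp_esp"]             # 64-d embedding of one instruction
print(model.wv.most_similar("push_ebp"))  # instructions used in similar contexts
```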
Learning unsupervised node embeddings facilitates several downstream tasks
such as node classification and link prediction. A node embedding is universal
if it is designed to be used by and benefit various downstream tasks. This work
introduces PanRep, a graph neural network (GNN) model, for unsupervised
learning of universal node representations for heterogeneous graphs. PanRep
consists of a GNN encoder that obtains node embeddings and four decoders, each
capturing different topological and node feature properties. Abiding by these
properties, the novel unsupervised framework learns universal embeddings
applicable to different downstream tasks. PanRep can be further fine-tuned to
account for possible limited labels. In this operational setting PanRep is
considered a pretrained model for extracting node embeddings of heterogeneous
graph data. PanRep outperforms all unsupervised and certain supervised methods
in node classification and link prediction, especially when the labeled data
for the supervised methods is small. PanRep-FT (with fine-tuning) outperforms
all other supervised approaches, which corroborates the merits of pretraining
models. Finally, we apply PanRep-FT for discovering novel drugs for Covid-19.
We showcase the advantage of universal embeddings in drug repurposing and
identify several drugs used in clinical trials as possible drug candidates. | [
"cs.LG",
"stat.ML"
]
|
We propose to improve text recognition from a new perspective by separating
the text content from complex backgrounds. As vanilla GANs are not sufficiently
robust to generate sequence-like characters in natural images, we propose an
adversarial learning framework for the generation and recognition of multiple
characters in an image. The proposed framework consists of an attention-based
recognizer and a generative adversarial architecture. Furthermore, to tackle
the issue of lacking paired training samples, we design an interactive joint
training scheme, which shares attention masks from the recognizer to the
discriminator, and enables the discriminator to extract the features of each
character for further adversarial training. Benefiting from the character-level
adversarial training, our framework requires only unpaired simple data for
style supervision. Each target style sample containing only one randomly chosen
character can be simply synthesized online during the training. This is
significant as the training does not require costly paired samples or
character-level annotations. Thus, only the input images and corresponding text
labels are needed. In addition to the style normalization of the backgrounds,
we refine character patterns to ease the recognition task. A feedback mechanism
is proposed to bridge the gap between the discriminator and the recognizer.
Therefore, the discriminator can guide the generator according to the confusion
of the recognizer, so that the generated patterns are clearer for recognition.
Experiments on various benchmarks, including both regular and irregular text,
demonstrate that our method significantly reduces the difficulty of
recognition. Our framework can be integrated into recent recognition methods to
achieve new state-of-the-art recognition accuracy. | [
"cs.CV"
]
|
While generative models have shown great success in generating
high-dimensional samples conditional on low-dimensional descriptors (learning
e.g. stroke thickness in MNIST, hair color in CelebA, or speaker identity in
Wavenet), their generation out-of-sample poses fundamental problems. The
conditional variational autoencoder (CVAE) as a simple conditional generative
model does not explicitly relate conditions during training and, hence, has no
incentive to learn a compact joint distribution across conditions. We
overcome this limitation by matching their distributions using maximum mean
discrepancy (MMD) in the decoder layer that follows the bottleneck. This
introduces a strong regularization both for reconstructing samples within the
same condition and for transforming samples across conditions, resulting in
much improved generalization. We refer to the architecture as
\emph{transformer} VAE (trVAE). Benchmarking trVAE on high-dimensional image
and tabular data, we demonstrate higher robustness and higher accuracy than
existing approaches. In particular, we show qualitatively improved predictions
for cellular perturbation response to treatment and disease based on
high-dimensional single-cell gene expression data, by tackling previously
problematic minority classes and multiple conditions. For generic tasks, we
improve Pearson correlations of high-dimensional estimated means and variances
with their ground truths from 0.89 to 0.97 and 0.75 to 0.87, respectively. | [
"cs.LG",
"eess.IV",
"q-bio.CB",
"q-bio.GN",
"stat.ML"
]
|
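The distribution matching used by trVAE can be written down compactly as a
maximum mean discrepancy with an RBF kernel, applied to the activations of the
decoder layer that follows the bottleneck. A minimal PyTorch sketch, with
kernel bandwidths as assumed values:

```python
import torch

def rbf_kernel(x, y, gammas):
    """Sum of RBF kernels k(x, y) = exp(-gamma * ||x - y||^2) over bandwidths."""
    d2 = torch.cdist(x, y).pow(2)             # pairwise squared distances
    return sum(torch.exp(-g * d2) for g in gammas)

def mmd2(x, y, gammas=(0.1, 1.0, 10.0)):
    """Biased estimate of the squared MMD between samples x and y.

    In a trVAE-style model, x and y would be the first-decoder-layer
    activations for two different conditions; gammas are assumed values.
    """
    return (rbf_kernel(x, x, gammas).mean()
            + rbf_kernel(y, y, gammas).mean()
            - 2.0 * rbf_kernel(x, y, gammas).mean())

# Example: activations of 128 samples from each of two conditions
h_cond0 = torch.randn(128, 64)
h_cond1 = torch.randn(128, 64) + 0.5
loss_mmd = mmd2(h_cond0, h_cond1)  # added to the VAE objective with a weight
```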
3D Convolution Neural Networks (CNNs) have been widely applied to 3D scene
understanding, such as video analysis and volumetric image recognition.
However, 3D networks can easily lead to over-parameterization which incurs
expensive computation cost. In this paper, we propose Channel-wise Automatic
KErnel Shrinking (CAKES), to enable efficient 3D learning by shrinking standard
3D convolutions into a set of economical operations, e.g., 1D and 2D convolutions.
Unlike previous methods, CAKES performs channel-wise kernel shrinkage, which
enjoys the following benefits: 1) enabling operations deployed in every layer
to be heterogeneous, so that they can extract diverse and complementary
information to benefit the learning process; and 2) allowing for an efficient
and flexible replacement design, which can be generalized to both
spatial-temporal and volumetric data. Further, we propose a new search space
based on CAKES, so that the replacement configuration can be determined
automatically for simplifying 3D networks. CAKES shows superior performance to
other methods with similar model size, and it also achieves comparable
performance to state-of-the-art with much fewer parameters and computational
costs on tasks including 3D medical imaging segmentation and video action
recognition. Codes and models are available at
https://github.com/yucornetto/CAKES | [
"cs.CV"
]
|
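The channel-wise kernel shrinkage above can be illustrated by splitting the
channels of a standard 3D convolution into groups and giving each group a
cheaper kernel (temporal-only, spatial-only, or full 3D). The split and kernel
choices below are illustrative assumptions, not the configuration found by the
paper's search:

```python
import torch
import torch.nn as nn

class CAKESLikeConv(nn.Module):
    """Sketch of channel-wise kernel shrinking for a 3D convolution.

    Input channels are split into three groups, each processed with a
    different (cheaper) kernel shape; outputs are concatenated. The equal
    split and the particular kernel shapes are assumptions for illustration.
    """
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        ci, co = in_ch // 3, out_ch // 3
        self.temporal = nn.Conv3d(ci, co, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.spatial = nn.Conv3d(ci, co, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.full = nn.Conv3d(in_ch - 2 * ci, out_ch - 2 * co,
                              kernel_size=3, padding=1)

    def forward(self, x):                      # x: (B, C, T, H, W)
        c = x.shape[1] // 3
        y1 = self.temporal(x[:, :c])           # 1D temporal kernel
        y2 = self.spatial(x[:, c:2 * c])       # 2D spatial kernel
        y3 = self.full(x[:, 2 * c:])           # standard 3D kernel
        return torch.cat([y1, y2, y3], dim=1)  # heterogeneous ops in one layer

x = torch.randn(2, 24, 8, 32, 32)
y = CAKESLikeConv(24, 48)(x)   # -> (2, 48, 8, 32, 32), far fewer parameters
```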
This fourth and final volume describes the work envisioned for a project
presented in the preceding volume. It concerns a new approach to the coding of
still and moving pictures that aims to bridge the MPEG-4 and MPEG-7 standards.
The aim of this project is to define the principles of self-descriptive video
coding. In order to establish them, the document is organized into five
chapters that describe the techniques envisioned for developing such a new
approach to visual coding: - image segmentation, - computation of visual
descriptors, - computation of perceptual groupings, - building of visual
dictionaries, - picture and video coding. Based on multiresolution computing
techniques, it is proposed to develop an image segmentation into piecewise
regular components, to compute attributes on the frame and rendering of the
resulting shapes independently of the geometric transforms that may occur in
the image plane, and to gather them into perceptual groupings so as to enable
recognition of partially hidden patterns. Through vector quantization of the
shape frames and renderings, simple shapes can be viewed as a visual alphabet,
while complex shapes become words written in this alphabet and recorded in a
dictionary. With the help of a nearest-neighbour scan applied to the picture
shapes, the self-descriptive coding then generates a sentence made of words
written in the simple-shape alphabet. | [
"cs.CV",
"E.1; I.4; I.5; I.6"
]
|
With the development of 3D sensors, registration of 3D data (e.g., point
clouds) coming from different kinds of sensors is indispensable and in great
demand. However, point cloud registration between different sensors is
challenging because of variations in density, missing data, different
viewpoints, noise, outliers, and geometric transformations. In this paper, we
propose a method to learn a 3D descriptor for finding correspondences between
these challenging point clouds. To train the deep learning framework, we use
synthetic 3D point clouds as input. Starting from a synthetic dataset, we use
a region-based sampling method to select reasonable, large, and diverse
training samples from the synthetic samples. Then, we use data augmentation to
make our network robust to rotation transformations. We focus our work on the
more general case in which point clouds come from different sensors, named
cross-source point clouds. The experiments show that our descriptor
generalizes not only to new scenes but also to different sensors. The results
demonstrate that the proposed method successfully aligns two 3D cross-source
point clouds and outperforms the state-of-the-art method. | [
"cs.CV"
]
|
Attribute editing has achieved remarkable progress in recent years owing to
the encoder-decoder structure and generative adversarial network (GAN).
However, it remains challenging in generating high-quality images with accurate
attribute transformation. To address these problems, this work proposes a novel
selective attribute editing model based on a classification adversarial network
(referred to as ClsGAN) that strikes a good balance between attribute transfer
accuracy and photo-realism. Considering that edited images are prone to be
affected by the original attributes due to the skip-connections in the
encoder-decoder structure, an upper convolution residual network (referred to
as Tr-resnet) is presented to selectively extract information from the source
image and target label. In addition, to further improve the transfer accuracy
of generated images, an attribute adversarial classifier (referred to as
Atta-cls) is introduced to guide the generator from the perspective of
attribute through learning the defects of attribute transfer images.
Experimental results on CelebA demonstrate that our ClsGAN performs favorably
against state-of-the-art approaches in image quality and transfer accuracy.
Moreover, ablation studies are also designed to verify the effectiveness of
Tr-resnet and Atta-cls. | [
"cs.CV",
"eess.IV"
]
|
Modern neural network-based algorithms are able to produce highly accurate
depth estimates from stereo image pairs, nearly matching the reliability of
measurements from more expensive depth sensors. However, this accuracy comes
with a higher computational cost since these methods use network architectures
designed to compute and process matching scores across all candidate matches at
all locations, with floating point computations repeated across a match volume
with dimensions corresponding to both space and disparity. This leads to longer
running times to process each image pair, making them impractical for real-time
use in robots and autonomous vehicles. We propose a new stereo algorithm that
employs a significantly more efficient network architecture. Our method builds
an initial match cost volume using traditional matching costs that are fast to
compute, and trains a network to estimate disparity from this volume.
Crucially, our network only employs per-pixel and two-dimensional convolution
operations: to summarize the match information at each location as a
low-dimensional feature vector, and to spatially process these `cost-signature'
features to produce a dense disparity map. Experimental results on the KITTI
benchmark show that our method delivers competitive accuracy at significantly
higher speeds---running at 48 frames per second on a modern GPU. | [
"cs.CV",
"cs.RO"
]
|
In this paper, we propose a coarse-to-fine integration solution inspired by
the classical ICP algorithm, to pairwise 3D point cloud registration with two
improvements of hybrid metric spaces (e.g., BSC feature and Euclidean geometry
spaces) and globally optimal correspondences matching. First, we detect the
keypoints of point clouds and use the Binary Shape Context (BSC) descriptor to
encode their local features. Then, we formulate the correspondence matching
task as an energy function, which models the global similarity of keypoints on
the hybrid spaces of BSC feature and Euclidean geometry. Next, we estimate the
globally optimal correspondences through optimizing the energy function by the
Kuhn-Munkres algorithm and then calculate the transformation based on the
correspondences. Finally, we iteratively refine the transformation between two
point clouds by conducting optimal correspondences matching and transformation
calculation in a mutually reinforcing manner, to achieve the coarse-to-fine
registration under a unified framework. The proposed method is evaluated and
compared to several state-of-the-art methods on selected challenging datasets
with repetitive, symmetric and incomplete structures. Comprehensive experiments
demonstrate that the proposed IGSP algorithm obtains good performance and
outperforms the state-of-the-art methods in terms of both rotation and
translation errors. | [
"cs.CV"
]
|
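Two building blocks of the pipeline above, globally optimal matching via the
Kuhn-Munkres algorithm and the rigid transformation estimate, have compact
off-the-shelf forms. The sketch below runs SciPy's Hungarian solver on an
assumed cost matrix mixing feature and Euclidean distances, then recovers
rotation and translation with the standard SVD (Kabsch) solution:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_and_estimate(src_kp, dst_kp, feat_cost, alpha=0.5):
    """Globally optimal matching + rigid transform between keypoint sets.

    src_kp, dst_kp : (N, 3) keypoint coordinates
    feat_cost      : (N, N) descriptor distance matrix (e.g., BSC distances)
    alpha          : assumed weight mixing feature and Euclidean costs
    """
    geo_cost = np.linalg.norm(src_kp[:, None] - dst_kp[None, :], axis=-1)
    cost = alpha * feat_cost + (1 - alpha) * geo_cost
    rows, cols = linear_sum_assignment(cost)   # Kuhn-Munkres algorithm

    p, q = src_kp[rows], dst_kp[cols]
    pc, qc = p - p.mean(0), q - q.mean(0)      # center both point sets
    u, _, vt = np.linalg.svd(pc.T @ qc)        # Kabsch/SVD rigid fit
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against reflections
    R = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = q.mean(0) - R @ p.mean(0)
    return R, t

# Iterating matching and estimation (re-applying R, t to src_kp) gives the
# coarse-to-fine refinement loop described in the abstract.
```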
Real-time understanding in video is crucial in various AI applications such
as autonomous driving. This work presents a fast single-shot segmentation
strategy for video scene understanding. The proposed net, called S3-Net,
quickly locates and segments target sub-scenes while extracting structured
time-series semantic features as inputs to an LSTM-based spatio-temporal model.
Utilizing tensorization and quantization techniques, S3-Net is intended to be
lightweight for edge computing. Experiments using CityScapes, UCF11, HMDB51 and
MOMENTS datasets demonstrate that the proposed S3-Net achieves an accuracy
improvement of 8.1% versus the 3D-CNN based approach on UCF11, a storage
reduction of 6.9x and an inference speed of 22.8 FPS on CityScapes with a
GTX1080Ti GPU. | [
"cs.CV"
]
|
The incorporation of prior knowledge into learning is essential in achieving
good performance based on small noisy samples. Such knowledge is often
incorporated through the availability of related data arising from domains and
tasks similar to the one of current interest. Ideally one would like to allow
both the data for the current task and for previous related tasks to
self-organize the learning system in such a way that commonalities and
differences between the tasks are learned in a data-driven fashion. We develop
a framework for learning multiple tasks simultaneously, based on sharing
features that are common to all tasks, achieved through the use of a modular
deep feedforward neural network consisting of shared branches, dealing with the
common features of all tasks, and private branches, learning the specific
unique aspects of each task. Once an appropriate weight sharing architecture
has been established, learning takes place through standard algorithms for
feedforward networks, e.g., stochastic gradient descent and its variations. The
method deals with domain adaptation and multi-task learning in a unified
fashion, and can easily deal with data arising from different types of sources.
Numerical experiments demonstrate the effectiveness of learning in domain
adaptation and transfer learning setups, and provide evidence for the flexible
and task-oriented representations arising in the network. | [
"stat.ML",
"cs.LG"
]
|
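A minimal PyTorch sketch of the shared/private architecture described above,
with the dimensions and task count as illustrative assumptions: a shared
branch learns features common to all tasks, each task adds a private branch,
and the two are concatenated before a task-specific head.

```python
import torch
import torch.nn as nn

class SharedPrivateNet(nn.Module):
    """Sketch of a modular shared/private multi-task feedforward network.

    One shared branch captures commonalities across tasks; each task also has
    a private branch for its unique aspects. Dimensions are assumed values.
    """
    def __init__(self, in_dim=32, shared_dim=64, private_dim=16,
                 num_tasks=3, out_dim=1):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, shared_dim), nn.ReLU())
        self.private = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, private_dim), nn.ReLU())
            for _ in range(num_tasks))
        self.heads = nn.ModuleList(
            nn.Linear(shared_dim + private_dim, out_dim)
            for _ in range(num_tasks))

    def forward(self, x, task: int):
        h = torch.cat([self.shared(x), self.private[task](x)], dim=-1)
        return self.heads[task](h)

# Training alternates mini-batches across tasks with plain SGD: the shared
# branch receives gradients from every task, each private branch from its own.
net = SharedPrivateNet()
y = net(torch.randn(8, 32), task=1)
```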
Self-supervised learning and pre-training strategies have developed over the
last few years, especially for Convolutional Neural Networks (CNNs). Recently,
such methods have also been applied to Graph Neural Networks (GNNs). In this
paper, we use a graph-based self-supervised learning strategy with different
loss functions (Barlow Twins [Zbontar et al., 2021], HSIC [Tsai et al., 2021],
VICReg [Bardes et al., 2021]) that have previously shown promising results
when applied to CNNs. We also propose a hybrid loss function combining the
advantages of VICReg and HSIC, which we call VICRegHSIC. The performance of
these methods has been compared when applied to different datasets such as
MUTAG, PROTEINS and IMDB-Binary. Moreover, the impact of different batch
sizes, projector dimensions and data augmentation strategies is also
explored. | [
"cs.LG",
"cs.AI",
"cs.CG",
"cs.CV",
"stat.ML"
]
|
The problem of selecting the right state-representation in a reinforcement
learning problem is considered. Several models (functions mapping past
observations to a finite set) of the observations are given, and it is known
that for at least one of these models the resulting state dynamics are indeed
Markovian. Knowing neither which of the models is the correct one, nor the
probabilistic characteristics of the resulting MDP, it is required
to obtain as much reward as the optimal policy for the correct model (or for
the best of the correct models, if there are several). We propose an algorithm
that achieves that, with a regret of order T^{2/3} where T is the horizon time. | [
"cs.LG"
]
|
In this paper, we present a new feature representation for first-person
videos. In first-person video understanding (e.g., activity recognition), it is
very important to capture both entire scene dynamics (i.e., egomotion) and
salient local motion observed in videos. We describe a representation framework
based on time series pooling, which is designed to abstract
short-term/long-term changes in feature descriptor elements. The idea is to
keep track of how descriptor values are changing over time and summarize them
to represent motion in the activity video. The framework is general, handling
any type of per-frame feature descriptor, including conventional motion
descriptors like histogram of optical flows (HOF) as well as appearance
descriptors from more recent convolutional neural networks (CNN). We
experimentally confirm that our approach clearly outperforms previous feature
representations including bag-of-visual-words and improved Fisher vector (IFV)
when using identical underlying feature descriptors. We also confirm that our
feature representation has superior performance to existing state-of-the-art
features like local spatio-temporal features and Improved Trajectory Features
(originally developed for 3rd-person videos) when handling first-person videos.
Multiple first-person activity datasets were tested under various settings to
confirm these findings. | [
"cs.CV"
]
|
Image segmentation needs both local boundary position information and global
object context information. The performance of the recent state-of-the-art
method, fully convolutional networks, reaches a bottleneck due to the neural
network limit after balancing between the two types of information
simultaneously in an end-to-end training style. To overcome this problem, we
divide the semantic image segmentation into temporal subtasks. First, we find a
possible pixel position of some object boundary; then trace the boundary at
steps within a limited length until the whole object is outlined. We present
the first deep reinforcement learning approach to semantic image segmentation,
called DeepOutline, which outperforms other algorithms on the COCO detection
leaderboard in the medium and large size person categories on the COCO val2017
dataset. Meanwhile, it provides insight into a divide-and-conquer approach to
computer vision problems via reinforcement learning. | [
"cs.CV",
"cs.AI"
]
|
Given a composite image, image harmonization aims to adjust the foreground to
make it compatible with the background. High-resolution image harmonization is
in high demand, but still remains unexplored. Conventional image harmonization
methods learn global RGB-to-RGB transformation which could effortlessly scale
to high resolution, but ignore diverse local context. Recent deep learning
methods learn the dense pixel-to-pixel transformation which could generate
harmonious outputs, but are highly constrained in low resolution. In this work,
we propose a high-resolution image harmonization network with Collaborative
Dual Transformation (CDTNet) to combine pixel-to-pixel transformation and
RGB-to-RGB transformation coherently in an end-to-end framework. Our CDTNet
consists of a low-resolution generator for pixel-to-pixel transformation, a
color mapping module for RGB-to-RGB transformation, and a refinement module to
take advantage of both. Extensive experiments on a high-resolution image
harmonization dataset demonstrate that our CDTNet strikes a good balance
between efficiency and effectiveness. | [
"cs.CV"
]
|
Most successful computer vision models transform low-level features, such as
Gabor filter responses, into richer representations of intermediate or
mid-level complexity for downstream visual tasks. These mid-level
representations have not been explored for event cameras, although it is
especially relevant to the visually sparse and often disjoint spatial
information in the event stream. By making use of locally consistent
intermediate representations, termed superevents, numerous visual tasks
ranging from semantic segmentation and visual tracking to depth estimation
stand to benefit. In essence, superevents are perceptually consistent local units that
delineate parts of an object in a scene. Inspired by recent deep learning
architectures, we present a novel method that employs lifetime augmentation for
obtaining an event stream representation that is fed to a fully convolutional
network to extract superevents. Our qualitative and quantitative experimental
results on several sequences of a benchmark dataset highlight the significant
potential for event-based downstream applications. | [
"cs.CV",
"cs.AI"
]
|
The rapid evolution of Graph Neural Networks (GNNs) has led to a growing
number of new architectures as well as novel applications. However, current
research focuses on proposing and evaluating specific architectural designs of
GNNs, as opposed to studying the more general design space of GNNs that
consists of a Cartesian product of different design dimensions, such as the
number of layers or the type of the aggregation function. Additionally, GNN
designs are often specialized to a single task, yet few efforts have been made
to understand how to quickly find the best GNN design for a novel task or a
novel dataset. Here we define and systematically study the architectural design
space for GNNs which consists of 315,000 different designs over 32 different
predictive tasks. Our approach features three key innovations: (1) A general
GNN design space; (2) a GNN task space with a similarity metric, so that for a
given novel task/dataset, we can quickly identify/transfer the best performing
architecture; (3) an efficient and effective design space evaluation method
which allows insights to be distilled from a huge number of model-task
combinations. Our key results include: (1) A comprehensive set of guidelines
for designing well-performing GNNs; (2) while best GNN designs for different
tasks vary significantly, the GNN task space allows for transferring the best
designs across different tasks; (3) models discovered using our design space
achieve state-of-the-art performance. Overall, our work offers a principled and
scalable approach to transition from studying individual GNN designs for
specific tasks, to systematically studying the GNN design space and the task
space. Finally, we release GraphGym, a powerful platform for exploring
different GNN designs and tasks. GraphGym features modularized GNN
implementation, standardized GNN evaluation, and reproducible and scalable
experiment management. | [
"cs.LG",
"cs.AI",
"cs.SI"
]
|
In this paper, we propose an online Multi-Object Tracking (MOT) approach
which integrates the merits of single object tracking and data association
methods in a unified framework to handle noisy detections and frequent
interactions between targets. Specifically, for applying single object tracking
in MOT, we introduce a cost-sensitive tracking loss based on the
state-of-the-art visual tracker, which encourages the model to focus on hard
negative distractors during online learning. For data association, we propose
Dual Matching Attention Networks (DMAN) with both spatial and temporal
attention mechanisms. The spatial attention module generates dual attention
maps which enable the network to focus on the matching patterns of the input
image pair, while the temporal attention module adaptively allocates different
levels of attention to different samples in the tracklet to suppress noisy
observations. Experimental results on the MOT benchmark datasets show that the
proposed algorithm performs favorably against both online and offline trackers
in terms of identity-preserving metrics. | [
"cs.CV"
]
|
Nowadays, with the rapid development of data collection sources and feature
extraction methods, multi-view data are getting easy to obtain and have
received increasing research attention in recent years, among which, multi-view
clustering (MVC) forms a mainstream research direction and is widely used in
data analysis. However, existing MVC methods mainly assume that each sample
appears in all the views, without considering the incomplete view case due to
data corruption, sensor failure, equipment malfunction, etc. In this study, we
design and build a generative partial multi-view clustering model, named as
GP-MVC, to address the incomplete multi-view problem by explicitly generating
the data of missing views. The main idea of GP-MVC is two-fold. First,
multi-view encoder networks are trained to learn common low-dimensional
representations, followed by a clustering layer to capture the consistent
cluster structure across multiple views. Second, view-specific generative
adversarial networks are developed to generate the missing data of one view
conditioning on the shared representation given by other views. These two steps
could be promoted mutually, where learning common representations facilitates
data imputation and the generated data can further explore the view
consistency. Moreover, a weighted adaptive fusion scheme is implemented to
exploit the complementary information among different views. Experimental
results on four benchmark datasets are provided to show the effectiveness of
the proposed GP-MVC over the state-of-the-art methods. | [
"cs.CV"
]
|
Although neural networks can achieve very high predictive performance on
various different tasks such as image recognition or natural language
processing, they are often considered opaque "black boxes". The difficulty
of interpreting the predictions of a neural network often prevents its use in
fields where explainability is important, such as the financial industry where
regulators and auditors often insist on this aspect. In this paper, we present
a way to assess the relative input features importance of a neural network
based on the sensitivity of the model output with respect to its input. This
method has the advantage of being fast to compute, it can provide both global
and local levels of explanations and is applicable for many types of neural
network architectures. We illustrate the performance of this method on both
synthetic and real data and compare it with other interpretation techniques.
This method is implemented into an open-source Python package that allows its
users to easily generate and visualize explanations for their neural networks. | [
"stat.ML",
"cs.LG"
]
|
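At its core, the sensitivity measure described above is the gradient of the
model output with respect to each input feature. A minimal PyTorch sketch
(function names are ours; the paper's open-source package may differ): the
local importance is the per-sample gradient magnitude, and a global score
averages it over a dataset.

```python
import torch

def local_sensitivity(model, x):
    """Per-feature |d output / d input| for one batch (local explanation)."""
    x = x.clone().requires_grad_(True)
    out = model(x)
    out.sum().backward()       # scalar output assumed; else pick one logit
    return x.grad.abs()        # (batch, n_features)

def global_sensitivity(model, loader):
    """Average local sensitivities over a dataset (global explanation)."""
    total, count = 0.0, 0
    for xb, _ in loader:
        total = total + local_sensitivity(model, xb).sum(dim=0)
        count += xb.shape[0]
    return total / count       # (n_features,) relative importance scores

# Example with a toy regression model:
model = torch.nn.Sequential(torch.nn.Linear(5, 16), torch.nn.ReLU(),
                            torch.nn.Linear(16, 1))
print(local_sensitivity(model, torch.randn(4, 5)))
```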
Change point detection is an important part of time series analysis, as the
presence of a change point indicates an abrupt and significant change in the
data generating process. While many algorithms for change point detection
exist, little attention has been paid to evaluating their performance on
real-world time series. Algorithms are typically evaluated on simulated data
and a small number of commonly-used series with unreliable ground truth.
Clearly this does not provide sufficient insight into the comparative
performance of these algorithms. Therefore, instead of developing yet another
change point detection method, we consider it vastly more important to properly
evaluate existing algorithms on real-world data. To achieve this, we present
the first data set specifically designed for the evaluation of change point
detection algorithms, consisting of 37 time series from various domains. Each
time series was annotated by five expert human annotators to provide ground
truth on the presence and location of change points. We analyze the consistency
of the human annotators, and describe evaluation metrics that can be used to
measure algorithm performance in the presence of multiple ground truth
annotations. Subsequently, we present a benchmark study where 14 existing
algorithms are evaluated on each of the time series in the data set. This study
shows that binary segmentation (Scott and Knott, 1974) and Bayesian online
change point detection (Adams and MacKay, 2007) are among the best performing
methods. Our aim is that this data set will serve as a proving ground in the
development of novel change point detection algorithms. | [
"stat.ML",
"cs.LG",
"stat.ME",
"62M10",
"G.3"
]
|
Synthesizing high-quality images from text descriptions is a challenging
problem in computer vision and has many practical applications. Samples
generated by existing text-to-image approaches can roughly reflect the meaning
of the given descriptions, but they fail to contain necessary details and vivid
object parts. In this paper, we propose Stacked Generative Adversarial Networks
(StackGAN) to generate 256x256 photo-realistic images conditioned on text
descriptions. We decompose the hard problem into more manageable sub-problems
through a sketch-refinement process. The Stage-I GAN sketches the primitive
shape and colors of the object based on the given text description, yielding
Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text
descriptions as inputs, and generates high-resolution images with
photo-realistic details. It is able to rectify defects in Stage-I results and
add compelling details with the refinement process. To improve the diversity of
the synthesized images and stabilize the training of the conditional-GAN, we
introduce a novel Conditioning Augmentation technique that encourages
smoothness in the latent conditioning manifold. Extensive experiments and
comparisons with state-of-the-art methods on benchmark datasets demonstrate that the
proposed method achieves significant improvements on generating photo-realistic
images conditioned on text descriptions. | [
"cs.CV",
"cs.AI",
"stat.ML"
]
|
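The Conditioning Augmentation technique can be summarized in a few lines:
predict a mean and diagonal covariance from the text embedding, resample the
latent conditioning variable with the reparameterization trick, and regularize
with a KL term toward the standard normal to encourage smoothness. A minimal
PyTorch sketch with assumed dimensions:

```python
import torch
import torch.nn as nn

class ConditioningAugmentation(nn.Module):
    """Sketch of StackGAN-style Conditioning Augmentation.

    Predicts (mu, log sigma^2) from a text embedding, samples
    c ~ N(mu, sigma^2) via the reparameterization trick, and returns a KL
    term to N(0, I). Embedding/condition sizes are illustrative assumptions.
    """
    def __init__(self, text_dim=1024, cond_dim=128):
        super().__init__()
        self.fc = nn.Linear(text_dim, 2 * cond_dim)

    def forward(self, text_emb):
        mu, logvar = self.fc(text_emb).chunk(2, dim=-1)
        c = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
        return c, kl  # c conditions the generator; kl is added to the G loss

ca = ConditioningAugmentation()
c, kl = ca(torch.randn(16, 1024))
```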
Even after over two decades, the total variation (TV) remains one of the most
popular regularizations for image processing problems and has sparked a
tremendous amount of research, particularly to move from scalar to
vector-valued functions. In this paper, we consider the gradient of a color
image as a three dimensional matrix or tensor with dimensions corresponding to
the spatial extent, the differences to other pixels, and the spectral channels.
The smoothness of this tensor is then measured by taking different norms along
the different dimensions. Depending on the type of these norms one obtains very
different properties of the regularization, leading to novel models for color
images. We call this class of regularizations collaborative total variation
(CTV). On the theoretical side, we characterize the dual norm, the
subdifferential and the proximal mapping of the proposed regularizers. We
further prove, with the help of the generalized concept of singular vectors,
that an $\ell^{\infty}$ channel coupling makes the most prior assumptions and
has the greatest potential to reduce color artifacts. Our practical
contributions consist of an extensive experimental section where we compare the
performance of a large number of collaborative TV methods for inverse problems
like denoising, deblurring and inpainting. | [
"cs.CV",
"math.HO",
"math.NA",
"math.OC",
"15A60, 65F22, 65K10, 68U10, 90C25, 90C46, 94A08"
]
|
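The nested-norm construction above can be made concrete. For a gradient tensor
$A \in \mathbb{R}^{N \times d \times C}$ (pixels, derivative directions, color
channels), one generic member of the collaborative TV family takes an
$\ell^p$ norm along one dimension, then $\ell^q$, then $\ell^r$; the ordering
of the dimensions below is an illustrative choice:

```latex
\|A\|_{p,q,r} = \Biggl( \sum_{i=1}^{N} \Biggl( \sum_{j=1}^{d}
  \Bigl( \sum_{k=1}^{C} |A_{i,j,k}|^{p} \Bigr)^{q/p} \Biggr)^{r/q} \Biggr)^{1/r}
```

Taking $\ell^2$ over channels and derivative directions with $\ell^1$ over
pixels recovers standard vectorial TV, while replacing the channel norm by
$\ell^\infty$ yields the coupling the abstract singles out as most
artifact-reducing.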
The Graph Convolutional Network (GCN) model and its variants are powerful
graph embedding tools for facilitating classification and clustering on graphs.
However, a major challenge is to reduce the complexity of layered GCNs and make
them parallelizable and scalable on very large graphs -- state-of-the-art
techniques are unable to achieve scalability without losing accuracy and
efficiency. In this paper, we propose novel parallelization techniques for
graph sampling-based GCNs that achieve superior scalable performance on very
large graphs without compromising accuracy. Specifically, our GCN guarantees
work-efficient training and produces order of magnitude savings in computation
and communication. To scale GCN training on tightly-coupled shared memory
systems, we develop parallelization strategies for the key steps in training:
For the graph sampling step, we exploit parallelism within and across multiple
sampling instances, and devise an efficient data structure for concurrent
accesses that provides theoretical guarantee of near-linear speedup with number
of processing units. For the feature propagation step within the sampled graph,
we improve cache utilization and reduce DRAM communication by data
partitioning. We prove that our partitioning strategy is a 2-approximation for
minimizing the communication time compared to the optimal strategy. We
demonstrate that our parallel graph embedding outperforms state-of-the-art
methods in scalability (with respect to number of processors, graph size and
GCN model size), efficiency and accuracy on several large datasets. On a
40-core Xeon platform, our parallel training achieves $64\times$ speedup (with
AVX) in the sampling step and $25\times$ speedup in the feature propagation
step, compared to the serial implementation, resulting in a net speedup of
$21\times$. | [
"cs.LG",
"cs.PF",
"stat.ML"
]
|
We explore the task of Video Object Grounding (VOG), which grounds objects in
videos referred to in natural language descriptions. Previous methods apply
image grounding based algorithms to address VOG; they fail to explore object
relation information and suffer from limited generalization. Here, we
investigate the role of object relations in VOG and propose a novel framework
VOGNet to encode multi-modal object relations via self-attention with relative
position encoding. To evaluate VOGNet, we propose novel contrasting sampling
methods to generate more challenging grounding input samples, and construct a
new dataset called ActivityNet-SRL (ASRL) based on existing caption and
grounding datasets. Experiments on ASRL validate the need of encoding object
relations in VOG, and our VOGNet outperforms competitive baselines by a
significant margin. | [
"cs.CV",
"cs.CL"
]
|
Extending the capabilities of robotics to real-world complex, unstructured
environments requires the need of developing better perception systems while
maintaining low sample complexity. When dealing with high-dimensional state
spaces, current methods are either model-free or model-based with
reconstruction objectives. The sample inefficiency of the former constitutes a
major barrier to applying them in the real world. The latter, while sample
efficient, learn latent spaces that need to reconstruct every single detail of
the scene. In real environments, the task
typically just represents a small fraction of the scene. Reconstruction
objectives suffer in such scenarios as they capture all the unnecessary
components. In this work, we present MIRO, an information theoretic
representational learning algorithm for model-based reinforcement learning. We
design a latent space that maximizes the mutual information with the future
while capturing all the information needed for
planning. We show that our approach is more robust than reconstruction
objectives in the presence of distractors and cluttered scenes. | [
"cs.CV",
"cs.AI",
"stat.ML"
]
|
Recently, some works found an interesting phenomenon that adversarially
robust classifiers can generate good images comparable to generative models. We
investigate this phenomenon from an energy perspective and provide a novel
explanation. We reformulate adversarial example generation, adversarial
training, and image generation in terms of an energy function. We find that
adversarial training contributes to obtaining an energy function that is flat
and has low energy around the real data, which is the key for generative
capability. Based on our new understanding, we further propose a better
adversarial training method, Joint Energy Adversarial Training (JEAT), which
can generate high-quality images and achieve new state-of-the-art robustness
under a wide range of attacks. The Inception Score of the images (CIFAR-10)
generated by JEAT is 8.80, much better than original robust classifiers (7.50). | [
"cs.LG",
"cs.CV"
]
|
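The energy reformulation above follows the JEM line of work, in which a
classifier's logits define an energy over inputs; whether JEAT uses exactly
this form is our assumption, but it is the standard one:

```python
import torch

def energy(logits: torch.Tensor) -> torch.Tensor:
    """JEM-style energy from classifier logits f(x):
    E(x) = -logsumexp_y f(x)[y]; low energy means high unnormalized density.
    """
    return -torch.logsumexp(logits, dim=1)

# Per the abstract, adversarial training flattens E and lowers it around real
# data, which is what endows robust classifiers with generative ability.
```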
Spatio-temporal forecasting is an open research field attracting rapidly
growing interest. In this work we focus on creating a complex deep neural
framework for spatio-temporal traffic forecasting that achieves comparatively
strong performance, proves adaptable across several spatio-temporal
conditions, and remains easy to understand and interpret. Our proposal is
based on an interpretable attention-based neural network in which several
modules are combined in order to capture key spatio-temporal time series
components. Through extensive experimentation, we show how the results of our
approach are stable and better than those of other state-of-the-art
alternatives. | [
"cs.LG",
"eess.SP",
"stat.ML"
]
|
Imitation learning is an effective approach for autonomous systems to acquire
control policies when an explicit reward function is unavailable, using
supervision provided as demonstrations from an expert, typically a human
operator. However, standard imitation learning methods assume that the agent
receives examples of observation-action tuples that could be provided, for
instance, to a supervised learning algorithm. This stands in contrast to how
humans and animals imitate: we observe another person performing some behavior
and then figure out which actions will realize that behavior, compensating for
changes in viewpoint, surroundings, object positions and types, and other
factors. We term this kind of imitation learning "imitation-from-observation,"
and propose an imitation learning method based on video prediction with context
translation and deep reinforcement learning. This lifts the assumption in
imitation learning that the demonstration should consist of observations in the
same environment configuration, and enables a variety of interesting
applications, including learning robotic skills that involve tool use simply by
observing videos of human tool use. Our experimental results show the
effectiveness of our approach in learning a wide range of real-world robotic
tasks modeled after common household chores from videos of a human
demonstrator, including sweeping, ladling almonds, pushing objects as well as a
number of tasks in simulation. | [
"cs.LG",
"cs.AI",
"cs.CV",
"cs.NE",
"cs.RO"
]
|
Lie detection is considered a concern for everyone in their day to day life
given its impact on human interactions. Thus, people normally pay attention to
both what their interlocutors are saying and also to their visual appearances,
including faces, to try to find any signs that indicate whether the person is
telling the truth or not. While automatic lie detection may help us understand
these lying behaviors, current systems are still fairly
limited, partly due to lack of adequate datasets to evaluate their performance
in realistic scenarios. In this work, we have collected an annotated dataset of
facial images, comprising both 2D and 3D information of several participants
during a card game that encourages players to lie. Using our collected dataset,
we evaluated several types of machine learning-based lie detectors in
generalization, person-specific, and cross-domain experiments. Our results
show that models based on deep learning achieve the best accuracy, reaching up
to 57\% for the generalization task and 63\% when dealing with a single
participant. Finally, we also highlight the limitations of deep learning-based
lie detectors when dealing with cross-domain lie detection tasks. | [
"cs.CV",
"cs.HC"
]
|
In this work we consider the task of detecting sheep onboard an unmanned
aerial vehicle (UAV) flying at an altitude of 80 m. At this height, the sheep
are relatively small, only about 15 pixels across. Although deep learning
strategies have gained enormous popularity in the last decade and are now
extensively used for object detection in many fields, state-of-the-art
detectors perform poorly in the case of smaller objects. We develop a novel
dataset of UAV imagery of sheep and consider a variety of object detectors to
determine which is the most suitable for our task in terms of both accuracy and
speed. Our findings indicate that a UNet detector using the weighted Hausdorff
distance as a loss function during training is an excellent option for
detection of sheep onboard a UAV. | [
"cs.CV",
"cs.LG",
"eess.IV",
"stat.ML"
]
|
We represent the sequence of fMRI (Functional Magnetic Resonance Imaging)
brain volumes recorded during a cognitive stimulus by a graph which consists of
a set of local meshes. The corresponding cognitive process, encoded in the
brain, is then represented by these meshes each of which is estimated assuming
a linear relationship among the voxel time series in a predefined locality.
First, we define the concept of locality in two neighborhood systems, namely,
the spatial and functional neighborhoods. Then, we construct spatially and
functionally local meshes around each voxel, called seed voxel, by connecting
it either to its spatial or functional p-nearest neighbors. The mesh formed
around a voxel is a directed sub-graph with a star topology, where the
direction of the edges is taken towards the seed voxel at the center of the
mesh. We represent the time series recorded at each seed voxel as a linear
combination of the time series of its p-nearest neighbors in the mesh.
The relationships between a seed voxel and its neighbors are represented by the
edge weights of each mesh, and are estimated by solving a linear regression
equation. The estimated mesh edge weights lead to a better representation of
information in the brain for encoding and decoding of the cognitive tasks. We
test our model on visual object recognition and emotional memory retrieval
experiments using Support Vector Machines that are trained using the mesh edge
weights as features. In the experimental analysis, we observe that the edge
weights of the spatial and functional meshes perform better than the
state-of-the-art brain decoding models. | [
"cs.LG",
"cs.AI",
"cs.CV"
]
|
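The mesh edge weights above come from an ordinary least-squares fit: the seed
voxel's time series is regressed on the time series of its p nearest (spatial
or functional) neighbors. A minimal NumPy sketch; the neighbor selection and
any regularization are assumed details:

```python
import numpy as np

def mesh_edge_weights(seed_ts, neighbor_ts):
    """Estimate the edge weights of one local mesh by linear regression.

    seed_ts     : (T,)   time series of the seed voxel
    neighbor_ts : (T, p) time series of its p nearest neighbors, one column
                  per neighbor
    Returns the p weights a minimizing ||seed_ts - neighbor_ts @ a||^2.
    """
    weights, *_ = np.linalg.lstsq(neighbor_ts, seed_ts, rcond=None)
    return weights

# Toy example: T = 100 scans, p = 6 neighbors
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 6))
y = X @ np.array([0.5, 0.2, 0.0, -0.3, 0.1, 0.4]) + 0.01 * rng.standard_normal(100)
w = mesh_edge_weights(y, X)   # these weights become the SVM features
```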
Making predictions and quantifying their uncertainty when the input data is
sequential is a fundamental learning challenge, recently attracting increasing
attention. We develop SigGPDE, a new scalable sparse variational inference
framework for Gaussian Processes (GPs) on sequential data. Our contribution is
twofold. First, we construct inducing variables underpinning the sparse
approximation so that the resulting evidence lower bound (ELBO) does not
require any matrix inversion. Second, we show that the gradients of the GP
signature kernel are solutions of a hyperbolic partial differential equation
(PDE). This theoretical insight allows us to build an efficient
back-propagation algorithm to optimize the ELBO. We showcase the significant
computational gains of SigGPDE compared to existing methods, while achieving
state-of-the-art performance for classification tasks on large datasets of up
to 1 million multivariate time series. | [
"stat.ML",
"cs.LG",
"60L10 (Primary) 60L20 (Secondary)"
]
|
A key step towards understanding human behavior is the prediction of 3D human
motion. Successful solutions have many applications in human tracking, HCI, and
graphics. Most previous work focuses on predicting a time series of future 3D
joint locations given a sequence of 3D joints from the past. This Euclidean
formulation generally works better than predicting pose in terms of joint
rotations. Body joint locations, however, do not fully constrain 3D human pose,
leaving degrees of freedom undefined, making it hard to animate a realistic
human from only the joints. Note that the 3D joints can be viewed as a sparse
point cloud. Thus the problem of human motion prediction can be seen as point
cloud prediction. With this observation, we instead predict a sparse set of
locations on the body surface that correspond to motion capture markers. Given
such markers, we fit a parametric body model to recover the 3D shape and pose
of the person. These sparse surface markers also carry detailed information
about human movement that is not present in the joints, increasing the
naturalness of the predicted motions. Using the AMASS dataset, we train MOJO,
which is a novel variational autoencoder that generates motions from latent
frequencies. MOJO preserves the full temporal resolution of the input motion,
and sampling from the latent frequencies explicitly introduces high-frequency
components into the generated motion. We note that motion prediction methods
accumulate errors over time, resulting in joints or markers that diverge from
true human bodies. To address this, we fit SMPL-X to the predictions at each
time step, projecting the solution back onto the space of valid bodies. These
valid markers are then propagated in time. Experiments show that our method
produces state-of-the-art results and realistic 3D body animations. The code
for research purposes is at https://yz-cnsdqz.github.io/MOJO/MOJO.html | [
"cs.CV"
]
|
Driver attention prediction is currently becoming a focus of the safe driving
research community, as seen in the DR(eye)VE project and the newly emerged Berkeley
DeepDrive Attention (BDD-A) database in critical situations. In safe driving,
an essential task is to predict the incoming accidents as early as possible.
BDD-A was aware of this problem and collected driver attention in the
laboratory because of the rarity of such scenes. Nevertheless, BDD-A focuses
on critical situations that do not involve actual accidents, and addresses
only the driver attention prediction task, without a further step toward
accident prediction. In contrast, we explore the view of drivers' eyes for
capturing multiple kinds of accidents, and construct a more diverse and larger
video benchmark than ever before with the driver attention and the driving
accident annotation simultaneously (named DADA-2000), which has 2000 video
clips containing about 658,476 frames covering 54 kinds of accidents. These
clips are crowd-sourced and captured on various occasions (highway, urban, rural, and
tunnel), weather (sunny, rainy and snowy) and light conditions (daytime and
nighttime). For the driver attention representation, we collect the maps of
fixations, saccade scan path and focusing time. The accidents are annotated by
their categories, the accident window in clips and spatial locations of the
crash-objects. Based on the analysis, we obtain a quantitative and positive
answer to the question in this paper. | [
"cs.CV",
"cs.AI"
]
|
In this paper, we derive a generalization of the Speedy Q-learning (SQL)
algorithm that was proposed in the Reinforcement Learning (RL) literature to
handle slow convergence of Watkins' Q-learning. In most RL algorithms such as
Q-learning, the Bellman equation and the Bellman operator play an important
role. It is possible to generalize the Bellman operator using the technique of
successive relaxation. We use the generalized Bellman operator to derive a
simple and efficient family of algorithms called Generalized Speedy Q-learning
(GSQL-w) and analyze its finite time performance. We show that GSQL-w has an
improved finite time performance bound compared to SQL for the case when the
relaxation parameter w is greater than 1. This improvement is a consequence of
the contraction factor of the generalized Bellman operator being less than that
of the standard Bellman operator. Numerical experiments are provided to
demonstrate the empirical performance of the GSQL-w algorithm. | [
"cs.LG",
"cs.AI"
]
|
We propose a novel deep learning method for shadow removal. Inspired by
physical models of shadow formation, we use a linear illumination
transformation to model the shadow effects in the image that allows the shadow
image to be expressed as a combination of the shadow-free image, the shadow
parameters, and a matte layer. We use two deep networks, namely SP-Net and
M-Net, to predict the shadow parameters and the shadow matte respectively. This
system allows us to remove the shadow effects from images. We then employ an
inpainting network, I-Net, to further refine the results. We train and test our
framework on the most challenging shadow removal dataset (ISTD). Our method
improves the state-of-the-art in terms of root mean square error (RMSE) for the
shadow area by 20\%. Furthermore, this decomposition allows us to formulate a
patch-based weakly-supervised shadow removal method. This model can be trained
without any shadow-free images (that are cumbersome to acquire) and achieves
competitive shadow removal results compared to state-of-the-art methods that
are trained with fully paired shadow and shadow-free images. Last, we introduce
SBU-Timelapse, a video shadow removal dataset for evaluating shadow removal
methods. | [
"cs.CV"
]
|
With the advent of agriculture 3.0 and 4.0, researchers are increasingly
focusing on the development of innovative smart farming and precision
agriculture technologies by introducing automation and robotics into the
agricultural processes. Autonomous agricultural field machines have been
gaining significant attention from farmers and industries to reduce costs,
human workload, and required resources. Nevertheless, achieving sufficient
autonomous navigation capabilities requires the simultaneous cooperation of
different processes; localization, mapping, and path planning are just some of
the steps that aim at providing to the machine the right set of skills to
operate in semi-structured and unstructured environments. In this context, this
study presents a low-cost local motion planner for autonomous navigation in
vineyards based only on an RGB-D camera, low range hardware, and a dual layer
control algorithm. The first algorithm exploits the disparity map and its depth
representation to generate a proportional control for the robotic platform.
Concurrently, a second back-up algorithm, based on representations learning and
resilient to illumination variations, can take control of the machine in case
of a momentary failure of the first block. Moreover, due to the dual
nature of the system, after initial training of the deep learning model with an
initial dataset, the strict synergy between the two algorithms opens the
possibility of exploiting new automatically labeled data, coming from the
field, to extend the existing model knowledge. The machine learning algorithm
has been trained and tested, using transfer learning, with acquired images
during different field surveys in the North region of Italy and then optimized
for on-device inference with model pruning and quantization. Finally, the
overall system has been validated with a customized robot platform in the
relevant environment. | [
"cs.LG",
"cs.CV",
"stat.ML"
]
|
In this paper, we propose an end-to-end deep learning network named
3dDepthNet, which produces an accurate dense depth image from a single pair of
sparse LiDAR depth and color image for robotics and autonomous driving tasks.
Based on the dimensional nature of depth images, our network offers a novel
3D-to-2D coarse-to-fine dual densification design that is both accurate and
lightweight. Depth densification is first performed in 3D space via point cloud
completion, followed by a specially designed encoder-decoder structure that
utilizes the projected dense depth from 3D completion and the original RGB-D
images to perform 2D image completion. Experiments on the KITTI dataset show
our network achieves state-of-the-art accuracy while being more efficient. Ablation
and generalization tests prove that each module in our network has positive
influences on the final results, and furthermore, our network is resilient to
even sparser depth. | [
"cs.CV"
]
|
Inspired by the classic Sauvola local image thresholding approach, we
systematically study it from the deep neural network (DNN) perspective and
propose a new solution called SauvolaNet for degraded document binarization
(DDB). It is composed of three explainable modules, namely, Multi-Window
Sauvola (MWS), Pixelwise Window Attention (PWA), and Adaptive Sauvola Threshold
(AST). The MWS module honestly reflects the classic Sauvola but with trainable
parameters and multi-window settings. The PWA module estimates the preferred
window sizes for each pixel location. The AST module further consolidates the
outputs from MWS and PWA and predicts the final adaptive threshold for each
pixel location. As a result, SauvolaNet becomes end-to-end trainable and
significantly reduces the number of required network parameters to 40K -- it is
only 1\% of MobileNetV2. In the meantime, it achieves the State-of-The-Art
(SoTA) performance for the DDB task -- SauvolaNet is at least comparable to, if
not better than, SoTA binarization solutions in our extensive studies on the 13
public document binarization datasets. Our source code is available at
https://github.com/Leedeng/SauvolaNet. | [
"cs.CV"
]
|
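For reference, the classic Sauvola rule that SauvolaNet makes trainable
thresholds each pixel at T = m * (1 + k * (s / R - 1)), where m and s are the
local mean and standard deviation in a w x w window, k is a sensitivity
parameter, and R is the dynamic range of s (128 for 8-bit images). A minimal
NumPy/SciPy sketch of the non-learned version; the window size and k are
exactly the quantities the PWA and MWS modules adapt:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_threshold(gray, window=25, k=0.2, R=128.0):
    """Classic Sauvola local threshold: T = m * (1 + k * (s / R - 1)).

    gray   : 2D float array of pixel intensities in [0, 255]
    window : local window size (fixed here, per-pixel in SauvolaNet's PWA)
    k      : sensitivity parameter (trainable in SauvolaNet's MWS)
    R      : dynamic range of the standard deviation (128 for 8-bit images)
    """
    m = uniform_filter(gray, window)            # local mean
    m2 = uniform_filter(gray * gray, window)    # local mean of squares
    s = np.sqrt(np.clip(m2 - m * m, 0, None))   # local standard deviation
    return m * (1.0 + k * (s / R - 1.0))

def binarize(gray, **kwargs):
    return (gray > sauvola_threshold(gray, **kwargs)).astype(np.uint8) * 255
```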
In this work, we introduce the concept of bandlimiting into the theory of
machine learning because all physical processes are bandlimited by nature,
including real-world machine learning tasks. After the bandlimiting constraint
is taken into account, our theoretical analysis has shown that all practical
machine learning tasks are asymptotically solvable in a perfect sense.
Furthermore, the key towards this solvability almost solely relies on two
factors: i) a sufficiently large amount of training samples beyond a threshold
determined by a difficulty measurement of the underlying task; ii) a
sufficiently complex and bandlimited model. Moreover, for some special cases,
we have derived new error bounds for perfect learning, which can quantify the
difficulty of learning. These generalization bounds are not only asymptotically
convergent but also independent of model complexity. Our new results on
generalization have provided a new perspective to explain the recent successes
of large-scale supervised learning using complex models like neural networks. | [
"cs.LG",
"cs.AI",
"stat.ML"
]
|
Object detection is a fundamental task in computer vision. While approaches
for axis-aligned bounding box detection have made substantial progress in
recent years, they perform poorly on oriented objects which are common in
several real-world scenarios such as aerial view imagery and security camera
footage. In these cases, a large part of a predicted bounding box will,
undesirably, cover non-object related areas. Therefore, oriented object
detection has emerged with the aim of generalizing object detection to
arbitrary orientations. This enables a tighter fit to oriented objects, leading
to a better separation of bounding boxes especially in case of dense object
distributions. The vast majority of the work in this area has focused on
complex two-stage anchor-based approaches. Anchors act as priors on the
bounding box shape and require attentive hyper-parameter fine-tuning on a
per-dataset basis, increase model size, and add computational overhead.
In this work, we present DAFNe: A Dense one-stage Anchor-Free deep Network for
oriented object detection. As a one-stage model, DAFNe performs predictions on
a dense grid over the input image, being architecturally simpler and faster, as
well as easier to optimize than its two-stage counterparts. Furthermore, as an
anchor-free model, DAFNe reduces the prediction complexity by refraining from
employing bounding box anchors. Moreover, we introduce an orientation-aware
generalization of the center-ness function for arbitrarily oriented bounding
boxes to down-weight low-quality predictions and a center-to-corner bounding
box prediction strategy that improves object localization performance. DAFNe
improves the prediction accuracy over the previous best one-stage anchor-free
model on DOTA 1.0 by 4.65% mAP, setting a new state-of-the-art of 76.95% mAP. | [
"cs.CV",
"cs.AI",
"cs.LG"
]
|
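The center-ness function that DAFNe generalizes originates in FCOS, where it
down-weights predictions far from a box center. A minimal sketch of the
standard axis-aligned version follows; the paper's orientation-aware
generalization is not reproduced here:

```python
# Minimal sketch of the axis-aligned FCOS center-ness that DAFNe
# generalizes to oriented boxes; the oriented variant itself is not
# reproduced here.
import torch

def centerness(ltrb):
    """ltrb: (N, 4) distances (left, top, right, bottom) to box edges."""
    l, t, r, b = ltrb.unbind(dim=-1)
    lr = torch.min(l, r) / torch.max(l, r).clamp(min=1e-6)
    tb = torch.min(t, b) / torch.max(t, b).clamp(min=1e-6)
    return torch.sqrt(lr * tb)   # in [0, 1], equal to 1 at the box center
```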
In Hindsight Experience Replay (HER), a reinforcement learning agent is
trained by treating whatever it has achieved as virtual goals. However, in
previous work, the experience was replayed at random, without considering which
episode might be the most valuable for learning. In this paper, we develop an
energy-based framework for prioritizing hindsight experience in robotic
manipulation tasks. Our approach is inspired by the work-energy principle in
physics. We define a trajectory energy function as the sum of the transition
energy of the target object over the trajectory. We hypothesize that replaying
episodes that have high trajectory energy is more effective for reinforcement
learning in robotics. To verify our hypothesis, we design a framework for
hindsight experience prioritization based on the trajectory energy of goal
states. The trajectory energy function takes the potential, kinetic, and
rotational energy into consideration. We evaluate our Energy-Based
Prioritization (EBP) approach on four challenging robotic manipulation tasks in
simulation. Our empirical results show that our proposed method surpasses
state-of-the-art approaches in terms of both performance and sample-efficiency
on all four tasks, without increasing computational time. A video showing
experimental results is available at https://youtu.be/jtsF2tTeUGQ | [
"cs.LG",
"cs.AI",
"stat.ML"
]
|
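As an illustration of the work-energy idea in the abstract above, a trajectory
energy can be accumulated from per-transition potential, kinetic, and
rotational terms. The sketch below uses hypothetical state fields (`pos`,
`rot`) and placeholder mass and inertia values; it is not the paper's exact
definition:

```python
# Minimal sketch of a trajectory energy in the spirit of EBP: sum over
# transitions of potential + kinetic + rotational energy of the target
# object. State fields, mass, inertia, and dt are hypothetical.
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def transition_energy(s0, s1, mass=1.0, inertia=1.0, dt=0.04):
    v = (s1["pos"] - s0["pos"]) / dt           # linear velocity estimate
    w = (s1["rot"] - s0["rot"]) / dt           # angular velocity estimate
    potential = mass * G * (s1["pos"][2] - s0["pos"][2])
    kinetic = 0.5 * mass * float(v @ v)
    rotational = 0.5 * inertia * float(w @ w)
    return potential + kinetic + rotational

def trajectory_energy(states, **kwargs):
    return sum(transition_energy(a, b, **kwargs)
               for a, b in zip(states[:-1], states[1:]))
```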
Data privacy is an increasingly important aspect of many real-world
applications. Data sources that contain sensitive information may have immense
potential which
could be unlocked using the right privacy enhancing transformations, but
current methods often fail to produce convincing output. Furthermore, finding
the right balance between privacy and utility is often a tricky trade-off. In
this work, we propose a novel approach for data privatization, which involves
two steps: in the first step, it removes the sensitive information, and in the
second step, it replaces this information with an independent random sample.
Our method builds on adversarial representation learning which ensures strong
privacy by training the model to fool an increasingly strong adversary. While
previous methods only aim at obfuscating the sensitive information, we find
that adding new random information in its place strengthens the provided
privacy and provides better utility at any given level of privacy. The result
is an approach that can provide stronger privatization of image data while
preserving both the domain and the utility of the inputs, entirely independent
of the downstream task. | [
"cs.LG",
"cs.CR",
"stat.ML"
]
|
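A compact, hedged sketch of the two-step idea above: an encoder is trained
adversarially so that a discriminator cannot recover the sensitive attribute
from the representation, and the stripped attribute is replaced by an
independent random sample before decoding. All modules, dimensions, and losses
are illustrative placeholders, not the paper's architecture:

```python
# Minimal sketch of two-step privatization via adversarial representation
# learning. All modules, dimensions, and losses are placeholders.
import torch
import torch.nn as nn

enc, dec = nn.Linear(64, 32), nn.Linear(33, 64)      # toy encoder/decoder
adv = nn.Sequential(nn.Linear(32, 1), nn.Sigmoid())  # predicts sensitive bit
bce, mse = nn.BCELoss(), nn.MSELoss()
opt_model = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()))
opt_adv = torch.optim.Adam(adv.parameters())

def train_step(x, s):            # x: (B, 64) data, s: (B, 1) sensitive bit
    z = enc(x)
    # 1) adversary learns to recover s from the representation
    opt_adv.zero_grad()
    bce(adv(z.detach()), s).backward()
    opt_adv.step()
    # 2) encoder fools the adversary (flipped target); decoder reconstructs
    #    from z plus an independent random replacement of the attribute
    opt_model.zero_grad()
    s_rand = torch.randint(0, 2, s.shape).float()
    x_hat = dec(torch.cat([z, s_rand], dim=1))
    loss = mse(x_hat, x) + bce(adv(z), 1 - s)
    loss.backward()
    opt_model.step()
    return loss.item()
```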
Logic optimization is an NP-hard problem commonly approached through
hand-engineered heuristics. We propose to combine graph convolutional networks
with reinforcement learning and a novel, scalable node embedding method to
learn which local transforms should be applied to the logic graph. We show that
this method achieves a similar size reduction as ABC on smaller circuits and
outperforms it by 1.5-1.75x on larger random graphs. | [
"cs.LG"
]
|
Many approaches to 3D image segmentation are based on hierarchical clustering
of supervoxels into image regions. Here we describe a distributed algorithm
capable of handling a tremendous number of supervoxels. The algorithm works
recursively: the regions are divided into chunks that are processed
independently in parallel by multiple workers. At each round of the recursive
procedure, the chunk size in each dimension is doubled until a single chunk
encompasses the entire image. The final result is provably independent of the
chunking scheme, and the same as if the entire image were processed without
division into chunks. This is nontrivial because a pair of adjacent regions is
scored by some statistical property (e.g. mean or median) of the affinities at
the interface, and the interface may extend over arbitrarily many chunks. The
trick is to delay merge decisions for regions that touch chunk boundaries, and
only complete them in a later round after the regions are fully contained
within a chunk. We demonstrate the algorithm by clustering an affinity graph
with over 1.5 trillion edges between 135 billion supervoxels derived from a 3D
electron microscopic brain image. | [
"cs.CV"
]
|
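The delayed-merge rule at the heart of the algorithm above can be sketched in a
few lines: a candidate merge is committed only once both regions are fully
contained in the current chunk, otherwise the decision is deferred to a later,
larger-chunk round. The data structures here are hypothetical simplifications
of the distributed implementation:

```python
# Minimal sketch of the delayed-merge rule: commit a merge only when both
# regions are fully contained in the current chunk; otherwise defer the
# decision to a later (larger-chunk) round. Hypothetical simplification.

def process_chunk(candidates, touches_boundary, score, threshold):
    """candidates: iterable of (region_a, region_b) pairs in this chunk."""
    merged, deferred = [], []
    for a, b in candidates:
        if touches_boundary(a) or touches_boundary(b):
            deferred.append((a, b))       # interface may extend past chunk
        elif score(a, b) > threshold:     # e.g. mean/median interface affinity
            merged.append((a, b))
    return merged, deferred               # deferred pairs go to the next round
```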
The Gromov-Wasserstein (GW) framework adapts ideas from optimal transport to
allow for the comparison of probability distributions defined on different
metric spaces. Scalable computation of GW distances and associated matchings on
graphs and point clouds have recently been made possible by state-of-the-art
algorithms such as S-GWL and MREC. Each of these algorithmic breakthroughs
relies on decomposing the underlying spaces into parts and performing matchings
on these parts, adding recursion as needed. While very successful in practice,
theoretical guarantees on such methods are limited. Inspired by recent advances
in the theory of quantization for metric measure spaces, we define Quantized
Gromov-Wasserstein (qGW): a metric that treats parts as fundamental objects and
fits into a hierarchy of theoretical upper bounds for the GW problem. This
formulation motivates a new algorithm for approximating optimal GW matchings
which yields algorithmic speedups and reductions in memory complexity.
Consequently, we not only outperform the state-of-the-art but are also able to
apply GW matching at scales an order of magnitude larger than in the existing
literature, including datasets containing over 1M points. | [
"cs.LG"
]
|
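The parts-as-fundamental-objects idea can be illustrated with the POT library:
quantize each space with k-means, then solve GW between the part
representatives with masses proportional to part sizes. This is an illustrative
approximation in the spirit of qGW, not the paper's algorithm:

```python
# Minimal sketch of a parts-based GW approximation in the spirit of qGW:
# quantize each point cloud with k-means, then solve GW between the
# cluster centroids. Illustrative only, not the paper's algorithm.
import numpy as np
import ot                                  # POT: Python Optimal Transport
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def quantized_gw(X, Y, k=32, seed=0):
    km_x = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(X)
    km_y = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(Y)
    # Mass of each part = fraction of points assigned to it.
    p = np.bincount(km_x.labels_, minlength=k) / len(X)
    q = np.bincount(km_y.labels_, minlength=k) / len(Y)
    # Intra-space distance matrices between part representatives.
    C1 = cdist(km_x.cluster_centers_, km_x.cluster_centers_)
    C2 = cdist(km_y.cluster_centers_, km_y.cluster_centers_)
    coupling = ot.gromov.gromov_wasserstein(C1, C2, p, q, 'square_loss')
    return coupling                        # (k, k) coarse matching of parts
```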
Image captioning is a task that combines computer vision and natural language
processing, aiming to generate descriptive captions for images. It is a
two-fold process that relies on accurate image understanding and on correct
language understanding, both syntactic and semantic. It is becoming
increasingly difficult to keep up with the latest research and findings in the
field of image captioning due to the growing amount of knowledge available on
the topic. The available review papers, however, do not cover those findings
sufficiently. In this paper, we survey the current techniques, datasets,
benchmarks, and evaluation metrics used in image captioning. Current research
in the field mostly focuses on deep
learning-based methods, where attention mechanisms along with deep
reinforcement and adversarial learning appear to be in the forefront of this
research topic. In this paper, we review recent methodologies such as UpDown,
OSCAR, VIVO, Meta Learning and a model that uses conditional generative
adversarial nets. Although the GAN-based model achieves the highest score,
UpDown remains an important foundation for image captioning, while OSCAR and
VIVO are more useful because they support novel object captioning. This review
paper serves
as a roadmap for researchers to keep up to date with the latest contributions
made in the field of image caption generation. | [
"cs.CV"
]
|
Acquiring accurate three-dimensional depth information conventionally
requires expensive multibeam LiDAR devices. Recently, researchers have
developed a less expensive option by predicting depth information from
two-dimensional color imagery. However, there still exists a substantial gap in
accuracy between depth information estimated from two-dimensional images and
real LiDAR point clouds. In this paper, we introduce a fusion-based depth
prediction method called FusionMapping. This is the first method that fuses
color imagery and two-dimensional laser scans to estimate depth information.
More specifically, we propose an autoencoder-based depth prediction network and
a novel point-cloud refinement network for depth estimation. We analyze the
performance of our FusionMapping approach on the KITTI LiDAR odometry dataset
and an indoor mobile robot system. The results show that our introduced
approach estimates depth with better accuracy when compared to existing
methods. | [
"cs.CV"
]
|
With the wide use of deep neural networks (DNN), model interpretability has
become a critical concern, since explainable decisions are preferred in
high-stake scenarios. Current interpretation techniques mainly focus on feature
attribution, which is limited in indicating why and how particular explanations
relate to the prediction. To this end, an
intriguing class of explanations, named counterfactuals, has been developed to
further explore the "what-if" circumstances for interpretation, and enables the
reasoning capability on black-box models. However, generating counterfactuals
for raw data instances (i.e., text and images) is still at an early stage due
to the challenges of high data dimensionality and non-semantic raw features. In
this paper, we design a framework to generate counterfactuals specifically for
raw data instances with the proposed Attribute-Informed Perturbation (AIP). By
utilizing generative models conditioned with different attributes,
counterfactuals with desired labels can be obtained effectively and
efficiently. Instead of directly modifying instances in the data space, we
iteratively optimize the constructed attribute-informed latent space, where
features are more robust and semantic. Experimental results on real-world texts
and images demonstrate the effectiveness, sample quality, and efficiency of the
designed framework, and show its superiority over other alternatives.
Besides, we also introduce some practical applications based on our framework,
indicating its potential beyond the model interpretability aspect. | [
"cs.LG",
"cs.AI"
]
|
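The latent-space search described above can be sketched as a small optimization
loop: given a pretrained attribute-conditioned generator `G` and a black-box
classifier `f` (both placeholders), the latent code is updated until the
prediction flips while staying close to the original code:

```python
# Minimal sketch of counterfactual search in an attribute-informed latent
# space: optimize z so that G(z, attrs) flips the classifier's prediction
# while staying close to the original code. `G` and `f` are placeholder
# modules; batch size 1 is assumed for the flip check.
import torch

def counterfactual(G, f, z0, attrs, target, steps=200, lr=0.05, lam=0.1):
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    ce = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        x = G(z, attrs)                       # attribute-conditioned decode
        logits = f(x)
        if logits.argmax(dim=-1).item() == target.item():
            break                             # prediction flipped
        loss = ce(logits, target) + lam * (z - z0).pow(2).sum()
        loss.backward()
        opt.step()
    return G(z, attrs).detach(), z.detach()
```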
We study the sample complexity of teaching, termed as "teaching dimension"
(TDim) in the literature, for the teaching-by-reinforcement paradigm, where the
teacher guides the student through rewards. This is distinct from the
teaching-by-demonstration paradigm motivated by robotics applications, where
the teacher teaches by providing demonstrations of state/action trajectories.
The teaching-by-reinforcement paradigm applies to a wider range of real-world
settings where a demonstration is inconvenient, but has not been studied
systematically. In this paper, we focus on a specific family of reinforcement
learning algorithms, Q-learning, and characterize the TDim under different
teachers with varying control power over the environment, and present matching
optimal teaching algorithms. Our TDim results provide the minimum number of
samples needed for reinforcement learning, and we discuss their connections to
standard PAC-style RL sample complexity and teaching-by-demonstration sample
complexity results. Our teaching algorithms have the potential to speed up RL
agent learning in applications where a helpful teacher is available. | [
"cs.LG",
"cs.AI",
"stat.ML"
]
|