text (string, lengths 29–3.31k) | label (sequence, lengths 1–11)
---|---|
Temporal difference (TD) methods enable efficient, incremental estimation of
value functions in reinforcement learning, and are of broader interest because
they correspond to learning as observed in biological systems. Standard value
functions correspond to the expected value of the discounted sum of rewards,
i.e., the return. While this formulation is sufficient for many purposes, it
would often be useful to represent functions of the return as well.
Unfortunately, most such functions cannot be estimated directly using TD
methods. We propose a means of estimating functions of the return using its
moments, which can be learned online using a modified TD algorithm. The moments
of the return are then used as part of a Taylor expansion to approximate
analytic functions of the return. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
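
As a rough illustration of the idea in the abstract above, a second-order (delta-method) Taylor expansion expresses an analytic function of the return $G$ in terms of its mean and central moments; the exact expansion and estimator used by the paper may differ.

```latex
% Hedged sketch: Taylor expansion of f(G) around the mean return \mu = \mathbb{E}[G]
f(G) \approx f(\mu) + f'(\mu)(G-\mu) + \tfrac{1}{2} f''(\mu)(G-\mu)^2 + \dots
\;\Rightarrow\;
\mathbb{E}[f(G)] \approx f(\mu) + \tfrac{1}{2} f''(\mu)\,\mathbb{E}\!\left[(G-\mu)^2\right]
  + \tfrac{1}{6} f'''(\mu)\,\mathbb{E}\!\left[(G-\mu)^3\right]
```

Here $\mu$ is the standard value function and the central moments $\mathbb{E}[(G-\mu)^k]$ are the quantities learned online by the modified TD algorithm.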
The notion of concept drift refers to the phenomenon that the distribution
underlying the observed data changes over time. We are interested in
identifying the features that are most relevant for the observed drift. We
distinguish between drift-inducing features, for which the observed feature
drift cannot be explained by any other feature, and faithfully drifting
features, which correlate with the drift present in other features. This notion
gives rise to minimal subsets of the feature space, which are able to
characterize the observed drift as a whole. We relate this problem to the
problems of feature selection and feature relevance learning, which allows us
to derive a detection algorithm. We demonstrate its usefulness on different
benchmarks. | [
"cs.LG",
"stat.ML"
] |
Pattern recognition is generally viewed as an interaction of two oppositely
directed image-processing streams: a bottom-up stream that gathers and
localizes image details (segmentation), and a top-down stream that aggregates,
associates and interprets features (recognition). Inspired by recent evidence
from biological vision research and by insights from Kolmogorov complexity
theory, we propose a new, purely top-down evolving procedure for initial image
segmentation. We claim that traditional top-down cognitive reasoning, which is
supposed to guide the segmentation process to its final result, is not part of
the evaluation of the image information content, and that initial image
segmentation is certainly an unsupervised process. We
present some illustrative examples, which support our claims. | [
"cs.CV",
"cs.IR"
] |
The accuracy of many visiolinguistic tasks has benefited significantly from the
application of vision-and-language (V&L) BERT. However, its application to the
task of vision-and-language navigation (VLN) remains limited. One reason for
this is the difficulty of adapting the BERT architecture to the partially
observable Markov decision process present in VLN, which requires history-dependent
attention and decision making. In this paper we propose a recurrent BERT model
that is time-aware for use in VLN. Specifically, we equip the BERT model with a
recurrent function that maintains cross-modal state information for the agent.
Through extensive experiments on R2R and REVERIE we demonstrate that our model
can replace more complex encoder-decoder models to achieve state-of-the-art
results. Moreover, our approach can be generalised to other transformer-based
architectures, supports pre-training, and is capable of solving navigation and
referring expression tasks simultaneously. | [
"cs.CV"
] |
While recent deep monocular depth estimation approaches based on supervised
regression have achieved remarkable performance, costly ground truth
annotations are required during training. To cope with this issue, in this
paper we present a novel unsupervised deep learning approach for predicting
depth maps and show that the depth estimation task can be effectively tackled
within an adversarial learning framework. Specifically, we propose a deep
generative network that learns to predict the correspondence field, i.e., the
disparity map between two image views in a calibrated stereo camera setting.
The proposed architecture consists of two generative sub-networks jointly
trained with adversarial learning for reconstructing the disparity map and
organized in a cycle so as to provide mutual constraints and supervision to
each other. Extensive experiments on the publicly available KITTI and
Cityscapes datasets demonstrate the effectiveness of the proposed model, with
results competitive with state-of-the-art methods. The code and trained model
are available at https://github.com/andrea-pilzer/unsup-stereo-depthGAN. | [
"cs.CV"
] |
Medical imaging contains the essential information for rendering diagnostic
and treatment decisions. Inspecting (visual perception) and interpreting an
image to generate a report are tedious clinical routines for a radiologist, and
automation is expected to greatly reduce the workload. Despite rapid
development of natural image captioning, computer-aided medical image visual
perception and interpretation remain a challenging task, largely due to the
lack of high-quality annotated image-report pairs and tailor-made generative
models for sufficient extraction and exploitation of localized semantic
features, particularly those associated with abnormalities. To tackle these
challenges, we present Vispi, an automatic medical image interpretation system,
which first annotates an image by classifying and localizing common thoracic
diseases with visual support, and then generates a report with an attentive
LSTM model. Analyzing the open IU X-ray dataset, we demonstrate the superior
performance of Vispi in disease classification, localization and
report generation using automatic performance evaluation metrics ROUGE and
CIDEr. | [
"cs.CV",
"cs.CL"
] |
Checkpointing enables the training of deep learning models under restricted
memory budgets by freeing intermediate activations from memory and recomputing
them on demand. Current checkpointing techniques statically plan these
recomputations offline and assume static computation graphs. We demonstrate
that a simple online algorithm can achieve comparable performance by
introducing Dynamic Tensor Rematerialization (DTR), a greedy online algorithm
for checkpointing that is extensible and general, is parameterized by eviction
policy, and supports dynamic models. We prove that DTR can train an $N$-layer
linear feedforward network on an $\Omega(\sqrt{N})$ memory budget with only
$\mathcal{O}(N)$ tensor operations. DTR closely matches the performance of
optimal static checkpointing in simulated experiments. We incorporate a DTR
prototype into PyTorch merely by interposing on tensor allocations and operator
calls and collecting lightweight metadata on tensors. | [
"cs.LG",
"cs.PL",
"stat.ML",
"C.3"
] |
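
As a rough sketch of how a greedy, eviction-policy-parameterized rematerializer like the one described above could be organized (class names, the eviction heuristic and the bookkeeping are illustrative assumptions, not the PyTorch prototype's actual interface):

```python
# Illustrative sketch of a DTR-style greedy checkpointing cache.
import time

class Cell:
    def __init__(self, op, parents, size, compute_cost):
        self.op = op                    # closure that recomputes the value from parent values
        self.parents = parents          # upstream Cells needed for rematerialization
        self.size = size                # bytes held while materialized
        self.compute_cost = compute_cost
        self.value = None
        self.last_access = time.monotonic()

class GreedyRematerializer:
    def __init__(self, budget):
        self.budget = budget
        self.tracked = []

    def track(self, cell):
        self.tracked.append(cell)
        return cell

    def _used(self):
        return sum(c.size for c in self.tracked if c.value is not None)

    def _evict_one(self):
        # Greedy eviction: drop the materialized tensor with the lowest
        # recompute-cost / (size * staleness) score -- one plausible heuristic;
        # DTR treats the eviction policy as a tunable parameter.
        now = time.monotonic()
        victim = min((c for c in self.tracked if c.value is not None),
                     key=lambda c: c.compute_cost / (c.size * (now - c.last_access) + 1e-9))
        victim.value = None

    def get(self, cell):
        if cell.value is None:                        # rematerialize on demand
            args = [self.get(p) for p in cell.parents]
            while self._used() + cell.size > self.budget:
                self._evict_one()
            cell.value = cell.op(*args)
        cell.last_access = time.monotonic()
        return cell.value
```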
In this technical report, we present our solution to the Waymo Open Dataset
(WOD) Challenge 2020 - 2D Object Detection Track. We adopt FPN as our basic
framework. Cascade R-CNN, a stacked PAFPN neck and Double-Head are used for
performance improvements. To handle the small-object detection problem in WOD,
we use very large image scales for both training and testing. With these
methods, our team RW-TSDet achieved 1st place in the 2D Object Detection Track. | [
"cs.CV"
] |
Inferring depth from images is a fundamental inverse problem in Computer
Vision, since the depth information must be recovered from 2D images that could
have been generated by infinitely many real scenes. Benefiting from the ability
of Convolutional Neural Networks (CNNs) to exploit structural features and
spatial image information, Single Image Depth Estimation (SIDE) has attracted
considerable scientific and technological interest, as it offers low
implementation cost and robustness to environmental conditions. In the
context of autonomous vehicles, state-of-the-art CNNs optimize the SIDE task by
producing high-quality depth maps, which are essential during the autonomous
navigation process in different locations. However, such networks are usually
supervised by sparse and noisy depth data, from Light Detection and Ranging
(LiDAR) laser scans, and incur high computational cost, requiring
high-performance Graphics Processing Units (GPUs). Therefore, we propose a new
lightweight and fast supervised CNN architecture combined with novel feature
extraction models which are designed for real-world autonomous navigation. We
also introduce an efficient surface-normals module, together with a simple
geometric 2.5D loss function, to solve SIDE problems. In addition, we
incorporate multiple Deep Learning techniques, such as densification algorithms
and additional semantic, surface-normal and depth information, to train our
framework. The method introduced in this work focuses
on robotic applications in indoor and outdoor environments and its results are
evaluated on the competitive and publicly available NYU Depth V2 and KITTI
Depth datasets. | [
"cs.CV",
"cs.LG",
"cs.RO"
] |
Supervised learning in large discriminative models is a mainstay for modern
computer vision. Such an approach necessitates investing in large-scale
human-annotated datasets for achieving state-of-the-art results. In turn, the
efficacy of supervised learning may be limited by the size of the human
annotated dataset. This limitation is particularly notable for image
segmentation tasks, where the expense of human annotation is especially large,
yet large amounts of unlabeled data may exist. In this work, we ask if we may
leverage semi-supervised learning in unlabeled video sequences and extra images
to improve the performance on urban scene segmentation, simultaneously tackling
semantic, instance, and panoptic segmentation. The goal of this work is to
avoid the construction of sophisticated, learned architectures specific to
label propagation (e.g., patch matching and optical flow). Instead, we simply
predict pseudo-labels for the unlabeled data and train subsequent models with
both human-annotated and pseudo-labeled data. The procedure is iterated several
times. As a result, our Naive-Student model, trained with such simple yet
effective iterative semi-supervised learning, attains state-of-the-art results
on all three Cityscapes benchmarks, reaching 67.8% PQ, 42.6% AP, and 85.2% mIOU
on the test set. We view this work as a notable
step towards building a simple procedure to harness unlabeled video sequences
and extra images to surpass state-of-the-art performance on core computer
vision tasks. | [
"cs.CV"
] |
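
A minimal sketch of the iterated pseudo-labelling loop described in the abstract above; `train`, `predict`, `make_model` and the data containers are hypothetical placeholders rather than the authors' actual pipeline.

```python
# Hedged sketch of iterative pseudo-label self-training (Naive-Student-style).
def naive_student(labelled, unlabelled, make_model, train, predict, iterations=3):
    teacher = train(make_model(), labelled)                        # supervised teacher
    for _ in range(iterations):
        pseudo = [(x, predict(teacher, x)) for x in unlabelled]    # predict pseudo-labels
        student = train(make_model(), list(labelled) + pseudo)     # train on both sources
        teacher = student                                          # iterate: student becomes teacher
    return teacher
```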
We propose a novel procedure which adds "content-addressability" to any given
unconditional implicit model e.g., a generative adversarial network (GAN). The
procedure allows users to control the generative process by specifying a set
(arbitrary size) of desired examples based on which similar samples are
generated from the model. The proposed approach, based on kernel mean matching,
is applicable to any generative model that transforms latent vectors into
samples, and does not require retraining the model. Experiments on various
high-dimensional image generation problems (CelebA-HQ, LSUN bedroom, bridge,
tower) show that our approach is able to generate images which are consistent
with the input set, while retaining the image quality of the original model. To
our knowledge, this is the first work that attempts to construct, at test time,
a content-addressable generative model from a trained marginal model. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
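
The kernel-mean-matching idea above can be sketched as optimizing latent codes of a fixed generator so that generated samples match the feature-space mean embedding of the user-provided set; the generator `G`, feature map `phi` and RBF bandwidth below are illustrative assumptions, not the paper's exact setup.

```python
# Hedged sketch: steer a fixed generator toward an input set via kernel mean matching.
import torch

def rbf(x, y, sigma=1.0):
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd2(a, b, sigma=1.0):
    # squared maximum mean discrepancy between two sample sets in feature space
    return rbf(a, a, sigma).mean() + rbf(b, b, sigma).mean() - 2 * rbf(a, b, sigma).mean()

def match_latents(G, phi, targets, n_samples=8, steps=200, lr=0.05):
    z = torch.randn(n_samples, G.latent_dim, requires_grad=True)   # G.latent_dim is an assumption
    with torch.no_grad():
        f_t = phi(targets)                 # features of the user-provided examples
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        f_g = phi(G(z))                    # features of current samples
        loss = mmd2(f_g, f_t)              # kernel mean discrepancy to minimize
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(z).detach()
```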
Deep reinforcement learning (RL) has made groundbreaking advancements in
robotics, data center management and other applications. Unfortunately,
system-level bottlenecks in RL workloads are poorly understood; we observe
fundamental structural differences in RL workloads that make them inherently
less GPU-bound than supervised learning (SL). To explain where training time is
spent in RL workloads, we propose RL-Scope, a cross-stack profiler that scopes
low-level CPU/GPU resource usage to high-level algorithmic operations, and
provides accurate insights by correcting for profiling overhead. Using
RL-Scope, we survey RL workloads across their major dimensions, including ML
backend, RL algorithm, and simulator. For ML backends, we explain a $2.3\times$
difference in runtime between equivalent PyTorch and TensorFlow algorithm
implementations, and identify a bottleneck rooted in overly abstracted
algorithm implementations. For RL algorithms and simulators, we show that
on-policy algorithms are at least $3.5\times$ more simulation-bound than
off-policy algorithms. Finally, we profile a scale-up workload and demonstrate
that GPU utilization metrics reported by commonly used tools dramatically
inflate GPU usage, whereas RL-Scope reports true GPU-bound time. RL-Scope is an
open-source tool available at https://github.com/UofT-EcoSystem/rlscope . | [
"cs.LG",
"cs.SE"
] |
Following the success in advancing natural language processing and
understanding, transformers are expected to bring revolutionary changes to
computer vision. This work provides the first comprehensive study of the
robustness of vision transformers (ViTs) against adversarial perturbations.
Tested on various white-box and transfer attack settings, we find that ViTs
possess better adversarial robustness when compared with convolutional neural
networks (CNNs). We summarize the following main observations contributing to
the improved robustness of ViTs:
1) Features learned by ViTs contain less low-level information and are more
generalizable, which contributes to superior robustness against adversarial
perturbations.
2) Introducing convolutional or tokens-to-token blocks for learning low-level
features in ViTs can improve classification accuracy but at the cost of
adversarial robustness.
3) Increasing the proportion of transformers in the model structure (when the
model consists of both transformer and CNN blocks) leads to better robustness.
But for a pure transformer model, simply increasing the size or adding layers
cannot guarantee a similar effect.
4) Pre-training on larger datasets does not significantly improve adversarial
robustness though it is critical for training ViTs.
5) Adversarial training is also applicable to ViT for training robust models.
Furthermore, feature visualization and frequency analysis are conducted for
explanation. The results show that ViTs are less sensitive to high-frequency
perturbations than CNNs and there is a high correlation between how well the
model learns low-level features and its robustness against different
frequency-based perturbations. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
In this paper, we propose a novel CycleGAN without checkerboard artifacts for
counter-forensics of fake-image detection. Recent rapid advances in image
manipulation tools and deep image synthesis techniques, such as Generative
Adversarial Networks (GANs), have made it easy to generate fake images, so
detecting manipulated images has become an urgent issue. Most state-of-the-art
forgery detection methods assume that fake images include checkerboard
artifacts generated by DNNs. Accordingly, we propose, for the first time, a
novel CycleGAN without any checkerboard artifacts for counter-forensics of
fake-image detection methods, as an example of GANs without checkerboard artifacts. | [
"cs.CV",
"eess.IV"
] |
Lending decisions are usually made with proprietary models that provide
minimally acceptable explanations to users. In a future world without such
secrecy, what decision support tools would one want to use for justified
lending decisions? This question is timely, since the economy has dramatically
shifted due to a pandemic, and a massive number of new loans will be necessary
in the short term. We propose a framework for such decisions, including a
globally interpretable machine learning model, an interactive visualization of
it, and several types of summaries and explanations for any given decision. The
machine learning model is a two-layer additive risk model, which resembles a
two-layer neural network, but is decomposable into subscales. In this model,
each node in the first (hidden) layer represents a meaningful subscale model,
and all of the nonlinearities are transparent. Our online visualization tool
allows exploration of this model, showing precisely how it came to its
conclusion. We provide three types of explanations that are simpler than, but
consistent with, the global model: case-based reasoning explanations that use
neighboring past cases, a set of features that were the most important for the
model's prediction, and summary-explanations that provide a customized sparse
explanation for any particular lending decision made by the model. Our
framework earned the FICO recognition award for the Explainable Machine
Learning Challenge, which was the first public challenge in the domain of
explainable machine learning. | [
"cs.LG"
] |
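
To make the model structure above concrete, here is a minimal, hypothetical sketch of a two-layer additive risk model in which each hidden node is an interpretable subscale built from a subset of features; the feature groupings, nonlinearity and logistic link are assumptions, not the authors' exact specification.

```python
# Hedged sketch of a two-layer additive risk model with decomposable subscales.
import numpy as np

def subscale(x, weights, bias):
    # one interpretable subscale: a sparse linear score through a transparent nonlinearity
    return np.tanh(x @ weights + bias)

def risk(x, subscales, top_weights, top_bias):
    # subscales: list of (feature_indices, weights, bias) triples (assumed groupings)
    scores = np.column_stack([subscale(x[:, idx], w, b) for idx, w, b in subscales])
    logit = scores @ top_weights + top_bias       # additive combination of subscale scores
    return 1.0 / (1.0 + np.exp(-logit))           # predicted risk of the lending outcome
```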
Arbitrary text appearance poses a great challenge in scene text recognition
tasks. Existing works mostly address the problem by considering shape
distortion, including perspective distortion, line curvature and other style
variations. Therefore, methods based on spatial transformers have been
extensively studied. However, chromatic difficulties in complex scenes have
received little attention. In this work, we introduce a new learnable
geometric-unrelated module, the Structure-Preserving Inner Offset Network
(SPIN), which allows the color manipulation of source data within the network.
This differentiable module can be inserted before any recognition architecture
to ease the downstream tasks, giving neural networks the ability to actively
transform input intensity rather than the existing spatial rectification. It
can also serve as a complementary module to known spatial transformations and
work in both independent and collaborative ways with them. Extensive
experiments show that the use of SPIN results in a significant improvement on
multiple text recognition benchmarks compared to the state of the art. | [
"cs.CV"
] |
We build a collaborative filtering recommender system to restore images with
impulse noise for which the noisy pixels have been previously identified. We
define this recommender system in terms of a new color image representation
using three matrices that depend on the noise-free pixels of the image to
restore, and two parameters: $k$, the number of features; and $\lambda$, the
regularization factor. We perform experiments on a well-known image database to
test our algorithm and we provide image quality statistics for the results
obtained. We discuss the roles of bias and variance in the performance of our
algorithm as determined by the values of $k$ and $\lambda$, and provide
guidance on how to choose the values of these parameters. Finally, we discuss
the possibility of using our collaborative filtering recommender system to
perform image inpainting and super-resolution. | [
"cs.CV",
"stat.ML"
] |
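
A standard regularized matrix-factorization objective of the kind suggested by the abstract's parameters $k$ and $\lambda$ (the paper's exact formulation over its three image-derived matrices may differ):

```latex
% Hedged sketch: collaborative filtering over the observed (noise-free) pixels
\min_{X \in \mathbb{R}^{m \times k},\; \Theta \in \mathbb{R}^{n \times k}}
\sum_{(i,j)\,\text{observed}} \left( x_i^{\top} \theta_j - y_{ij} \right)^2
+ \lambda \left( \lVert X \rVert_F^2 + \lVert \Theta \rVert_F^2 \right)
```

Here $k$ is the number of latent features and $\lambda$ the regularization factor; missing (noisy) pixels are then predicted as $x_i^{\top}\theta_j$.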
The ability to detect anomalies in time series is considered highly valuable
in numerous application domains. The sequential nature of time series
introduces additional complexity, ultimately requiring specialized approaches
to solve the task. Essential characteristics of
time series, situated outside the time domain, are often difficult to capture
with state-of-the-art anomaly detection methods when no transformations have
been applied to the time series. Inspired by the success of deep learning
methods in computer vision, several studies have proposed transforming time
series into image-like representations, used as inputs for deep learning
models, and have led to very promising results in classification tasks. In this
paper, we first review the signal to image encoding approaches found in the
literature. Second, we propose modifications to some of their original
formulations to make them more robust to the variability in large datasets.
Third, we compare them on the basis of a common unsupervised task to
demonstrate how the choice of the encoding can impact the results when used in
the same deep learning architecture. We thus provide a comparison between six
encoding algorithms with and without the proposed modifications. The selected
encoding methods are Gramian Angular Field, Markov Transition Field, recurrence
plot, grey-scale encoding, spectrogram, and scalogram. We also compare the
results achieved with the raw signal used as input for another deep learning
model. We demonstrate that some encodings have a competitive advantage and
might be worth considering within a deep learning framework. The comparison is
performed on a dataset collected and released by Airbus SAS, containing highly
complex vibration measurements from real helicopter flight tests. The different
encodings provide competitive results for anomaly detection. | [
"cs.LG",
"eess.SP",
"stat.ML"
] |
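
As a concrete example of one of the encodings listed above, a minimal Gramian Angular (summation) Field implementation might look as follows; normalization details vary across the literature.

```python
# Hedged sketch of the Gramian Angular Summation Field (GASF) encoding.
import numpy as np

def gramian_angular_field(x):
    x = np.asarray(x, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1   # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1, 1))                        # angular encoding
    return np.cos(phi[:, None] + phi[None, :])                # image-like GASF matrix
```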
In this paper, we propose an automatic labeled sequential data generation
pipeline for human segmentation and velocity estimation with point clouds.
Considering the impact of deep neural networks, state-of-the-art network
architectures have been proposed for human recognition using point clouds
captured by Light Detection and Ranging (LiDAR). However, one disadvantage is
that legacy datasets may cover only the image domain without providing
important label information, and this limitation has hindered research progress
to date. Therefore, we develop an automatic labeled sequential data
generation pipeline, in which we can control any parameter or data generation
environment with pixel-wise and per-frame ground truth segmentation and
pixel-wise velocity information for human recognition. Our approach uses a
precise human model and reproduces a precise motion to generate realistic
artificial data. We present more than 7K video sequences which consist of 32
frames generated by the proposed pipeline. With the proposed sequence
generator, we confirm that human segmentation performance is improved when
using the video domain compared to when using the image domain. We also
evaluate our data by comparing with data generated under different conditions.
In addition, we estimate pedestrian velocity with LiDAR by only utilizing data
generated by the proposed pipeline. | [
"cs.CV"
] |
Various algorithms in reinforcement learning exhibit dramatic variability in
their convergence rates and ultimate accuracy as a function of the problem
structure. Such instance-specific behavior is not captured by existing global
minimax bounds, which are worst-case in nature. We analyze the problem of
estimating optimal $Q$-value functions for a discounted Markov decision process
with discrete states and actions and identify an instance-dependent functional
that controls the difficulty of estimation in the $\ell_\infty$-norm. Using a
local minimax framework, we show that this functional arises in lower bounds on
the accuracy of any estimation procedure. In the other direction, we establish
the sharpness of our lower bounds, up to factors logarithmic in the state and
action spaces, by analyzing a variance-reduced version of $Q$-learning. Our
theory provides a precise way of distinguishing "easy" problems from "hard"
ones in the context of $Q$-learning, as illustrated by an ensemble with a
continuum of difficulty. | [
"stat.ML",
"cs.LG"
] |
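
For reference, the analysis above concerns estimators built on the standard $Q$-learning update (variance-reduced variants recentre this update around a reference $Q$-function):

```latex
% Standard Q-learning update underlying the analysis
Q_{k+1}(s,a) \;=\; (1-\alpha_k)\, Q_k(s,a)
  \;+\; \alpha_k \Big( r(s,a) + \gamma \max_{a'} Q_k\big(s'_{k}(s,a),\, a'\big) \Big)
```

Here $s'_k(s,a)$ is a sampled next state, $\gamma$ the discount factor, and the error of interest is $\lVert \widehat{Q} - Q^* \rVert_\infty$.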
Employing machine learning models in the real world requires collecting large
amounts of data, which is both time-consuming and costly. A common approach to
circumvent this is to leverage existing, similar datasets with large amounts of
labelled data. However, models trained on these canonical distributions do not
readily transfer to real-world ones. Domain adaptation and transfer learning
are often used to bridge this "reality gap", though both require a substantial
amount of real-world data. In this paper we discuss a
more general approach: we propose learning a general transformation to bring
arbitrary images towards a canonical distribution where we can naively apply
the trained machine learning models. This transformation is trained in an
unsupervised regime, leveraging data augmentation to generate off-canonical
examples of images and training a Deep Learning model to recover their original
counterpart. We quantify the performance of this transformation using
pre-trained ImageNet classifiers, demonstrating that this procedure can recover
half of the loss in performance on the distorted data-set. We then validate the
effectiveness of this approach on a series of pre-trained ImageNet models on a
real world data set collected by printing and photographing images in different
lighting conditions. | [
"cs.CV",
"cs.LG"
] |
Monitoring physiological responses to hemodynamic stress can help in
determining appropriate treatment and ensuring good patient outcomes.
Physicians' intuition suggests that the human body has a number of
physiological response patterns to hemorrhage that escalate as blood loss
continues; however, the exact etiology and phenotypes of such responses are not
well known, or are understood only at a coarse level. Although previous research has
shown that machine learning models can perform well in hemorrhage detection and
survival prediction, it is unclear whether machine learning could help to
identify and characterize the underlying physiological responses in raw vital
sign data. We approach this problem by first transforming the high-dimensional
vital sign time series into a tractable, lower-dimensional latent space using a
dilated, causal convolutional encoder model trained purely unsupervised.
Second, we identify informative clusters in the embeddings. By analyzing the
clusters of latent embeddings and visualizing them over time, we hypothesize
that the clusters correspond to the physiological response patterns that match
physicians' intuition. Furthermore, we attempt to evaluate the latent
embeddings using a variety of methods, such as predicting the cluster labels
using explainable features. | [
"cs.LG",
"stat.ML"
] |
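
A minimal PyTorch sketch of a dilated, causal 1-D convolutional encoder of the kind described above; the channel widths, depth and pooling are illustrative choices, not the authors' architecture.

```python
# Hedged sketch of a dilated causal convolutional encoder for vital-sign series.
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Conv1d):
    def __init__(self, c_in, c_out, kernel_size, dilation):
        super().__init__(c_in, c_out, kernel_size, dilation=dilation)
        self.left_pad = (kernel_size - 1) * dilation

    def forward(self, x):
        # pad only on the left (past) so the convolution never sees the future
        return super().forward(F.pad(x, (self.left_pad, 0)))

def dilated_causal_encoder(n_signals, latent_dim, depth=4, width=64, kernel_size=3):
    layers, c = [], n_signals
    for d in range(depth):
        layers += [CausalConv1d(c, width, kernel_size, dilation=2 ** d), nn.ReLU()]
        c = width
    # pool over time and project to the latent embedding later used for clustering
    layers += [nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(width, latent_dim)]
    return nn.Sequential(*layers)
```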
Traditional approaches for learning 3D object categories have been
predominantly trained and evaluated on synthetic datasets due to the
unavailability of real 3D-annotated category-centric data. Our main goal is to
facilitate advances in this field by collecting real-world data in a magnitude
similar to the existing synthetic counterparts. The principal contribution of
this work is thus a large-scale dataset, called Common Objects in 3D, with real
multi-view images of object categories annotated with camera poses and ground
truth 3D point clouds. The dataset contains a total of 1.5 million frames from
nearly 19,000 videos capturing objects from 50 MS-COCO categories and, as such,
it is significantly larger than alternatives both in terms of the number of
categories and objects. We exploit this new dataset to conduct one of the first
large-scale "in-the-wild" evaluations of several new-view-synthesis and
category-centric 3D reconstruction methods. Finally, we contribute NerFormer -
a novel neural rendering method that leverages the powerful Transformer to
reconstruct an object given a small number of its views. The CO3D dataset is
available at https://github.com/facebookresearch/co3d . | [
"cs.CV"
] |
Superquadrics provide a compact representation of common shapes and have been
used both for object/surface modelling in computer graphics and as object-part
representation in computer vision and robotics. Superquadrics refer to a family
of shapes: here we deal with the superellipsoids and superparaboloids. Due to
the strong non-linearities involved in the equations, uniform or
close-to-uniform sampling is not attainable through a naive approach of direct
sampling from the parametric formulation. This is especially true for more
`cubic' superquadrics (with shape parameters close to $0.1$). We extend a
previous solution for close-to-uniform 2D sampling of superellipses to
the superellipsoid (3D) case and derive our own for the superparaboloid.
Additionally, we are able to provide normals for each sampled point. To the
best of our knowledge, this is the first complete approach for close-to-uniform
sampling of superellipsoids and superparaboloids in one single framework. We
present derivations, pseudocode and qualitative and quantitative results using
our code, which is available online. | [
"cs.CV"
] |
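
For context, the naive direct sampling mentioned above draws the angular parameters uniformly in the standard superellipsoid parametrization (signed-power notation), which concentrates points near the "corners" for small shape exponents:

```latex
% Standard superellipsoid surface parametrization with signed powers
% f^{\epsilon} := \operatorname{sign}(f)\,|f|^{\epsilon}
\begin{aligned}
x(\eta,\omega) &= a_1 \cos^{\epsilon_1}(\eta)\, \cos^{\epsilon_2}(\omega), \\
y(\eta,\omega) &= a_2 \cos^{\epsilon_1}(\eta)\, \sin^{\epsilon_2}(\omega), \\
z(\eta,\omega) &= a_3 \sin^{\epsilon_1}(\eta),
\end{aligned}
\qquad \eta \in \left[-\tfrac{\pi}{2}, \tfrac{\pi}{2}\right],\; \omega \in [-\pi, \pi)
```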
One of the key problems in tensor completion is the number of uniformly
random sample entries required for recovery guarantee. The main aim of this
paper is to study $n_1 \times n_2 \times n_3$ third-order tensor completion and
investigate incoherence conditions on the $n_3$ low-rank $n_1$-by-$n_2$ matrix
slices under the transformed tensor singular value decomposition where the
unitary transformation is applied along $n_3$-dimension. We show that such
low-rank tensors can be recovered exactly with high probability when the number
of randomly observed entries is of order $O( r\max \{n_1, n_2 \} \log ( \max \{
n_1, n_2 \} n_3))$, where $r$ is the sum of the ranks of these $n_3$ matrix
slices in the transformed tensor. By utilizing synthetic data and imaging data
sets, we demonstrate that the theoretical result can be obtained under valid
incoherence conditions, and the tensor completion performance of the proposed
method is also better than that of existing methods in terms of sample size
requirements. | [
"stat.ML",
"cs.LG"
] |
Deep learning based image recognition systems have been widely deployed on
mobile devices in today's world. In recent studies, however, deep learning
models are shown vulnerable to adversarial examples. One variant of adversarial
examples, called adversarial patch, draws researchers' attention due to its
strong attack abilities. Though adversarial patches achieve high attack success
rates, they are easily being detected because of the visual inconsistency
between the patches and the original images. Besides, it usually requires a
large amount of data for adversarial patch generation in the literature, which
is computationally expensive and time-consuming. To tackle these challenges, we
propose an approach to generate inconspicuous adversarial patches with one
single image. In our approach, we first decide the patch locations basing on
the perceptual sensitivity of victim models, then produce adversarial patches
in a coarse-to-fine way by utilizing multiple-scale generators and
discriminators. The patches are encouraged to be consistent with the background
images with adversarial training while preserving strong attack abilities. Our
approach shows strong attack ability in white-box settings and excellent
transferability in black-box settings through extensive experiments on various
models with different architectures and training methods. Compared to other
adversarial patches, ours carry the lowest risk of being detected and can evade
human observation, which is supported by saliency-map illustrations and the
results of user evaluations. Lastly, we
show that our adversarial patches can be applied in the physical world. | [
"cs.CV",
"cs.AI"
] |
Face attributes are interesting due to their detailed description of human
faces. Unlike prior research on attribute prediction, we address an inverse and
more challenging problem called face attribute manipulation, which
aims at modifying a face image according to a given attribute value. Instead of
manipulating the whole image, we propose to learn the corresponding residual
image defined as the difference between images before and after the
manipulation. In this way, the manipulation can be operated efficiently with
modest pixel modification. The framework of our approach is based on the
Generative Adversarial Network. It consists of two image transformation
networks and a discriminative network. The transformation networks are
responsible for the attribute manipulation and its dual operation and the
discriminative network is used to distinguish the generated images from real
images. We also apply dual learning to allow transformation networks to learn
from each other. Experiments show that residual images can be effectively
learned and used for attribute manipulations. The generated images retain most
of the details in attribute-irrelevant areas. | [
"cs.CV"
] |
How much does having visual priors about the world (e.g. the fact that the
world is 3D) assist in learning to perform downstream motor tasks (e.g.
delivering a package)? We study this question by integrating a generic
perceptual skill set (e.g. a distance estimator, an edge detector, etc.) within
a reinforcement learning framework--see Figure 1. This skill set (hereafter
mid-level perception) provides the policy with a more processed state of the
world compared to raw images.
We find that using mid-level perception confers significant advantages over
training end-to-end from scratch (i.e. not leveraging priors) in
navigation-oriented tasks. Agents are able to generalize to situations where
the from-scratch approach fails and training becomes significantly more sample
efficient. However, we show that realizing these gains requires careful
selection of the mid-level perceptual skills. Therefore, we refine our findings
into an efficient max-coverage feature set that can be adopted in lieu of raw
images. We perform our study in completely separate buildings for training and
testing and compare against visually blind baseline policies and
state-of-the-art feature learning methods. | [
"cs.CV",
"cs.AI",
"cs.LG",
"cs.NE",
"cs.RO"
] |
Image-to-image translation tasks have been widely investigated with
Generative Adversarial Networks (GANs) and dual learning. However, existing
models lack the ability to control the translated results in the target domain
and their results usually lack diversity, in the sense that a fixed input image
usually leads to an (almost) deterministic translation result. In this paper, we
study a new problem, conditional image-to-image translation, which is to
translate an image from the source domain to the target domain conditioned on a
given image in the target domain. It requires that the generated image should
inherit some domain-specific features of the conditional image from the target
domain. Therefore, changing the conditional image in the target domain will
lead to diverse translation results for a fixed input image from the source
domain, and therefore the conditional input image helps to control the
translation results. We tackle this problem with unpaired data based on GANs
and dual learning. We twist two conditional translation models (one from domain
A to domain B, and the other from domain B to domain A) together for input
combination and reconstruction while preserving domain-independent features. We
carry out experiments on translation between men's and women's faces and on
edges-to-shoes&bags translation. The results demonstrate
the effectiveness of our proposed method. | [
"cs.CV"
] |
We consider the problem of finding Nash equilibrium for two-player turn-based
zero-sum games. Inspired by the AlphaGo Zero (AGZ) algorithm, we develop a
Reinforcement Learning based approach. Specifically, we propose
Explore-Improve-Supervise (EIS) method that combines "exploration", "policy
improvement" and "supervised learning" to find the value function and policy
associated with Nash equilibrium. We identify sufficient conditions for
convergence and correctness for such an approach. For a concrete instance of
EIS where random policy is used for "exploration", Monte-Carlo Tree Search is
used for "policy improvement" and Nearest Neighbors is used for "supervised
learning", we establish that this method finds an $\varepsilon$-approximate
value function of Nash equilibrium in $\widetilde{O}(\varepsilon^{-(d+4)})$
steps when the underlying state-space of the game is continuous and
$d$-dimensional. This is nearly optimal as we establish a lower bound of
$\widetilde{\Omega}(\varepsilon^{-(d+2)})$ for any policy. | [
"cs.LG",
"stat.ML"
] |
Most existing text-to-image synthesis tasks are static single-turn
generation, based on pre-defined textual descriptions of images. To explore
more practical and interactive real-life applications, we introduce a new task
- Interactive Image Editing, where users can guide an agent to edit images via
multi-turn textual commands on-the-fly. In each session, the agent takes a
natural language description from the user as the input and modifies the image
generated in the previous turn to a new design, following the user description.
The main challenges in this sequential and interactive image generation task
are two-fold: 1) contextual consistency between a generated image and the
provided textual description; 2) step-by-step region-level modification to
maintain visual consistency across the generated image sequence in each
session. To address these challenges, we propose a novel Sequential Attention
Generative Adversarial Network (SeqAttnGAN), which applies a neural state
tracker to encode the previous image and the textual description in each turn
of the sequence, and uses a GAN framework to generate a modified version of the
image that is consistent with the preceding images and coherent with the
description. To achieve better region-specific refinement, we also introduce a
sequential attention mechanism into the model. To benchmark on the new task, we
introduce two new datasets, Zap-Seq and DeepFashion-Seq, which contain
multi-turn sessions with image-description sequences in the fashion domain.
Experiments on both datasets show that the proposed SeqAttnGAN model outperforms
state-of-the-art approaches on the interactive image editing task across all
evaluation metrics including visual quality, image sequence coherence, and
text-image consistency. | [
"cs.CV",
"cs.AI",
"stat.ML"
] |
Learning-based stereo matching and depth estimation networks currently excel
on public benchmarks with impressive results. However, state-of-the-art
networks often fail to generalize from synthetic imagery to more challenging
real data domains. This paper attempts to uncover the hidden ingredients of
domain robustness and, in particular, of the generalization success of stereo
matching networks, by analyzing the effect of learning from synthetic images on
real-data performance. We provide
evidence that demonstrates that learning of features in the synthetic domain by
a stereo matching network is heavily influenced by two "shortcuts" present in
the synthetic data: (1) identical local statistics (RGB colour features)
between matching pixels in the synthetic stereo images and (2) lack of realism
in synthetic textures on 3D objects simulated in game engines. We will show
that by removing such shortcuts, we can achieve domain robustness in the
state-of-the-art stereo matching frameworks and produce a remarkable
performance on multiple realistic datasets, despite the fact that the networks
were trained only on synthetic data. Our experimental results point to the
fact that eliminating shortcuts from the synthetic data is key to achieve
domain-invariant generalization between synthetic and real data domains. | [
"cs.CV",
"cs.LG"
] |
The computational complexity of leveraging deep neural networks for
extracting deep feature representations is a significant barrier to its
widespread adoption, particularly for use in embedded devices. One particularly
promising strategy for addressing the complexity issue is the notion of
evolutionary synthesis of deep neural networks, which was demonstrated to
successfully produce highly efficient deep neural networks while retaining
modeling performance. Here, we further extend upon the evolutionary synthesis
strategy for achieving efficient feature extraction via the introduction of a
stress-induced evolutionary synthesis framework, where stress signals are
imposed upon the synapses of a deep neural network during training to induce
stress and steer the synthesis process towards producing, over successive
generations, more efficient deep neural networks with improved model fidelity
at greater efficiency. The proposed stress-induced evolutionary synthesis
approach is evaluated on a variety of different deep neural network
architectures (LeNet5, AlexNet, and YOLOv2) on different tasks (object
classification and object detection) to synthesize efficient StressedNets over
multiple generations. Experimental results demonstrate the efficacy of the
proposed framework in synthesizing StressedNets with significant improvements in
network architecture efficiency (e.g., 40x for AlexNet and 33x for YOLOv2) and
speed improvements (e.g., 5.5x inference speed-up for YOLOv2 on an Nvidia Tegra
X1 mobile processor). | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
In this paper, we present a novel unsupervised video summarization model that
requires no manual annotation. The proposed model termed Cycle-SUM adopts a new
cycle-consistent adversarial LSTM architecture that can effectively maximize
the information preservation and compactness of the summary video. It consists
of a frame selector and a cycle-consistent learning-based evaluator. The
selector is a bi-directional LSTM network that learns video representations that embed the
long-range relationships among video frames. The evaluator defines a learnable
information preserving metric between original video and summary video and
"supervises" the selector to identify the most informative frames to form the
summary video. In particular, the evaluator is composed of two generative
adversarial networks (GANs), in which the forward GAN is learned to reconstruct
the original video from the summary video, while the backward GAN learns to
invert the process. The consistency between the outputs of this cycle learning
is adopted as the information-preserving metric for video summarization. We
demonstrate the close relation between mutual information maximization and such
cycle learning procedure. Experiments on two video summarization benchmark
datasets validate the state-of-the-art performance and superiority of the
Cycle-SUM model over previous baselines. | [
"cs.CV"
] |
Video transformers have recently emerged as a competitive alternative to 3D
CNNs for video understanding. However, due to their large number of parameters
and reduced inductive biases, these models require supervised pretraining on
large-scale image datasets to achieve top performance. In this paper, we
empirically demonstrate that self-supervised pretraining of video transformers
on video-only datasets can lead to action recognition results that are on par with
or better than those obtained with supervised pretraining on large-scale image
datasets, even massive ones such as ImageNet-21K. Since transformer-based
models are effective at capturing dependencies over extended temporal spans, we
propose a simple learning procedure that forces the model to match a long-term
view to a short-term view of the same video. Our approach, named Long-Short
Temporal Contrastive Learning (LSTCL), enables video transformers to learn an
effective clip-level representation by predicting temporal context captured
from a longer temporal extent. To demonstrate the generality of our findings,
we implement and validate our approach under three different self-supervised
contrastive learning frameworks (MoCo v3, BYOL, SimSiam) using two distinct
video-transformer architectures, including an improved variant of the Swin
Transformer augmented with space-time attention. We conduct a thorough ablation
study and show that LSTCL achieves competitive performance on multiple video
benchmarks and represents a convincing alternative to supervised image-based
pretraining. | [
"cs.CV",
"cs.AI"
] |
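
One way to realize the long/short-view matching described above is an InfoNCE-style contrastive loss between clip embeddings; the encoders, clip sampling and temperature below are assumptions rather than the paper's exact training recipe.

```python
# Hedged sketch of a long/short-view contrastive objective in the spirit of LSTCL.
import torch
import torch.nn.functional as F

def long_short_contrastive_loss(z_short, z_long, temperature=0.1):
    # z_short, z_long: (B, D) embeddings of short and long clips from the same B videos
    z_short = F.normalize(z_short, dim=1)
    z_long = F.normalize(z_long, dim=1)
    logits = z_short @ z_long.t() / temperature               # (B, B) similarity matrix
    targets = torch.arange(z_short.size(0), device=z_short.device)
    return F.cross_entropy(logits, targets)                   # match each short view to its own long view
```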
Self-supervised tasks such as colorization, inpainting and jigsaw puzzles have
been utilized for visual representation learning on still images when labeled
images are limited or entirely absent. Recently, this worthwhile stream of
study has extended to the video domain, where the cost of human labeling is
even higher. However, most existing methods are still based on 2D CNN
architectures that cannot directly capture spatio-temporal information for
video applications. In this paper, we introduce a new self-supervised task
called \textit{Space-Time Cubic Puzzles} to train 3D CNNs using a large-scale
video dataset. This task requires a network to arrange permuted 3D
spatio-temporal crops. By completing \textit{Space-Time Cubic Puzzles}, the
network learns both spatial appearance and temporal relation of video frames,
which is our final goal. In experiments, we demonstrate that our learned 3D
representation is well transferred to action recognition tasks, and outperforms
state-of-the-art 2D CNN-based competitors on UCF101 and HMDB51 datasets. | [
"cs.CV"
] |
There is a growing amount of literature on the relationship between wide
neural networks (NNs) and Gaussian processes (GPs), identifying an equivalence
between the two for a variety of NN architectures. This equivalence enables,
for instance, accurate approximation of the behaviour of wide Bayesian NNs
without MCMC or variational approximations, or characterisation of the
distribution of randomly initialised wide NNs optimised by gradient descent
without ever running an optimiser. We provide a rigorous extension of these
results to NNs involving attention layers, showing that unlike single-head
attention, which induces non-Gaussian behaviour, multi-head attention
architectures behave as GPs as the number of heads tends to infinity. We
further discuss the effects of positional encodings and layer normalisation,
and propose modifications of the attention mechanism which lead to improved
results for both finite and infinitely wide NNs. We evaluate attention kernels
empirically, leading to a moderate improvement upon the previous
state-of-the-art on CIFAR-10 for GPs without trainable kernels and advanced
data preprocessing. Finally, we introduce new features to the Neural Tangents
library (Novak et al., 2020) allowing applications of NNGP/NTK models, with and
without attention, to variable-length sequences, with an example on the IMDb
reviews dataset. | [
"stat.ML",
"cs.LG"
] |
Vision transformers (ViT) have demonstrated impressive performance across
various machine vision problems. These models are based on multi-head
self-attention mechanisms that can flexibly attend to a sequence of image
patches to encode contextual cues. An important question is how such
flexibility in attending image-wide context conditioned on a given patch can
facilitate handling nuisances in natural images e.g., severe occlusions, domain
shifts, spatial permutations, adversarial and natural perturbations. We
systematically study this question via an extensive set of experiments
encompassing three ViT families and comparisons with a high-performing
convolutional neural network (CNN). We show and analyze the following
intriguing properties of ViT: (a) Transformers are highly robust to severe
occlusions, perturbations and domain shifts, e.g., retain as high as 60% top-1
accuracy on ImageNet even after randomly occluding 80% of the image content.
(b) The robust performance to occlusions is not due to a bias towards local
textures, and ViTs are significantly less biased towards textures compared to
CNNs. When properly trained to encode shape-based features, ViTs demonstrate
shape recognition capability comparable to that of the human visual system,
previously unmatched in the literature. (c) Using ViTs to encode shape
representation leads to an interesting consequence of accurate semantic
segmentation without pixel-level supervision. (d) Off-the-shelf features from a
single ViT model can be combined to create a feature ensemble, leading to high
accuracy rates across a range of classification datasets in both traditional
and few-shot learning paradigms. We show that the effective features of ViTs
are due to the flexible and dynamic receptive fields made possible by the
self-attention mechanism. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
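
A toy sketch of the random patch-occlusion test referenced above (dropping a fraction of fixed-size patches before classification); the patch size, masking value and evaluation pipeline are illustrative assumptions.

```python
# Hedged sketch: randomly zero out a fraction of image patches before evaluation.
import torch

def occlude_patches(images, drop_ratio=0.8, patch=16):
    # images: (B, C, H, W) batch with H and W divisible by `patch`
    b, _, h, w = images.shape
    gh, gw = h // patch, w // patch
    out = images.clone()
    n_drop = int(drop_ratio * gh * gw)
    for i in range(b):
        for j in torch.randperm(gh * gw)[:n_drop].tolist():
            r, c0 = (j // gw) * patch, (j % gw) * patch
            out[i, :, r:r + patch, c0:c0 + patch] = 0   # mask the selected patch
    return out
```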
This paper presents the MAXQ approach to hierarchical reinforcement learning
based on decomposing the target Markov decision process (MDP) into a hierarchy
of smaller MDPs and decomposing the value function of the target MDP into an
additive combination of the value functions of the smaller MDPs. The paper
defines the MAXQ hierarchy, proves formal results on its representational
power, and establishes five conditions for the safe use of state abstractions.
The paper presents an online model-free learning algorithm, MAXQ-Q, and proves
that it converges with probability 1 to a kind of locally-optimal policy known
as a recursively optimal policy, even in the presence of the five kinds of
state abstraction. The paper evaluates the MAXQ representation and MAXQ-Q
through a series of experiments in three domains and shows experimentally that
MAXQ-Q (with state abstractions) converges to a recursively optimal policy much
faster than flat Q learning. The fact that MAXQ learns a representation of the
value function has an important benefit: it makes it possible to compute and
execute an improved, non-hierarchical policy via a procedure similar to the
policy improvement step of policy iteration. The paper demonstrates the
effectiveness of this non-hierarchical execution experimentally. Finally, the
paper concludes with a comparison to related work and a discussion of the
design tradeoffs in hierarchical reinforcement learning. | [
"cs.LG",
"I.2.6"
] |
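
For reference, the additive decomposition at the core of MAXQ expresses the action value of invoking a subtask as the subtask's own value plus a completion term (notation follows Dietterich's formulation; indices are schematic):

```latex
% MAXQ value function decomposition for parent task i and child (sub)task a
Q^{\pi}(i, s, a) \;=\; V^{\pi}(a, s) \;+\; C^{\pi}(i, s, a),
\qquad
V^{\pi}(i, s) \;=\;
\begin{cases}
Q^{\pi}\big(i, s, \pi(s)\big) & \text{if } i \text{ is composite},\\[2pt]
\mathbb{E}\big[ r(s, i) \big] & \text{if } i \text{ is primitive},
\end{cases}
```

where $C^{\pi}(i,s,a)$ is the expected discounted reward for completing task $i$ after subtask $a$ terminates.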
Objective: To determine the completeness of argumentative steps necessary to
conclude effectiveness of an algorithm in a sample of current ML/AI supervised
learning literature.
Data Sources: Papers published in the Neural Information Processing Systems
(NeurIPS, n\'ee NIPS) journal where the official record showed a 2017 year of
publication.
Eligibility Criteria: Studies reporting a (semi-)supervised model, or
pre-processing fused with (semi-)supervised models for tabular data.
Study Appraisal: Three reviewers applied the assessment criteria to determine
argumentative completeness. The criteria were split into three groups,
including: experiments (e.g. real and/or synthetic data), baselines (e.g.
uninformed and/or state-of-the-art) and quantitative comparison (e.g. performance
quantifiers with confidence intervals and formal comparison of the algorithm
against baselines).
Results: Of the 121 eligible manuscripts (from the sample of 679 abstracts),
99\% used real-world data and 29\% used synthetic data. 91\% of manuscripts did
not report an uninformed baseline and 55\% reported a state-of-the-art baseline.
32\% reported confidence intervals for performance but none provided references
or exposition for how these were calculated. 3\% reported formal comparisons.
Limitations: The use of one journal as the primary information source may not
be representative of all ML/AI literature. However, the NeurIPS conference is
recognised to be amongst the top tier concerning ML/AI studies, so it is
reasonable to consider its corpus to be representative of high-quality
research.
Conclusion: Using the 2017 sample of the NeurIPS supervised learning corpus
as an indicator for the quality and trustworthiness of current ML/AI research,
it appears that complete argumentative chains in demonstrations of algorithmic
effectiveness are rare. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Deep Learning (DL) is considered the state-of-the-art in computer vision,
speech recognition and natural language processing. Until recently, it was also
widely accepted that DL is irrelevant for learning tasks on tabular data,
especially in the small sample regime where ensemble methods are acknowledged
as the gold standard. We present a new end-to-end differentiable method to
train a standard FFNN. Our method, \textbf{Muddling labels for Regularization}
(\texttt{MLR}), penalizes memorization through the generation of uninformative
labels and the application of a differentiable closed-form regularization scheme
on the last hidden layer during training. \texttt{MLR} outperforms classical NN
and the gold standard (GBDT, RF) for regression and classification tasks on
several datasets from the UCI database and Kaggle covering a large range of
sample sizes and feature to sample ratios. Researchers and practitioners can
use \texttt{MLR} on its own as an off-the-shelf DL solution or integrate it
into the most advanced ML pipelines. | [
"cs.LG",
"cs.AI",
"68T07"
] |
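
A speculative reading of the regularizer described above (an assumption for illustration, not the authors' exact scheme): fit a closed-form ridge readout on the last hidden layer and penalize the network's ability to fit randomly permuted, uninformative labels relative to the true ones.

```python
# Speculative sketch (assumption, not the paper's implementation): a memorization
# penalty built from a closed-form ridge readout on the last hidden layer H,
# contrasting true labels y with permuted ("muddled") labels.
import torch

def ridge_fit_error(H, y, lam=1.0):
    # closed-form ridge regression of y on the hidden activations H
    d = H.shape[1]
    w = torch.linalg.solve(H.t() @ H + lam * torch.eye(d, device=H.device), H.t() @ y)
    return ((H @ w - y) ** 2).mean()

def mlr_style_loss(H, y, lam=1.0):
    y_muddled = y[torch.randperm(y.shape[0], device=y.device)]   # uninformative labels
    # fit the true labels well, but make the muddled labels hard to fit
    return ridge_fit_error(H, y, lam) - ridge_fit_error(H, y_muddled, lam)
```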
The over-segmentation into superpixels is an important preprocessing step to
smartly compress the input size and speed up higher level tasks. A superpixel
was traditionally considered as a small cluster of square-based pixels that
have similar color intensities and are closely located to each other. In this
discrete model the boundaries of superpixels often have irregular zigzags
consisting of horizontal or vertical edges from a given pixel grid. However,
digital images represent a continuous world, hence the following continuous
model in the resolution-independent formulation can be more suitable for the
reconstruction problem.
Instead of uniting squares in a grid, a resolution-independent superpixel is
defined as a polygon that has straight edges with any possible slope at
subpixel resolution. The harder continuous version of the over-segmentation
problem is to split an image into polygons and find a best (say, constant)
color of each polygon so that the resulting colored mesh well approximates the
given image. Such a mesh of polygons can be rendered at any higher resolution
with all edges kept straight.
We propose a fast conversion of any traditional superpixels into polygons and
guarantee that their straight edges do not intersect. The meshes based on the
superpixels SEEDS (Superpixels Extracted via Energy-Driven Sampling) and SLIC
(Simple Linear Iterative Clustering) are compared with past meshes based on the
Line Segment Detector. The experiments on the Berkeley Segmentation Database
confirm that the new superpixels have more compact shapes than pixel-based
superpixels. | [
"cs.CV"
] |
We investigate a classification problem using multiple mobile agents capable
of collecting (partial) pose-dependent observations of an unknown environment.
The objective is to classify an image over a finite time horizon. We propose a
network architecture on how agents should form a local belief, take local
actions, and extract relevant features from their raw partial observations.
Agents are allowed to exchange information with their neighboring agents to
update their own beliefs. It is shown how reinforcement learning techniques can
be utilized to achieve decentralized implementation of the classification
problem by running a decentralized consensus protocol. Our experimental results
on the MNIST handwritten digit dataset demonstrate the effectiveness of our
proposed framework. | [
"cs.LG",
"cs.CV",
"cs.MA",
"cs.RO",
"cs.SY",
"stat.ML"
] |
We study how robots can autonomously learn skills that require a combination
of navigation and grasping. While reinforcement learning in principle provides
for automated robotic skill learning, in practice reinforcement learning in the
real world is challenging and often requires extensive instrumentation and
supervision. Our aim is to devise a robotic reinforcement learning system for
learning navigation and manipulation together, in an autonomous way without
human intervention, enabling continual learning under realistic assumptions.
Our proposed system, ReLMM, can learn continuously on a real-world platform
without any environment instrumentation, without human intervention, and
without access to privileged information, such as maps, object positions, or a
global view of the environment. Our method employs a modularized policy with
components for manipulation and navigation, where manipulation policy
uncertainty drives exploration for the navigation controller, and the
manipulation module provides rewards for navigation. We evaluate our method on
a room cleanup task, where the robot must navigate to and pick up items
scattered on the floor. After a grasp curriculum training phase, ReLMM can
learn navigation and grasping together fully automatically, in around 40 hours
of autonomous real-world training. | [
"cs.LG",
"cs.RO"
] |
Food quality and safety are of great concern to society, since they are
essential guarantees not only for human health but also for social development
and stability. Ensuring food quality and safety is a complex process. All food
processing stages should be considered, from cultivating, harvesting and
storage to preparation and consumption. Grading is one of the essential
processes to control food quality. This paper proposed a mobile visual-based
system to evaluate food grading. Specifically, the proposed system acquires
images of bananas when they are on moving conveyors. A two-layer image
processing system based on machine learning is used to grade bananas, and these
two layers are allocated on edge devices and cloud servers, respectively.
A Support Vector Machine (SVM) is used as the first layer to classify bananas based on an
extracted feature vector composed of color and texture features. Then, a
You Only Look Once (YOLO) v3 model further locates the peel's defective areas
and determines whether the inputs belong to the mid-ripened or well-ripened class.
According to experimental results, the first layer's performance achieved an
accuracy of 98.5% while the accuracy of the second layer is 85.7%, and the
overall accuracy is 96.4%. | [
"cs.CV",
"cs.AI",
"cs.LG",
"cs.SY",
"eess.SY"
] |
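A hedged sketch of the first (SVM) layer described above; the exact color/texture features and hyperparameters are not specified in the abstract, so the feature extractor below (HSV statistics plus gradient-magnitude texture statistics) and the RBF kernel are illustrative assumptions.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def banana_features(bgr_image):
    """Color + texture statistics, a stand-in for the paper's feature vector."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV).reshape(-1, 3)
    color = np.concatenate([hsv.mean(axis=0), hsv.std(axis=0)])
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)        # crude texture descriptor:
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)        # gradient magnitude statistics
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return np.concatenate([color, [mag.mean(), mag.std()]])

# Hypothetical training data: X_train is a list of BGR crops, y_train their grades.
# clf = SVC(kernel="rbf").fit([banana_features(im) for im in X_train], y_train)
# grade = clf.predict([banana_features(test_image)])[0]
```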
Human action recognition from skeleton data, fueled by the Graph
Convolutional Network (GCN), has attracted lots of attention, due to its
powerful capability of modeling non-Euclidean structure data. However, many
existing GCN methods provide a pre-defined graph and fix it through the entire
network, which can lose implicit joint correlations. Besides, the mainstream
spectral GCN is approximated by a first-order hop, thus higher-order connections
are not well involved. Therefore, huge efforts are required to explore a better
GCN architecture. To address these problems, we turn to Neural Architecture
Search (NAS) and propose the first automatically designed GCN for
skeleton-based action recognition. Specifically, we enrich the search space by
providing multiple dynamic graph modules after fully exploring the
spatial-temporal correlations between nodes. Besides, we introduce multiple-hop
modules and expect to break the limitation of representational capacity caused
by the first-order approximation. Moreover, a sampling- and memory-efficient
evolution strategy is proposed to search an optimal architecture for this task.
The resulting architecture proves the effectiveness of the higher-order
approximation and of the dynamic graph modeling mechanism with temporal
interactions, which has barely been discussed before. To evaluate the performance of
the searched model, we conduct extensive experiments on two very large-scale
datasets, and the results show that our model achieves state-of-the-art results. | [
"cs.CV"
] |
Reinforcement learning methods have recently been very successful at
performing complex sequential tasks like playing Atari games, Go and Poker.
These algorithms have outperformed humans in several tasks by learning from
scratch, using only scalar rewards obtained through interaction with their
environment. While there certainly has been considerable independent innovation
to produce such results, many core ideas in reinforcement learning are inspired
by phenomena in animal learning, psychology and neuroscience. In this paper, we
comprehensively review a large number of findings in both neuroscience and
psychology that evidence reinforcement learning as a promising candidate for
modeling learning and decision making in the brain. In doing so, we construct a
mapping between various classes of modern RL algorithms and specific findings
in both neurophysiological and behavioral literature. We then discuss the
implications of this observed relationship between RL, neuroscience and
psychology and its role in advancing research in both AI and brain science. | [
"cs.LG"
] |
With the prevalence of Diabetes, the Diabetes Mellitus Retinopathy (DR) is
becoming a major health problem across the world. The long-term medical
complications arising due to DR have a significant impact on the patient as
well as the society, as the disease mostly affects individuals in their most
productive years. Early detection and treatment can help reduce the extent of
damage to the patients. The rise of Convolutional Neural Networks for
predictive analysis in the medical field paves the way for a robust solution to
DR detection. This paper studies the performance of several highly efficient
and scalable CNN architectures for Diabetic Retinopathy Classification with the
help of Transfer Learning. The research focuses on VGG16, Resnet50 V2 and
EfficientNet B0 models. The classification performance is analyzed using
several performance metrics including True Positive Rate, False Positive Rate,
Accuracy, etc. Also, several performance graphs are plotted for visualizing the
architecture performance including Confusion Matrix, ROC Curve, etc. The
results indicate that Transfer Learning with ImageNet weights using VGG 16
model demonstrates the best classification performance with the best Accuracy
of 95%. It is closely followed by ResNet50 V2 architecture with the best
Accuracy of 93%. This paper shows that predictive analysis of DR from retinal
images is achieved with Transfer Learning on Convolutional Neural Networks. | [
"cs.CV",
"cs.AI"
] |
Invariance to a broad array of image corruptions, such as warping, noise, or
color shifts, is an important aspect of building robust models in computer
vision. Recently, several new data augmentations have been proposed that
significantly improve performance on ImageNet-C, a benchmark of such
corruptions. However, there is still a lack of basic understanding on the
relationship between data augmentations and test-time corruptions. To this end,
we develop a feature space for image transforms, and then use a new measure in
this space between augmentations and corruptions called the Minimal Sample
Distance to demonstrate there is a strong correlation between similarity and
performance. We then investigate recent data augmentations and observe a
significant degradation in corruption robustness when the test-time corruptions
are sampled to be perceptually dissimilar from ImageNet-C in this feature
space. Our results suggest that test error can be improved by training on
perceptually similar augmentations, and data augmentations may not generalize
well beyond the existing benchmark. We hope our results and tools will allow
for more robust progress towards improving robustness to image corruptions. | [
"cs.CV",
"cs.LG"
] |
Graph representation learning plays a vital role in processing
graph-structured data. However, prior art on graph representation learning
heavily rely on labeling information. To overcome this problem, inspired by the
recent success of graph contrastive learning and Siamese networks in visual
representation learning, we propose a novel self-supervised approach in this
paper to learn node representations by enhancing Siamese self-distillation with
multi-scale contrastive learning. Specifically, we first generate two augmented
views from the input graph based on local and global perspectives. Then, we
employ two objectives called cross-view and cross-network contrastiveness to
maximize the agreement between node representations across different views and
networks. To demonstrate the effectiveness of our approach, we perform
empirical experiments on five real-world datasets. Our method not only achieves
new state-of-the-art results but also surpasses some semi-supervised
counterparts by large margins. Code is made available at
https://github.com/GRAND-Lab/MERIT | [
"cs.LG",
"cs.SI"
] |
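A simplified sketch of a cross-view node-level contrastive objective between two augmented graph views; the full method additionally uses Siamese self-distillation and cross-network terms, so treat the encoder, the temperature, and the InfoNCE form below as illustrative assumptions rather than the exact MERIT objective.

```python
import torch
import torch.nn.functional as F

def cross_view_infonce(z1, z2, temperature=0.5):
    """Contrast each node's embedding in view 1 against all nodes in view 2.

    z1, z2: (n_nodes, dim) embeddings of the same nodes under two augmentations;
    the positive pair for node i is (z1[i], z2[i]).
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature            # (n, n) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# loss = cross_view_infonce(encoder(view1), encoder(view2))   # encoder: a GNN
```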
This paper proposes a general multi-modal data learning method, which
includes Global Homogeneous Transformation, Local Homogeneous Transformation
and their combination. During ReID model training, on the one hand, it randomly
selects a rectangular area in the RGB image and replaces its color with that of
the same rectangular area in the corresponding homogeneous image, thus
generating a training image with different homogeneous areas; on the other
hand, it converts an image into a homogeneous image. These two methods help the model to directly
learn the relationship between different modalities in the Special ReID task.
In single-modal ReID tasks, it can be used as an effective data augmentation.
The experimental results show that our method achieves a performance
improvement of up to 3.3% in the single-modal ReID task, and an improvement of
more than 8% in Sketch Re-identification. In addition, our
experiments also show that this method is also very useful in adversarial
training for adversarial defense. It can help the model learn faster and better
from adversarial examples. | [
"cs.CV"
] |
We introduce a method for learning to generate the surface of 3D shapes. Our
approach represents a 3D shape as a collection of parametric surface elements
and, in contrast to methods generating voxel grids or point clouds, naturally
infers a surface representation of the shape. Beyond its novelty, our new shape
generation framework, AtlasNet, comes with significant advantages, such as
improved precision and generalization capabilities, and the possibility to
generate a shape of arbitrary resolution without memory issues. We demonstrate
these benefits and compare to strong baselines on the ShapeNet benchmark for
two applications: (i) auto-encoding shapes, and (ii) single-view reconstruction
from a still image. We also provide results showing its potential for other
applications, such as morphing, parametrization, super-resolution, matching,
and co-segmentation. | [
"cs.CV"
] |
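A minimal sketch of one AtlasNet-style surface element: an MLP that maps 2D points sampled in the unit square, concatenated with a shape latent code, to 3D surface points; layer sizes and the latent dimension are assumptions, and the full model uses a collection of such elements trained with a Chamfer-type loss.

```python
import torch
import torch.nn as nn

class SurfaceElement(nn.Module):
    """Maps (u, v) samples of the unit square plus a shape code to 3D points."""

    def __init__(self, latent_dim=1024, hidden=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Tanh(),
        )

    def forward(self, uv, code):
        # uv: (n_points, 2) in [0, 1]^2; code: (latent_dim,) shape embedding
        code = code.expand(uv.size(0), -1)
        return self.mlp(torch.cat([uv, code], dim=1))   # (n_points, 3)

# element = SurfaceElement()
# points = element(torch.rand(2500, 2), torch.randn(1024))  # denser uv -> finer surface
```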
This paper studies the problem of blind face restoration from an
unconstrained blurry, noisy, low-resolution, or compressed image (i.e.,
degraded observation). For better recovery of fine facial details, we modify
the problem setting by taking both the degraded observation and a high-quality
guided image of the same identity as input to our guided face restoration
network (GFRNet). However, the degraded observation and guided image generally
are different in pose, illumination and expression, thereby making plain CNNs
(e.g., U-Net) fail to recover fine and identity-aware facial details. To tackle
this issue, our GFRNet model includes both a warping subnetwork (WarpNet) and a
reconstruction subnetwork (RecNet). The WarpNet is introduced to predict flow
field for warping the guided image to correct pose and expression (i.e., warped
guidance), while the RecNet takes the degraded observation and warped guidance
as input to produce the restoration result. Because the ground-truth flow
field is unavailable, a landmark loss together with total variation
regularization is incorporated to guide the learning of WarpNet. Furthermore,
to make the model applicable to blind restoration, our GFRNet is trained on the
synthetic data with versatile settings on blur kernel, noise level,
downsampling scale factor, and JPEG quality factor. Experiments show that our
GFRNet not only performs favorably against the state-of-the-art image and face
restoration methods, but also generates visually photo-realistic results on
real degraded facial images. | [
"cs.CV"
] |
Verb Sense Disambiguation is a well-known task in NLP, the aim is to find the
correct sense of a verb in a sentence. Recently, this problem has been extended
in a multimodal scenario, by exploiting both textual and visual features of
ambiguous verbs leading to a new problem, the Visual Verb Sense Disambiguation
(VVSD). Here, the sense of a verb is assigned considering the content of an
image paired with it rather than a sentence in which the verb appears.
Annotating a dataset for this task is more complex than textual disambiguation,
because assigning the correct sense to a pair of $<$image, verb$>$ requires
both non-trivial linguistic and visual skills. In this work, differently from
the literature, the VVSD task will be performed in a transductive
semi-supervised learning (SSL) setting, in which only a small amount of labeled
information is required, tremendously reducing the need for annotated data. The
disambiguation process is based on a graph-based label propagation method which
takes into account mono or multimodal representations for $<$image, verb$>$
pairs. Experiments have been carried out on the recently published dataset
VerSe, the only available dataset for this task. The achieved results
outperform the current state-of-the-art by a large margin while using only a
small fraction of labeled samples per sense. Code available:
https://github.com/GiBg1aN/TVVSD. | [
"cs.CV",
"cs.CL"
] |
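A minimal sketch of transductive graph-based label propagation over <image, verb> pairs, assuming a precomputed affinity matrix built from mono- or multimodal representations; the normalization, alpha, and iteration count follow a standard label-propagation recipe and are not taken from the paper.

```python
import numpy as np

def label_propagation(W, Y, alpha=0.9, n_iter=50):
    """Transductive label propagation over <image, verb> pairs.

    W: (n, n) symmetric affinity matrix between pairs (mono- or multimodal).
    Y: (n, n_senses) one-hot rows for the few labeled pairs, zeros elsewhere.
    """
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))          # symmetric normalization D^-1/2 W D^-1/2
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)                  # predicted verb sense per pair
```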
Depth estimation, as a necessary clue to convert 2D images into the 3D space,
has been applied in many machine vision areas. However, to achieve complete
360-degree surrounding geometric sensing, traditional stereo matching
algorithms for depth estimation are limited due to large noise, low accuracy,
and strict requirements for multi-camera calibration. In this work, for a
unified surrounding perception, we introduce panoramic images to obtain larger
field of view. We extend PADENet, which first appeared in our previous conference
work on outdoor scene understanding, to perform panoramic monocular depth
estimation with a focus on indoor scenes. At the same time, we improve the
training process of the neural network adapted to the characteristics of
panoramic images. In addition, we fuse a traditional stereo matching algorithm
with deep learning methods and further improve the accuracy of depth
predictions. With a comprehensive variety of experiments, this research
demonstrates the effectiveness of our schemes aiming for indoor scene
perception. | [
"cs.CV",
"cs.RO"
] |
Nowadays, with the rapid development of consumer Unmanned Aerial Vehicles
(UAVs), visual surveillance by utilizing the UAV platform has been very
attractive. Most of the research works for UAV captured visual data are mainly
focused on the tasks of object detection and tracking. However, limited
attention has been paid to the task of person Re-identification (ReID) which
has been widely studied in ordinary surveillance cameras with fixed
emplacements. In this paper, to facilitate the research of person ReID in
aerial imagery, we collect a large scale airborne person ReID dataset named as
Person ReID for Aerial Imagery (PRAI-1581), which consists of 39,461 images of
1581 person identities. The images of the dataset are shot by two DJI consumer
UAVs flying at an altitude ranging from 20 to 60 meters above the ground, which
covers most of the real UAV surveillance scenarios. In addition, we propose to
utilize subspace pooling of convolution feature maps to represent the input
person images. Our method can learn a discriminative and compact feature
representation for ReID in aerial imagery and can be trained in an end-to-end
fashion efficiently. We conduct extensive experiments on the proposed dataset
and the experimental results demonstrate that re-identifying persons in aerial
imagery is a challenging problem, where our method performs favorably against
the state of the art. Our dataset can be accessed via
\url{https://github.com/stormyoung/PRAI-1581}. | [
"cs.CV"
] |
The objective of this paper is self-supervised learning of spatio-temporal
embeddings from video, suitable for human action recognition. We make three
contributions: First, we introduce the Dense Predictive Coding (DPC) framework
for self-supervised representation learning on videos. This learns a dense
encoding of spatio-temporal blocks by recurrently predicting future
representations; Second, we propose a curriculum training scheme to predict
further into the future with progressively less temporal context. This
encourages the model to only encode slowly varying spatial-temporal signals,
therefore leading to semantic representations; Third, we evaluate the approach
by first training the DPC model on the Kinetics-400 dataset with
self-supervised learning, and then finetuning the representation on a
downstream task, i.e. action recognition. With single stream (RGB only), DPC
pretrained representations achieve state-of-the-art self-supervised performance
on both UCF101(75.7% top1 acc) and HMDB51(35.7% top1 acc), outperforming all
previous learning methods by a significant margin, and approaching the
performance of a baseline pre-trained on ImageNet. | [
"cs.CV"
] |
We propose a method for causal inference using satellite image time series,
in order to determine the treatment effects of interventions which impact
climate change, such as deforestation. Simply put, the aim is to quantify the
'before versus after' effect of climate related human driven interventions,
such as urbanization; as well as natural disasters, such as hurricanes and
forest fires. As a concrete example, we focus on quantifying forest tree cover
change/ deforestation due to human led causes. The proposed method involves the
following steps. First, we use computer vision and machine learning/deep
learning techniques to detect and quantify forest tree coverage levels over
time, at every time epoch. We then look at this time series to identify
changepoints. Next, we estimate the expected (forest tree cover) values using a
Bayesian structural causal model and projecting/forecasting the counterfactual.
This is compared to the values actually observed post intervention, and the
difference in the two values gives us the effect of the intervention (as
compared to the non intervention scenario, i.e. what would have possibly
happened without the intervention). As a specific use case, we analyze
deforestation levels before and after the hyperinflation event (intervention)
in Brazil (which ended in 1993-94), for the Amazon rainforest region, around
Rondonia, Brazil. For this deforestation use case, using our causal inference
framework can help causally attribute change/reduction in forest tree cover and
increasing deforestation rates due to human activities at various points in
time. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
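A hedged sketch of the pre/post counterfactual logic described above, with a plain linear-trend forecast standing in for the Bayesian structural causal model actually used; the data and intervention index are illustrative.

```python
import numpy as np

def intervention_effect(series, t_intervention):
    """Observed post-intervention values minus a forecast counterfactual."""
    t = np.arange(len(series))
    slope, intercept = np.polyfit(t[:t_intervention], series[:t_intervention], deg=1)
    counterfactual = slope * t[t_intervention:] + intercept   # "no intervention" forecast
    return series[t_intervention:] - counterfactual           # per-epoch treatment effect

# forest_cover = np.array([...])   # yearly tree-cover fraction (illustrative only)
# effect = intervention_effect(forest_cover, t_intervention=12)
```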
Video question answering (VideoQA) is challenging given its multimodal
combination of visual understanding and natural language understanding. While
existing approaches seldom leverage the appearance-motion information in the
video at multiple temporal scales, the interaction between the question and the
visual information for textual semantics extraction is frequently ignored.
Targeting these issues, this paper proposes a novel Temporal Pyramid
Transformer (TPT) model with multimodal interaction for VideoQA. The TPT model
comprises two modules, namely Question-specific Transformer (QT) and Visual
Inference (VI). Given the temporal pyramid constructed from a video, QT builds
the question semantics from the coarse-to-fine multimodal co-occurrence between
each word and the visual content. Under the guidance of such question-specific
semantics, VI infers the visual clues from the local-to-global multi-level
interactions between the question and the video. Within each module, we
introduce a multimodal attention mechanism to aid the extraction of
question-video interactions, with residual connections adopted for the
information passing across different levels. Through extensive experiments on
three VideoQA datasets, we demonstrate better performances of the proposed
method in comparison with the state-of-the-arts. | [
"cs.CV"
] |
We present an approach to explain the decisions of black box models for image
classification. While using the black box to label images, our explanation
method exploits the latent feature space learned through an adversarial
autoencoder. The proposed method first generates exemplar images in the latent
feature space and learns a decision tree classifier. Then, it selects and
decodes exemplars respecting local decision rules. Finally, it visualizes them
in a manner that shows to the user how the exemplars can be modified to either
stay within their class, or to become counter-factuals by "morphing" into
another class. Since we focus on black box decision systems for image
classification, the explanation obtained from the exemplars also provides a
saliency map highlighting the areas of the image that contribute to its
classification, and areas of the image that push it into another class. We
present the results of an experimental evaluation on three datasets and two
black box models. Besides providing the most useful and interpretable
explanations, we show that the proposed method outperforms existing explainers
in terms of fidelity, relevance, coherence, and stability. | [
"cs.CV",
"cs.LG"
] |
This paper introduces SuperGlue, a neural network that matches two sets of
local features by jointly finding correspondences and rejecting non-matchable
points. Assignments are estimated by solving a differentiable optimal transport
problem, whose costs are predicted by a graph neural network. We introduce a
flexible context aggregation mechanism based on attention, enabling SuperGlue
to reason about the underlying 3D scene and feature assignments jointly.
Compared to traditional, hand-designed heuristics, our technique learns priors
over geometric transformations and regularities of the 3D world through
end-to-end training from image pairs. SuperGlue outperforms other learned
approaches and achieves state-of-the-art results on the task of pose estimation
in challenging real-world indoor and outdoor environments. The proposed method
performs matching in real-time on a modern GPU and can be readily integrated
into modern SfM or SLAM systems. The code and trained weights are publicly
available at https://github.com/magicleap/SuperGluePretrainedNetwork. | [
"cs.CV"
] |
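A sketch of the differentiable optimal-transport step: given a matching score matrix predicted by the graph neural network, log-domain Sinkhorn iterations with uniform marginals yield a soft assignment; SuperGlue additionally appends a learned dustbin row/column for unmatched points, which is omitted here.

```python
import math
import torch

def log_sinkhorn(scores, n_iters=100):
    """Soft assignment from an (m, n) matching score matrix via Sinkhorn
    normalization in the log domain (uniform marginals, no dustbin)."""
    m, n = scores.shape
    log_mu = torch.full((m,), -math.log(m))
    log_nu = torch.full((n,), -math.log(n))
    u, v = torch.zeros(m), torch.zeros(n)
    for _ in range(n_iters):
        u = log_mu - torch.logsumexp(scores + v[None, :], dim=1)
        v = log_nu - torch.logsumexp(scores + u[:, None], dim=0)
    return torch.exp(scores + u[:, None] + v[None, :])   # (m, n) transport plan

# scores = desc_a @ desc_b.t()          # hypothetical descriptor similarities
# P = log_sinkhorn(scores); matches = P.argmax(dim=1)
```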
The vulnerability of face recognition systems to presentation attacks has
limited their application in security-critical scenarios. Automatic methods of
detecting such malicious attempts are essential for the safe use of facial
recognition technology. Although various methods have been suggested for
detecting such attacks, most of them over-fit the training set and fail in
generalizing to unseen attacks and environments. In this work, we use transfer
learning from the vision transformer model for the zero-shot anti-spoofing
task. The effectiveness of the proposed approach is demonstrated through
experiments in publicly available datasets. The proposed approach outperforms
the state-of-the-art methods in the zero-shot protocols in the HQ-WMCA and
SiW-M datasets by a large margin. Besides, the model achieves a significant
boost in cross-database performance as well. | [
"cs.CV"
] |
Most semantic segmentation models treat semantic segmentation as a pixel-wise
classification task and use a pixel-wise classification error as their
optimization criterion. However, the pixel-wise error ignores the strong
dependencies among the pixels in an image, which limits the performance of the
model. Several ways to incorporate the structure information of the objects
have been investigated, e.g., conditional random fields (CRF), image structure
priors based methods, and generative adversarial network (GAN). Nevertheless,
these methods usually require extra model branches or additional memories, and
some of them show limited improvements. In contrast, we propose a simple yet
effective structural similarity loss (SSL) to encode the structure information
of the objects, which only requires a few additional computational resources in
the training phase. Inspired by the widely-used structural similarity (SSIM)
index in image quality assessment, we use the linear correlation between two
images to quantify their structural similarity. And the goal of the proposed
SSL is to pay more attention to the positions, whose associated predictions
lead to a low degree of linear correlation between two corresponding regions in
the ground truth map and the predicted map. Thus the model can achieve a strong
structural similarity between the two maps through minimizing the SSL over the
whole map. The experimental results demonstrate that our method can achieve
substantial and consistent improvements in performance on the PASCAL VOC 2012
and Cityscapes datasets. The code will be released soon. | [
"cs.CV"
] |
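A simplified, whole-map sketch of the linear-correlation idea behind SSL: penalize a low Pearson correlation between the predicted class map and its ground truth; the paper applies the correlation over local regions and uses it to re-weight the pixel-wise loss, which this sketch does not reproduce.

```python
import torch

def structural_similarity_loss(pred, target, eps=1e-6):
    """1 - Pearson correlation between predicted and ground-truth maps.

    pred, target: (B, H, W) tensors, e.g. a per-class probability map and the
    matching one-hot ground-truth map (a whole-map simplification of SSL).
    """
    pred = pred.flatten(1) - pred.flatten(1).mean(dim=1, keepdim=True)
    target = target.flatten(1) - target.flatten(1).mean(dim=1, keepdim=True)
    corr = (pred * target).sum(dim=1) / (pred.norm(dim=1) * target.norm(dim=1) + eps)
    return (1.0 - corr).mean()
```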
Deep generative models learned through adversarial training have become
increasingly popular for their ability to generate naturalistic image textures.
However, aside from their texture, the visual appearance of objects is
significantly influenced by their shape geometry; information which is not
taken into account by existing generative models. This paper introduces the
Geometry-Aware Generative Adversarial Networks (GAGAN) for incorporating
geometric information into the image generation process. Specifically, in GAGAN
the generator samples latent variables from the probability space of a
statistical shape model. By mapping the output of the generator to a canonical
coordinate frame through a differentiable geometric transformation, we enforce
the geometry of the objects and add an implicit connection from the prior to
the generated object. Experimental results on face generation indicate that the
GAGAN can generate realistic images of faces with arbitrary facial attributes
such as facial expression, pose, and morphology, that are of better quality
than current GAN-based methods. Our method can be used to augment any existing
GAN architecture and improve the quality of the images generated. | [
"cs.CV"
] |
In this paper, we propose a deep learning architecture that produces accurate
dense depth for the outdoor scene from a single color image and a sparse depth.
Inspired by the indoor depth completion, our network estimates surface normals
as the intermediate representation to produce dense depth, and can be trained
end-to-end. With a modified encoder-decoder structure, our network effectively
fuses the dense color image and the sparse LiDAR depth. To address outdoor
specific challenges, our network predicts a confidence mask to handle mixed
LiDAR signals near foreground boundaries due to occlusion, and combines
estimates from the color image and surface normals with learned attention maps
to improve the depth accuracy especially for distant areas. Extensive
experiments demonstrate that our model improves upon the state-of-the-art
performance on KITTI depth completion benchmark. Ablation study shows the
positive impact of each model components to the final performance, and
comprehensive analysis shows that our model generalizes well to the input with
higher sparsity or from indoor scenes. | [
"cs.CV"
] |
In this work, we address the task of referring image segmentation (RIS),
which aims at predicting a segmentation mask for the object described by a
natural language expression. Most existing methods focus on establishing
unidirectional or bidirectional relationships between visual and linguistic
features to associate two modalities together, while the multi-scale context is
ignored or insufficiently modeled. Multi-scale context is crucial to localize
and segment those objects that have large scale variations during the
multi-modal fusion process. To solve this problem, we propose a simple yet
effective Cascaded Multi-modal Fusion (CMF) module, which stacks multiple
atrous convolutional layers in parallel and further introduces a cascaded
branch to fuse visual and linguistic features. The cascaded branch can
progressively integrate multi-scale contextual information and facilitate the
alignment of two modalities during the multi-modal fusion process. Experimental
results on four benchmark datasets demonstrate that our method outperforms most
state-of-the-art methods. Code is available at
https://github.com/jianhua2022/CMF-Refseg. | [
"cs.CV"
] |
Although transfer learning is proven to be effective in computer vision and
natural language processing applications, it is rarely investigated in
forecasting financial time series. The majority of existing works on transfer
learning are based on single-source transfer learning due to the availability
of open-access large-scale datasets. However, in financial domain, the lengths
of individual time series are relatively short and single-source transfer
learning models are less effective. Therefore, in this paper, we investigate
multi-source deep transfer learning for financial time series. We propose two
multi-source transfer learning methods namely Weighted Average Ensemble for
Transfer Learning (WAETL) and Tree-structured Parzen Estimator Ensemble
Selection (TPEES). The effectiveness of our approach is evaluated on financial
time series extracted from stock markets. Experiment results reveal that TPEES
outperforms other baseline methods on majority of multi-source transfer tasks. | [
"cs.LG"
] |
As an intuitive way of expressing emotion, animated Graphical Interchange
Format (GIF) images have been widely used on social media. Most previous
studies on automated GIF emotion recognition fail to effectively utilize GIF's
unique properties, and this potentially limits the recognition performance. In
this study, we demonstrate the importance of human related information in GIFs
and conduct human-centered GIF emotion recognition with a proposed Keypoint
Attended Visual Attention Network (KAVAN). The framework consists of a facial
attention module and a hierarchical segment temporal module. The facial
attention module exploits the strong relationship between GIF contents and
human characters, and extracts frame-level visual feature with a focus on human
faces. The Hierarchical Segment LSTM (HS-LSTM) module is then proposed to
better learn global GIF representations. Our proposed framework outperforms the
state-of-the-art on the MIT GIFGIF dataset. Furthermore, the facial attention
module provides reliable facial region mask predictions, which improves the
model's interpretability. | [
"cs.CV"
] |
In this paper, we present an object detection method that tackles the
stingray detection problem based on aerial images. In this problem, the images
are aerially captured on a sea-surface area by using an Unmanned Aerial Vehicle
(UAV), and the stingrays swimming under (but close to) the sea surface are the
target we want to detect and locate. To this end, we use a deep object
detection method, faster RCNN, to train a stingray detector based on a limited
training set of images. To boost the performance, we develop a new generative
approach, conditional GLO, to increase the training samples of stingray, which
is an extension of the Generative Latent Optimization (GLO) approach. Unlike
traditional data augmentation methods that generate new data only for image
classification, our proposed method that mixes foreground and background
together can generate new data for an object detection task, and thus improve
the training efficacy of a CNN detector. Experimental results show that
satisfactory performance can be obtained by using our approach on stingray
detection in aerial images. | [
"cs.CV"
] |
We introduce a new approach to functional causal modeling from observational
data, called Causal Generative Neural Networks (CGNN). CGNN leverages the power
of neural networks to learn a generative model of the joint distribution of the
observed variables, by minimizing the Maximum Mean Discrepancy between
generated and observed data. An approximate learning criterion is proposed to
scale the computational cost of the approach to linear complexity in the number
of observations. The performance of CGNN is studied throughout three
experiments. Firstly, CGNN is applied to cause-effect inference, where the task
is to identify the best causal hypothesis out of $X\rightarrow Y$ and
$Y\rightarrow X$. Secondly, CGNN is applied to the problem of identifying
v-structures and conditional independences. Thirdly, CGNN is applied to
multivariate functional causal modeling: given a skeleton describing the direct
dependences in a set of random variables $\textbf{X} = [X_1, \ldots, X_d]$,
CGNN orients the edges in the skeleton to uncover the directed acyclic causal
graph describing the causal structure of the random variables. On all three
tasks, CGNN is extensively assessed on both artificial and real-world data,
comparing favorably to the state-of-the-art. Finally, CGNN is extended to
handle the case of confounders, where latent variables are involved in the
overall causal model. | [
"stat.ML"
] |
This paper introduces the task of visual question answering for remote
sensing data (RSVQA). Remote sensing images contain a wealth of information
which can be useful for a wide range of tasks including land cover
classification, object counting or detection. However, most of the available
methodologies are task-specific, thus inhibiting generic and easy access to the
information contained in remote sensing data. As a consequence, accurate remote
sensing product generation still requires expert knowledge. With RSVQA, we
propose a system to extract information from remote sensing data that is
accessible to every user: we use questions formulated in natural language and
use them to interact with the images. With the system, images can be queried to
obtain high level information specific to the image content or relational
dependencies between objects visible in the images. Using an automatic method
introduced in this article, we built two datasets (using low and high
resolution data) of image/question/answer triplets. The information required to
build the questions and answers is queried from OpenStreetMap (OSM). The
datasets can be used to train (when using supervised methods) and evaluate
models to solve the RSVQA task. We report the results obtained by applying a
model based on Convolutional Neural Networks (CNNs) for the visual part and on
a Recurrent Neural Network (RNN) for the natural language part to this task.
The model is trained on the two datasets, yielding promising results in both
cases. | [
"cs.CV"
] |
Robust detection and tracking of objects is crucial for the deployment of
autonomous vehicle technology. Image based benchmark datasets have driven
development in computer vision tasks such as object detection, tracking and
segmentation of agents in the environment. Most autonomous vehicles, however,
carry a combination of cameras and range sensors such as lidar and radar. As
machine learning based methods for detection and tracking become more
prevalent, there is a need to train and evaluate such methods on datasets
containing range sensor data along with images. In this work we present
nuTonomy scenes (nuScenes), the first dataset to carry the full autonomous
vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with full 360 degree
field of view. nuScenes comprises 1000 scenes, each 20s long and fully
annotated with 3D bounding boxes for 23 classes and 8 attributes. It has 7x as
many annotations and 100x as many images as the pioneering KITTI dataset. We
define novel 3D detection and tracking metrics. We also provide careful dataset
analysis as well as baselines for lidar and image based detection and tracking.
Data, development kit and more information are available online. | [
"cs.LG",
"cs.CV",
"cs.RO",
"stat.ML"
] |
Topological alignments and snakes are used in image processing, particularly
in locating object boundaries. Both of them have their own advantages and
limitations. To improve the overall image boundary detection system, we focused
on developing a novel algorithm for image processing. The proposed algorithm is
based on the active contour method in conjunction with the topological
alignments method to enhance the boundary detection approach. The algorithm
presents a novel technique to incorporate the advantages of both Topological
Alignments and snakes: the initial segmentation produced by Topological
Alignments is first transformed into the input of the snake model, which then
evolves toward the boundary of the object of interest. The results show that
the algorithm can deal with low-contrast images and cell shapes, and
demonstrates segmentation accuracy under weak image boundaries, which are
responsible for the lack of accuracy in boundary detection techniques. We
achieved better segmentation and boundary detection for the image, as well as
the ability to improve low-contrast cases and to deal with over- and
under-segmentation. | [
"cs.CV"
] |
Continual learning protocols are attracting increasing attention from the
medical imaging community. In a continual setup, data from different sources
arrives sequentially and each batch is only available for a limited period.
Given the inherent privacy risks associated with medical data, this setup
reflects the reality of deployment for deep learning diagnostic radiology
systems. Many techniques exist to learn continuously for classification tasks,
and several have been adapted to semantic segmentation. Yet most have at least
one of the following flaws: a) they rely too heavily on domain identity
information during inference, or b) data as seen in early training stages does
not profit from training with later data. In this work, we propose an
evaluation framework that addresses both concerns, and introduce a fair
multi-model benchmark. We show that the benchmark outperforms two popular
continual learning methods for the task of T2-weighted MR prostate
segmentation. | [
"cs.CV",
"cs.LG"
] |
Recent work has shown significant progress in the direction of synthetic data
generation using Generative Adversarial Networks (GANs). GANs have been applied
in many fields of computer vision including text-to-image conversion, domain
transfer, super-resolution, and image-to-video applications. In computer
vision, traditional GANs are based on deep convolutional neural networks.
However, deep convolutional neural networks can require extensive computational
resources because they are based on multiple operations performed by
convolutional layers, which can consist of millions of trainable parameters.
Training a GAN model can be difficult and it takes a significant amount of time
to reach an equilibrium point. In this paper, we investigate the use of
depthwise separable convolutions to reduce training time while maintaining data
generation performance. Our results show that a DepthwiseGAN architecture can
generate realistic images in shorter training periods when compared to a
StarGan architecture, but that model capacity still plays a significant role in
generative modelling. In addition, we show that depthwise separable
convolutions perform best when only applied to the generator. For quality
evaluation of generated images, we use the Fr\'echet Inception Distance (FID),
which compares the similarity between the generated image distribution and that
of the training dataset. | [
"cs.CV",
"eess.IV"
] |
Image segmentation algorithms often depend on appearance models that
characterize the distribution of pixel values in different image regions. We
describe a new approach for estimating appearance models directly from an
image, without explicit consideration of the pixels that make up each region.
Our approach is based on novel algebraic expressions that relate local image
statistics to the appearance of spatially coherent regions. We describe two
algorithms that can use the aforementioned algebraic expressions to estimate
appearance models directly from an image. The first algorithm solves a system
of linear and quadratic equations using a least squares formulation. The second
algorithm is a spectral method based on an eigenvector computation. We present
experimental results that demonstrate the proposed methods work well in
practice and lead to effective image segmentation algorithms. | [
"cs.CV",
"68U10, 62M05, 62H30, 65C20"
] |
Many of the recent triumphs in machine learning are dependent on well-tuned
hyperparameters. This is particularly prominent in reinforcement learning (RL)
where a small change in the configuration can lead to failure. Despite the
importance of tuning hyperparameters, it remains expensive and is often done in
a naive and laborious way. A recent solution to this problem is Population
Based Training (PBT) which updates both weights and hyperparameters in a single
training run of a population of agents. PBT has been shown to be particularly
effective in RL, leading to widespread use in the field. However, PBT lacks
theoretical guarantees since it relies on random heuristics to explore the
hyperparameter space. This inefficiency means it typically requires vast
computational resources, which is prohibitive for many small and medium sized
labs. In this work, we introduce the first provably efficient PBT-style
algorithm, Population-Based Bandits (PB2). PB2 uses a probabilistic model to
guide the search in an efficient way, making it possible to discover high
performing hyperparameter configurations with far fewer agents than typically
required by PBT. We show in a series of RL experiments that PB2 is able to
achieve high performance with a modest computational budget. | [
"cs.LG",
"stat.ML"
] |
Virtual screening can accelerate drug discovery by identifying promising
candidates for experimental evaluation. Machine learning is a powerful method
for screening, as it can learn complex structure-property relationships from
experimental data and make rapid predictions over virtual libraries. Molecules
inherently exist as a three-dimensional ensemble and their biological action
typically occurs through supramolecular recognition. However, most deep
learning approaches to molecular property prediction use a 2D graph
representation as input, and in some cases a single 3D conformation. Here we
investigate how the 3D information of multiple conformers, traditionally known
as 4D information in the cheminformatics community, can improve molecular
property prediction in deep learning models. We introduce multiple deep
learning models that expand upon key architectures such as ChemProp and Schnet,
adding elements such as multiple-conformer inputs and conformer attention. We
then benchmark the performance trade-offs of these models on 2D, 3D and 4D
representations in the prediction of drug activity using a large training set
of geometrically resolved molecules. The new architectures perform
significantly better than 2D models, but their performance is often just as
strong with a single conformer as with many. We also find that 4D deep learning
models learn interpretable attention weights for each conformer. | [
"cs.LG",
"physics.chem-ph"
] |
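A minimal sketch of the conformer-attention idea: per-conformer molecule embeddings (e.g. produced by a SchNet- or ChemProp-style encoder, not shown) are pooled with learned attention weights, which are also the interpretable quantities mentioned above; the module below is an assumption about the general mechanism, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ConformerAttentionPool(nn.Module):
    """Attention-weighted pooling over an ensemble of conformer embeddings."""

    def __init__(self, embed_dim):
        super().__init__()
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, conformer_embeddings):
        # conformer_embeddings: (n_conformers, embed_dim) for one molecule
        weights = torch.softmax(self.score(conformer_embeddings), dim=0)
        pooled = (weights * conformer_embeddings).sum(dim=0)   # (embed_dim,)
        return pooled, weights.squeeze(-1)                     # weights are interpretable

# pooled, attn = ConformerAttentionPool(128)(per_conformer_embeddings)
```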
Recent improvement and availability of remote satellite and IoT data offers
interesting and diverse applications of artificial intelligence in precision
agriculture. Soil moisture is an important component of multiple agricultural
and food supply chain practices. It measures the amount of water stored in
various depth of soil. Existing data driven approaches for soil moisture
prediction use conventional models which fail to capture the dynamic dependency
of soil moisture values at nearby locations over time. In this work, we
propose to convert the problem of soil moisture prediction as a semi-supervised
learning on temporal graphs. We propose a dynamic graph neural network which
can use the dependency of related locations over a region to predict soil
moisture. However, unlike social or information networks, graph structure is
not explicitly given for soil moisture prediction. Hence, we incorporate the
problem of graph structure learning in the framework of dynamic GNN. Our
algorithm, referred to as DGLR, provides end-to-end learning which can predict
soil moisture over multiple locations in a region over time and also update the
graph structure in between. Our solution achieves state-of-the-art results on
real-world soil moisture datasets compared to existing machine learning
approaches. | [
"cs.LG",
"cs.AI",
"cs.SI"
] |
In this work, we propose a method for three-dimensional (3D) reconstruction
of wide crime scene, based on a Simultaneous Localization and Mapping (SLAM)
approach. We used a Kinect V2 Time-of-Flight (TOF) RGB-D camera to provide
colored dense point clouds at a 30 Hz frequency. This device is moved freely (6
degrees of freedom) during the scene exploration. The implemented SLAM solution
aligns successive point clouds using a 3D keypoint description and matching
approach. This type of approach exploits both colorimetric and geometrical
information, and permits reconstruction under poor illumination conditions. Our
solution has been tested for indoor crime scene and outdoor archaeological site
reconstruction, returning a mean error around one centimeter. It is less
precise than environmental laser scanner solution, but more practical and
portable as well as less cumbersome. Also, the hardware is definitively
cheaper. | [
"cs.CV"
] |
A social interaction is a social exchange between two or more
individuals, where individuals modify and adjust their behaviors in response to
their interaction partners. Our social interactions are one of the most fundamental
aspects of our lives and can profoundly affect our mood, both positively and
negatively. With growing interest in virtual reality and avatar-mediated
interactions, it is desirable to make these interactions natural and human-like
to promote a positive effect in the interactions and in applications such as
intelligent tutoring systems, automated interview systems and e-learning. In
this paper, we propose a method to generate facial behaviors for an agent.
These behaviors include facial expressions and head pose and they are generated
considering the user's affective state. Our models learn semantically meaningful
representations of the face and generate appropriate and temporally smooth
facial behaviors in dyadic interactions. | [
"cs.CV"
] |
For many fundamental scene understanding tasks, it is difficult or impossible
to obtain per-pixel ground truth labels from real images. We address this
challenge by introducing Hypersim, a photorealistic synthetic dataset for
holistic indoor scene understanding. To create our dataset, we leverage a large
repository of synthetic scenes created by professional artists, and we generate
77,400 images of 461 indoor scenes with detailed per-pixel labels and
corresponding ground truth geometry. Our dataset: (1) relies exclusively on
publicly available 3D assets; (2) includes complete scene geometry, material
information, and lighting information for every scene; (3) includes dense
per-pixel semantic instance segmentations and complete camera information for
every image; and (4) factors every image into diffuse reflectance, diffuse
illumination, and a non-diffuse residual term that captures view-dependent
lighting effects.
We analyze our dataset at the level of scenes, objects, and pixels, and we
analyze costs in terms of money, computation time, and annotation effort.
Remarkably, we find that it is possible to generate our entire dataset from
scratch, for roughly half the cost of training a popular open-source natural
language processing model. We also evaluate sim-to-real transfer performance on
two real-world scene understanding tasks - semantic segmentation and 3D shape
prediction - where we find that pre-training on our dataset significantly
improves performance on both tasks, and achieves state-of-the-art performance
on the most challenging Pix3D test set. All of our rendered image data, as well
as all the code we used to generate our dataset and perform our experiments, is
available online. | [
"cs.CV",
"cs.GR"
] |
We evaluate a version of the recently-proposed classification system named
Optimized Dissimilarity Space Embedding (ODSE) that operates in the input space
of sequences of generic objects. The ODSE system has been originally presented
as a classification system for patterns represented as labeled graphs. However,
since ODSE is founded on the dissimilarity space representation of the input
data, the classifier can be easily adapted to any input domain where it is
possible to define a meaningful dissimilarity measure. Here we demonstrate the
effectiveness of the ODSE classifier for sequences by considering an
application dealing with the recognition of the solubility degree of the
Escherichia coli proteome. Solubility, or analogously aggregation propensity,
is an important property of protein molecules, which is intimately related to
the mechanisms underlying the chemico-physical process of folding. Each protein
of our dataset is initially associated with a solubility degree and it is
represented as a sequence of symbols, denoting the 20 amino acid residues. The
herein obtained computational results, which we stress that have been achieved
with no context-dependent tuning of the ODSE system, confirm the validity and
generality of the ODSE-based approach for structured data classification. | [
"cs.CV",
"cs.AI",
"physics.bio-ph",
"q-bio.BM",
"I.5"
] |
The trend is to implement intelligent agents capable of analyzing available
information and utilizing it efficiently. This work presents a number of
reinforcement learning (RL) architectures; one of them is designed for
intelligent agents. The proposed architectures are called selector-actor-critic
(SAC), tuner-actor-critic (TAC), and estimator-selector-actor-critic (ESAC).
These architectures are improved models of a well known architecture in RL
called actor-critic (AC). In AC, an actor optimizes the used policy, while a
critic estimates a value function and evaluates the policy optimized by the
actor. SAC is an architecture equipped with an actor, a critic, and a selector.
The selector determines the most promising action at the current state based on
the last estimate from the critic. TAC consists of a tuner, a model-learner, an
actor, and a critic. After receiving the approximated value of the current
state-action pair from the critic and the learned model from the model-learner,
the tuner uses the Bellman equation to tune the value of the current
state-action pair. ESAC is proposed to implement intelligent agents based on
two ideas, which are lookahead and intuition. Lookahead appears in estimating
the values of the available actions at the next state, while the intuition
appears in maximizing the probability of selecting the most promising action.
The newly added elements are an underlying model learner, an estimator, and a
selector. The model learner is used to approximate the underlying model. The
estimator uses the approximated value function, the learned underlying model,
and the Bellman equation to estimate the values of all actions at the next
state. The selector is used to determine the most promising action at the next
state, which will be used by the actor to optimize the used policy. Finally,
the results show the superiority of ESAC compared with the other architectures. | [
"cs.LG",
"eess.SP",
"stat.ML"
] |
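A minimal sketch of the estimator and selector under simplifying assumptions (discrete actions, a learned one-step model, and an approximate state-value function): the estimator performs a one-step Bellman lookahead through the learned model, and the selector returns the most promising action at the next state.

```python
import numpy as np

def estimate_next_action_values(s_next, actions, model, value_fn, gamma=0.99):
    """Estimator: one-step Bellman lookahead through the learned model.

    model(s, a)  -> (predicted reward, predicted next state)
    value_fn(s)  -> approximate state value V(s)
    """
    return np.array([r_hat + gamma * value_fn(s_hat)
                     for r_hat, s_hat in (model(s_next, a) for a in actions)])

def select_action(s_next, actions, model, value_fn):
    """Selector: the most promising action at the next state."""
    q = estimate_next_action_values(s_next, actions, model, value_fn)
    return actions[int(np.argmax(q))]
```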
We present a novel event recognition approach called Spatially-preserved
Doubly-injected Object Detection CNN (S-DOD-CNN), which incorporates the
spatially preserved object detection information in both a direct and an
indirect way. Indirect injection is carried out by simply sharing the weights
between the object detection modules and the event recognition module.
Meanwhile, our novelty lies in the fact that we have preserved the spatial
information for the direct injection. Once multiple regions-of-interest (RoIs)
are acquired, their feature maps are computed and then projected onto a
spatially-preserving combined feature map using one of the four RoI Projection
approaches we present. In our architecture, combined feature maps are generated
for object detection which are directly injected to the event recognition
module. Our method provides the state-of-the-art accuracy for malicious event
recognition. | [
"cs.CV"
] |
Spectral graph convolutional networks (GCNs) are particular deep models which
aim at extending neural networks to arbitrary irregular domains. The principle
of these networks consists in projecting graph signals using the
eigen-decomposition of their Laplacians, then achieving filtering in the
spectral domain prior to back-projecting the resulting filtered signals onto the
input graph domain. However, the success of these operations is highly
dependent on the relevance of the used Laplacians which are mostly handcrafted
and this makes GCNs clearly sub-optimal. In this paper, we introduce a novel
spectral GCN that learns not only the usual convolutional parameters but also
the Laplacian operators. The latter are designed "end-to-end" as a part of a
recursive Chebyshev decomposition with the particularity of conveying both the
differential and the non-differential properties of the learned representations
-- with increasing order and discrimination power -- without overparametrizing
the trained GCNs. Extensive experiments, conducted on the challenging task of
skeleton-based action recognition, show the generalization ability and the
outperformance of our proposed Laplacian design w.r.t. different baselines
(built upon handcrafted and other learned Laplacians) as well as the related
work. | [
"cs.CV"
] |
Visual Question Answering (VQA) models employ attention mechanisms to
discover image locations that are most relevant for answering a specific
question. For this purpose, several multimodal fusion strategies have been
proposed, ranging from relatively simple operations (e.g., linear sum) to more
complex ones (e.g., Block). The resulting multimodal representations define an
intermediate feature space for capturing the interplay between visual and
semantic features, that is helpful in selectively focusing on image content. In
this paper, we propose a question-agnostic attention mechanism that is
complementary to the existing question-dependent attention mechanisms. Our
proposed model parses object instances to obtain an `object map' and applies
this map on the visual features to generate Question-Agnostic Attention (QAA)
features. In contrast to question-dependent attention approaches that are
learned end-to-end, the proposed QAA does not involve question-specific
training, and can be easily included in almost any existing VQA model as a
generic light-weight pre-processing step, thereby adding minimal computation
overhead for training. Further, when used in complement with the
question-dependent attention, the QAA allows the model to focus on the regions
containing objects that might have been overlooked by the learned attention
representation. Through extensive evaluation on VQAv1, VQAv2 and TDIUC
datasets, we show that incorporating complementary QAA allows state-of-the-art
VQA models to perform better, and provides significant boost to simplistic VQA
models, enabling them to perform on par with highly sophisticated fusion
strategies. | [
"cs.CV"
] |
Fine-grained recognition is challenging due to its subtle local inter-class
differences versus large intra-class variations such as poses. A key to address
this problem is to localize discriminative parts to extract pose-invariant
features. However, ground-truth part annotations can be expensive to acquire.
Moreover, it is hard to define parts for many fine-grained classes. This work
introduces Fully Convolutional Attention Networks (FCANs), a reinforcement
learning framework to optimally glimpse local discriminative regions adaptive
to different fine-grained domains. Compared to previous methods, our approach
enjoys three advantages: 1) the weakly-supervised reinforcement learning
procedure requires no expensive part annotations; 2) the fully-convolutional
architecture speeds up both training and testing; 3) the greedy reward strategy
accelerates the convergence of the learning. We demonstrate the effectiveness
of our method with extensive experiments on four challenging fine-grained
benchmark datasets, including CUB-200-2011, Stanford Dogs, Stanford Cars and
Food-101. | [
"cs.CV"
] |
We introduce the Metropolis-Hastings generative adversarial network (MH-GAN),
which combines aspects of Markov chain Monte Carlo and GANs. The MH-GAN draws
samples from the distribution implicitly defined by a GAN's
discriminator-generator pair, as opposed to standard GANs which draw samples
from the distribution defined only by the generator. It uses the discriminator
from GAN training to build a wrapper around the generator for improved
sampling. With a perfect discriminator, this wrapped generator samples from the
true distribution on the data exactly even when the generator is imperfect. We
demonstrate the benefits of the improved generator on multiple benchmark
datasets, including CIFAR-10 and CelebA, using the DCGAN, WGAN, and progressive
GAN. | [
"stat.ML",
"cs.LG"
] |
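A sketch of the MH-GAN sampling step: with the generator as an independence proposal and a calibrated discriminator D(x) that approximates p_data/(p_data + p_G), a proposal x' replaces the current sample with probability min(1, [D(x')/(1-D(x'))] / [D(x)/(1-D(x))]); the discriminator calibration and the initialization details from the paper are omitted here.

```python
import numpy as np

def mh_gan_sample(generator, discriminator, n_proposals=640, rng=None, eps=1e-12):
    """One MH-GAN draw: run an MH chain whose proposals come from the generator
    and whose acceptance ratio is computed from (calibrated) discriminator scores."""
    rng = rng or np.random.default_rng()
    x = generator()                 # chain state (the paper initializes from real data)
    for _ in range(n_proposals):
        x_prop = generator()
        d_cur, d_prop = discriminator(x), discriminator(x_prop)
        ratio = (d_prop / (1.0 - d_prop + eps)) / (d_cur / (1.0 - d_cur + eps) + eps)
        if rng.random() < min(1.0, ratio):
            x = x_prop
    return x
```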
We present a novel dataset for training and benchmarking semantic SLAM
methods. The dataset consists of 200 long sequences, each one containing
3000-5000 data frames. We generate the sequences using realistic home layouts.
For that we sample trajectories that simulate motions of a simple home robot,
and then render the frames along the trajectories. Each data frame contains a)
RGB images generated using physically-based rendering, b) simulated depth
measurements, c) simulated IMU readings and d) ground truth occupancy grid of a
house. Our dataset serves a wider range of purposes compared to existing
datasets and is the first large-scale benchmark focused on the mapping
component of SLAM. The dataset is split into train/validation/test parts
sampled from different sets of virtual houses. We present benchmarking results
for both classical geometry-based and recent learning-based SLAM algorithms, a
baseline mapping method, semantic segmentation and panoptic segmentation. | [
"cs.CV"
] |
With the increasing adoption of AI, inherent security and privacy
vulnerabilities for machine learning systems are being discovered. One such
vulnerability makes it possible for an adversary to obtain private information
about the types of instances used to train the targeted machine learning model.
This so-called model inversion attack is based on sequential leveraging of
classification scores towards obtaining high-confidence representations for
various classes. However, for deep networks, such procedures usually lead to
unrecognizable representations that are useless for the adversary. In this
paper, we introduce a more realistic definition of model inversion, where the
adversary is aware of the general purpose of the attacked model (for instance,
whether it is an OCR system or a facial recognition system), and the goal is to
find realistic class representations within the corresponding lower-dimensional
manifold (of, respectively, general symbols or general faces). To that end, we
leverage properties of generative adversarial networks for constructing a
connected lower-dimensional manifold, and demonstrate the efficiency of our
model inversion attack that is carried out within that manifold. | [
"cs.LG",
"stat.ML"
] |
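An illustrative sketch of model inversion restricted to a GAN manifold: instead of optimizing pixels, a latent code z is optimized so that the generated sample G(z) maximizes the attacked classifier's score for a chosen class. Both networks below are tiny stand-ins, not the models used in the work above.

```python
import torch
import torch.nn as nn

latent_dim, img_dim, n_classes, target_class = 16, 64, 10, 3
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, img_dim))
clf = nn.Sequential(nn.Linear(img_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))
for p in list(G.parameters()) + list(clf.parameters()):
    p.requires_grad_(False)                      # the attacker only optimizes z

z = torch.randn(1, latent_dim, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    logits = clf(G(z))
    loss = -logits[0, target_class]              # maximize the target-class score
    loss.backward()
    opt.step()

recovered = G(z).detach()                        # class representation on the GAN manifold
print(recovered.shape)
```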
Co-evolving time series appear in a multitude of applications such as
environmental monitoring, financial analysis, and smart transportation. This
paper aims to address the following challenges, including (C1) how to
incorporate explicit relationship networks of the time series; (C2) how to
model the implicit relationship of the temporal dynamics. We propose a novel
model called Network of Tensor Time Series, which is comprised of two modules,
including Tensor Graph Convolutional Network (TGCN) and Tensor Recurrent Neural
Network (TRNN). TGCN tackles the first challenge by generalizing Graph
Convolutional Network (GCN) for flat graphs to tensor graphs, which captures
the synergy between multiple graphs associated with the tensors. TRNN leverages
tensor decomposition to model the implicit relationships among co-evolving time
series. The experimental results on five real-world datasets demonstrate the
efficacy of the proposed method. | [
"cs.LG",
"cs.AI"
] |
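A hedged numpy sketch of one plausible reading of "graph convolution on a tensor graph": propagate along each tensor mode with that mode's normalized adjacency, then apply a shared feature transform. This illustrates the idea only; it is not the exact TGCN layer from the paper.

```python
import numpy as np

def normalize_adj(A):
    A_hat = A + np.eye(A.shape[0])                        # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def tensor_graph_conv(X, A1, A2, W):
    """X: (N1, N2, F) tensor snapshot; A1: (N1, N1) and A2: (N2, N2) graphs; W: (F, F_out)."""
    H = np.einsum("ij,jkf->ikf", normalize_adj(A1), X)    # propagate along mode 1
    H = np.einsum("kl,ilf->ikf", normalize_adj(A2), H)    # propagate along mode 2
    return np.maximum(H @ W, 0.0)                         # linear map + ReLU

X = np.random.rand(5, 4, 3)
A1 = np.random.randint(0, 2, (5, 5))
A2 = np.random.randint(0, 2, (4, 4))
out = tensor_graph_conv(X, (A1 + A1.T) > 0, (A2 + A2.T) > 0, np.random.rand(3, 8))
print(out.shape)  # (5, 4, 8)
```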
Learning disentangled representation of data without supervision is an
important step towards improving the interpretability of generative models.
Despite recent advances in disentangled representation learning, existing
approaches often suffer from the trade-off between representation learning and
generation performance (i.e., improving generation quality sacrifices
disentanglement performance). We propose an Information-Distillation Generative
Adversarial Network (ID-GAN), a simple yet generic framework that easily
incorporates the existing state-of-the-art models for both disentanglement
learning and high-fidelity synthesis. Our method learns disentangled
representation using VAE-based models, and distills the learned representation
with an additional nuisance variable to the separate GAN-based generator for
high-fidelity synthesis. To ensure that both generative models are aligned to
render the same generative factors, we further constrain the GAN generator to
maximize the mutual information between the learned latent code and the output.
Despite the simplicity, we show that the proposed method is highly effective,
achieving comparable image generation quality to the state-of-the-art methods
using the disentangled representation. We also show that the proposed
decomposition leads to an efficient and stable model design, and we demonstrate
photo-realistic high-resolution image synthesis results (1024x1024 pixels) for
the first time using the disentangled representations. | [
"cs.CV",
"eess.IV"
] |
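A rough sketch of the mutual-information constraint described above, written in the InfoGAN style: an auxiliary head Q tries to recover the disentangled code c from the GAN output, and the generator is trained so that this recovery is possible. All modules are minimal placeholders; the VAE that produces c is omitted, and the Gaussian reconstruction term is an assumption.

```python
import torch
import torch.nn as nn

code_dim, noise_dim, out_dim = 8, 32, 128
G = nn.Sequential(nn.Linear(code_dim + noise_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))
Q = nn.Sequential(nn.Linear(out_dim, 64), nn.ReLU(), nn.Linear(64, code_dim))

c = torch.randn(16, code_dim)            # disentangled code (e.g. from a pretrained VAE encoder)
eps = torch.randn(16, noise_dim)         # nuisance variable for high-fidelity details
x_fake = G(torch.cat([c, eps], dim=1))
mi_loss = ((Q(x_fake) - c) ** 2).mean()  # code reconstruction ~ MI lower bound under a Gaussian assumption
# The full generator objective would be: adversarial_loss + lambda_mi * mi_loss
print(float(mi_loss))
```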
Besides initial egg quality, the hatching process itself influences hatching
success. To achieve a high hatching rate, embryo development must be checked at
the beginning of incubation so that only eggs containing embryos are kept until
the end; such checking is most effective during the first week of the hatching
period. This study aims to detect the presence of embryos in eggs using image
segmentation. Egg images are segmented with the K-means algorithm in the Lab
color space: acquired images are converted to Lab, and K-means with k=3
partitions each image into three regions, namely background, egg, and egg yolk,
where the yolk region carries the embryonic characteristics. The approach uses
color in the initial segmentation stage and grayscale processing in the final
stage. Results of the initial stage show that K-means clustering in the Lab
color space yields a clear three-part grouping. In the grayscale stage, the
color segmentation results are further processed with grayscaling, image
enhancement, and morphological operations, after which the segmented yolk
clearly reveals the presence of an embryo. Based on this process and its
results, K-means segmentation in the Lab color space is suitable for the
initial stage of embryo detection. The evaluation, using MSE and MSSIM, gives
values of 0.0486 and 0.9979 respectively, indicating that the obtained results
can serve as a reference for detecting embryos in the egg yolk. | [
"cs.CV",
"eess.IV"
] |
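A sketch of the initial segmentation step described above: pixels are mapped to the Lab color space and clustered with K-means (k=3) into background, egg and yolk regions. A random image stands in for an actual candled-egg photograph, so the cluster-to-region assignment here is arbitrary.

```python
import numpy as np
from skimage.color import rgb2lab
from sklearn.cluster import KMeans

rgb = np.random.rand(120, 160, 3)                 # placeholder for an egg image in [0, 1]
lab = rgb2lab(rgb)                                # convert to the Lab color space
pixels = lab.reshape(-1, 3)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
segmentation = kmeans.labels_.reshape(lab.shape[:2])   # three labels: background, egg, yolk

# The later stages mentioned in the abstract (grayscaling, enhancement,
# morphology) would then operate on the cluster corresponding to the yolk.
print(np.unique(segmentation))
```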
One of the fundamental challenges in reinforcement learning (RL) is the one
of data efficiency: modern algorithms require a very large number of training
samples, especially compared to humans, for solving environments with
high-dimensional observations. The severity of this problem is increased when
the reward signal is sparse. In this work, we propose learning a state
representation in a self-supervised manner for reward prediction. The reward
predictor learns to estimate either a raw or a smoothed version of the true
reward signal in environments with a single, terminating goal state. We augment
the training of out-of-the-box RL agents by shaping the reward using our reward
predictor during policy learning. Using our representation for preprocessing
high-dimensional observations, as well as using the predictor for reward
shaping, is shown to significantly enhance Actor Critic using
Kronecker-factored Trust Region and Proximal Policy Optimization in single-goal
environments with visual inputs. | [
"cs.LG",
"stat.ML"
] |
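A simplified sketch of the reward-shaping idea above: a small network is trained, from collected transitions, to predict a smoothed reward from an encoded observation, and its prediction is added to the environment reward during policy learning. The names and the additive shaping rule are assumptions made for illustration.

```python
import torch
import torch.nn as nn

obs_embedding_dim = 64
reward_predictor = nn.Sequential(nn.Linear(obs_embedding_dim, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(reward_predictor.parameters(), lr=1e-3)

def train_step(embeddings: torch.Tensor, smoothed_rewards: torch.Tensor) -> float:
    # Self-supervised regression of the (raw or smoothed) reward signal.
    pred = reward_predictor(embeddings).squeeze(-1)
    loss = nn.functional.mse_loss(pred, smoothed_rewards)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def shaped_reward(env_reward: float, embedding: torch.Tensor, beta: float = 0.1) -> float:
    # Shaping used while training an off-the-shelf agent (e.g. ACKTR or PPO).
    with torch.no_grad():
        return env_reward + beta * reward_predictor(embedding).item()

emb, r = torch.randn(32, obs_embedding_dim), torch.rand(32)   # stand-ins for encoded observations
train_step(emb, r)
print(shaped_reward(0.0, torch.randn(1, obs_embedding_dim)))
```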
Human gaze is known to be an intention-revealing signal in human
demonstrations of tasks. In this work, we use gaze cues from human
demonstrators to enhance the performance of agents trained via three popular
imitation learning methods -- behavioral cloning (BC), behavioral cloning from
observation (BCO), and Trajectory-ranked Reward EXtrapolation (T-REX). Based on
similarities between the attention of reinforcement learning agents and human
gaze, we propose a novel approach for utilizing gaze data in a computationally
efficient manner, as part of an auxiliary loss function, which guides a network
to have higher activations in image regions where the human's gaze fixated.
This work is a step towards augmenting any existing convolutional imitation
learning agent's training with auxiliary gaze data. Our auxiliary
coverage-based gaze loss (CGL) guides learning toward a better reward function
or policy, without adding any additional learnable parameters and without
requiring gaze data at test time. We find that our proposed approach improves
the performance by 95% for BC, 343% for BCO, and 390% for T-REX, averaged over
20 different Atari games. We also find that compared to a prior
state-of-the-art imitation learning method assisted by human gaze (AGIL), our
method achieves better performance, and is more efficient in terms of learning
with fewer demonstrations. We further interpret trained CGL agents with a
saliency map visualization method to explain their performance. Finally, we
show that CGL can help alleviate a well-known causal confusion problem in
imitation learning. | [
"cs.LG",
"cs.AI"
] |
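A hedged sketch of an auxiliary gaze loss in the spirit of CGL: the network's spatial activation map is encouraged to place mass where the human gaze map does. A KL-style coverage term is used here as a simplification; the exact CGL formulation in the paper may differ.

```python
import torch

def gaze_coverage_loss(activations: torch.Tensor, gaze_map: torch.Tensor, eps: float = 1e-8):
    """activations: (B, H, W) network activation map; gaze_map: (B, H, W) gaze heat map."""
    a = activations.flatten(1)
    g = gaze_map.flatten(1)
    a = a / (a.sum(dim=1, keepdim=True) + eps)   # normalize both maps to distributions
    g = g / (g.sum(dim=1, keepdim=True) + eps)
    # Penalize gaze mass that the activations fail to cover: KL(g || a).
    return (g * (torch.log(g + eps) - torch.log(a + eps))).sum(dim=1).mean()

# Added to the imitation objective, e.g. total = bc_loss + lambda_gaze * gaze_coverage_loss(...)
act = torch.rand(8, 21, 21)
gaze = torch.rand(8, 21, 21)
print(gaze_coverage_loss(act, gaze).item())
```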
While data has certainly taken the center stage in computer vision in recent
years, it can still be difficult to obtain in certain scenarios. In particular,
acquiring ground truth 3D shapes of objects pictured in 2D images remains a
challenging feat and this has hampered progress in recognition-based object
reconstruction from a single image. Here we propose to bypass previous
solutions such as 3D scanning or manual design, that scale poorly, and instead
populate object category detection datasets semi-automatically with dense,
per-object 3D reconstructions, bootstrapped from: (i) class labels, (ii) ground
truth figure-ground segmentations and (iii) a small set of keypoint
annotations. Our proposed algorithm first estimates camera viewpoint using
rigid structure-from-motion and then reconstructs object shapes by optimizing
over visual hull proposals guided by loose within-class shape similarity
assumptions. The visual hull sampling process attempts to intersect an object's
projection cone with the cones of minimal subsets of other similar objects
among those pictured from certain vantage points. We show that our method is
able to produce convincing per-object 3D reconstructions and to accurately
estimate camera viewpoints on one of the most challenging existing
object-category detection datasets, PASCAL VOC. We hope that our results will
re-stimulate interest on joint object recognition and 3D reconstruction from a
single image. | [
"cs.CV"
] |
We extend conformal inference to general settings that allow for time series
data. Our proposal is developed as a randomization method and accounts for
potential serial dependence by including block structures in the permutation
scheme. As a result, the proposed method retains the exact, model-free validity
when the data are i.i.d. or more generally exchangeable, similar to usual
conformal inference methods. When exchangeability fails, as is the case for
common time series data, the proposed approach is approximately valid under
weak assumptions on the conformity score. | [
"stat.ML",
"cs.LG"
] |
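An illustrative sketch of conformal-style randomization with block structure: the conformity of the observed arrangement is compared against its value under random permutations of contiguous blocks, yielding a permutation p-value. The particular statistic and block scheme below are simplifying assumptions, not the paper's exact construction.

```python
import numpy as np

def block_permutation_pvalue(scores, block_len=10, n_perm=999, seed=0):
    """scores: per-time-point conformity scores; statistic: mean score of the last block."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores)
    n_blocks = len(scores) // block_len
    blocks = scores[: n_blocks * block_len].reshape(n_blocks, block_len)
    observed = blocks[-1].mean()
    count = 1                                    # include the observed arrangement itself
    for _ in range(n_perm):
        perm = rng.permutation(n_blocks)         # permute whole blocks to respect serial dependence
        if blocks[perm][-1].mean() >= observed:
            count += 1
    return count / (n_perm + 1)

print(block_permutation_pvalue(np.random.standard_normal(200)))
```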
Crop yield is affected by various soil and environmental parameters and can
vary significantly. Therefore, a crop yield estimation model which can predict
pre-harvest yield is required for food security. The study is conducted on tea
farms operating under the National Tea Research Institute, Pakistan. The data
are recorded on a monthly basis over a ten-year period. The parameters
collected are minimum and maximum temperature, humidity, rainfall, soil pH,
pesticide usage and labor expertise. The model design incorporates all of these
parameters and identifies those most crucial for yield prediction. Feature
transformation is performed to obtain a better-performing model. The designed
model is based on an ensemble of neural networks and provides an R-squared of
0.9461 and an RMSE of 0.1204, indicating the usability of the proposed model
for yield forecasting based on surface and environmental parameters. | [
"cs.LG"
] |
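A sketch of the kind of neural-network ensemble described above, run on synthetic stand-in data since the tea-yield records themselves are not available here. Feature ordering, scaling choice and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
# Columns: min temp, max temp, humidity, rainfall, soil pH, pesticide use, labor expertise
X = rng.random((120, 7))
y = X @ rng.random(7) + 0.05 * rng.standard_normal(120)    # synthetic monthly yield

X_train, X_test, y_train, y_test = X[:100], X[100:], y[:100], y[100:]
scaler = StandardScaler().fit(X_train)                      # the feature-transformation step

ensemble = [MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=s)
            .fit(scaler.transform(X_train), y_train) for s in range(5)]
pred = np.mean([m.predict(scaler.transform(X_test)) for m in ensemble], axis=0)

print("R^2:", r2_score(y_test, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_test, pred)))
```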
Tables on the web constitute a valuable data source for many applications,
like factual search and knowledge base augmentation. However, as genuine tables
containing relational knowledge only account for a small proportion of tables
on the web, reliable genuine web table classification is a crucial first step
of table extraction. Previous works usually rely on explicit feature
construction from the HTML code. In contrast, we propose an approach for web
table classification by exploiting the full visual appearance of a table, which
works purely by applying a convolutional neural network on the rendered image
of the web table. Since these visual features can be extracted automatically,
our approach circumvents the need for explicit feature construction. A new
hand-labeled gold-standard dataset containing HTML source code and images for
13,112 tables was generated for this task. Transfer learning techniques are
applied to the well-known VGG16 and ResNet50 architectures. The evaluation of
CNN image classification with a fine-tuned ResNet50 (F1 93.29%) shows that this
approach achieves results comparable to previous solutions using explicitly
defined HTML-code-based features. By combining visual and explicit features, an
F-measure of 93.70% can be achieved with Random Forest classification, which
beats current state-of-the-art methods. | [
"cs.CV"
] |
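A minimal transfer-learning sketch in the spirit of the approach above: a ResNet50 backbone is reused and only a new two-class head (genuine vs. non-genuine table) is trained on rendered table images. Pretrained-weight loading and data loading are elided; all names and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50()                       # in practice, initialized with ImageNet weights
for param in model.parameters():
    param.requires_grad = False                 # freeze the convolutional backbone
model.fc = nn.Linear(model.fc.in_features, 2)   # new trainable head: genuine vs. non-genuine

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One toy training step on random tensors standing in for rendered table images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```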