text (string, lengths 29 to 3.31k) | label (sequence, lengths 1 to 11)
---|---|
Pairwise learning or dyadic prediction concerns the prediction of properties
for pairs of objects. It can be seen as an umbrella covering various machine
learning problems such as matrix completion, collaborative filtering,
multi-task learning, transfer learning, network prediction and zero-shot
learning. In this work we analyze kernel-based methods for pairwise learning,
with a particular focus on a recently suggested two-step method. We show that
this method offers an appealing alternative to commonly applied
Kronecker-based methods that model dyads by means of pairwise feature
representations and pairwise kernels. In a series of theoretical results, we
establish correspondences between the two types of methods in terms of linear
algebra and spectral filtering, and we analyze their statistical consistency.
In addition, the two-step method allows us to establish novel algorithmic
shortcuts for efficient training and validation on very large datasets. Putting
those properties together, we believe that this simple, yet powerful method can
become a standard tool for many problems. Extensive experimental results for a
range of practical settings are reported. | [
"cs.LG"
] |
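The two-step method discussed in the abstract above admits a very compact implementation when instantiated as two successive kernel ridge regressions, one per object domain. The sketch below is illustrative: the function name, regularization parameters, and data layout are assumptions, not the paper's reference code.

```python
import numpy as np

def two_step_kernel_ridge(K_u, K_v, Y, lam_u=1.0, lam_v=1.0):
    """Two-step kernel ridge regression for dyadic prediction (sketch).

    K_u: (n_u, n_u) kernel matrix over the first object domain.
    K_v: (n_v, n_v) kernel matrix over the second object domain.
    Y:   (n_u, n_v) matrix of observed dyadic labels.
    Returns dual coefficients A; a new pair (u, v) is scored as
    k_u @ A @ k_v, with k_u and k_v the kernel evaluations of u and v
    against the training objects.
    """
    n_u, n_v = K_u.shape[0], K_v.shape[0]
    # Step 1: kernel ridge regression over the first domain (per column of Y).
    B = np.linalg.solve(K_u + lam_u * np.eye(n_u), Y)
    # Step 2: kernel ridge regression over the second domain (per row of B).
    A = np.linalg.solve(K_v + lam_v * np.eye(n_v), B.T).T
    return A
```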
For classifying time series, a nearest-neighbor approach is widely used in
practice with performance often competitive with or better than more elaborate
methods such as neural networks, decision trees, and support vector machines.
We develop theoretical justification for the effectiveness of
nearest-neighbor-like classification of time series. Our guiding hypothesis is
that in many applications, such as forecasting which topics will become trends
on Twitter, there aren't actually that many prototypical time series to begin
with, relative to the number of time series we have access to, e.g., topics
become trends on Twitter only in a few distinct manners whereas we can collect
massive amounts of Twitter data. To operationalize this hypothesis, we propose
a latent source model for time series, which naturally leads to a "weighted
majority voting" classification rule that can be approximated by a
nearest-neighbor classifier. We establish nonasymptotic performance guarantees
of both weighted majority voting and nearest-neighbor classification under our
model accounting for how much of the time series we observe and the model
complexity. Experimental results on synthetic data show weighted majority
voting achieving the same misclassification rate as nearest-neighbor
classification while observing less of the time series. We then use weighted
majority voting to forecast which news topics on Twitter become trends, where we are
able to detect such "trending topics" in advance of Twitter 79% of the time,
with a mean early advantage of 1 hour and 26 minutes, a true positive rate of
95%, and a false positive rate of 4%. | [
"stat.ML",
"cs.LG",
"cs.SI"
] |
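A minimal sketch of the "weighted majority voting" rule described above, assuming Gaussian-kernel vote weights on the observed prefix of the series and training series at least as long as that prefix; the paper's rule also minimizes the distance over time shifts, which is omitted here, and all names are illustrative.

```python
import numpy as np

def weighted_majority_vote(x, train_series, train_labels, gamma=1.0):
    """Weighted majority voting for time series classification (sketch).

    Each labeled training series casts a vote for its label, weighted by
    exp(-gamma * squared distance) to the observed prefix x. Assumes each
    training series is at least as long as x; the minimization over time
    shifts used in the paper is omitted for brevity.
    """
    T = len(x)
    votes = {}
    for s, label in zip(train_series, train_labels):
        d = np.sum((np.asarray(x) - np.asarray(s)[:T]) ** 2)
        votes[label] = votes.get(label, 0.0) + np.exp(-gamma * d)
    return max(votes, key=votes.get)  # label with the largest total weight
```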
The recent GRAPH-BERT model introduces a new approach to learning graph
representations based solely on the attention mechanism. GRAPH-BERT provides an
opportunity for transferring pre-trained models and learned graph
representations across different tasks within the same graph dataset. In this
paper, we will further investigate the graph-to-graph transfer of a universal
GRAPH-BERT for graph representation learning across different graph datasets,
and our proposed model is also referred to as the G5 for simplicity. Many
challenges exist in learning G5 to adapt the distinct input and output
configurations for each graph data source, as well as the information
distributions differences. G5 introduces a pluggable model architecture: (a)
each data source will be pre-processed with a unique input representation
learning component; (b) each output application task will also have a specific
functional component; and (c) these diverse input and output components are
all connected to a universal GRAPH-BERT core component via an input-size
unification layer and an output representation fusion layer, respectively.
The G5 model removes the last obstacle for cross-graph representation
learning and transfer. For the graph sources with very sparse training data,
the G5 model pre-trained on other graphs can still be utilized for
representation learning with the necessary fine-tuning. Moreover, the
architecture of G5 allows us to learn a supervised functional classifier
for data sources without any training data at all; this problem is referred to
as the Apocalypse Learning task in this paper. Two different label reasoning
strategies, i.e., Cross-Source Classification Consistency Maximization (CCCM)
and Cross-Source Dynamic Routing (CDR), are introduced in this paper to address
the problem. | [
"cs.LG",
"cs.NE",
"cs.SI",
"stat.ML"
] |
Following the trends of mobile and edge computing for DNN models, an
intermediate option, split computing, has been attracting attention from the
research community. Previous studies empirically showed that while mobile and
edge computing would often be the best options in terms of total inference
time, there are some scenarios where split computing methods can achieve
shorter inference time. All the proposed split computing approaches, however,
focus on image classification tasks, and most are assessed with small datasets
that are far from practical scenarios. In this paper, we discuss the
challenges in developing split computing methods for powerful R-CNN object
detectors trained on a large dataset, COCO 2017. We extensively analyze the
object detectors in terms of layer-wise tensor size and model size, and show
that naive split computing methods would not reduce inference time. To the best
of our knowledge, this is the first study to inject small bottlenecks into such
object detectors and unveil the potential of a split computing approach. The
source code and trained models' weights used in this study are available at
https://github.com/yoshitomo-matsubara/hnd-ghnd-object-detectors . | [
"cs.CV",
"eess.IV"
] |
Recently, deep architectures such as recurrent and recursive neural networks
have been successfully applied to various natural language processing tasks.
Inspired by bidirectional recurrent neural networks which use representations
that summarize the past and future around an instance, we propose a novel
architecture that aims to capture the structural information around an input,
and use it to label instances. We apply our method to the task of opinion
expression extraction, where we employ the binary parse tree of a sentence as
the structure, and word vector representations as the initial representation of
a single token. We conduct preliminary experiments to investigate its
performance and compare it to the sequential approach. | [
"cs.LG",
"cs.CL",
"stat.ML"
] |
In this work, we propose a method to simultaneously perform (i) biometric
recognition (i.e., identify the individual), and (ii) device recognition
(i.e., identify the device) from a single biometric image, say, a face image,
using a one-shot scheme. Such a joint recognition scheme can be useful in
devices such as smartphones for enhancing security as well as privacy. We
propose to automatically learn a joint representation that encapsulates both
biometric-specific and sensor-specific features. We evaluate the proposed
approach using iris, face and periocular images acquired using near-infrared
iris sensors and smartphone cameras. Experiments conducted using 14,451 images
from 15 sensors resulted in a rank-1 identification accuracy of up to 99.81%
and a verification accuracy of up to 100% at a false match rate of 1%. | [
"cs.CV"
] |
Automatic transcription of scene understanding in images and videos is a step
towards artificial general intelligence. Image captioning is the task of
describing meaningful information in an image using computer vision techniques.
Automated image captioning techniques utilize encoder and decoder architecture,
where the encoder extracts features from an image and the decoder generates a
transcript. In this work, we investigate two unexplored ideas for image
captioning using transformers: first, enforcing the use of objects' relevance
in the surrounding environment; second, learning an explicit association
between labels and language constructs. We propose a label-attention
Transformer with geometrically coherent objects (LATGeO). The proposed
technique acquires a proposal of geometrically coherent objects using a deep
neural network (DNN) and generates captions by investigating their
relationships using a label-attention module. Object coherence is defined using
the localized ratio of the geometrical properties of the proposals. The
label-attention module associates the extracted objects classes to the
available dictionary using self-attention layers. The experimental results
show that objects' relevance in their surroundings, and the binding of their
visual features with their geometrically localized ratios and associated
labels, help in defining meaningful captions. The proposed framework is tested
on the MSCOCO dataset, and a thorough evaluation yielding overall better
quantitative scores demonstrates its superiority. | [
"cs.CV",
"cs.AI"
] |
Temporal segmentation of long videos is an important problem that has
largely been tackled through supervised learning, often requiring large amounts
of annotated training data. In this paper, we tackle the problem of
self-supervised temporal segmentation of long videos, which alleviates the need
for any supervision. We introduce a self-supervised, predictive learning
framework that draws inspiration from cognitive psychology to segment long,
visually complex videos into individual, stable segments that share the same
semantics. We also introduce a new adaptive learning paradigm that helps reduce
the effect of catastrophic forgetting in recurrent neural networks. Extensive
experiments on three publicly available datasets - Breakfast Actions, 50
Salads, and INRIA Instructional Videos - show the efficacy of the
proposed approach. We show that the proposed approach is able to outperform
weakly-supervised and other unsupervised learning approaches by up to 24% and
achieves competitive performance compared to fully supervised approaches. We also
show that the proposed approach is able to learn highly discriminative features
that help improve action recognition when used in a representation learning
paradigm. | [
"cs.CV"
] |
This work examines the use of reinforcement learning (RL) to optimize cyclic
lockdowns, which is one of the methods available for control of the COVID-19
pandemic. The problem is structured as an optimal control system for tracking a
reference value, corresponding to the maximum usage level of a critical
resource, such as ICU beds. However, instead of using conventional optimal
control methods, RL is used to find optimal control policies. A framework was
developed to calculate optimal cyclic lockdown timings using an RL-based on-off
controller. The RL-based controller is implemented as an RL agent that
interacts with an epidemic simulator, implemented as an extended SEIR epidemic
model. The RL agent learns a policy function that produces an optimal sequence
of open/lockdown decisions such that goals specified in the RL reward function
are optimized. Two concurrent goals were used: the first one is a public health
goal that minimizes overshoots of ICU bed usage above an ICU bed threshold, and
the second one is a socio-economic goal that minimizes the time spent under
lockdowns. It is assumed that cyclic lockdowns serve as a temporary
alternative to extended lockdowns when a region faces imminent danger of
exceeding resource capacity limits, and when imposing an extended lockdown
would cause severe social and economic consequences due to a lack of the
economic resources needed to support its affected population during an
extended lockdown. | [
"cs.LG",
"cs.AI",
"q-bio.PE"
] |
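A minimal sketch of the control setup described above, assuming a standard discrete-time SEIR simulator in which the lockdown decision scales the transmission rate, and a reward combining an ICU-overshoot penalty with a lockdown-time penalty. All parameter values and names are illustrative assumptions, not the paper's.

```python
def seir_step(state, beta, sigma=1/5.2, gamma=1/10, dt=1.0):
    """One discrete-time SEIR step; state = (S, E, I, R) as population fractions."""
    S, E, I, R = state
    new_exposed = beta * S * I * dt
    new_infectious = sigma * E * dt
    new_recovered = gamma * I * dt
    return (S - new_exposed,
            E + new_exposed - new_infectious,
            I + new_infectious - new_recovered,
            R + new_recovered)

def rollout(policy, state, beta_open=0.4, beta_lock=0.1,
            icu_rate=0.02, icu_capacity=0.003, days=180):
    """Simulate a sequence of open/lockdown decisions and accumulate the
    two-part reward: ICU-overshoot penalty plus lockdown-time penalty."""
    reward = 0.0
    for t in range(days):
        lockdown = policy(state, t)               # 1 = lockdown, 0 = open
        beta = beta_lock if lockdown else beta_open
        state = seir_step(state, beta)
        icu_usage = icu_rate * state[2]           # ICU demand from infectious
        reward -= max(0.0, icu_usage - icu_capacity) * 100.0  # health goal
        reward -= 0.01 * lockdown                              # economic goal
    return reward
```

An RL agent would stand in for `policy`, learning the open/lockdown decision function that maximizes this reward.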
Advances in technology have led to the development of methods that can create
desired visual multimedia. In particular, image generation using deep learning
has been extensively studied across diverse fields. In comparison, video
generation, especially on conditional inputs, remains a challenging and less
explored area. To narrow this gap, we aim to train our model to produce a video
corresponding to a given text description. We propose a novel training
framework, Text-to-Image-to-Video Generative Adversarial Network (TiVGAN),
which evolves frame-by-frame and finally produces a full-length video. In the
first phase, we focus on creating a high-quality single video frame while
learning the relationship between the text and an image. As the steps proceed,
our model is trained gradually on an increasing number of consecutive frames. This
step-by-step learning process helps stabilize the training and enables the
creation of high-resolution video based on conditional text descriptions.
Qualitative and quantitative experimental results on various datasets
demonstrate the effectiveness of the proposed method. | [
"cs.CV"
] |
Spiking neural networks (SNNs) equipped with latency coding and spike-timing
dependent plasticity rules offer an alternative to solve the data and energy
bottlenecks of standard computer vision approaches: they can learn visual
features without supervision and can be implemented by ultra-low power hardware
architectures. However, their performance in image classification has never
been evaluated on recent image datasets. In this paper, we compare SNNs to
auto-encoders on three visual recognition datasets, and extend the use of SNNs
to color images. The analysis of the results helps us identify some bottlenecks
of SNNs: the limits of on-center/off-center coding, especially for color
images, and the ineffectiveness of current inhibition mechanisms. These issues
should be addressed to build effective SNNs for image recognition. | [
"cs.CV",
"cs.NE"
] |
Recent research on super-resolution (SR) has witnessed major developments
with the advancements of deep convolutional neural networks. There is a need
for information extraction from scenic text images or even document images on
device, most of which are low-resolution (LR) images. Therefore, SR becomes an
essential pre-processing step as Bicubic Upsampling, which is conventionally
present in smartphones, performs poorly on LR images. To give users more
control over their privacy, and to reduce the carbon footprint by reducing the
overhead of cloud computing and hours of GPU usage, executing SR models on the
edge has become a necessity in recent times. There are various challenges in
running and optimizing a model on resource-constrained platforms like
smartphones. In this paper, we present a novel deep neural network that
reconstructs sharper character edges and thus boosts OCR confidence. The
proposed architecture not only achieves significant improvement in PSNR over
bicubic upsampling on various benchmark datasets but also runs with an average
inference time of 11.7 ms per image. We outperform the state of the art on
the Text330 dataset. We also achieve an OCR accuracy of 75.89% on the ICDAR
2015 TextSR dataset, where ground truth has an accuracy of 78.10%. | [
"cs.CV"
] |
Medical image segmentation is a relevant task as it serves as the first step
for several diagnostic processes, and thus it is indispensable in clinical usage.
Whilst major success has been reported using supervised techniques, they assume
a large and well-representative labelled set. This is a strong assumption in
the medical domain, where annotations are expensive, time-consuming, and
prone to human bias. To address this problem, unsupervised techniques have
been proposed in the literature, yet it remains an open problem due to the
difficulty of learning any transformation pattern. In this work, we present a
novel optimisation model framed into a new CNN-based contrastive registration
architecture for unsupervised medical image segmentation. The core of our
approach is to exploit image-level registration and feature-level information
from a contrastive learning mechanism to perform registration-based segmentation.
Firstly, we propose an architecture to capture the image-to-image
transformation pattern via registration for unsupervised medical image
segmentation. Secondly, we embed a contrastive learning mechanism into the
registration architecture to enhance the discriminating capacity of the network
at the feature level. We show that our proposed technique mitigates the major
drawbacks of existing unsupervised techniques. We demonstrate, through
numerical and visual experiments, that our technique substantially outperforms
the current state-of-the-art unsupervised segmentation methods on two major
medical image datasets. | [
"cs.CV"
] |
Scheduling computational tasks represented by directed acyclic graphs (DAGs)
is challenging because of its complexity. Conventional scheduling algorithms
rely heavily on simple heuristics such as shortest job first (SJF) and critical
path (CP), and are often lacking in scheduling quality. In this paper, we
present a novel learning-based approach to scheduling DAG tasks. The algorithm
employs a reinforcement learning agent to iteratively add directed edges to the
DAG, one at a time, to enforce ordering (i.e., priorities of execution and
resource allocation) of "tricky" job nodes. By doing so, the original DAG
scheduling problem is dramatically reduced to a much simpler proxy problem, on
which heuristic scheduling algorithms such as SJF and CP can be efficiently
improved. Our approach can be easily applied to any existing heuristic
scheduling algorithms. On the benchmark dataset of TPC-H, we show that our
learning-based approach can significantly improve over popular heuristic
algorithms and consistently achieves the best performance among several methods
under a variety of settings. | [
"cs.LG",
"cs.AI"
] |
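To make the proxy-problem idea above concrete, the sketch below implements shortest-job-first (SJF) list scheduling on a DAG; the learned agent described in the abstract would inject extra precedence edges into `edges` to steer this heuristic. Function and variable names are illustrative assumptions.

```python
import heapq

def sjf_dag_makespan(durations, edges, num_machines=2):
    """Shortest-job-first list scheduling of a DAG (illustrative sketch).

    durations: dict node -> processing time.
    edges: list of (u, v) precedence pairs (u must finish before v starts).
    Returns the makespan of the resulting schedule.
    """
    succ = {n: [] for n in durations}
    preds = {n: [] for n in durations}
    indeg = {n: 0 for n in durations}
    for u, v in edges:
        succ[u].append(v)
        preds[v].append(u)
        indeg[v] += 1
    ready = [(durations[n], n) for n in durations if indeg[n] == 0]
    heapq.heapify(ready)
    machine_free = [0.0] * num_machines   # next free time of each machine
    finish = {}
    while ready:
        dur, node = heapq.heappop(ready)  # shortest ready job first
        m = min(range(num_machines), key=machine_free.__getitem__)
        start = max([machine_free[m]] + [finish[p] for p in preds[node]])
        finish[node] = start + dur
        machine_free[m] = finish[node]
        for v in succ[node]:
            indeg[v] -= 1
            if indeg[v] == 0:
                heapq.heappush(ready, (durations[v], v))
    return max(finish.values(), default=0.0)
```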
The use of multi-modal data such as the combination of whole slide images
(WSIs) and gene expression data for survival analysis can lead to more accurate
survival predictions. Previous multi-modal survival models are not able to
efficiently extract the intrinsic information within each modality. Moreover,
although experimental results show that WSIs provide more effective information
than gene expression data, previous methods regard the information from
different modalities as equally important, so they cannot flexibly utilize the
potential connection between the modalities. To address the above problems, we
propose a new asymmetrical multi-modal method, termed AMMASurv.
Specifically, we design an asymmetrical multi-modal attention mechanism (AMMA)
in the Transformer encoder for multi-modal data to enable a more flexible
multi-modal information fusion for survival prediction. Different from previous
works, AMMASurv can effectively utilize the intrinsic information within every
modality and flexibly adapt to modalities of different importance.
Extensive experiments are conducted to validate the effectiveness of the
proposed model. Encouraging results demonstrate the superiority of our method
over other state-of-the-art methods. | [
"cs.CV",
"cs.AI"
] |
Low-resolution face recognition (LRFR) has received increasing attention over
the past few years. It has wide applications in real-world environments
where high-resolution or high-quality images are hard to capture. One of the
biggest demands for LRFR technologies is video surveillance. As the number
of surveillance cameras in cities increases, the captured videos will
need to be processed automatically. However, those videos or images are usually
captured with large standoffs, arbitrary illumination conditions, and diverse
angles of view. Faces in these images are generally small in size. Several
studies have addressed this problem by employing techniques such as super resolution,
deblurring, or learning a relationship between different resolution domains. In
this paper, we provide a comprehensive review of approaches to low-resolution
face recognition in the past five years. First, a general problem definition is
given. Then, a systematic analysis of the works on this topic is presented
by category. In addition to describing the methods, we also focus on datasets
and experiment settings. We further address the related works on unconstrained
low-resolution face recognition and compare them with results that use
synthetic low-resolution data. Finally, we summarize the general limitations
and suggest priorities for future efforts. | [
"cs.CV"
] |
Building embodied autonomous agents capable of participating in social
interactions with humans is one of the main challenges in AI. This problem
motivated many research directions on embodied language use. Current approaches
focus on language as a communication tool in very simplified and non-diverse
social situations: the "naturalness" of language is reduced to the concept of
high vocabulary size and variability. In this paper, we argue that aiming
towards human-level AI requires a broader set of key social skills: 1) language
use in complex and variable social contexts; 2) beyond language, complex
embodied communication in multimodal settings within constantly evolving social
worlds. In this work we explain how concepts from cognitive sciences could help
AI to draw a roadmap towards human-like intelligence, with a focus on its
social dimensions. We then study the limits of a recent SOTA Deep RL approach
when tested on a first grid-world environment from the upcoming SocialAI, a
benchmark to assess the social skills of Deep RL agents. Videos and code are
available at https://sites.google.com/view/socialai01 . | [
"cs.LG",
"cs.AI"
] |
To facilitate research in the direction of sample efficient reinforcement
learning, we held the MineRL Competition on Sample Efficient Reinforcement
Learning Using Human Priors at the Thirty-third Conference on Neural
Information Processing Systems (NeurIPS 2019). The primary goal of this
competition was to promote the development of algorithms that use human
demonstrations alongside reinforcement learning to reduce the number of samples
needed to solve complex, hierarchical, and sparse environments. We describe the
competition, outlining the primary challenge, the competition design, and the
resources that we provided to the participants. We provide an overview of the
top solutions, each of which uses deep reinforcement learning and/or imitation
learning. We also discuss the impact of our organizational decisions on the
competition and future directions for improvement. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Adversarial learning methods have been proposed for a wide range of
applications, but the training of adversarial models can be notoriously
unstable. Effectively balancing the performance of the generator and
discriminator is critical, since a discriminator that achieves very high
accuracy will produce relatively uninformative gradients. In this work, we
propose a simple and general technique to constrain information flow in the
discriminator by means of an information bottleneck. By enforcing a constraint
on the mutual information between the observations and the discriminator's
internal representation, we can effectively modulate the discriminator's
accuracy and maintain useful and informative gradients. We demonstrate that our
proposed variational discriminator bottleneck (VDB) leads to significant
improvements across three distinct application areas for adversarial learning
algorithms. Our primary evaluation studies the applicability of the VDB to
imitation learning of dynamic continuous control skills, such as running. We
show that our method can learn such skills directly from \emph{raw} video
demonstrations, substantially outperforming prior adversarial imitation
learning methods. The VDB can also be combined with adversarial inverse
reinforcement learning to learn parsimonious reward functions that can be
transferred and re-optimized in new settings. Finally, we demonstrate that VDB
can train GANs more effectively for image generation, improving upon a number
of prior stabilization methods. | [
"cs.LG",
"stat.ML"
] |
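A minimal sketch of the variational discriminator bottleneck loss described above, assuming a Gaussian stochastic encoder and the usual KL-to-standard-normal upper bound on the mutual information, with the Lagrange multiplier beta updated by dual gradient ascent; names and default values are illustrative.

```python
import torch
import torch.nn.functional as F

def vdb_discriminator_loss(mu, log_sigma, logits, labels, beta, i_c=0.5):
    """Variational discriminator bottleneck loss (sketch).

    mu, log_sigma: (batch, latent) parameters of the stochastic encoder E(z|x).
    logits: discriminator outputs computed from sampled z; labels: 0/1 targets.
    beta: Lagrange multiplier for the mutual-information constraint I_c.
    Returns (loss, kl) so beta can be updated by dual gradient ascent:
        beta <- max(0, beta + step * (kl - i_c))
    """
    # KL( N(mu, sigma^2) || N(0, I) ) upper-bounds I(X; Z).
    kl = 0.5 * torch.sum(mu ** 2 + torch.exp(2 * log_sigma)
                         - 2 * log_sigma - 1, dim=1).mean()
    bce = F.binary_cross_entropy_with_logits(logits, labels)
    return bce + beta * (kl - i_c), kl
```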
The Deep Q-Network proposed by Mnih et al. [2015] has become a benchmark and
a building block for much deep reinforcement learning research. However,
replicating results for complex systems is often challenging since original
scientific publications are not always able to describe in detail every
important parameter setting and software engineering solution. In this paper,
we present results from our work reproducing the results of the DQN paper. We
highlight key areas in the implementation that were not covered in great detail
in the original paper to make it easier for researchers to replicate these
results, including termination conditions and gradient descent algorithms.
Finally, we discuss methods for improving the computational performance and
provide our own implementation that is designed to work with a range of
domains, and not just the original Arcade Learning Environment [Bellemare et
al., 2013]. | [
"cs.LG",
"cs.AI"
] |
Recognizing group activities is challenging due to the difficulties in
isolating individual entities, finding the respective roles played by the
individuals and representing the complex interactions among the participants.
Individual actions and group activities in videos can be represented in a
common framework as they share the following common feature: both are composed
of a set of low-level features describing motions, e.g., optical flow for each
pixel or a trajectory for each feature point, according to a set of composition
constraints in both temporal and spatial dimensions. In this paper, we present
a unified model to assess the similarity between two given individual or group
activities. Our approach avoids explicitly extracting individual actors and
identifying and representing the inter-person interactions. With the proposed
approach, retrieval from a video database can be performed through
Query-by-Example; and activities can be recognized by querying videos
containing known activities. The suggested video matching process can be
performed in an unsupervised manner. We demonstrate the performance of our
approach by recognizing a set of human actions and football plays. | [
"cs.CV",
"stat.ML"
] |
Segmentation of magnetic resonance (MR) images is a fundamental step in many
medical imaging-based applications. The recent implementation of deep
convolutional neural networks (CNNs) in image processing has been shown to have
significant impacts on medical image segmentation. Network training of
segmentation CNNs typically requires images and paired annotation data
representing pixel-wise tissue labels referred to as masks. However, the
supervised training of highly efficient CNNs with deeper structure and more
network parameters requires a large number of training images and paired tissue
masks. Thus, there is a great need to develop a generalized CNN-based
segmentation method which would be applicable for a wide variety of MR image
datasets with different tissue contrasts. The purpose of this study was to
develop and evaluate a generalized CNN-based method for fully-automated
segmentation of different MR image datasets using a single set of annotated
training data. A technique called cycle-consistent generative adversarial
network (CycleGAN) is applied as the core of the proposed method to perform
image-to-image translation between MR image datasets with different tissue
contrasts. A joint segmentation network is incorporated into the adversarial
network to obtain additional segmentation functionality. The proposed method
was evaluated for segmenting bone and cartilage on two clinical knee MR image
datasets acquired at our institution using only a single set of annotated data
from a publicly available knee MR image dataset. The new technique may further
improve the applicability and efficiency of CNN-based segmentation of medical
images while eliminating the need for large amounts of annotated training data. | [
"cs.CV",
"cs.AI"
] |
This paper proposes \textit{layer fusion} - a model compression technique
that discovers which weights to combine and then fuses weights of similar
fully-connected, convolutional and attention layers. Layer fusion can
significantly reduce the number of layers of the original network with little
additional computation overhead, while maintaining competitive performance.
From experiments on CIFAR-10, we find that various deep convolutional neural
networks can remain within 2\% accuracy points of the original networks up to a
compression ratio of 3.33 when iteratively retrained with layer fusion. For
experiments on the WikiText-2 language modelling dataset where pretrained
transformer models are used, we achieve compression that leads to a network
that is 20\% of its original size while being within 5 perplexity points of the
original network. We also find that other well-established compression
techniques can achieve competitive performance when compared to their original
networks given a sufficient number of retraining steps. Generally, we observe a
clear inflection point in performance as the amount of compression increases,
suggesting a bound on the amount of compression that can be achieved before an
exponential degradation in performance. | [
"cs.LG",
"stat.ML"
] |
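A hedged sketch of the layer-fusion idea above: it fuses same-shaped layers whose weights are cosine-similar by averaging them. The paper learns which weights to combine and interleaves retraining; the fixed cosine rule, threshold, and names here are illustrative assumptions.

```python
import numpy as np

def fuse_similar_layers(weights, threshold=0.9):
    """Greedy layer fusion sketch: average the weights of layers whose
    flattened weight tensors have cosine similarity above `threshold`.

    weights: list of same-shaped weight arrays, one per candidate layer.
    Returns the fused list (shorter or equal in length).
    """
    fused, used = [], [False] * len(weights)
    flat = [w.ravel() for w in weights]
    for i in range(len(weights)):
        if used[i]:
            continue
        group = [weights[i]]
        for j in range(i + 1, len(weights)):
            cos = flat[i] @ flat[j] / (
                np.linalg.norm(flat[i]) * np.linalg.norm(flat[j]) + 1e-12)
            if not used[j] and cos > threshold:
                group.append(weights[j])
                used[j] = True
        fused.append(np.mean(group, axis=0))  # fuse the group by averaging
    return fused
```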
Reinforcement learning (RL) is often the preferred approach for constructing
control strategies for complex tasks, such as asymmetric assembly tasks. However,
the convergence speed of reinforcement learning severely restricts its
practical application. In this paper, the convergence is first accelerated by
combining RL and compliance control. Then a novel progressive
extension of action dimension (PEAD) mechanism is proposed to optimize the
convergence of RL algorithms. The PEAD method is verified in DDPG and PPO. The
results demonstrate that the PEAD method enhances the data-efficiency and
time-efficiency of RL algorithms and increases the stable reward, which
provides more potential for the application of RL. | [
"cs.LG",
"cs.SY",
"eess.SY"
] |
We investigate Referring Image Segmentation (RIS), which outputs a
segmentation map corresponding to the given natural language description. To
solve RIS efficiently, we need to understand each word's relationship with
other words, each region in the image to other regions, and cross-modal
alignment between linguistic and visual domains. We argue that one of the
limiting factors in the recent methods is that they do not handle these
interactions simultaneously. To this end, we propose a novel architecture
called JRNet, which uses a Joint Reasoning Module (JRM) to concurrently capture
the inter-modal and intra-modal interactions. The output of JRM is passed
through a novel Cross-Modal Multi-Level Fusion (CMMLF) module which further
refines the segmentation masks by exchanging contextual information across
visual hierarchy through linguistic features acting as a bridge. We present
thorough ablation studies and validate our approach's performance on four
benchmark datasets, showing considerable performance gains over the existing
state-of-the-art methods. | [
"cs.CV"
] |
Object detection, the computer vision task dealing with detecting instances of
objects of a certain class (e.g., 'car', 'plane') in images, attracted a
lot of attention from the community during the last 5 years. This strong
interest can be explained not only by the importance this task has for many
applications but also by the phenomenal advances in this area since the arrival
of deep convolutional neural networks (DCNN). This article reviews the recent
literature on object detection with deep CNN, in a comprehensive way, and
provides an in-depth view of these recent advances. The survey covers not only
the typical architectures (SSD, YOLO, Faster-RCNN) but also discusses the
challenges currently met by the community and goes on to show how the problem
of object detection can be extended. This survey also reviews the public
datasets and associated state-of-the-art algorithms. | [
"cs.CV"
] |
Predicting 3D human pose from a single monoscopic video can be highly
challenging due to factors such as low resolution, motion blur and occlusion,
in addition to the fundamental ambiguity in estimating 3D from 2D. Approaches
that directly regress the 3D pose from independent images can be particularly
susceptible to these factors and result in jitter, noise and/or inconsistencies
in skeletal estimation. Much of this can be overcome if the temporal evolution
of the scene and skeleton is taken into account. However, rather than tracking
body parts and trying to temporally smooth them, we propose a novel
transformer-based network that can learn a distribution over both pose and motion in an
unsupervised fashion. We call our approach Skeletor. Skeletor overcomes
inaccuracies in detection and corrects partial or entire skeleton corruption.
Skeletor uses strong priors learned from 25 million frames to correct skeleton
sequences smoothly and consistently. Skeletor can achieve this as it implicitly
learns the spatio-temporal context of human motion via a transformer-based
neural network. Extensive experiments show that Skeletor achieves improved
performance on 3D human pose estimation and further provides benefits for
downstream tasks such as sign language translation. | [
"cs.CV"
] |
Correctly identifying vulnerable road users (VRUs), e.g. cyclists and
pedestrians, remains one of the most challenging environment perception tasks
for autonomous vehicles (AVs). This work surveys the current state-of-the-art
in VRU detection, covering topics such as benchmarks and datasets, object
detection techniques and relevant machine learning algorithms. The article
concludes with a discussion of remaining open challenges and promising future
research directions for this domain. | [
"cs.CV"
] |
Graph representation learning has attracted lots of attention recently.
Existing graph neural networks fed with the complete graph data are not
scalable due to limited computation and memory resources. Thus, it remains a great
challenge to capture rich information in large-scale graph data. Besides, these
methods mainly focus on supervised learning and highly depend on node label
information, which is expensive to obtain in the real world. As to unsupervised
network embedding approaches, they overemphasize node proximity instead, whose
learned representations can hardly be used in downstream application tasks
directly. In recent years, emerging self-supervised learning provides a
potential solution to address the aforementioned problems. However, existing
self-supervised works also operate on the complete graph data and are biased to
fit either global or very local (1-hop neighborhood) graph structures in
defining the mutual information based loss terms.
In this paper, a novel self-supervised representation learning method via
Subgraph Contrast, namely \textsc{Subg-Con}, is proposed by utilizing the
strong correlation between central nodes and their sampled subgraphs to capture
regional structure information. Instead of learning on the complete input graph
data, with a novel data augmentation strategy, \textsc{Subg-Con} learns node
representations through a contrastive loss defined based on subgraphs sampled
from the original graph. Compared with existing graph representation
learning approaches, \textsc{Subg-Con} has prominent performance advantages in
weaker supervision requirements, model learning scalability, and
parallelization. Extensive experiments verify both the effectiveness and the
efficiency of our work compared with both classic and state-of-the-art graph
representation learning approaches on multiple real-world large-scale benchmark
datasets from different domains. | [
"cs.LG",
"stat.ML"
] |
In this work, we present a generalized and robust facial manipulation
detection method based on color distribution analysis of the vertical region of
edge in a manipulated image. Most contemporary facial manipulation
methods involve pixel correction procedures to reduce the awkwardness of pixel
value differences along the facial boundary in a synthesized image. For this
procedure, there are distinctive differences in the facial boundary between
face-manipulated images and unforged natural images. Also, in the forged image,
there should be distinctive and unnatural features in the gap distribution
between facial boundary and background edge region because it tends to damage
the natural effect of lighting. We design a neural network for detecting
face-manipulated images using these distinctive features in the facial boundary and
background edge. Our extensive experiments show that our method outperforms
other existing face manipulation detection methods in detecting synthesized
face images in various datasets, regardless of whether the dataset was included in
training. | [
"cs.CV",
"cs.AI"
] |
Deep learning's success has led to larger and larger models to handle more
and more complex tasks; trained models can contain millions of parameters.
These large models are compute- and memory-intensive, which makes it a
challenge to deploy them under tight latency, throughput, and storage
requirements. Some model compression methods have been successfully applied to
image classification and detection or language models, but there has been very
little work compressing generative adversarial networks (GANs) performing
complex tasks. In this paper, we show that a standard model compression
technique, weight pruning, cannot be applied to GANs using existing methods. We
then develop a self-supervised compression technique which uses the trained
discriminator to supervise the training of a compressed generator. We show that
this framework maintains compelling performance at high degrees of sparsity, can be
easily applied to new tasks and models, and enables meaningful comparisons
between different pruning granularities. | [
"cs.LG",
"cs.CV",
"eess.IV"
] |
In recent years, the Finger Texture (FT) has attracted considerable attention
as a biometric characteristic. It can provide efficient human recognition
performance, because it has distinctive human-specific features of apparent
lines, wrinkles and ridges distributed along the inner surface of all fingers.
Also, such pattern structures are reliable, unique and remain stable throughout
a human's life. Efficient biometric systems can be established based only on
FTs. In this paper, a comprehensive survey of the relevant FT studies is
presented. We also summarise the main drawbacks and obstacles of employing the
FT as a biometric characteristic, and provide useful suggestions to further
improve the work on FT. | [
"cs.CV"
] |
Temporal Point Processes (TPPs) are often used to represent the sequence of
events ordered by time of occurrence. Owing to their flexible nature,
TPPs have been used to model different scenarios and have shown applicability
in various real-world applications. While TPPs focus on modeling the event
occurrence, Marked Temporal Point Process (MTPP) focuses on modeling the
category/class of the event as well (termed the marker). Research in MTPP
has garnered substantial attention over the past few years, with an extensive
focus on supervised algorithms. Despite the research focus, limited attention
has been given to the challenging problem of developing solutions in
semi-supervised settings, where algorithms have access to a mix of labeled and
unlabeled data. This research proposes a novel algorithm for Semi-supervised
Learning for Marked Temporal Point Processes (SSL-MTPP) applicable in such
scenarios. The proposed SSL-MTPP algorithm utilizes a combination of labeled
and unlabeled data for learning a robust marker prediction model. The proposed
algorithm utilizes an RNN-based Encoder-Decoder module for learning effective
representations of the time sequence. The efficacy of the proposed algorithm
has been demonstrated via multiple protocols on the Retweet dataset, where the
proposed SSL-MTPP achieves improved performance in comparison to the
traditional supervised learning approach. | [
"cs.LG",
"cs.AI"
] |
Point cloud semantic segmentation is a crucial task in 3D scene
understanding. Existing methods mainly focus on employing a large number of
annotated labels for supervised semantic segmentation. Nonetheless, manually
labeling such large point clouds for the supervised segmentation task is
time-consuming. In order to reduce the number of annotated labels, we propose a
semi-supervised semantic point cloud segmentation network, named SSPC-Net,
where we train the semantic segmentation network by inferring the labels of
unlabeled points from the few annotated 3D points. In our method, we first
partition the whole point cloud into superpoints and build superpoint graphs to
mine the long-range dependencies in point clouds. Based on the constructed
superpoint graph, we then develop a dynamic label propagation method to
generate the pseudo labels for the unsupervised superpoints. Particularly, we
adopt a superpoint dropout strategy to dynamically select the generated pseudo
labels. In order to fully exploit the generated pseudo labels of the
unsupervised superpoints, we further propose a coupled attention mechanism
for superpoint feature embedding. Finally, we employ the cross-entropy loss to
train the semantic segmentation network with the labels of the supervised
superpoints and the pseudo labels of the unsupervised superpoints. Experiments
on various datasets demonstrate that our semi-supervised segmentation method
can achieve better performance than current semi-supervised segmentation
methods with fewer annotated 3D points. Our code is available at
https://github.com/MMCheng/SSPC-Net. | [
"cs.CV"
] |
The influence of class orderings in the evaluation of incremental learning
has received very little attention. In this paper, we investigate the impact of
class orderings for incrementally learned classifiers. We propose a method to
compute various orderings for a dataset. The orderings are derived by simulated
annealing optimization from the confusion matrix and reflect different
incremental learning scenarios, including maximally and minimally confusing
tasks. We evaluate a wide range of state-of-the-art incremental learning
methods on the proposed orderings. Results show that orderings can have a
significant impact on performance and the ranking of the methods. | [
"cs.CV"
] |
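A minimal sketch of deriving a class ordering by simulated annealing from a confusion matrix, as described above. The objective used here (total confusion between consecutively ordered classes) is an illustrative stand-in for the paper's task-level objectives, and all names are assumptions.

```python
import numpy as np

def anneal_ordering(confusion, maximize=True, steps=10000, t0=1.0):
    """Search class orderings by simulated annealing (sketch).

    confusion: (C, C) confusion matrix. An ordering is scored by the
    confusion between consecutive classes; set maximize=False for a
    minimally confusing ordering.
    """
    rng = np.random.default_rng(0)
    C = confusion.shape[0]

    def score(order):
        return sum(confusion[order[k], order[k + 1]] for k in range(C - 1))

    order = rng.permutation(C)
    s = score(order)
    best, best_s = order.copy(), s
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9        # linear cooling schedule
        i, j = rng.integers(C, size=2)
        cand = order.copy()
        cand[i], cand[j] = cand[j], cand[i]        # swap two classes
        cs = score(cand)
        delta = (cs - s) if maximize else (s - cs)
        if delta > 0 or rng.random() < np.exp(delta / t):
            order, s = cand, cs
            if (s > best_s) if maximize else (s < best_s):
                best, best_s = order.copy(), s
    return best, best_s
```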
The adversarial vulnerability of deep neural networks has attracted
significant attention in machine learning. From a causal viewpoint, adversarial
attacks can be considered as a specific type of distribution change on natural
data. As causal reasoning is naturally suited to modeling distribution change, we
propose to incorporate causality into mitigating adversarial vulnerability.
However, causal formulations of the intuition of adversarial attack and the
development of robust DNNs are still lacking in the literature. To bridge this
gap, we construct a causal graph to model the generation process of adversarial
examples and define the adversarial distribution to formalize the intuition of
adversarial attacks. From a causal perspective, we find that the label is
spuriously correlated with the style (content-independent) information when an
instance is given. The spurious correlation implies that the adversarial
distribution is constructed by making the statistical conditional association
between style information and labels drastically different from that in natural
distribution. Thus, DNNs that fit the spurious correlation are vulnerable to
the adversarial distribution. Inspired by the observation, we propose the
adversarial distribution alignment method to eliminate the difference between
the natural distribution and the adversarial distribution. Extensive
experiments demonstrate the efficacy of the proposed method. Our method can be
seen as the first attempt to leverage causality for mitigating adversarial
vulnerability. | [
"cs.LG"
] |
Machine learning on tree data has been mostly focused on trees as input. Much
less research has investigated trees as output, as in molecule optimization
for drug discovery or hint generation for intelligent tutoring systems. In this
work, we propose a novel autoencoder approach, called recursive tree grammar
autoencoder (RTG-AE), which encodes trees via a bottom-up parser and decodes
trees via a tree grammar, both controlled by neural networks that minimize the
variational autoencoder loss. The resulting encoding and decoding functions can
then be employed in subsequent tasks, such as optimization and time series
prediction. RTG-AE combines variational autoencoders, grammatical knowledge,
and recursive processing. Our key message is that this combination improves
performance compared to only combining two of these three components. In
particular, we show experimentally that our proposed method improves the
autoencoding error, training time, and optimization score on four benchmark
datasets compared to baselines from the literature. | [
"cs.LG",
"cs.NE"
] |
Point clouds have been recognized as a crucial data structure for 3D content
and are essential in a number of applications such as virtual and mixed
reality, autonomous driving, cultural heritage, etc. In this paper, we propose
a set of contributions to improve deep point cloud compression, namely: using
scale hyperprior model for entropy coding; employing deeper transforms; a
different balancing weight in the focal loss; optimal thresholding for
decoding; and sequential model training. In addition, we present an extensive
ablation study on the impact of each of these factors, in order to provide a
better understanding about why they improve RD performance. An optimal
combination of the proposed improvements achieves BD-PSNR gains over G-PCC
trisoup and octree of 5.50 (6.48) dB and 6.84 (5.95) dB, respectively, when
using the point-to-point (point-to-plane) metric. Code is available at
https://github.com/mauriceqch/pcc_geo_cnn_v2 . | [
"cs.CV",
"cs.LG",
"eess.IV",
"eess.SP",
"stat.ML"
] |
Data transmission between two or more digital devices in industry and
government demands secure and agile technology. Digital information
distribution often requires deployment of Internet of Things (IoT) devices and
Data Fusion techniques, which have also gained popularity in both civilian and
military environments, as in the emergence of Smart Cities and the Internet of
Battlefield Things (IoBT). This usually requires capturing and consolidating
data from multiple sources. Because datasets do not necessarily originate from
identical sensors, fused data typically results in a complex Big Data problem.
Due to the potentially sensitive nature of IoT datasets, Blockchain technology is
used to facilitate secure sharing of IoT datasets, which allows digital
information to be distributed, but not copied. However, blockchain has several
limitations related to complexity, scalability, and excessive energy
consumption. We propose an approach to hide information (sensor signal) by
transforming it to an image or an audio signal. In one of the latest attempts
at military modernization, we investigate a sensor fusion approach by
examining the challenges of enabling an intelligent identification and
detection operation, and demonstrate the feasibility of the proposed Deep
Learning and Anomaly Detection models that can support a future application for
a specific hand gesture alert system based on wearable devices. | [
"cs.LG",
"cs.IT",
"math.IT",
"stat.AP"
] |
For high-dimensional data representation, nonnegative tensor ring (NTR)
decomposition equipped with manifold learning has become a promising model to
exploit the multi-dimensional structure and extract the feature from tensor
data. However, existing methods such as graph regularized tensor ring
decomposition (GNTR) only model the pair-wise similarities of objects. For
tensor data with a complex manifold structure, a graph cannot exactly
capture the similarity relationships. In this paper, in order to effectively
utilize the higher-dimensional and complicated similarities among objects, we
introduce hypergraph to the framework of NTR to further enhance the feature
extraction, upon which a hypergraph regularized nonnegative tensor ring
decomposition (HGNTR) method is developed. To reduce the computational
complexity and suppress the noise, we apply the low-rank approximation trick to
accelerate HGNTR (called LraHGNTR). Our experimental results show that compared
with other state-of-the-art algorithms, the proposed HGNTR and LraHGNTR can
achieve higher performance in clustering tasks; in addition, LraHGNTR can
greatly reduce running time without decreasing accuracy. | [
"cs.LG",
"cs.NA",
"math.NA"
] |
Video segmentation for the human head and shoulders is essential in creating
elegant media for videoconferencing and virtual reality applications. The main
challenge is to process high-quality background subtraction in a real-time
manner and address the segmentation issues under motion blurs, e.g., shaking
the head or waving hands during a conference video. To overcome the motion blur
problem in video segmentation, we propose a novel flow-based encoder-decoder
network (FUNet) that combines the traditional Horn-Schunck optical-flow
estimation technique with convolutional neural networks to perform robust
real-time video segmentation. We also introduce a video and image segmentation
dataset: ConferenceVideoSegmentationDataset. Code and pre-trained models are
available on our GitHub repository:
\url{https://github.com/kuangzijian/Flow-Based-Video-Matting}. | [
"cs.CV"
] |
We propose interpretable graph neural networks for sampling and recovery of
graph signals, respectively. To take informative measurements, we propose a new
graph neural sampling module, which aims to select those vertices that
maximally express their corresponding neighborhoods. Such expressiveness can be
quantified by the mutual information between vertices' features and
neighborhoods' features, which are estimated via a graph neural network. To
reconstruct an original graph signal from the sampled measurements, we propose
a graph neural recovery module based on the algorithm-unrolling technique.
Compared to previous analytical sampling and recovery, the proposed methods are
able to flexibly learn a variety of graph signal models from data by leveraging
the learning ability of neural networks; compared to previous
neural-network-based sampling and recovery, the proposed methods are designed
through exploiting specific graph properties and provide interpretability. We
further design a new multiscale graph neural network, which is a trainable
multiscale graph filter bank and can handle various graph-related learning
tasks. The multiscale network leverages the proposed graph neural sampling and
recovery modules to achieve multiscale representations of a graph. In the
experiments, we illustrate the effects of the proposed graph neural sampling
and recovery modules and find that the modules can flexibly adapt to various
graph structures and graph signals. In the task of active-sampling-based
semi-supervised learning, the graph neural sampling module improves the
classification accuracy by over 10% on the Cora dataset. We further validate the
proposed multiscale graph neural network on several standard datasets for both
vertex and graph classification. The results show that our method consistently
improves the classification accuracies. | [
"cs.LG",
"cs.SI",
"eess.SP"
] |
We use Generative Adversarial Networks (GANs) to design a class conditional
label noise (CCN) robust scheme for binary classification. It first generates a
set of correctly labelled data points from noisy labelled data and 0.1% or 1%
clean labels such that the generated and true (clean) labelled data
distributions are close; generated labelled data is used to learn a good
classifier. The mode collapse problem while generating correct feature-label
pairs and the problem of skewed feature-label dimension ratio ($\sim$ 784:1)
are avoided by using Wasserstein GAN and a simple data representation change.
Another WGAN with information-theoretic flavour on top of the new
representation is also proposed. The major advantage of both schemes is their
significant improvement over the existing ones in presence of very high CCN
rates, without either estimating or cross-validating over the noise rates. We
prove that the KL divergence between the clean and noisy distributions increases with the
noise rate in the symmetric label noise model; this can be extended to high CCN rates.
This implies that our schemes perform well due to the adversarial nature of
GANs. Further, the use of a generative approach (learning the clean joint distribution)
while handling noise enables our schemes to perform better than discriminative
approaches like GLC, LDMI and GCE; even when the classes are highly imbalanced.
Using the Friedman F test and the Nemenyi post-hoc test, we show that on high-
dimensional binary class synthetic, MNIST and Fashion MNIST datasets, our
schemes outperform the existing methods and demonstrate consistent performance
across noise rates. | [
"cs.LG"
] |
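One way to see the claimed monotonicity of the KL divergence in the binary symmetric-noise case, written as a sketch in our own notation (not necessarily the paper's proof):

```latex
% Binary symmetric label noise with flip rate \rho yields the noisy joint
\[
  \tilde p(x,y) = (1-\rho)\, p(x,y) + \rho\, p(x,1-y),
  \qquad r(x,y) = \frac{p(x,1-y)}{p(x,y)}.
\]
% KL divergence from clean to noisy as a function of the noise rate:
\[
  f(\rho) = \mathrm{KL}\big(p \,\|\, \tilde p\big)
          = \mathbb{E}_{(x,y)\sim p}\!\left[-\log\big((1-\rho) + \rho\, r\big)\right],
  \qquad
  f'(\rho) = \mathbb{E}_{p}\!\left[\frac{1-r}{(1-\rho)+\rho r}\right].
\]
% Since E_p[r] = \sum_{x,y} p(x,1-y) = 1, we have f'(0) = 0; moreover
\[
  f''(\rho) = \mathbb{E}_{p}\!\left[\frac{(1-r)^2}{\big((1-\rho)+\rho r\big)^2}\right] \ge 0,
\]
% so f is convex with f'(0) = 0 and hence nondecreasing in \rho on [0, 1/2].
```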
Vehicle re-identification (reID) often requires recognizing a target vehicle in
large datasets captured from multiple cameras. It plays an important role in the
automatic analysis of the increasing urban surveillance videos, which has
become a hot topic in recent years. However, the appearance of vehicle images
is easily affected by environmental factors such as varying illumination, different
backgrounds and viewpoints, which leads to a large bias between different
cameras. To address this problem, this paper proposes a cross-camera adaptation
framework (CCA), which smooths the bias by exploiting the common space between
cameras for all samples. CCA first transfers images from multiple cameras into one
camera to reduce the impact of illumination and resolution, which generates
samples with a similar distribution. Then, to eliminate the influence of
background and focus on the valuable parts, we propose an attention alignment
network (AANet) to learn powerful features for vehicle reID. Specifically, in
AANet, a spatial transfer network with an attention module is introduced to
locate a series of the most discriminative regions with high-attention weights
and suppress the background. Moreover, comprehensive experimental results have
demonstrated that our proposed CCA can achieve excellent performance on
benchmark datasets VehicleID and VeRi-776. | [
"cs.CV"
] |
Various saliency detection algorithms from color images have been proposed to
mimic eye fixation or attentive object detection response of human observers
for the same scenes. However, developments in hyperspectral imaging systems
enable us to obtain redundant spectral information about the observed scenes from
the light reflected by objects. A few studies using low-level features
on hyperspectral images demonstrated that salient object detection can be
achieved. In this work, we propose a salient object detection model on
hyperspectral images by applying manifold ranking (MR) on self-supervised
Convolutional Neural Network (CNN) features (high-level features) from
unsupervised image segmentation task. Self-supervision of the CNN continues until
the clustering loss or the saliency map converges to a defined error between
iterations. Finally, the saliency estimate is taken as the saliency map at the last
iteration, when the self-supervision procedure terminates with convergence.
Experimental evaluations demonstrated that the proposed saliency detection
algorithm on hyperspectral images outperforms state-of-the-art
hyperspectral saliency models, including the original MR-based saliency model. | [
"cs.CV"
] |
Visual arts are of inestimable importance for the cultural, historic and
economic growth of our society. One of the building blocks of most analyses in
visual arts is to find similarity relationships among paintings of different
artists and painting schools. To help art historians better understand visual
arts, this paper presents a framework for visual link retrieval and knowledge
discovery in digital painting datasets. Visual link retrieval is accomplished
by using a deep convolutional neural network to perform feature extraction and
a fully unsupervised nearest neighbor mechanism to retrieve links among
digitized paintings. Historical knowledge discovery is achieved by performing a
graph analysis that makes it possible to study influences among artists. An
experimental evaluation on a database collecting paintings by very popular
artists shows the effectiveness of the method. The unsupervised strategy makes
the method interesting especially in cases where metadata are scarce,
unavailable or difficult to collect. | [
"cs.CV"
] |
Representation learning and unsupervised learning are two central topics of
machine learning and signal processing. Deep learning is one of the most
effective unsupervised representation learning approaches. The main
contributions of this paper to these topics are as follows. (i) We propose to
view representative deep learning approaches as special cases of the
knowledge-reuse framework of clustering ensembles. (ii) We propose to view
sparse coding, when used as a feature encoder, as the consensus function of a
clustering ensemble, and dictionary learning as the training process of the
base clusterings of the ensemble. (iii) Based on the above two views, we
propose a very simple deep learning algorithm, named deep random model ensemble
(DRME). It is a stack of random model ensembles, where each random model
ensemble is a special k-means ensemble that discards the
expectation-maximization optimization of each base k-means and only preserves
the default initialization method of the base k-means. (iv) We propose to
select the most powerful representation among the layers by applying DRME to
clustering, where single-linkage is used as the clustering algorithm. Moreover,
DRME-based clustering can also detect the number of natural clusters
accurately. Extensive experimental comparisons with 5 representation learning
methods on 19 benchmark data sets demonstrate the effectiveness of DRME. | [
"cs.LG"
] |
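A minimal sketch of one DRME layer as described in the abstract above: an ensemble of "random k-means" base clusterings that keep only a k-means++-style initialization (the EM optimization is discarded), each emitting a one-hot encoding of the nearest center. Function and parameter names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def random_kmeans_layer(X, n_models=10, k=20, seed=None):
    rng = np.random.default_rng(seed)
    encodings = []
    for _ in range(n_models):
        # k-means++-style seeding only; the EM refinement is discarded.
        centers = [X[rng.integers(len(X))]]
        for _ in range(k - 1):
            d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
            centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
        centers = np.asarray(centers)
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        encodings.append(np.eye(k)[labels])  # one-hot base clustering
    return np.hstack(encodings)  # concatenated encoding fed to the next layer

# Stacking such layers yields the deep random model ensemble:
# X1 = random_kmeans_layer(X0); X2 = random_kmeans_layer(X1); ...
```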
In deep representational learning, it is often desired to isolate a
particular factor (termed {\em content}) from other factors (referred to as
{\em style}). What constitutes the content is typically specified by users
through explicit labels in the data, while all unlabeled/unknown factors are
regarded as style. Recently, it has been shown that such content-labeled data
can be effectively exploited by modifying the deep latent factor models (e.g.,
VAE) such that the style and content are well separated in the latent
representations. However, the approach assumes that the content factor is
categorical-valued (e.g., subject ID in face image data, or digit class in the
MNIST dataset). In certain situations, the content is ordinal-valued, that is,
the values the content factor takes are {\em ordered} rather than categorical,
making content-labeled VAEs, including the latent space they infer, suboptimal.
In this paper, we propose a novel extension of VAE that imposes a partially
ordered set (poset) structure in the content latent space, while simultaneously
making it aligned with the ordinal content values. To this end, instead of the
iid Gaussian latent prior adopted in prior approaches, we introduce a
conditional Gaussian spacing prior model. This model admits a tractable joint
Gaussian prior, but also effectively places negligible density values on the
content latent configurations that violate the poset constraint. To evaluate
this model, we consider two specific ordinal structured problems: estimating a
subject's age in a face image and elucidating the calorie amount in a food meal
image. We demonstrate significant improvements in content-style separation over
previous non-ordinal approaches. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
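One way to picture the "spacing" prior described above is to build ordered latent means as cumulative sums of positive-mean Gaussian spacings: the joint prior remains Gaussian (a linear map of Gaussians) while configurations that violate the order receive negligible density when the spacing mean dominates its standard deviation. This is an illustrative construction under our own assumptions, not the paper's exact model.

```python
import torch

def sample_ordered_latent_means(n_levels, dim, spacing_mean=1.0, spacing_std=0.1):
    # Positive-mean Gaussian spacings between consecutive ordinal levels.
    spacings = spacing_mean + spacing_std * torch.randn(n_levels, dim)
    # Cumulative sums give ordered codes; the joint distribution stays Gaussian
    # because cumsum is a linear map of the Gaussian spacings.
    return torch.cumsum(spacings, dim=0)  # row k = latent mean for level k
```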
Graph-structured data arise in many scenarios. A fundamental problem is to
quantify the similarities of graphs for tasks such as classification. Graph
kernels are positive-semidefinite functions that decompose graphs into
substructures and compare them. One problem in the effective implementation of
this idea is that the substructures are not independent, which leads to
high-dimensional feature space. In addition, graph kernels cannot capture the
high-order complex interactions between vertices. To mitigate these two
problems, we propose a framework called DeepMap to learn deep representations
for graph feature maps. The learnt deep representation for a graph is a dense
and low-dimensional vector that captures complex high-order interactions in a
vertex neighborhood. DeepMap extends Convolutional Neural Networks (CNNs) to
arbitrary graphs by aligning vertices across graphs and building the receptive
field for each vertex. We empirically validate DeepMap on various graph
classification benchmarks and demonstrate that it achieves state-of-the-art
performance. | [
"cs.LG",
"stat.ML"
] |
Cell complexes are topological spaces constructed from simple blocks called
cells. They generalize graphs, simplicial complexes, and polyhedral complexes
that form important domains for practical applications. They also provide a
combinatorial formalism that allows the inclusion of complicated relationships
of restrictive structures such as graphs and meshes. In this paper, we propose
\textbf{Cell Complexes Neural Networks (CXNs)}, a general, combinatorial and
unifying construction for performing neural network-type computations on cell
complexes. We introduce an inter-cellular message passing scheme on cell
complexes that takes the topology of the underlying space into account and
generalizes the message passing scheme on graphs. Finally, we introduce a
unified cell complex encoder-decoder framework that enables learning
representations of cells for a given complex inside Euclidean spaces. In
particular, we show how our cell complex autoencoder construction gives, in the
special case \textbf{cell2vec}, a generalization of node2vec. | [
"cs.LG",
"cs.CG",
"cs.CV",
"math.AT",
"stat.ML"
] |
Drug discovery aims at designing novel molecules with specific desired
properties for clinical trials. Over the past decades, drug discovery and
development have been a costly and time-consuming process. Driven by big
chemical data and AI, deep generative models show great potential to accelerate
the drug discovery process. Existing works investigate different deep
generative frameworks for molecular generation; however, less attention has
been paid to visualization tools that can quickly demo and evaluate a model's
results. Here, we propose a visualization framework which provides interactive
tools to visualize molecules generated during the encoding and decoding process
of deep graph generative models, and provides real-time molecular optimization
functionalities. Our work aims to empower black-box AI-driven drug discovery
models with some visual interpretability. | [
"cs.LG",
"cs.HC",
"stat.ML",
"I.2.1"
] |
We study Label-Smoothing as a means for improving adversarial robustness of
supervised deep-learning models. After establishing a thorough and unified
framework, we propose several variations of this general method: adversarial,
Boltzmann and second-best Label-Smoothing, and we explain how to construct
further variants. On various datasets (MNIST, CIFAR10, SVHN) and models
(linear models, MLPs, LeNet, ResNet), we show that Label-Smoothing in general
improves adversarial robustness against a variety of attacks (FGSM, BIM,
DeepFool, Carlini-Wagner) by better accounting for the dataset geometry. The
proposed Label-Smoothing methods have two main advantages: they can be
implemented as a modified cross-entropy loss and thus require neither
modifications of the network architecture nor increased training times, and
they improve both standard and adversarial accuracy. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
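A minimal sketch of Label-Smoothing as a modified cross-entropy loss, in line with the abstract above. The uniform case is standard; the "second-best" branch, which shifts the smoothing mass to the most confident wrong class, is our reading of the variant's name and should be treated as an assumption. The Boltzmann and adversarial variants would follow the same pattern with different target distributions.

```python
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits, targets, eps=0.1, mode="uniform"):
    n_classes = logits.size(-1)
    one_hot = F.one_hot(targets, n_classes).float()
    if mode == "uniform":
        # classic smoothing: spread eps uniformly over all classes
        soft = (1 - eps) * one_hot + eps / n_classes
    elif mode == "second_best":
        # put the smoothing mass on the most confident wrong class
        masked = logits.masked_fill(one_hot.bool(), float("-inf"))
        runner_up = F.one_hot(masked.argmax(dim=-1), n_classes).float()
        soft = (1 - eps) * one_hot + eps * runner_up
    else:
        raise ValueError(mode)
    return -(soft * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```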
Deep learning classifiers are known to be inherently vulnerable to
manipulation by intentionally perturbed inputs, named adversarial examples. In
this work, we establish that reinforcement learning techniques based on Deep
Q-Networks (DQNs) are also vulnerable to adversarial input perturbations, and
verify the transferability of adversarial examples across different DQN models.
Furthermore, we present a novel class of attacks based on this vulnerability
that enable policy manipulation and induction in the learning process of DQNs.
We propose an attack mechanism that exploits the transferability of adversarial
examples to implement policy induction attacks on DQNs, and demonstrate its
efficacy and impact through experimental study of a game-learning scenario. | [
"cs.LG",
"cs.AI"
] |
Visual perception entails solving a wide set of tasks, e.g., object
detection, depth estimation, etc. The predictions made for multiple tasks from
the same image are not independent, and therefore, are expected to be
consistent. We propose a broadly applicable and fully computational method for
augmenting learning with Cross-Task Consistency. The proposed formulation is
based on inference-path invariance over a graph of arbitrary tasks. We observe
that learning with cross-task consistency leads to more accurate predictions
and better generalization to out-of-distribution inputs. This framework also
leads to an informative unsupervised quantity, called Consistency Energy, based
on measuring the intrinsic consistency of the system. Consistency Energy
correlates well with the supervised error (r=0.67), thus it can be employed as
an unsupervised confidence metric as well as for detection of
out-of-distribution inputs (ROC-AUC=0.95). The evaluations are performed on
multiple datasets, including Taskonomy, Replica, CocoDoom, and ApolloScape, and
they benchmark cross-task consistency versus various baselines including
conventional multi-task learning, cycle consistency, and analytical
consistency. | [
"cs.CV",
"cs.GR",
"cs.LG"
] |
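A hedged sketch of a Consistency-Energy-style quantity: the disagreement among predictions for the same target obtained through different inference paths. The paper's exact definition may differ; this only illustrates the idea of measuring intrinsic system consistency without labels.

```python
import torch

def consistency_energy(path_predictions):
    """path_predictions: list of (H, W) tensors, one per inference path."""
    stack = torch.stack(path_predictions)  # (n_paths, H, W)
    # High energy = paths disagree; usable as an unsupervised confidence signal.
    return stack.std(dim=0).mean()
```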
Performance evaluations are critical for quantifying algorithmic advances in
reinforcement learning. Recent reproducibility analyses have shown that
reported performance results are often inconsistent and difficult to replicate.
In this work, we argue that the inconsistency of performance stems from the use
of flawed evaluation metrics. Taking a step towards ensuring that reported
results are consistent, we propose a new comprehensive evaluation methodology
for reinforcement learning algorithms that produces reliable measurements of
performance both on a single environment and when aggregated across
environments. We demonstrate this method by evaluating a broad class of
reinforcement learning algorithms on standard benchmark tasks. | [
"cs.LG",
"stat.ML"
] |
The adoption of machine learning in materials science has rapidly transformed
materials property prediction. Hurdles limiting full capitalization of recent
advancements in machine learning include the limited development of methods to
learn the underlying interactions of multiple elements, as well as the
relationships among multiple properties, to facilitate property prediction in
new composition spaces. To address these issues, we introduce the Hierarchical
Correlation Learning for Multi-property Prediction (H-CLMP) framework that
seamlessly integrates (i) prediction using only a material's composition, (ii)
learning and exploitation of correlations among target properties in
multi-target regression, and (iii) leveraging training data from tangential
domains via generative transfer learning. The model is demonstrated for
prediction of spectral optical absorption of complex metal oxides spanning 69
3-cation metal oxide composition spaces. H-CLMP accurately predicts non-linear
composition-property relationships in composition spaces for which no training
data is available, which broadens the purview of machine learning to the
discovery of materials with exceptional properties. This achievement results
from the principled integration of latent embedding learning, property
correlation learning, generative transfer learning, and attention models. The
best performance is obtained using H-CLMP with Transfer learning (H-CLMP(T))
wherein a generative adversarial network is trained on computational density of
states data and deployed in the target domain to augment prediction of optical
absorption from composition. H-CLMP(T) aggregates multiple knowledge sources
with a framework that is well-suited for multi-target regression across the
physical sciences. | [
"cs.LG",
"cs.AI",
"65Z05",
"I.2"
] |
In the quest for efficient and robust reinforcement learning methods, both
model-free and model-based approaches offer advantages. In this paper we
propose a new way of explicitly bridging both approaches via a shared
low-dimensional learned encoding of the environment, meant to capture
summarizing abstractions. We show that the modularity brought by this approach
leads to good generalization while being computationally efficient, with
planning happening in a smaller latent state space. In addition, this approach
recovers a sufficient low-dimensional representation of the environment, which
opens up new strategies for interpretable AI, exploration and transfer
learning. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Data augmentation techniques from computer vision have been widely considered
as regularization methods to improve data efficiency and generalization
performance in vision-based reinforcement learning. We vary the timing of
applying augmentation, which turns out to be critical depending on the tasks to
be solved in training and testing. According to our experiments on the OpenAI
Procgen Benchmark, if the regularization imposed by augmentation is helpful
only at test time, it is better to postpone the augmentation until after
training than to use it during training, in terms of both sample and
computation complexity. We note that some such augmentations can disturb the
training process. Conversely, an augmentation providing regularization useful
in training needs to be used throughout the whole training period to fully
exploit its benefit in terms of not only generalization but also data
efficiency. These phenomena suggest a useful timing control for data
augmentation in reinforcement learning. | [
"cs.LG",
"cs.AI"
] |
Semantic segmentation is one of the basic topics in computer vision; it aims
to assign semantic labels to every pixel of an image. An unbalanced semantic
label distribution can have a negative influence on segmentation accuracy. In
this paper, we investigate using a data augmentation approach to balance the
semantic label distribution in order to improve segmentation performance. We
propose using generative adversarial networks (GANs) to generate realistic
images for improving the performance of semantic segmentation networks.
Experimental results show that the proposed method can not only improve
segmentation performance on classes with low accuracy, but also obtain a 1.3%
to 2.1% increase in average segmentation accuracy. This shows that the
augmentation method can boost accuracy and is easily applicable to other
segmentation models. | [
"cs.CV"
] |
Hand segmentation and detection in truly unconstrained RGB-based settings is
important for many applications. However, existing datasets are far from
sufficient both in terms of size and variety due to the infeasibility of manual
annotation of large amounts of segmentation and detection data. As a result,
current methods are limited by many underlying assumptions such as constrained
environment, consistent skin color and lighting. In this work, we present a
large-scale RGB-based egocentric hand segmentation/detection dataset Ego2Hands
that is automatically annotated and a color-invariant compositing-based data
generation technique capable of creating unlimited training data with variety.
For quantitative analysis, we manually annotated an evaluation set that
significantly exceeds existing benchmarks in quantity, diversity and annotation
accuracy. We provide cross-dataset evaluation as well as thorough analysis on
the performance of state-of-the-art models on Ego2Hands to show that our
dataset and data generation technique can produce models that generalize to
unseen environments without domain adaptation. | [
"cs.CV"
] |
In this paper, we propose a novel local descriptor-based framework, called
You Only Hypothesize Once (YOHO), for the registration of two unaligned point
clouds. In contrast to most existing local descriptors which rely on a fragile
local reference frame to gain rotation invariance, the proposed descriptor
achieves the rotation invariance by recent technologies of group equivariant
feature learning, which brings more robustness to point density and noise.
Meanwhile, the descriptor in YOHO also has a rotation equivariant part, which
enables us to estimate the registration from just one correspondence
hypothesis. This property reduces the search space of feasible
transformations, thus greatly improving both the accuracy and the efficiency of
YOHO. Extensive experiments show that YOHO achieves superior performance with
far fewer RANSAC iterations on four widely-used datasets, the
3DMatch/3DLoMatch datasets, the ETH dataset and the WHU-TLS dataset. More
details are shown in our project page: https://hpwang-whu.github.io/YOHO/. | [
"cs.CV"
] |
To alleviate the cost of obtaining accurate bounding boxes for training
today's state-of-the-art object detection models, recent weakly supervised
detection work has proposed techniques to learn from image-level labels.
However, requiring discrete image-level labels is both restrictive and
suboptimal. Real-world "supervision" usually consists of more unstructured
text, such as captions. In this work we learn association maps between images
and captions. We then use a novel objectness criterion to rank the resulting
candidate boxes, such that high-ranking boxes have strong gradients along all
edges. Thus, we can detect objects beyond a fixed object category vocabulary,
if those objects are frequent and distinctive enough. We show that our
objectness criterion improves the proposed bounding boxes in relation to prior
weakly supervised detection methods. Further, we show encouraging results on
object detection from image-level captions only. | [
"cs.CV"
] |
The Active Contour Model (ACM) is a standard image analysis technique whose
numerous variants have attracted an enormous amount of research attention
across multiple fields. Incorrectly, however, the ACM's
differential-equation-based formulation and prototypical dependence on user
initialization have been regarded as being largely incompatible with the
recently popular deep learning approaches to image segmentation. This paper
introduces the first tight unification of these two paradigms. In particular,
we devise Deep Convolutional Active Contours (DCAC), a truly end-to-end
trainable image segmentation framework comprising a Convolutional Neural
Network (CNN) and an ACM with learnable parameters. The ACM's Eulerian energy
functional includes per-pixel parameter maps predicted by the backbone CNN,
which also initializes the ACM. Importantly, both the CNN and ACM components
are fully implemented in TensorFlow, and the entire DCAC architecture is
end-to-end automatically differentiable and backpropagation trainable without
user intervention. As a challenging test case, we tackle the problem of
building instance segmentation in aerial images and evaluate DCAC on two
publicly available datasets, Vaihingen and Bing Huts. Our results demonstrate
that, for building segmentation, DCAC establishes a new state-of-the-art
performance by a wide margin. | [
"cs.CV"
] |
Exploration and adaptation to new tasks in a transfer learning setup is a
central challenge in reinforcement learning. In this work, we build on the idea
of modeling a distribution over policies in a Bayesian deep reinforcement
learning setup to propose a transfer strategy. Recent works have shown that
diversity can be induced in the learned policies by maximizing the entropy of a
distribution over policies (Bachman et al., 2018; Garnelo et al., 2018), and we
thus postulate that our proposed approach leads to faster exploration, resulting
in improved transfer learning. We support our hypothesis by demonstrating
favorable experimental results on a variety of settings on fully-observable
GridWorld and partially observable MiniGrid (Chevalier-Boisvert et al., 2018)
environments. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Computer vision relies on labeled datasets for training and evaluation in
detecting and recognizing objects. The popular computer vision program, YOLO
("You Only Look Once"), has been shown to accurately detect objects in many
major image datasets. However, the images found in those datasets are
independent of one another and cannot be used to test YOLO's consistency in
detecting the same object as its environment (e.g., ambient lighting) changes.
This paper describes a novel effort to evaluate YOLO's consistency for
large-scale applications. It does so by working (a) at large scale and (b) by
using consecutive images from a curated network of public video cameras
deployed in a variety of real-world situations, including traffic
intersections, national parks, shopping malls, university campuses, etc. We
specifically examine YOLO's ability to detect objects in different scenarios
(e.g., daytime vs. night), leveraging the cameras' ability to rapidly retrieve
many successive images for evaluating detection consistency. Using our camera
network and advanced computing resources (supercomputers), we analyzed more
than 5 million images captured by 140 network cameras in 24 hours. Compared
with labels marked by humans (considered as "ground truth"), YOLO struggles to
consistently detect the same humans and cars as their positions change from one
frame to the next; it also struggles to detect objects at night time. Our
findings suggest that state-of-the-art vision solutions should be trained with
data from network cameras, including contextual information, before they can be
deployed in applications that demand high consistency in object detection. | [
"cs.CV"
] |
Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | [
"cs.LG"
] |
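A hedged sketch of the conservative Q-Prop adjustment described above: the first-order Taylor expansion of the off-policy critic around the policy mean acts as a control variate subtracted from the Monte Carlo advantage (and re-added analytically through the critic gradient, omitted here). Tensor and function names are illustrative, not the authors' implementation.

```python
import torch

def qprop_adjusted_advantage(adv_mc, q_critic, states, actions, policy_mean):
    # Taylor expansion of Q_w(s, a) around a = mu(s):
    #   Q_bar(s, a) = Q_w(s, mu) + grad_a Q_w(s, mu) . (a - mu)
    mu = policy_mean(states).detach().requires_grad_(True)
    q_mu = q_critic(states, mu).sum()
    grad_a = torch.autograd.grad(q_mu, mu)[0]               # grad_a Q_w(s, a)|a=mu
    a_bar = (grad_a * (actions - mu)).sum(dim=-1).detach()  # centered Taylor term
    # Conservative variant: use the control variate only where it is positively
    # correlated with the Monte Carlo advantage.
    eta = (adv_mc * a_bar > 0).float()
    return adv_mc - eta * a_bar, eta  # eta also scales the analytic critic term
```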
We present a method for improving the human design of chairs. The goal of the
method is to generate a large number of chair candidates in order to assist
human designers, who can create sketches and 3D models based on the generated
chair designs. It consists of an image synthesis module, which learns the
underlying distribution of the training dataset, a super-resolution module,
which improves the quality of the generated images, and human involvement.
Finally, we manually pick one of the generated candidates to create a real-life
chair for illustration. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Using deep latent variable models in causal inference has attracted
considerable interest recently, but an essential open question is their ability
to yield consistent causal estimates. While they have demonstrated promising
results and theory exists on some simple model formulations, we also know that
causal effects are not even identifiable in general with latent variables. We
investigate this gap between theory and empirical results with analytical
considerations and extensive experiments under multiple synthetic and
real-world data sets, using the causal effect variational autoencoder (CEVAE)
as a case study. While CEVAE seems to work reliably under some simple
scenarios, it does not estimate the causal effect correctly with a misspecified
latent variable or a complex data distribution, as opposed to its original
motivation. Hence, our results show that more attention should be paid to
ensuring the correctness of causal estimates with deep latent variable models. | [
"cs.LG"
] |
Studies of object detection and localization, particularly pedestrian
detection, have received considerable attention in recent times due to several
prospective applications such as surveillance, driving assistance, and
autonomous cars. A significant trend in recent research studies in
related problem areas is the use of sophisticated Deep Learning based
approaches to improve the benchmark performance on various standard datasets. A
trade-off between the speed (number of video frames processed per second) and
detection accuracy has often been reported in the existing literature. In this
article, we present a new but simple deep learning based strategy for
pedestrian detection that improves this trade-off. Since training similar
models on publicly available sample datasets failed to improve the detection
performance to any significant extent, particularly for instances of
pedestrians of smaller sizes, we have developed a new sample dataset consisting
of more than 80K annotated pedestrian figures in videos recorded under varying
traffic conditions. The performance of the proposed model has been evaluated on
the test samples of the new dataset and two other existing datasets, namely the
Caltech Pedestrian Dataset (CPD) and the CityPerson Dataset (CD). Our proposed
system shows nearly 16\% improvement over the existing state-of-the-art result. | [
"cs.CV"
] |
There has been significant research done on developing methods for improving
robustness to distributional shift and uncertainty estimation. In contrast,
only limited work has examined developing standard datasets and benchmarks for
assessing these approaches. Additionally, most work on uncertainty estimation
and robustness has developed new techniques based on small-scale regression or
image classification tasks. However, many tasks of practical interest have
different modalities, such as tabular data, audio, text, or sensor data, which
offer significant challenges involving regression and discrete or continuous
structured prediction. Thus, given the current state of the field, a
standardized large-scale dataset of tasks across a range of modalities affected
by distributional shifts is necessary. This will enable researchers to
meaningfully evaluate the plethora of recently developed uncertainty
quantification methods, as well as assessment criteria and state-of-the-art
baselines. In this work, we propose the \emph{Shifts Dataset} for evaluation of
uncertainty estimates and robustness to distributional shift. The dataset,
which has been collected from industrial sources and services, is composed of
three tasks, with each corresponding to a particular data modality: tabular
weather prediction, machine translation, and self-driving car (SDC) vehicle
motion prediction. All of these data modalities and tasks are affected by real,
`in-the-wild' distributional shifts and pose interesting challenges with
respect to uncertainty estimation. In this work we provide a description of the
dataset and baseline results for all tasks. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Demonstration-guided reinforcement learning (RL) is a promising approach for
learning complex behaviors by leveraging both reward feedback and a set of
target task demonstrations. Prior approaches for demonstration-guided RL treat
every new task as an independent learning problem and attempt to follow the
provided demonstrations step-by-step, akin to a human trying to imitate a
completely unseen behavior by following the demonstrator's exact muscle
movements. Naturally, such learning will be slow, but often new behaviors are
not completely unseen: they share subtasks with behaviors we have previously
learned. In this work, we aim to exploit this shared subtask structure to
increase the efficiency of demonstration-guided RL. We first learn a set of
reusable skills from large offline datasets of prior experience collected
across many tasks. We then propose Skill-based Learning with Demonstrations
(SkiLD), an algorithm for demonstration-guided RL that efficiently leverages
the provided demonstrations by following the demonstrated skills instead of the
primitive actions, resulting in substantial performance improvements over prior
demonstration-guided RL approaches. We validate the effectiveness of our
approach on long-horizon maze navigation and complex robot manipulation tasks. | [
"cs.LG",
"cs.AI",
"cs.RO"
] |
The limited annotated data available for the recognition of facial expressions
and action units hampers the training of deep networks, which otherwise can
learn disentangled invariant features. However, a linear model with just a few
parameters is normally not demanding in terms of training data. In this paper,
we propose an elegant linear model to untangle confounding factors in
challenging realistic multichannel signals such as 2D face videos. The simple
yet powerful model does not rely on huge training data and is natural for
recognizing facial actions without explicitly disentangling the identity. Based
on well-understood intuitive linear models such as Sparse Representation based
Classification (SRC), previous attempts require a preprocessing step of
explicit decoupling which is practically inexact. Instead, we exploit the
low-rank property across frames to subtract the underlying neutral faces, which
are modeled jointly with a sparse representation of the action components with
group sparsity enforced. On the extended Cohn-Kanade dataset (CK+), our
one-shot automatic method on raw face videos performs as competitively as SRC
applied to manually prepared action components, and performs even better than
SRC in terms of true positive rate. We apply the model to the even more
challenging task of facial action unit recognition, verified on the MPI Face
Video Database (MPI-VDB), achieving a decent performance. All the programs and
data have been made publicly available. | [
"cs.CV",
"cs.AI",
"cs.LG",
"stat.ML"
] |
Correspondence-based shape models are key to various medical imaging
applications that rely on a statistical analysis of anatomies. Such shape
models are expected to represent consistent anatomical features across the
population for population-specific shape statistics. Early approaches for
correspondence placement rely on nearest neighbor search for simpler anatomies.
Coordinate transformations for shape correspondence hold promise to address the
increasing anatomical complexities. Nonetheless, due to the inherent
shape-level geometric complexity and population-level shape variation, the
coordinate-wise correspondence often does not translate to the anatomical
correspondence. An alternative, group-wise approach for correspondence
placement explicitly models the trade-off between geometric description and the
population's statistical compactness. However, these models achieve limited
success in resolving nonlinear shape correspondence. Recent works have
addressed this limitation by adopting an application-specific notion of
correspondence through lifting positional data to a higher dimensional feature
space. However, they heavily rely on manual expertise to create domain-specific
features and consistent landmarks. This paper proposes an automated feature
learning approach, using deep convolutional neural networks to extract
correspondence-friendly features from shape ensembles. Further, an unsupervised
domain adaptation scheme is introduced to augment the pretrained geometric
features with new anatomies. Results on anatomical datasets of the human
scapula, femur, and pelvis bones demonstrate that features learned in a
supervised fashion show improved performance for correspondence estimation
compared to manual features. Further, the unsupervised domain adaptation scheme
is demonstrated to learn complex anatomy features by adapting features learned
on simpler anatomies. | [
"cs.CV",
"cs.LG"
] |
We present a novel approach to reconstructing RGB-D indoor scenes with plane
primitives. Our approach takes as input an RGB-D sequence and a dense coarse
mesh reconstructed from the sequence by some 3D reconstruction method, and
generates a lightweight, low-polygon mesh with clear face textures and sharp
features, without losing geometry details from the original scene. To achieve
this, we first partition the input mesh with plane primitives, then simplify it
into a lightweight mesh, optimize the plane parameters, camera poses and
texture colors to maximize photometric consistency across frames, and finally
optimize the mesh geometry to maximize consistency between geometry and
planes. Compared to existing planar reconstruction methods which only cover
large planar regions in the scene, our method builds the entire scene by
adaptive planes without losing geometry details and preserves sharp features in
the final mesh. We demonstrate the effectiveness of our approach by applying it
onto several RGB-D scans and comparing it to other state-of-the-art
reconstruction methods. | [
"cs.CV"
] |
Instance-level human parsing towards real-world human analysis scenarios is
still under-explored due to the absence of sufficient data resources and
technical difficulty in parsing multiple instances in a single pass. Several
related works all follow the "parsing-by-detection" pipeline that heavily
relies on separately trained detection models to localize instances and then
performs human parsing for each instance sequentially. Nonetheless, two
discrepant optimization targets of detection and parsing lead to suboptimal
representation learning and error accumulation for final results. In this work,
we make the first attempt to explore a detection-free Part Grouping Network
(PGN) for efficiently parsing multiple people in an image in a single pass. Our
PGN reformulates instance-level human parsing as two twinned sub-tasks that can
be jointly learned and mutually refined via a unified network: 1) semantic part
segmentation for assigning each pixel as a human part (e.g., face, arms); 2)
instance-aware edge detection to group semantic parts into distinct person
instances. Thus the shared intermediate representation would be endowed with
capabilities in both characterizing fine-grained parts and inferring instance
belongings of each part. Finally, a simple instance partition process is
employed to get final results during inference. We conducted experiments on
PASCAL-Person-Part dataset and our PGN outperforms all state-of-the-art
methods. Furthermore, we show its superiority on a newly collected multi-person
parsing dataset (CIHP) including 38,280 diverse images, which is the largest
dataset so far and can facilitate more advanced human analysis. The CIHP
benchmark and our source code are available at http://sysu-hcp.net/lip/. | [
"cs.CV"
] |
We propose and study a task we name panoptic segmentation (PS). Panoptic
segmentation unifies the typically distinct tasks of semantic segmentation
(assign a class label to each pixel) and instance segmentation (detect and
segment each object instance). The proposed task requires generating a coherent
scene segmentation that is rich and complete, an important step toward
real-world vision systems. While early work in computer vision addressed
related image/scene parsing tasks, these are not currently popular, possibly
due to lack of appropriate metrics or associated recognition challenges. To
address this, we propose a novel panoptic quality (PQ) metric that captures
performance for all classes (stuff and things) in an interpretable and unified
manner. Using the proposed metric, we perform a rigorous study of both human
and machine performance for PS on three existing datasets, revealing
interesting insights about the task. The aim of our work is to revive the
interest of the community in a more unified view of image segmentation. | [
"cs.CV"
] |
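The panoptic quality (PQ) metric proposed above has a simple per-class form: segments are matched when their IoU exceeds 0.5 (which makes the matching unique), and PQ divides the summed IoU of matched pairs by |TP| + 0.5|FP| + 0.5|FN|. A minimal sketch over boolean numpy masks:

```python
import numpy as np

def panoptic_quality(pred_masks, gt_masks):
    """Per-class PQ over lists of boolean (H, W) masks."""
    iou_sum, matched_pred, matched_gt = 0.0, set(), set()
    for i, p in enumerate(pred_masks):
        for j, g in enumerate(gt_masks):
            inter = np.logical_and(p, g).sum()
            union = np.logical_or(p, g).sum()
            iou = inter / union if union else 0.0
            if iou > 0.5:  # the 0.5 threshold guarantees a unique match
                iou_sum += iou
                matched_pred.add(i)
                matched_gt.add(j)
    tp = len(matched_pred)
    fp = len(pred_masks) - tp
    fn = len(gt_masks) - len(matched_gt)
    denom = tp + 0.5 * fp + 0.5 * fn
    return iou_sum / denom if denom else 0.0
```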
Anomaly detection is a classical but worthwhile problem, and many deep
learning-based anomaly detection algorithms have been proposed, which can
usually achieve better detection results than traditional methods. Considering
both the reconstruction ability of the model and the calculation of the anomaly
score, this paper proposes a time series anomaly detection method based on a
Variational AutoEncoder (VAE) with a re-Encoder and a Latent Constraint network
(VELC). In order to limit the reconstruction ability of the model and prevent
it from reconstructing abnormal samples well, we add a constraint network in
the latent space of the VAE to force it to generate new latent variables that
are similar to those of the training samples. To calculate the anomaly score in
two feature spaces, we train a re-encoder to transform the generated data into
a new latent space. To better handle time series, we use LSTMs as the encoder
and decoder parts of the VAE framework. Experimental results on several
benchmarks show that our method outperforms state-of-the-art anomaly detection
methods. | [
"cs.LG",
"stat.ML"
] |
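A hedged sketch of the two-space anomaly score described above: reconstruction error in the input space combined with the distance between the original latent code and the re-encoded latent code of the reconstruction. The module names and the weighting are illustrative placeholders, not the authors' code.

```python
import torch

def velc_anomaly_score(x, encoder, decoder, re_encoder, alpha=0.5):
    z = encoder(x)             # latent code (after the latent constraint network)
    x_hat = decoder(z)         # reconstruction of the input window
    z_hat = re_encoder(x_hat)  # latent code of the reconstruction
    rec_err = torch.norm(x - x_hat, dim=-1)  # score in the input space
    lat_err = torch.norm(z - z_hat, dim=-1)  # score in the new latent space
    return alpha * rec_err + (1 - alpha) * lat_err
```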
Scan data of urban environments often include representations of dynamic
objects, such as vehicles, pedestrians, and so forth. However, when it comes to
constructing a 3D point cloud map with sequential accumulations of the scan
data, the dynamic objects often leave unwanted traces in the map. These traces
of dynamic objects act as obstacles and thus impede mobile vehicles from
achieving good localization and navigation performances. To tackle the problem,
this paper presents a novel static map building method called ERASOR,
Egocentric RAtio of pSeudo Occupancy-based dynamic object Removal, which is
fast and robust to motion ambiguity. Our approach exploits the fact that most
dynamic objects in urban environments are inevitably in contact with the
ground. Accordingly, we propose the novel concept called
pseudo occupancy to express the occupancy of unit space and then discriminate
spaces of varying occupancy. Finally, Region-wise Ground Plane Fitting (R-GPF)
is adopted to distinguish static points from dynamic points within the
candidate bins that potentially contain dynamic points. As experimentally
verified on SemanticKITTI, our proposed method yields promising performance
against state-of-the-art methods overcoming the limitations of existing ray
tracing-based and visibility-based methods. | [
"cs.CV",
"cs.RO"
] |
To generate new images for a given category, most deep generative models
require abundant training images from this category, which are often too
expensive to acquire. To achieve the goal of generation based on only a few
images, we propose matching-based Generative Adversarial Network (GAN) for
few-shot generation, which includes a matching generator and a matching
discriminator. The matching generator matches random vectors with a few
conditional images from the same category and generates new images for this
category based on the fused features. The matching discriminator extends the
conventional GAN discriminator by matching the feature of a generated image
with the fused feature of the conditional images. Extensive experiments on three
datasets demonstrate the effectiveness of our proposed method. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
While classical planning has been an active branch of AI, its applicability
is limited to tasks precisely modeled by humans. Fully automated high-level
agents should instead be able to find a symbolic representation of an unknown
environment without supervision; otherwise, they exhibit the knowledge
acquisition bottleneck. Meanwhile, Latplan (Asai and Fukunaga 2018) partially
resolves the bottleneck with a neural network called State AutoEncoder (SAE).
SAE obtains the propositional representation of the image-based puzzle domains
with unsupervised learning, generates a state space and performs classical
planning. In this paper, we identify the problematic, stochastic behavior of
the SAE-produced propositions as a new sub-problem of symbol grounding problem,
the symbol stability problem. Informally, symbols are stable when their
referents (e.g. propositional values) do not change against small perturbation
of the observation, and unstable symbols are harmful for symbolic reasoning. We
analyze the problem in Latplan both formally and empirically, and propose
"Zero-Suppressed SAE", an enhancement that stabilizes the propositions using
the idea of the closed-world assumption as a prior for NN optimization. We show
that it finds more stable propositions and more compact representations,
resulting in an improved success rate for Latplan. It is robust against various
hyperparameters and eases the tuning effort, and also provides a weight pruning
capability as a side effect. | [
"cs.LG",
"cs.AI"
] |
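A hedged sketch of a zero-suppression-style prior on the binary propositions: a penalty that pulls latent activations toward 0 (the closed-world assumption) added to the autoencoder loss. The form and weight of the penalty are our assumptions, not the paper's exact formulation.

```python
import torch

def zero_suppressed_loss(recon_loss, z_props, weight=1e-3):
    """z_props: (batch, n_props) propositional activations in [0, 1]."""
    # Pull propositions toward 0 (false) unless the data demands otherwise.
    return recon_loss + weight * z_props.abs().sum(dim=-1).mean()
```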
After their successful debut in natural language processing, Transformer
architectures are now becoming the de-facto standard in many domains. An
obstacle for their deployment over new modalities is the architectural
configuration: the optimal depth-to-width ratio has been shown to dramatically
vary across data types (e.g., $10$x larger over images than over language). We
theoretically predict the existence of an embedding rank bottleneck that limits
the contribution of self-attention width to the Transformer expressivity. We
thus directly tie the input vocabulary size and rank to the optimal
depth-to-width ratio, since a small vocabulary size or rank dictates an added
advantage of depth over width. We empirically demonstrate the existence of this
bottleneck and its implications on the depth-to-width interplay of Transformer
architectures, linking the architecture variability across domains to the often
glossed-over usage of different vocabulary sizes or embedding ranks in
different domains. As an additional benefit, our rank bottlenecking framework
allows us to identify size redundancies of $25\%-50\%$ in leading NLP models
such as ALBERT and T5. | [
"cs.LG",
"cs.CL"
] |
Learning quickly is of great importance for machine intelligence deployed in
online platforms. With the capability of transferring knowledge from learned
tasks, meta-learning has shown its effectiveness in online scenarios by
continuously updating the model with the learned prior. However, current online
meta-learning algorithms are limited to learning a globally-shared
meta-learner, which may lead to sub-optimal results when the tasks contain
heterogeneous information that is distinct by nature and difficult to share. We
overcome this limitation by proposing an online structured meta-learning (OSML)
framework. Inspired by the knowledge organization of humans and hierarchical
feature representation, OSML explicitly disentangles the meta-learner as a
meta-hierarchical graph with different knowledge blocks. When a new task is
encountered, it constructs a meta-knowledge pathway by either utilizing the
most relevant knowledge blocks or exploring new blocks. Through the
meta-knowledge pathway, the model is able to quickly adapt to the new task. In
addition, new knowledge is further incorporated into the selected blocks.
Experiments on three datasets demonstrate the effectiveness and
interpretability of our proposed framework in the context of both homogeneous
and heterogeneous tasks. | [
"cs.LG"
] |
Temporal feature extraction is an important issue in video-based action
recognition. Optical flow is a popular method for extracting temporal features,
which produces excellent performance thanks to its capacity to capture
pixel-level correlation information between consecutive frames. However, such
pixel-level correlation is extracted at the cost of high computational
complexity and large storage resources. In this paper, we propose a novel
temporal feature extraction method, named Attentive Correlated Temporal Feature
(ACTF), that explores inter-frame correlation within a certain region. The
proposed ACTF exploits both bilinear and linear correlation between successive
frames at the regional level. Our method has the advantage of achieving
performance comparable to or better than optical flow-based methods while
avoiding the introduction of optical flow. Experimental results demonstrate
that our proposed method achieves state-of-the-art performance of 96.3% on
UCF101 and 76.3% on HMDB51 benchmark datasets. | [
"cs.CV"
] |
For many years, link prediction on knowledge graphs (KGs) has been a purely
transductive task, not allowing for reasoning on unseen entities. Recently,
increasing efforts are put into exploring semi- and fully inductive scenarios,
enabling inference over unseen and emerging entities. Still, all these
approaches only consider triple-based KGs, whereas their richer
counterparts, hyper-relational KGs (e.g., Wikidata), have not yet been properly
studied. In this work, we classify different inductive settings and study the
benefits of employing hyper-relational KGs on a wide range of semi- and fully
inductive link prediction tasks powered by recent advancements in graph neural
networks. Our experiments on a novel set of benchmarks show that qualifiers
over typed edges can lead to performance improvements of 6% of absolute gains
(for the Hits@10 metric) compared to triple-only baselines. Our code is
available at \url{https://github.com/mali-git/hyper_relational_ilp}. | [
"cs.LG"
] |
Clustering of time series data exhibits a number of challenges not present in
other settings, notably the problem of registration (alignment) of observed
signals. Typical approaches include pre-registration to a user-specified
template or time warping approaches which attempt to optimally align series
with a minimum of distortion. For many signals obtained from recording or
sensing devices, these methods may be unsuitable as a template signal is not
available for pre-registration, while the distortion of warping approaches may
obscure meaningful temporal information. We propose a new method for automatic
time series alignment within a clustering problem. Our approach, Temporal
Registration using Optimal Unitary Transformations (TROUT), is based on a novel
dissimilarity measure between time series that is easy to compute and
automatically identifies optimal alignment between pairs of time series. By
embedding our new measure in an optimization formulation, we retain well-known
advantages of computational and statistical performance. We provide an
efficient algorithm for TROUT-based clustering and demonstrate its superior
performance over a range of competitors. | [
"stat.ML",
"cs.LG",
"stat.ME"
] |
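A hedged sketch of alignment by an optimal unitary transformation in the spirit of TROUT: for real-valued embeddings, the best orthogonal map between two series has a closed form via the SVD (orthogonal Procrustes), and the residual after alignment serves as a dissimilarity. The embedding choice and function name are illustrative assumptions.

```python
import numpy as np

def optimal_unitary_dissimilarity(X, Y):
    """X, Y: (T, d) real embeddings of two equal-length time series."""
    # Orthogonal Procrustes: argmin_R ||X R - Y||_F has the closed form
    # R = U V^T, where U S V^T is the SVD of X^T Y.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    R = U @ Vt
    return np.linalg.norm(X @ R - Y)  # residual after optimal alignment
```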
Data augmentation has greatly contributed to improving performance in image
recognition tasks, and many related studies have been conducted. However, data
augmentation on 3D point cloud data has not been much explored. 3D labels have
more sophisticated and richer structural information than 2D labels, which
enables more diverse and effective data augmentation. In this paper, we propose
part-aware data augmentation (PA-AUG), which can better utilize the rich
information of 3D labels to enhance the performance of 3D object detectors.
PA-AUG divides objects into partitions and stochastically applies five
augmentation methods to each local region. It is compatible with existing point
cloud data augmentation methods and can be used universally regardless of the
detector's architecture. PA-AUG has improved the performance of a
state-of-the-art 3D object detector for all classes of the KITTI dataset and
has the equivalent effect of increasing the training data by about 2.5$\times$. We
also show that PA-AUG not only increases performance for a given dataset but
also is robust to corrupted data. The code is available at
https://github.com/sky77764/pa-aug.pytorch | [
"cs.CV"
] |
Detecting objects such as cars and pedestrians in 3D plays an indispensable
role in autonomous driving. Existing approaches largely rely on expensive LiDAR
sensors for accurate depth information. While recently pseudo-LiDAR has been
introduced as a promising alternative, at a much lower cost based solely on
stereo images, there is still a notable performance gap. In this paper we
provide substantial advances to the pseudo-LiDAR framework through improvements
in stereo depth estimation. Concretely, we adapt the stereo network
architecture and loss function to be more aligned with accurate depth
estimation of faraway objects --- currently the primary weakness of
pseudo-LiDAR. Further, we explore the idea to leverage cheaper but extremely
sparse LiDAR sensors, which alone provide insufficient information for 3D
detection, to de-bias our depth estimation. We propose a depth-propagation
algorithm, guided by the initial depth estimates, to diffuse these few exact
measurements across the entire depth map. We show on the KITTI object detection
benchmark that our combined approach yields substantial improvements in depth
estimation and stereo-based 3D object detection --- outperforming the previous
state-of-the-art detection accuracy for faraway objects by 40%. Our code is
available at https://github.com/mileyan/Pseudo_Lidar_V2. | [
"cs.CV"
] |
We introduce the notion of pointwise coverage to measure the explainability
properties of machine learning classifiers. An explanation for a prediction is
a definably simple region of the feature space sharing the same label as the
prediction, and the coverage of an explanation measures its size or
generalizability. With this notion of explanation, we investigate whether or
not there is a natural characterization of the most explainable classifier.
In accordance with our intuitions, we prove that the binary linear classifier is
uniquely the most explainable classifier up to negligible sets. | [
"cs.LG",
"cs.HC",
"stat.ML"
] |
Accurate and fast extraction of lung volumes from computed tomography (CT)
scans remains in great demand in the clinical environment because the available
methods fail to provide a generic solution due to the wide anatomical
variations of lungs and the existence of pathologies. Manual annotation, the
current gold standard, is time-consuming and often subject to human bias. On the other
hand, current state-of-the-art fully automated lung segmentation methods fail
to make their way into the clinical practice due to their inability to
efficiently incorporate human input for handling misclassifications and praxis.
This paper presents a lung annotation tool for CT images that is interactive,
efficient, and robust. The proposed annotation tool produces an "as accurate as
possible" initial annotation based on the fuzzy-connectedness image
segmentation, followed by efficient manual fixation of the initial extraction
if deemed necessary by the practitioner. To provide maximum flexibility to the
users, our annotation tool is supported in three major operating systems
(Windows, Linux, and the Mac OS X). The quantitative results comparing our free
software with commercially available lung segmentation tools show higher degree
of consistency and precision of our software with a considerable potential to
enhance the performance of routine clinical tasks. | [
"cs.CV"
] |
The success of deep learning heavily depends on the availability of large
labeled training sets. However, it is hard to get large labeled datasets in
medical image domain because of the strict privacy concern and costly labeling
efforts. Contrastive learning, an unsupervised learning technique, has proved
powerful in learning image-level representations from unlabeled data.
The learned encoder can then be transferred or fine-tuned to improve the
performance of downstream tasks with limited labels. A critical step in
contrastive learning is the generation of contrastive data pairs, which is
relatively simple for natural image classification but quite challenging for
medical image segmentation due to the existence of the same tissue or organ
across the dataset. As a result, when applied to medical image segmentation,
most state-of-the-art contrastive learning frameworks inevitably introduce a
lot of false-negative pairs and result in degraded segmentation quality. To
address this issue, we propose a novel positional contrastive learning (PCL)
framework to generate contrastive data pairs by leveraging the position
information in volumetric medical images. Experimental results on CT and MRI
datasets demonstrate that the proposed PCL method can substantially improve the
segmentation performance compared to existing methods in both semi-supervised
setting and transfer learning setting. | [
"cs.CV"
] |
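A hedged sketch of position-based pair selection for contrastive pre-training on volumetric scans, following the idea above: slices at similar normalized axial positions across volumes are treated as positives, distant slices as negatives, which avoids many false-negative pairs. Thresholds and names are illustrative assumptions.

```python
def positional_pairs(slice_positions, pos_thresh=0.1):
    """slice_positions: normalized (0..1) axial positions of sampled slices."""
    n = len(slice_positions)
    pos_pairs, neg_pairs = [], []
    for i in range(n):
        for j in range(i + 1, n):
            if abs(slice_positions[i] - slice_positions[j]) < pos_thresh:
                pos_pairs.append((i, j))  # likely the same anatomical region
            else:
                neg_pairs.append((i, j))  # position rules out false negatives
    return pos_pairs, neg_pairs
```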
Anomaly detection for time series plays an important role in reliability
systems engineering. However, in practical applications, there is no precisely
defined boundary between normal and anomalous behaviors in different
application scenarios. Therefore, different anomaly detection algorithms and
processes ought to be adopted for time series in different situations. Although
such a strategy improves the accuracy of anomaly detection, it takes a lot of
time for practitioners to configure various algorithms for millions of series,
which greatly increases the development and maintenance cost of anomaly
detection processes. In this paper, we propose CRATOS, a self-adaptive
algorithm that extracts features from time series and then clusters series with
similar features into one group. For each group, we utilize an evolutionary
algorithm to search for the best anomaly detection methods and processes. Our
method can significantly reduce the cost of developing and maintaining anomaly
detection. According to our experiments, our clustering method achieves
state-of-the-art results. The accuracy of the anomaly detection algorithms in
this paper is 85.1%. | [
"cs.LG",
"cs.NE",
"stat.ML"
] |
A new unsupervised learning method of depth and ego-motion using multiple
masks from monocular video is proposed in this paper. The depth estimation
network and the ego-motion estimation network are trained according to the
constraints of depth and ego-motion without ground-truth values. The main contribution
of our method is to carefully consider the occlusion of the pixels generated
when the adjacent frames are projected to each other, and the blank problem
generated in the projection target imaging plane. Two fine masks are designed
to solve most of the image pixel mismatch caused by the movement of the camera.
In addition, some relatively rare circumstances are considered, and repeated
masking is proposed. In essence, the method uses geometric relationships to
filter out mismatched pixels for training, making unsupervised learning more
efficient and accurate. The experiments on the KITTI dataset show our
method achieves good performance in terms of depth and ego-motion. The
generalization capability of our method is demonstrated by training on the
low-quality uncalibrated bike video dataset and evaluating on KITTI dataset,
and the results are still good. | [
"cs.CV",
"cs.RO"
] |
Recent advances of network architecture for point cloud processing are mainly
driven by new designs of local aggregation operators. However, the impact of
these operators on network performance is not carefully investigated due to the
different overall network architectures and implementation details in each
solution. Meanwhile, most operators are only applied in shallow
architectures. In this paper, we revisit the representative local aggregation
operators and study their performance using the same deep residual
architecture. Our investigation reveals that despite their different designs,
all of these operators make surprisingly similar contributions to the network
performance under the same network input and feature numbers, and result in
state-of-the-art accuracy on standard benchmarks. This finding stimulates us to
rethink the necessity of sophisticated designs of local aggregation operators
for point cloud processing. To this end, we propose a
simple local aggregation operator without learnable weights, named Position
Pooling (PosPool), which performs similarly or slightly better than existing
sophisticated operators. In particular, a simple deep residual network with
PosPool layers achieves outstanding performance on all benchmarks, which
outperforms the previous state-of-the-art methods on the challenging PartNet
datasets by a large margin (7.4 mIoU). The code is publicly available at
https://github.com/zeliu98/CloserLook3D | [
"cs.CV",
"cs.LG"
] |
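A hedged sketch of a position-pooling-style aggregation with no learnable weights: each neighbor feature is modulated by its relative position to the center point before averaging. This follows the paper's idea only at a high level; the exact channel-wise modulation in PosPool may differ.

```python
import torch

def pos_pool(center_xyz, neighbor_xyz, neighbor_feats):
    """center_xyz: (n, 3); neighbor_xyz: (n, k, 3); neighbor_feats: (n, k, c),
    with c divisible by 3."""
    rel = neighbor_xyz - center_xyz.unsqueeze(1)      # relative positions (n, k, 3)
    n, k, c = neighbor_feats.shape
    # Tile (dx, dy, dz) across the channel dimension and modulate the features
    # multiplicatively -- no learnable parameters are involved.
    pos_weight = rel.repeat(1, 1, c // 3)             # (n, k, c)
    return (pos_weight * neighbor_feats).mean(dim=1)  # aggregate over k neighbors
```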
State-of-the-art deep learning algorithms mostly rely on gradient
backpropagation to train a deep artificial neural network, which is generally
regarded to be biologically implausible. For a network of stochastic units
trained on a reinforcement learning task or a supervised learning task, one
biologically plausible way of learning is to train each unit by REINFORCE. In
this case, only a global reward signal has to be broadcast to all units, and
the learning rule given is local, which can be interpreted as reward-modulated
spike-timing-dependent plasticity (R-STDP) that is observed biologically.
Although this learning rule follows the gradient of return in expectation, it
suffers from high variance and cannot be used to train a deep network in
practice. In this paper, we propose an algorithm called MAP propagation that
can reduce this variance significantly while retaining the local property of
the learning rule. Unlike prior works on local learning rules (e.g.,
Contrastive Divergence), which mostly apply to undirected models in
unsupervised learning tasks, our proposed algorithm applies to directed models
in reinforcement learning tasks. We show that the newly proposed algorithm can
solve common reinforcement learning tasks at a speed similar to that of
backpropagation when applied to an actor-critic network. | [
"cs.LG",
"cs.AI",
"I.2.8"
] |
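The high-variance baseline this abstract refers to can be shown in a few lines: every stochastic unit updates from the same broadcast reward using only locally available quantities. This toy (Bernoulli units, a made-up reward) illustrates the REINFORCE setup, not the proposed MAP propagation algorithm.

```python
# Per-unit REINFORCE with a single global reward signal.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 2))   # one layer of Bernoulli units

def step(x, lr=0.1):
    p = 1.0 / (1.0 + np.exp(-x @ W))             # firing probabilities
    s = (rng.random(p.shape) < p).astype(float)  # stochastic spikes
    # Toy reward: +1 if the first unit fires and the second does not.
    r = 1.0 if (s[0] == 1 and s[1] == 0) else 0.0
    # REINFORCE: every unit updates from the same broadcast reward r,
    # using only local terms (its input x, spike s, probability p).
    W[:] += lr * r * np.outer(x, s - p)
    return r

rewards = [step(rng.normal(size=4)) for _ in range(2000)]
print("mean reward, last 200 steps:", np.mean(rewards[-200:]))
```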
Compared to many other dense prediction tasks, e.g., semantic segmentation,
it is the arbitrary number of instances that has made instance segmentation
much more challenging. In order to predict a mask for each instance, mainstream
approaches either follow the 'detect-then-segment' strategy (e.g., Mask R-CNN),
or predict embedding vectors first then cluster pixels into individual
instances. In this paper, we view the task of instance segmentation from a
completely new perspective by introducing the notion of "instance categories",
which assigns categories to each pixel within an instance according to the
instance's location. With this notion, we propose segmenting objects by
locations (SOLO), a simple, direct, and fast framework for instance
segmentation with strong performance. We derive a few SOLO variants (e.g.,
Vanilla SOLO, Decoupled SOLO, Dynamic SOLO) following the basic principle. Our
method directly maps a raw input image to the desired object categories and
instance masks, eliminating the need for grouping post-processing or
bounding box detection. Our approach achieves state-of-the-art results for
instance segmentation in terms of both speed and accuracy, while being
considerably simpler than the existing methods. Besides instance segmentation,
our method yields state-of-the-art results in object detection (from our mask
byproduct) and panoptic segmentation. We further demonstrate the flexibility
and high-quality segmentation of SOLO by extending it to perform one-stage
instance-level image matting. Code is available at: https://git.io/AdelaiDet | [
"cs.CV"
] |
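The location-based assignment behind "instance categories" can be sketched as follows: an instance's center determines the grid cell responsible for its category and mask. The grid size and tensor layout are illustrative, not SOLO's exact head design.

```python
# Assigning an instance to a grid cell by the location of its center.
import torch

S, num_classes, H, W = 5, 3, 40, 40
cate_target = torch.zeros(S, S, dtype=torch.long)   # 0 = background
mask_target = torch.zeros(S * S, H, W)

def assign_instance(mask, class_id):
    """mask: (H, W) binary instance mask; class_id: 1..num_classes."""
    ys, xs = torch.nonzero(mask, as_tuple=True)
    cy, cx = ys.float().mean(), xs.float().mean()   # instance center
    i, j = int(cy / H * S), int(cx / W * S)         # responsible grid cell
    cate_target[i, j] = class_id                    # cell predicts class
    mask_target[i * S + j] = mask                   # and the full mask

m = torch.zeros(H, W)
m[10:20, 25:35] = 1
assign_instance(m, class_id=2)
print(cate_target.nonzero(), mask_target.sum(dim=(1, 2)).nonzero().flatten())
```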
Deep learning's success has led to larger and larger models to handle more
and more complex tasks; trained models can contain millions of parameters.
These large models are compute- and memory-intensive, which makes it a
challenge to deploy them with low latency, high throughput, and modest storage
requirements. Some model compression methods have been successfully applied to
image classification and detection or language models, but there has been very
little work compressing generative adversarial networks (GANs) performing
complex tasks. In this paper, we show that a standard model compression
technique, weight pruning, cannot be applied to GANs using existing methods. We
then develop a self-supervised compression technique which uses the trained
discriminator to supervise the training of a compressed generator. We show that
this framework maintains compelling performance at high degrees of sparsity, can be
easily applied to new tasks and models, and enables meaningful comparisons
between different pruning granularities. | [
"cs.LG",
"cs.CV",
"eess.IV"
] |
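The training signal described above can be schematized as follows: a frozen, pre-trained discriminator scores the pruned generator's outputs. The tiny MLPs and the simple magnitude pruning are placeholders, not the paper's models or pruning scheme.

```python
# Self-supervised compression: the frozen discriminator supervises a
# magnitude-pruned generator.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))  # generator
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))   # trained D

for p in D.parameters():
    p.requires_grad_(False)   # discriminator is frozen; it only supervises

# Magnitude pruning: zero the smallest 80% of each weight tensor.
masks = []
for p in G.parameters():
    if p.dim() > 1:
        thresh = p.abs().flatten().kthvalue(int(0.8 * p.numel())).values
        masks.append((p.abs() > thresh).float())
    else:
        masks.append(torch.ones_like(p))

opt = torch.optim.Adam(G.parameters(), lr=1e-3)
for _ in range(100):
    z = torch.randn(8, 16)
    fake = G(z)
    loss = nn.functional.softplus(-D(fake)).mean()  # non-saturating GAN loss
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():       # re-apply masks so pruned weights stay zero
        for p, m in zip(G.parameters(), masks):
            p.mul_(m)
print(loss.item())
```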
Detecting carried objects is one of the requirements for developing systems
to reason about activities involving people and objects. We present a novel
approach that detects carried objects from a single video frame by
incorporating features from multiple scales. Initially, a foreground mask in a
video frame is segmented into multi-scale superpixels. Then the human-like
regions in the segmented area are identified by matching a set of extracted
features from superpixels against learned features in a codebook. A carried
object probability map is generated using the complement of the matching
probabilities of superpixels to human-like regions and background information.
A group of superpixels with high carried object probability and strong edge
support is then merged to obtain the shape of the carried object. We applied
our method to two challenging datasets, and results show that our method is
competitive with or better than the state-of-the-art. | [
"cs.CV"
] |
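The probability-map step can be illustrated with placeholder numbers: the carried-object probability of a superpixel is taken as the complement of its human-like matching score, gated by foreground evidence. The random scores below stand in for the codebook matching.

```python
# Carried-object probability map from human-like matching scores.
import numpy as np

rng = np.random.default_rng(1)
n_superpixels = 50
human_match = rng.random(n_superpixels)  # stand-in: P(superpixel is human-like)
foreground = (rng.random(n_superpixels) > 0.3).astype(float)

# Complement of the human-like match, gated by the foreground mask:
# background superpixels cannot be carried objects.
carried_prob = (1.0 - human_match) * foreground
candidates = np.flatnonzero(carried_prob > 0.7)
print("high-probability carried-object superpixels:", candidates)
```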
We propose a novel multi-task learning architecture, which allows learning of
task-specific feature-level attention. Our design, the Multi-Task Attention
Network (MTAN), consists of a single shared network containing a global feature
pool, together with a soft-attention module for each task. These modules learn
task-specific features from the global features, whilst simultaneously
allowing features to be shared across different tasks. The
architecture can be trained end-to-end and can be built upon any feed-forward
neural network, is simple to implement, and is parameter efficient. We evaluate
our approach on a variety of datasets, across both image-to-image predictions
and image classification tasks. We show that our architecture is
state-of-the-art in multi-task learning compared to existing methods, and is
also less sensitive to various weighting schemes in the multi-task loss
function. Code is available at https://github.com/lorenmt/mtan. | [
"cs.CV"
] |
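A per-task soft-attention module over a shared feature pool, in the spirit of MTAN, can be sketched as below; the channel counts are arbitrary and the real network attaches such modules at multiple stages of the backbone.

```python
# One soft-attention module per task, applied to shared features.
import torch
import torch.nn as nn

class TaskAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Produces a per-pixel, per-channel mask in [0, 1] for this task.
        self.att = nn.Sequential(
            nn.Conv2d(channels, channels, 1), nn.BatchNorm2d(channels),
            nn.Sigmoid())

    def forward(self, shared_feat):
        return self.att(shared_feat) * shared_feat  # task-specific features

shared = torch.rand(2, 64, 32, 32)       # global feature pool
tasks = nn.ModuleList(TaskAttention(64) for _ in range(3))
outs = [t(shared) for t in tasks]        # one feature map per task
print([o.shape for o in outs])
```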
A common dilemma in 3D object detection for autonomous driving is that
high-quality, dense point clouds are only available during training, but not
testing. We use knowledge distillation to bridge the gap between a model
trained on high-quality inputs at training time and another tested on
low-quality inputs at inference time. In particular, we design a two-stage
training pipeline for point cloud object detection. First, we train an object
detection model on dense point clouds, which are generated from multiple frames
using extra information only available at training time. Then, we train the
model's identical counterpart on sparse single-frame point clouds with
consistency regularization on features from both models. We show that this
procedure improves performance on low-quality data during testing, without
additional overhead. | [
"cs.CV",
"cs.LG"
] |
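A schematic of the two-stage training described above, assuming a per-point MLP as a stand-in for the detector backbone: the model trained with dense inputs provides feature targets for its identical, sparse-input counterpart. The first-stage detection losses are omitted.

```python
# Consistency regularization between dense-input and sparse-input models.
import torch
import torch.nn as nn

backbone = lambda: nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64))
teacher, student = backbone(), backbone()
student.load_state_dict(teacher.state_dict())   # identical counterpart
for p in teacher.parameters():
    p.requires_grad_(False)                      # stage-1 model is fixed

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
dense = torch.rand(256, 3)               # multi-frame, dense points
sparse = dense[::4]                      # single-frame, sparse subset

for _ in range(50):
    with torch.no_grad():
        t_feat = teacher(dense[::4])     # teacher features at shared points
    s_feat = student(sparse)
    loss = nn.functional.mse_loss(s_feat, t_feat)   # consistency term
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```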
In this work, we study semi-supervised multi-label node classification
problem in attributed graphs. Classic solutions to multi-label node
classification follow two steps: first learn node embeddings, and then build a
node classifier on the learned embeddings. To improve the discriminating power
of the node embedding, we propose a novel collaborative graph walk, named
Multi-Label-Graph-Walk, to finely tune node representations with the available
label assignments in attributed graphs via reinforcement learning. The proposed
method formulates the multi-label node classification task as simultaneous
graph walks conducted by multiple label-specific agents. Furthermore, policies
of the label-wise graph walks are learned in a cooperative way to capture,
first, the predictive relation between node labels and the structural
attributes of graphs and, second, the correlation among the multiple
label-specific
classification tasks. A comprehensive experimental study demonstrates that the
proposed method can achieve significantly better multi-label classification
performance than the state-of-the-art approaches and conduct more efficient
graph exploration. | [
"cs.LG",
"stat.ML"
] |
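The simultaneous label-specific walks can be pictured with a toy graph: each label owns policy parameters that score a node's neighbors. How the policies are rewarded and learned cooperatively is the paper's contribution and is not reproduced in this sketch.

```python
# Label-specific agents walking a tiny attributed graph.
import numpy as np

rng = np.random.default_rng(0)
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}   # toy graph
feat = rng.normal(size=(4, 5))                        # node attributes
n_labels = 2
theta = rng.normal(scale=0.1, size=(n_labels, 5))     # one policy per label
# (theta would be tuned by reinforcement learning; not shown here.)

def walk(label, start, steps=3):
    node, visited = start, [start]
    for _ in range(steps):
        nbrs = adj[node]
        logits = feat[nbrs] @ theta[label]            # score each neighbor
        p = np.exp(logits - logits.max()); p /= p.sum()
        node = rng.choice(nbrs, p=p)                  # sample next hop
        visited.append(node)
    return visited

print([walk(label, start=0) for label in range(n_labels)])
```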