text (string, lengths 29 to 3.31k) | label (sequence, lengths 1 to 11)
---|---|
Gait data captured by inertial sensors have demonstrated promising results for
user authentication. However, most existing approaches store the enrolled gait
pattern insecurely for matching with the validating pattern, thus posing
critical security and privacy issues. In this study, we present a gait
cryptosystem that generates a random key from gait data for user
authentication while also securing the gait pattern. First, we propose a
revocable and random binary string extraction method using a deep neural
network followed by feature-wise binarization. A novel loss function for
network optimization is also designed, to tackle not only intra-user
stability but also inter-user randomness. Second, we propose a new
biometric key generation scheme, namely Irreversible Error Correct and
Obfuscate (IECO), improved from the Error Correct and Obfuscate (ECO) scheme,
to securely generate a random and irreversible key from the binary string.
The model was evaluated on two benchmark datasets, OU-ISIR and whuGAIT. We
showed that our model could generate a 139-bit key from a 5-second data
sequence with a zero False Acceptance Rate (FAR) and a False Rejection Rate
(FRR) smaller than 5.441%. In addition, the security and user privacy analyses
showed that our model is secure against existing attacks on biometric template
protection, and fulfills irreversibility and unlinkability. | [
"cs.CV"
] |
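As a rough illustration of the feature-wise binarization step described above, the sketch below thresholds each embedding dimension at a per-dimension population median; the threshold choice, variable names, and the random data are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def binarize_features(embedding: np.ndarray, thresholds: np.ndarray) -> np.ndarray:
    """Feature-wise binarization: one bit per embedding dimension.

    Thresholding each dimension at a per-dimension population median keeps
    each bit roughly balanced across users (inter-user randomness), while a
    well-trained embedding keeps the bits stable for a given user
    (intra-user stability).
    """
    return (embedding > thresholds).astype(np.uint8)

# Hypothetical usage with a 139-dimensional embedding (the key length
# reported in the abstract; the data here is random for illustration).
rng = np.random.default_rng(0)
population = rng.normal(size=(1000, 139))   # embeddings from many users
thresholds = np.median(population, axis=0)  # per-dimension thresholds
key_bits = binarize_features(population[0], thresholds)
```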
In recent years, multi-view subspace clustering has achieved impressive
performance due to the exploitation of complementary information across
multiple views. However, multi-view data can be very complicated and are not
easy to cluster in real-world applications. Most existing methods operate on
raw data and may not obtain the optimal solution. In this work, we propose a
novel multi-view clustering method named smoothed multi-view subspace
clustering (SMVSC) by employing a novel technique, i.e., graph filtering, to
obtain a smooth representation for each view, in which similar data points have
similar feature values. Specifically, it retains the graph geometric features
by applying a low-pass filter. Consequently, it produces a
``clustering-friendly'' representation and greatly facilitates the downstream
clustering task. Extensive experiments on benchmark datasets validate the
superiority of our approach. Analysis shows that graph filtering increases the
separability of classes. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
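The graph-filtering step above admits a compact sketch: apply the low-pass filter $(I - L_{sym}/2)^k$ to the raw features, where $L_{sym}$ is the normalized graph Laplacian. This is a minimal sketch; the construction of the similarity graph `A` and the choice of `k` follow the paper and are assumptions here.

```python
import numpy as np

def smooth_representation(X: np.ndarray, A: np.ndarray, k: int = 2) -> np.ndarray:
    """Low-pass graph filtering: X_bar = (I - L_sym / 2)^k X.

    A is a symmetric adjacency (similarity) matrix over the n data points and
    X is the n x d raw feature matrix. Repeated filtering pulls the features
    of strongly connected points together, yielding a clustering-friendly
    representation in which similar points have similar feature values.
    """
    n = len(A)
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L_sym = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt  # normalized Laplacian
    F = np.eye(n) - 0.5 * L_sym                      # low-pass filter
    for _ in range(k):
        X = F @ X
    return X
```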
A person is usually characterized by descriptors like age, gender, height,
cloth type, pattern, color, etc. Such descriptors are known as attributes
and/or soft-biometrics. They bridge the semantic gap between a person's
description and retrieval in video surveillance. Retrieving a specific person
with the query of semantic description has an important application in video
surveillance. Using computer vision to fully automate the person retrieval task
has been gathering interest within the research community. However, the
current trend mainly focuses on retrieving persons with image-based queries,
which have major limitations for practical usage. Instead of using an image
query, in this paper, we study the problem of person retrieval in video
surveillance with a semantic description. To solve this problem, we develop a
deep learning-based cascade filtering approach (PeR-ViS), which uses Mask R-CNN
[14] (person detection and instance segmentation) and DenseNet-161 [16]
(soft-biometric classification). On the standard person retrieval dataset of
SoftBioSearch [6], we achieve 0.566 Average IoU and 0.792 %w $IoU > 0.4$,
surpassing the current state-of-the-art by a large margin. We hope our simple,
reproducible, and effective approach will help ease future research in the
domain of person retrieval in video surveillance. The source code and
pretrained weights are available at https://parshwa1999.github.io/PeR-ViS/. | [
"cs.CV"
] |
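For reference, the two retrieval metrics quoted above (Average IoU and the fraction of retrievals with IoU above 0.4) can be computed as in this short sketch; the box format and function names are assumptions for illustration.

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def retrieval_metrics(pred_boxes, gt_boxes, threshold=0.4):
    """Average IoU and the fraction of retrievals with IoU above `threshold`."""
    ious = [iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    avg_iou = sum(ious) / len(ious)
    pct_above = sum(v > threshold for v in ious) / len(ious)
    return avg_iou, pct_above
```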
We propose a method at the intersection of Computer Vision and Computer
Graphics fields, which automatically generates RGBD images using neural
networks, based on previously seen and synchronized video, depth and pose
signals. Since the models must be able to reconstruct both texture (RGB) and
structure (Depth), it creates an implicit representation of the scene, as
opposed to explicit ones, such as meshes or point clouds. The process can be
thought of as neural rendering, where we obtain a function f : Pose -> RGBD,
which we can use to navigate through the generated scene, similarly to graphics
simulations. We introduce two new datasets: one based on synthetic data with
full ground-truth information, and the other recorded from a drone flight
over a university campus using only video and GPS signals. Finally, we
propose a fully unsupervised method of generating datasets from videos alone,
in order to train the Pose2RGBD networks. Code and datasets are available at:
https://gitlab.com/mihaicristianpirvu/pose2rgbd. | [
"cs.CV",
"eess.IV"
] |
In this paper, we propose an efficient and discriminative model for salient
object detection. Our method is carried out in a stepwise mechanism based on
both divergence background and compact foreground cues. In order to effectively
enhance the distinction between nodes along object boundaries and the
similarity among object regions, a graph is constructed by introducing the
concept of a virtual node. To remove incorrect outputs, a scheme for selecting
background seeds and a method for generating compact foreground regions are
introduced. Different from prior methods, we calculate the
saliency value of each node based on the relationship between the corresponding
node and the virtual node. In order to achieve significant performance
improvement consistently, we propose an Extended Manifold Ranking (EMR)
algorithm, which subtly combines suppressed / active nodes and mid-level
information. Extensive experimental results demonstrate that the proposed
algorithm performs favorably against state-of-the-art saliency detection
methods in terms of different evaluation metrics on several benchmark datasets. | [
"cs.CV"
] |
This paper studies the task of estimating the 3D human poses of multiple
persons from multiple calibrated camera views. Following the top-down paradigm,
we decompose the task into two stages, i.e. person localization and pose
estimation. Both stages are processed in a coarse-to-fine manner, and we propose
three task-specific graph neural networks for effective message passing. For 3D
person localization, we first use Multi-view Matching Graph Module (MMG) to
learn the cross-view association and recover coarse human proposals. The Center
Refinement Graph Module (CRG) further refines the results via flexible
point-based prediction. For 3D pose estimation, the Pose Regression Graph
Module (PRG) learns both the multi-view geometry and structural relations
between human joints. Our approach achieves state-of-the-art performance on CMU
Panoptic and Shelf datasets with significantly lower computation complexity. | [
"cs.CV"
] |
We introduce a new high-resolution, high-frame-rate stereo video dataset,
which we call SPIN, for tracking and action recognition in the game of ping
pong. The corpus consists of ping pong play with three main annotation streams
that can be used to learn tracking and action recognition models: tracking of
the ping pong ball, the poses of humans in the videos, and the spin of the
ball being hit by humans. The training corpus consists of 53 hours of data with
labels derived from previous models in a semi-supervised method. The testing
corpus contains 1 hour of data with the same information, except that crowd
compute was used to obtain human annotations of the ball position, from which
ball spin has been derived. Along with the dataset we introduce several
baseline models that were trained on this data. The models were specifically
chosen to be able to perform inference at the same rate as the images are
generated -- specifically 150 fps. We explore the advantages of multi-task
training on this data, and also show interesting properties of ping pong ball
trajectories that are derived from our observational data, rather than from
prior physics models. To our knowledge, this is the first large-scale dataset of
ping pong; we offer it to the community as a rich dataset that can be used for
a large variety of machine learning and vision tasks such as tracking, pose
estimation, semi-supervised and unsupervised learning and generative modeling. | [
"cs.CV",
"cs.LG"
] |
This paper addresses semi-supervised semantic segmentation by exploiting a
small set of images with pixel-level annotations (strong supervisions) and a
large set of images with only image-level annotations (weak supervisions). Most
existing approaches aim to generate accurate pixel-level labels from weak
supervisions. However, we observe that those generated labels still inevitably
contain noisy labels. Motivated by this observation, we present a novel
perspective and formulate this task as a problem of learning with pixel-level
label noise. Existing noisy label methods, nevertheless, mainly aim at
image-level tasks, which cannot capture the relationship between neighboring
labels in one image. Therefore, we propose a graph-based label noise detection
and correction framework to deal with pixel-level noisy labels. In particular,
for the generated pixel-level noisy labels from weak supervisions by Class
Activation Map (CAM), we train a clean segmentation model with strong
supervisions to detect the clean labels from these noisy labels according to
the cross-entropy loss. Then, we adopt a superpixel-based graph to represent
the relations of spatial adjacency and semantic similarity between pixels in
one image. Finally, we correct the noisy labels using a Graph Attention Network
(GAT) supervised by detected clean labels. We comprehensively conduct
experiments on PASCAL VOC 2012, PASCAL-Context and MS-COCO datasets. The
experimental results show that our proposed semi-supervised method achieves
state-of-the-art performance and even outperforms the fully-supervised models
on PASCAL VOC 2012 and MS-COCO datasets in some cases. | [
"cs.CV"
] |
Human motion retargeting aims to transfer the motion of one person in a
"driving" video or set of images to another person. Existing efforts leverage a
long training video from each target person to train a subject-specific motion
transfer model. However, the scalability of such methods is limited, as each
model can only generate videos for the given target subject, and such training
videos are labor-intensive to acquire and process. Few-shot motion transfer
techniques, which only require one or a few images from a target, have recently
drawn considerable attention. Methods addressing this task generally use either
2D or explicit 3D representations to transfer motion, and in doing so,
sacrifice either accurate geometric modeling or the flexibility of an
end-to-end learned representation. Inspired by the Transformable Bottleneck
Network, which renders novel views and manipulations of rigid objects, we
propose an approach based on an implicit volumetric representation of the image
content, which can then be spatially manipulated using volumetric flow fields.
We address the challenging question of how to aggregate information across
different body poses, learning flow fields that allow for combining content
from the appropriate regions of input images of highly non-rigid human subjects
performing complex motions into a single implicit volumetric representation.
This allows us to learn our 3D representation solely from videos of moving
people. Armed with both 3D object understanding and end-to-end learned
rendering, this categorically novel representation delivers state-of-the-art
image generation quality, as shown by our quantitative and qualitative
evaluations. | [
"cs.CV"
] |
Experience replay is widely used in deep reinforcement learning algorithms
and allows agents to remember and learn from experiences from the past. In an
effort to learn more efficiently, researchers proposed prioritized experience
replay (PER) which samples important transitions more frequently. In this
paper, we propose Prioritized Sequence Experience Replay (PSER), a framework for
prioritizing sequences of experience in an attempt to both learn more
efficiently and to obtain better performance. We compare the performance of PER
and PSER sampling techniques in a tabular Q-learning environment and in DQN on
the Atari 2600 benchmark. We prove theoretically that PSER is guaranteed to
converge faster than PER and empirically show PSER substantially improves upon
PER. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
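A minimal sketch of sequence prioritization in the spirit of PSER: the priorities of transitions preceding a surprising event are boosted by decayed copies of later TD errors, and sampling then proceeds proportionally as in PER. The decay constant, window size, and exponent are illustrative values, not the paper's.

```python
import numpy as np

def propagate_priorities(td_errors, decay=0.65, window=5):
    """Each transition's priority is the max of its own |TD error| and decayed
    copies of the errors of transitions that follow it, so the transitions
    leading up to a surprising event are also replayed more often.
    """
    p = np.abs(td_errors).astype(float)
    for i in range(len(p)):
        for j in range(1, window + 1):
            if i + j < len(p):
                p[i] = max(p[i], (decay ** j) * abs(td_errors[i + j]))
    return p

def sample_indices(priorities, batch_size, alpha=0.6, seed=0):
    """Proportional prioritized sampling, as in PER."""
    rng = np.random.default_rng(seed)
    probs = priorities ** alpha
    probs /= probs.sum()
    return rng.choice(len(priorities), size=batch_size, p=probs)
```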
We investigate the problem of learning a probabilistic distribution over
three-dimensional shapes given two-dimensional views of multiple objects taken
from unknown viewpoints. Our approach, called projective generative adversarial
network (PrGAN), trains a deep generative model of 3D shapes whose projections
(or renderings) match the distribution of the provided 2D views. The
addition of a differentiable projection module allows us to infer the
underlying 3D shape distribution without access to any explicit 3D or viewpoint
annotation during the learning phase. We show that our approach produces 3D
shapes of comparable quality to GANs trained directly on 3D data. Experiments also show
that the disentangled representation of 2D shapes into geometry and viewpoint
leads to a good generative model of 2D shapes. The key advantage of our model
is that it estimates the 3D shape and viewpoint of an input image, and
generates novel views from it, in a completely unsupervised manner. We further investigate how the
generative models can be improved if additional information such as depth,
viewpoint or part segmentations is available at training time. To this end, we
present new differentiable projection operators that can be used by PrGAN to
learn better 3D generative models. Our experiments show that our method can
successfully leverage extra visual cues to create more diverse and accurate
shapes. | [
"cs.CV",
"cs.GR",
"cs.LG"
] |
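A minimal sketch of the kind of differentiable projection module described above, mapping a voxel occupancy grid to a 2D silhouette. Rotation into the camera frame is omitted, and the exact operator used by the paper is assumed rather than reproduced.

```python
import numpy as np

def project_voxels(volume: np.ndarray, axis: int = 2) -> np.ndarray:
    """Differentiable projection of a voxel occupancy grid to a 2D silhouette.

    Summing occupancies along the viewing axis and squashing with
    1 - exp(-sum) gives a smooth, differentiable approximation of whether any
    voxel along the ray is occupied, so gradients can flow from the 2D
    discriminator back to the 3D generator.
    """
    line_integral = volume.sum(axis=axis)
    return 1.0 - np.exp(-line_integral)
```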
Autonomous driving has achieved significant progress in recent years, but
autonomous cars are still unable to tackle high-risk situations where a
potential accident is likely. In such near-accident scenarios, even a minor
change in the vehicle's actions may result in drastically different
consequences. To avoid unsafe actions in near-accident scenarios, we need to
fully explore the environment. However, reinforcement learning (RL) and
imitation learning (IL), two widely-used policy learning methods, cannot model
rapid phase transitions and are not scalable to fully cover all the states. To
address driving in near-accident scenarios, we propose a hierarchical
reinforcement and imitation learning (H-ReIL) approach that consists of
low-level policies learned by IL for discrete driving modes, and a high-level
policy learned by RL that switches between different driving modes. Our
approach exploits the advantages of both IL and RL by integrating them into a
unified learning framework. Experimental results and user studies suggest our
approach can achieve higher efficiency and safety compared to other methods.
Analyses of the policies demonstrate our high-level policy appropriately
switches between different low-level policies in near-accident driving
situations. | [
"cs.LG",
"cs.AI",
"cs.RO",
"cs.SY",
"eess.SY",
"stat.ML"
] |
In this work, we perform unsupervised learning of representations by
maximizing mutual information between an input and the output of a deep neural
network encoder. Importantly, we show that structure matters: incorporating
knowledge about locality of the input to the objective can greatly influence a
representation's suitability for downstream tasks. We further control
characteristics of the representation by matching to a prior distribution
adversarially. Our method, which we call Deep InfoMax (DIM), outperforms a
number of popular unsupervised learning methods and competes with
fully-supervised learning on several classification tasks. DIM opens new
avenues for unsupervised learning of representations and is an important step
towards flexible formulations of representation-learning objectives for
specific end-goals. | [
"stat.ML",
"cs.LG"
] |
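As one concrete instance of maximizing local-global mutual information, the sketch below uses an InfoNCE-style contrastive bound between a global feature and the feature-map locations of the same image; this estimator choice and the score pooling are assumptions for illustration (DIM itself also supports JSD- and Donsker-Varadhan-style estimators).

```python
import torch
import torch.nn.functional as F

def infonce_local_global(global_feat: torch.Tensor,
                         local_feats: torch.Tensor) -> torch.Tensor:
    """Contrastive local-global mutual-information objective (a sketch).

    global_feat: (B, D), one vector per image.
    local_feats: (B, L, D), feature-map locations of the same images.
    Positive pairs share a batch index; other images provide negatives.
    """
    B, L, D = local_feats.shape
    scores = torch.einsum("bd,cld->bcl", global_feat, local_feats)  # (B, B, L)
    scores = scores.mean(dim=2)  # pool over locations (a simplification)
    labels = torch.arange(B)
    return F.cross_entropy(scores, labels)
```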
It is a common paradigm in object detection frameworks to treat all samples
equally and target at maximizing the performance on average. In this work, we
revisit this paradigm through a careful study on how different samples
contribute to the overall performance measured in terms of mAP. Our study
suggests that the samples in each mini-batch are neither independent nor
equally important, and therefore a better classifier on average does not
necessarily mean higher mAP. Motivated by this study, we propose the notion of
Prime Samples, those that play a key role in driving the detection performance.
We further develop a simple yet effective sampling and learning strategy called
PrIme Sample Attention (PISA) that directs the focus of the training process
towards such samples. Our experiments demonstrate that it is often more
effective to focus on prime samples than hard samples when training a detector.
In particular, on the MSCOCO dataset, PISA outperforms the random sampling
baseline and hard mining schemes, e.g., OHEM and Focal Loss, consistently by
around 2% on both single-stage and two-stage detectors, even with a strong
backbone ResNeXt-101. | [
"cs.CV"
] |
Time-dependent data is a main source of information in today's data-driven
world. Generating this type of data, however, has proven challenging and made
it an interesting research area in the field of generative machine learning.
One such approach was that of Smith et al., who developed the Time Series
Generative Adversarial Network (TSGAN), which showed promising performance in
generating time-dependent data and the ability of few-shot generation, though
it is flawed in certain aspects of training and learning. This paper looks to
improve on the results from TSGAN and address those flaws by unifying the
training of the independent networks in TSGAN and creating a dependency both
in training and learning. This improvement, called unified TSGAN (uTSGAN), was
tested and compared both quantitatively and qualitatively to its predecessor
on 70 benchmark time series data sets used in the community. uTSGAN was shown
to outperform TSGAN in 80\% of the data sets with the same number of training
epochs and in 60\% of the data sets in three-quarters of the training time or
less, while maintaining the few-shot generation ability with better FID scores
across those data sets. | [
"cs.LG",
"stat.ML"
] |
Health management is getting increasing attention all over the world.
However, existing health management mainly relies on hospital examination and
treatment, which are complicated and untimely. The emergence of mobile devices
provides the possibility to manage people's health status in a convenient and
instant way. Estimation of health status can be achieved with various kinds of
data streams continuously collected from wearable sensors. However, these data
streams are multi-source and heterogeneous, containing complex temporal
structures with local contextual and global temporal aspects, which makes the
feature learning and data joint utilization challenging. We propose to model
the behavior-related multi-source data streams with a local-global graph, which
contains multiple local context sub-graphs to learn short term local context
information with heterogeneous graph neural networks and a global temporal
sub-graph to learn long term dependency with self-attention networks. Then
health status is predicted based on the structure-aware representation learned
from the local-global behavior graph. We conduct experiments on the StudentLife
dataset, and extensive results demonstrate the effectiveness of our proposed
model. | [
"cs.LG",
"cs.MM"
] |
Imitation learning in a high-dimensional environment is challenging. Most
inverse reinforcement learning (IRL) methods fail to outperform the
demonstrator in such a high-dimensional environment, e.g., Atari domain. To
address this challenge, we propose a novel reward learning module to generate
intrinsic reward signals via a generative model. Our generative method can
perform better forward state transition and backward action encoding, which
improves the module's dynamics modeling ability in the environment. Thus, our
module provides the imitation agent with both the intrinsic intention of the
demonstrator and a better exploration ability, which is critical for the agent
to outperform the demonstrator. Empirical results show that our method
outperforms state-of-the-art IRL methods on multiple Atari games, even with
one-life demonstration. Remarkably, our method achieves performance that is up
to 5 times the performance of the demonstration. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Learning global features by aggregating information over multiple views has
been shown to be effective for 3D shape analysis. For view aggregation in deep
learning models, pooling has been applied extensively. However, pooling leads
to a loss of the content within views, and the spatial relationship among
views, which limits the discriminability of learned features. We propose
3DViewGraph to resolve this issue, which learns 3D global features by more
effectively aggregating unordered views with attention. Specifically, unordered
views taken around a shape are regarded as view nodes on a view graph.
3DViewGraph first learns a novel latent semantic mapping to project low-level
view features into meaningful latent semantic embeddings in a lower dimensional
space, which is spanned by latent semantic patterns. Then, the content and
spatial information of each pair of view nodes are encoded by a novel spatial
pattern correlation, where the correlation is computed among latent semantic
patterns. Finally, all spatial pattern correlations are integrated with
attention weights learned by a novel attention mechanism. This further
increases the discriminability of learned features by highlighting the
unordered view nodes with distinctive characteristics and suppressing the ones
with appearance ambiguity. We show that 3DViewGraph outperforms
state-of-the-art methods on three large-scale benchmarks. | [
"cs.CV"
] |
Generalization is a central challenge for the deployment of reinforcement
learning (RL) systems in the real world. In this paper, we show that the
sequential structure of the RL problem necessitates new approaches to
generalization beyond the well-studied techniques used in supervised learning.
While supervised learning methods can generalize effectively without explicitly
accounting for epistemic uncertainty, we show that, perhaps surprisingly, this
is not the case in RL. We show that generalization to unseen test conditions
from a limited number of training conditions induces implicit partial
observability, effectively turning even fully-observed MDPs into POMDPs.
Informed by this observation, we recast the problem of generalization in RL as
solving the induced partially observed Markov decision process, which we call
the epistemic POMDP. We demonstrate the failure modes of algorithms that do not
appropriately handle this partial observability, and suggest a simple
ensemble-based technique for approximately solving the partially observed
problem. Empirically, we demonstrate that our simple algorithm derived from the
epistemic POMDP achieves significant gains in generalization over current
methods on the Procgen benchmark suite. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Transliteration involves transformation of one script to another based on
phonetic similarities between the characters of two distinctive scripts. In
this paper, we present a novel technique for automatic transliteration of
Devanagari script using character recognition. One of the first tasks performed
to isolate the constituent characters is segmentation. Line segmentation
methodology in this manuscript discusses the case of overlapping lines.
The character segmentation algorithm is designed to segment conjuncts and
separate shadow characters. The presented shadow character segmentation scheme
employs the connected component method to isolate the character, keeping the
constituent characters intact. Statistical features, namely different-order
moments such as area, variance, skewness and kurtosis, along with structural
features of characters, are employed in a two-phase recognition process. After
recognition, the constituent Devanagari characters are mapped to corresponding
roman alphabets in such a way that the resulting roman alphabets have a
similar pronunciation to the source characters. | [
"cs.CV"
] |
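A sketch of the statistical moment features named above (area, variance, skewness, kurtosis), computed here from the foreground pixel coordinates of a binary character image. This particular realization is an assumption for illustration; the abstract does not fix the exact computation.

```python
import numpy as np

def moment_features(char_img: np.ndarray) -> dict:
    """Moment features of a binary character image: the area plus the
    variance, skewness and kurtosis of the foreground pixel coordinates
    along each image axis."""
    ys, xs = np.nonzero(char_img)  # foreground pixel coordinates
    feats = {"area": float(len(xs))}
    for name, v in (("x", xs.astype(float)), ("y", ys.astype(float))):
        mu, sigma = v.mean(), v.std()
        z = (v - mu) / (sigma + 1e-12)        # standardized coordinates
        feats[f"var_{name}"] = float(sigma ** 2)
        feats[f"skew_{name}"] = float((z ** 3).mean())
        feats[f"kurt_{name}"] = float((z ** 4).mean())
    return feats
```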
We study the problem of learning to rank from multiple information sources.
Though multi-view learning and learning to rank have been studied extensively
leading to a wide range of applications, multi-view learning to rank as a
synergy of both topics has received little attention. The aim of the paper is
to propose a composite ranking method while keeping a close correlation with
the individual rankings simultaneously. We present a generic framework for
multi-view subspace learning to rank (MvSL2R), and two novel solutions are
introduced under the framework. The first solution captures information of
feature mappings from within each view as well as across views using
autoencoder-like networks. Novel feature embedding methods are formulated in
the optimization of multi-view unsupervised and discriminant autoencoders.
Moreover, we introduce an end-to-end solution to learning towards both the
joint ranking objective and the individual rankings. The proposed solution
enhances the joint ranking with minimum view-specific ranking loss, so that it
can achieve the maximum global view agreements in a single optimization
process. The proposed method is evaluated on three different ranking problems,
i.e. university ranking, multi-view lingual text ranking and image data
ranking, providing superior results compared to related methods. | [
"cs.LG",
"stat.ML"
] |
This paper introduces the 3DCapsule, which is a 3D extension of the recently
introduced Capsule concept that makes it applicable to unordered point sets.
The original Capsule relies on the existence of a spatial relationship between
the elements in the feature map it is presented with, whereas in point
permutation invariant formulations of 3D point set classification methods, such
relationships are typically lost. Here, a new layer called ComposeCaps is
introduced that, in lieu of a spatially relevant feature mapping, learns a new
mapping that can be exploited by the 3DCapsule. Previous works in the 3D point
set classification domain have focused on other parts of the architecture,
whereas instead, the 3DCapsule is a drop-in replacement of the commonly used
fully connected classifier. It is demonstrated via an ablation study, that when
the 3DCapsule is applied to recent 3D point set classification architectures,
it consistently shows an improvement, in particular when subjected to noisy
data. Similarly, the ComposeCaps layer is evaluated and demonstrates an
improvement over the baseline. In an apples-to-apples comparison against
state-of-the-art methods, again, better performance is demonstrated by the
3DCapsule. | [
"cs.CV"
] |
Many real-world sequential decision-making problems involve critical systems
with financial risks and human-life risks. While several works in the past have
proposed methods that are safe for deployment, they assume that the underlying
problem is stationary. However, many real-world problems of interest exhibit
non-stationarity, and when stakes are high, the cost associated with a false
stationarity assumption may be unacceptable. We take the first steps towards
ensuring safety, with high confidence, for smoothly-varying non-stationary
decision problems. Our proposed method extends a type of safe algorithm, called
a Seldonian algorithm, through a synthesis of model-free reinforcement learning
with time-series analysis. Safety is ensured using sequential hypothesis
testing of a policy's forecasted performance, and confidence intervals are
obtained using the wild bootstrap. | [
"cs.LG",
"cs.AI"
] |
Single image super-resolution aims to generate a high-resolution image from a
single low-resolution image, which is of great significance in extensive
applications. Since the problem is ill-posed, numerous methods have been
proposed to reconstruct the missing image details based on exemplars or
priors. In this paper, we propose a fast and simple single image
super-resolution strategy that utilizes a patch-wise sigmoid transformation
as an imposed sharpening regularization term in the reconstruction, achieving
strong reconstruction performance. Extensive experiments compared with other
state-of-the-art approaches demonstrate the superior effectiveness and
efficiency of the proposed algorithm. | [
"cs.CV"
] |
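A minimal sketch of a patch-wise sigmoid transformation used as a sharpening operator, as described above. The gain value and the use of the patch mean as the inflection point are illustrative assumptions; the paper's regularization term would compare the reconstruction against such a sharpened version of itself.

```python
import numpy as np

def sigmoid_sharpen(patch: np.ndarray, gain: float = 10.0) -> np.ndarray:
    """Patch-wise sigmoid transformation as a sharpening operator.

    Intensities are pushed away from the local (patch) mean toward the
    extremes, which sharpens edges within the patch; `gain` controls how
    aggressive the push is.
    """
    m = patch.mean()
    return 1.0 / (1.0 + np.exp(-gain * (patch - m)))
```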
Intelligent object manipulation for grasping is a challenging problem for
robots. Unlike robots, humans almost immediately know how to manipulate
objects for grasping, due to learning over the years. A grown woman can grasp
objects more skilfully than a child because of skills developed over years;
their absence in present-day robotic grasping compels it to perform well below
human object-grasping benchmarks. In this paper, we take up the challenge of
developing learning-based pose estimation by decomposing the problem into both
position and orientation learning. More specifically, for grasp position
estimation we explore three different methods: a Genetic Algorithm (GA) based
optimization method to minimize the error between calculated image points and
the predicted end-effector (EE) position; a regression-based method (RM) where
collected data points of the robot EE and image points are regressed with a
linear model; and a Pseudo-Inverse (PI) model, formulated as a mapping matrix
between the robot EE position and image points over several observations.
Further, for grasp orientation learning, we develop a deep reinforcement
learning (DRL) model, which we name Grasp Deep Q-Network (GDQN), and benchmark
our results against Modified VGG16 (MVGG16). Rigorous experimentation shows
that, owing to its inherent capability of producing very high-quality
solutions for optimization and search problems, the GA-based predictor
performs much better than the other two models for position estimation. For
orientation learning, results indicate that off-policy learning through GDQN
outperforms MVGG16, since the GDQN architecture is specially designed for
reinforcement learning. Based on our proposed architectures and algorithms,
the robot is capable of grasping all rigid-body objects having regular
shapes. | [
"cs.LG",
"cs.RO",
"stat.ML"
] |
As an emerging data modality with precise distance sensing, LiDAR point clouds
carry great expectations for 3D scene understanding. However, point clouds are
always sparsely distributed in 3D space and stored in an unstructured manner,
which makes it difficult to represent them for effective 3D object
detection. To this end, in this work, we regard point clouds as hollow-3D data
and propose a new architecture, namely Hallucinated Hollow-3D R-CNN
($\text{H}^2$3D R-CNN), to address the problem of 3D object detection. In our
approach, we first extract the multi-view features by sequentially projecting
the point clouds into the perspective view and the bird's-eye view. Then, we
hallucinate the 3D representation by a novel bilaterally guided multi-view
fusion block. Finally, the 3D objects are detected via a box refinement module
with a novel Hierarchical Voxel RoI Pooling operation. The proposed
$\text{H}^2$3D R-CNN provides a new angle to take full advantage of
complementary information in the perspective view and the bird's-eye view with an
efficient framework. We evaluate our approach on the public KITTI Dataset and
Waymo Open Dataset. Extensive experiments demonstrate the superiority of our
method over the state-of-the-art algorithms with respect to both effectiveness
and efficiency. The code will be made available at
\url{https://github.com/djiajunustc/H-23D_R-CNN}. | [
"cs.CV"
] |
The goal of salient region detection is to identify the regions of an image
that attract the most attention. Many methods have achieved state-of-the-art
performance levels on this task. Recently, salient instance segmentation has
become an even more challenging task than traditional salient region detection;
however, few of the existing methods have concentrated on this underexplored
problem. Unlike the existing methods, which usually employ object proposals to
roughly count and locate object instances, our method applies salient object
subitizing to predict an accurate number of instances for salient instance
segmentation. In this paper, we propose a multitask densely connected neural
network (MDNN) to segment salient instances in an image. In contrast to
existing approaches, our framework is proposal-free and category-independent.
The MDNN contains two parallel branches: the first is a densely connected
subitizing network (DSN) used for subitizing prediction; the second is a
densely connected fully convolutional network (DFCN) used for salient region
detection. The MDNN simultaneously outputs saliency maps and salient object
subitizing. Then, an adaptive deep feature-based spectral clustering operation
segments the salient regions into instances based on the subitizing and
saliency maps. The experimental results on both salient region detection and
salient instance segmentation datasets demonstrate the satisfactory performance
of our framework. Notably, its AP@0.5 and AP@0.7 reach 73.46% and 60.14% on
the salient instance dataset, substantially higher than the results achieved
by the state-of-the-art algorithm. | [
"cs.CV"
] |
Combinatorial optimization problems are notoriously challenging for neural
networks, especially in the absence of labeled instances. This work proposes an
unsupervised learning framework for CO problems on graphs that can provide
integral solutions of certified quality. Inspired by Erdos' probabilistic
method, we use a neural network to parametrize a probability distribution over
sets. Crucially, we show that when the network is optimized w.r.t. a suitably
chosen loss, the learned distribution contains, with controlled probability, a
low-cost integral solution that obeys the constraints of the combinatorial
problem. The probabilistic proof of existence is then derandomized to decode
the desired solutions. We demonstrate the efficacy of this approach to obtain
valid solutions to the maximum clique problem and to perform local graph
clustering. Our method achieves competitive results on both real datasets and
synthetic hard instances. | [
"cs.LG",
"stat.ML"
] |
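The derandomization step mentioned above is typically the method of conditional expectations: fix the variables one at a time, always choosing the value that does not increase the expected loss. A minimal sketch follows; the `expected_loss` callable is a placeholder for the problem-specific expected loss (e.g. the probabilistic penalty used for max clique).

```python
import numpy as np

def derandomize(probs, expected_loss):
    """Method of conditional expectations: turn a learned probability vector
    over nodes into an integral solution whose loss is no worse than the
    expected loss of the distribution.

    `expected_loss` must evaluate the expected loss of a partially fixed
    probability vector (entries in {0, 1} are treated as decided).
    """
    p = np.asarray(probs, dtype=float).copy()
    for i in range(len(p)):
        p[i] = 1.0
        loss_one = expected_loss(p)
        p[i] = 0.0
        loss_zero = expected_loss(p)
        # Keep whichever assignment does not increase the conditional expectation.
        p[i] = 1.0 if loss_one <= loss_zero else 0.0
    return p.astype(int)
```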
Delineation of line patterns in images is a basic step required in various
applications such as blood vessel detection in medical images, segmentation of
rivers or roads in aerial images, detection of cracks in walls or pavements,
etc. In this paper we present trainable B-COSFIRE filters, which are a model of
some neurons in area V1 of the primary visual cortex, and apply them to the
delineation of line patterns in different kinds of images. B-COSFIRE filters
are trainable as their selectivity is determined in an automatic configuration
process given a prototype pattern of interest. They are configurable to detect
any preferred line structure (e.g. segments, corners, cross-overs, etc.),
making them usable for automatic data representation learning. We carried out experiments
on two data sets, namely a line-network data set from INRIA and a data set of
retinal fundus images named IOSTAR. The results that we achieved confirm the
robustness of the proposed approach and its effectiveness in the delineation of
line structures in different kinds of images. | [
"cs.CV"
] |
The performance of financial market prediction systems depends heavily on the
quality of the features they use. While researchers have used various techniques
for enhancing the stock specific features, less attention has been paid to
extracting features that represent general mechanism of financial markets. In
this paper, we investigate the importance of extracting such general features
in stock market prediction domain and show how it can improve the performance
of financial market prediction. We present a framework called U-CNNpred, that
uses a CNN-based structure. A base model is trained in a specially designed
layer-wise training procedure over a pool of historical data from many
financial markets, in order to extract the common patterns from different
markets. Our experiments, in which we have used hundreds of stocks in S\&P 500
as well as 14 famous indices around the world, show that this model can
outperform baseline algorithms when predicting the directional movement of the
markets for which it has been trained. We also show that the base model can
be fine-tuned for predicting new markets and achieve a better performance
compared to the state of the art baseline algorithms that focus on constructing
market-specific models from scratch. | [
"cs.LG",
"q-fin.CP",
"stat.ML"
] |
Corrosion detection on metal constructions is a major challenge in civil
engineering for quick, safe and effective inspection. Existing image analysis
approaches tend to place bounding boxes around the defective region, which is
adequate neither for structural analysis nor for pre-fabrication, an innovative
construction concept which reduces maintenance cost and time and improves safety.
In this paper, we apply three semantic segmentation-oriented deep learning
models (FCN, U-Net and Mask R-CNN) for corrosion detection, which perform
better in terms of accuracy and time and require a smaller number of annotated
samples compared to other deep models, e.g. CNN. However, the final images
derived are still not sufficiently accurate for structural analysis and
pre-fabrication. Thus, we adopt a novel data projection scheme that fuses the
results of color segmentation, yielding accurate but over-segmented contours of
a region, with a processed area of the deep masks, resulting in high-confidence
corroded pixels. | [
"cs.CV",
"cs.LG",
"eess.IV",
"68T07 (Primary) 68T45 (Secondary)",
"I.2.10; I.4.6"
] |
This paper proposes a joint segmentation and deconvolution Bayesian method
for medical ultrasound (US) images. Contrary to piecewise homogeneous images,
US images exhibit heavy characteristic speckle patterns correlated with the
tissue structures. The generalized Gaussian distribution (GGD) has been shown
to be one of the most relevant distributions for characterizing the speckle in
US images. Thus, we propose a GGD-Potts model defined by a label map coupling
US image segmentation and deconvolution. The Bayesian estimators of the unknown
model parameters, including the US image, the label map and all the
hyperparameters, are difficult to express in closed form. Thus, we
investigate a Gibbs sampler to generate samples distributed according to the
posterior of interest. These generated samples are finally used to compute the
Bayesian estimators of the unknown parameters. The performance of the proposed
Bayesian model is compared with existing approaches via several experiments
conducted on realistic synthetic data and in vivo US images. | [
"cs.CV"
] |
An unsupervised point cloud registration method, called salient points
analysis (SPA), is proposed in this work. The proposed SPA method can register
two point clouds effectively using only a small subset of salient points. It
first applies the PointHop++ method to point clouds, finds corresponding
salient points in two point clouds based on the local surface characteristics
of points and performs registration by matching the corresponding salient
points. The SPA method offers several advantages over the recent deep learning
based solutions for registration. Deep learning methods such as PointNetLK and
DCP train end-to-end networks and rely on full supervision (namely, ground
truth transformation matrix and class label). In contrast, the SPA is
completely unsupervised. Furthermore, SPA's training time and model size are
much smaller. The effectiveness of the SPA method is demonstrated by experiments
on seen and unseen classes and noisy point clouds from the ModelNet-40 dataset. | [
"cs.CV"
] |
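Once corresponding salient points are matched, the final registration step reduces to a closed-form least-squares rigid alignment (the Kabsch/Procrustes solution). A sketch under the assumption of known one-to-one correspondences:

```python
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform from matched 3D points (Kabsch).

    src and dst are (n, 3) arrays of corresponding points; returns rotation
    R and translation t such that dst ~= src @ R.T + t.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```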
Weakly supervised object detection (WSOD), where a detector is trained with
only image-level annotations, is attracting more and more attention. As a
method to obtain a well-performing detector, the detector and the instance
labels are updated iteratively. In this study, for more efficient iterative
updating, we focus on the instance labeling problem, a problem of which label
should be annotated to each region based on the last localization result.
Instead of simply labeling the top-scoring region and its highly overlapping
regions as positive and others as negative, we propose more effective instance
labeling methods as follows. First, to solve the problem that regions covering
only some parts of the object tend to be labeled as positive, we find regions
covering the whole object focusing on the context classification loss. Second,
considering the situation where the other objects contained in the image can be
labeled as negative, we impose a spatial restriction on regions labeled as
negative. Using these instance labeling methods, we train the detector on the
PASCAL VOC 2007 and 2012 and obtain significantly improved results compared
with other state-of-the-art approaches. | [
"cs.CV"
] |
Sepsis is a dangerous condition that is a leading cause of patient mortality.
Treating sepsis is highly challenging, because individual patients respond very
differently to medical interventions and there is no universally agreed-upon
treatment for sepsis. In this work, we explore the use of continuous
state-space model-based reinforcement learning (RL) to discover high-quality
treatment policies for sepsis patients. Our quantitative evaluation reveals
that by blending the treatment strategy discovered with RL with what clinicians
follow, we can obtain improved policies, potentially allowing for better
medical treatment for sepsis. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
To date, top-performing optical flow estimation methods only take pairs of
consecutive frames into account. While elegant and appealing, the idea of using
more than two frames has not yet produced state-of-the-art results. We present
a simple, yet effective fusion approach for multi-frame optical flow that
benefits from longer-term temporal cues. Our method first warps the optical
flow from previous frames to the current, thereby yielding multiple plausible
estimates. It then fuses the complementary information carried by these
estimates into a new optical flow field. At the time of writing, our method
ranks first among published results in the MPI Sintel and KITTI 2015
benchmarks. Our models will be available on https://github.com/NVlabs/PWC-Net. | [
"cs.CV"
] |
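A simplified sketch of the flow-warping step described above, backward-warping the previous frame's flow into the current frame. It assumes a flow from the current frame back to the previous one is available, and it omits the occlusion reasoning a full method would need.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_prev_flow(prev_flow: np.ndarray, flow_cur_to_prev: np.ndarray) -> np.ndarray:
    """Backward-warp the previous frame's flow field into the current frame.

    prev_flow and flow_cur_to_prev are (H, W, 2) arrays of (dx, dy)
    displacements; for each current-frame pixel we bilinearly sample the
    previous flow at the location the pixel came from.
    """
    H, W, _ = prev_flow.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    src_x = xs + flow_cur_to_prev[..., 0]   # where this pixel was at t-1
    src_y = ys + flow_cur_to_prev[..., 1]
    return np.stack(
        [map_coordinates(prev_flow[..., c], [src_y, src_x], order=1, mode="nearest")
         for c in range(2)],
        axis=-1,
    )
```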
We propose a novel Siamese Natural Language Tracker (SNLT), which brings the
advancements in visual tracking to the tracking by natural language (NL)
descriptions task. The proposed SNLT is applicable to a wide range of Siamese
trackers, providing a new class of baselines for the tracking by NL task and
promising future improvements from the advancements of Siamese trackers. The
carefully designed architecture of the Siamese Natural Language Region Proposal
Network (SNL-RPN), together with the Dynamic Aggregation of vision and language
modalities, is introduced to perform the tracking by NL task. Empirical results
over tracking benchmarks with NL annotations show that the proposed SNLT
improves Siamese trackers by 3 to 7 percentage points with a slight tradeoff of
speed. The proposed SNLT outperforms all NL trackers to date and is competitive
among state-of-the-art real-time trackers on LaSOT benchmarks while running at
50 frames per second on a single GPU. | [
"cs.CV"
] |
We present a differentiable soft-body physics simulator that can be composed
with neural networks as a differentiable layer. In contrast to other
differentiable physics approaches that use explicit forward models to define
state transitions, we focus on implicit state transitions defined via function
minimization. Implicit state transitions appear in implicit numerical
integration methods, which offer the benefits of large time steps and excellent
numerical stability, but require a special treatment to achieve
differentiability due to the absence of an explicit differentiable forward
pass. In contrast to other implicit differentiation approaches that require
explicit formulas for the force function and the force Jacobian matrix, we
present an energy-based approach that allows us to compute these derivatives
automatically and in a matrix-free fashion via reverse-mode automatic
differentiation. This allows for more flexibility and productivity when
defining physical models and is particularly important in the context of neural
network training, which often relies on reverse-mode automatic differentiation
(backpropagation). We demonstrate the effectiveness of our differentiable
simulator in policy optimization for locomotion tasks and show that it achieves
better sample efficiency than model-free reinforcement learning. | [
"cs.LG",
"cs.GR"
] |
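The implicit-differentiation idea above can be sketched with the implicit function theorem: if x*(θ) minimizes E(x, θ), then reverse-mode gradients flow through the linear system H v = dL/dx*, with H the energy Hessian at the minimum, and the Hessian never needs to be formed explicitly. A matrix-free sketch in JAX of the general technique, not the paper's exact implementation:

```python
import jax
import jax.numpy as jnp

def implicit_grad(energy, x_star, theta, grad_out):
    """Gradient of a loss L through x*(theta) = argmin_x E(x, theta):
        dL/dtheta = -(dL/dx*)^T H^{-1} (d^2 E / dx dtheta),
    computed matrix-free via Hessian-vector products and a VJP.
    `grad_out` is dL/dx* flowing in from downstream computations.
    """
    grad_x = lambda x, th: jax.grad(energy, argnums=0)(x, th)
    # Hessian-vector products H v, by differentiating grad_x once more.
    hvp = lambda v: jax.jvp(lambda x: grad_x(x, theta), (x_star,), (v,))[1]
    # Solve H v = dL/dx* with conjugate gradients -- no explicit Hessian.
    v, _ = jax.scipy.sparse.linalg.cg(hvp, grad_out)
    # Contract v with the mixed derivative d(grad_x)/dtheta via a VJP.
    _, vjp_theta = jax.vjp(lambda th: grad_x(x_star, th), theta)
    return -vjp_theta(v)[0]
```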
We study the problem of object detection over scanned images of scientific
documents. We consider images that contain objects of varying aspect ratios and
sizes and range from coarse elements such as tables and figures to fine
elements such as equations and section headers. We find that current object
detectors fail to produce properly localized region proposals over such page
objects. We revisit the original R-CNN model and present a method for
generating fine-grained proposals over document elements. We also present a
region embedding model that uses the convolutional maps of a proposal's
neighbors as context to produce an embedding for each proposal. This region
embedding is able to capture the semantic relationships between a target region
and its surrounding context. Our end-to-end model produces an embedding for
each proposal, then classifies each proposal by using a multi-head attention
model that attends to the most important neighbors of a proposal. To evaluate
our model, we collect and annotate a dataset of publications from heterogeneous
journals. We show that our model, referred to as Attentive-RCNN, yields a 17%
mAP improvement compared to standard object detection models. | [
"cs.CV",
"cs.LG",
"eess.IV",
"stat.ML"
] |
Continuously learning to solve unseen tasks with limited experience has been
extensively pursued in meta-learning and continual learning, but with
restricted assumptions such as accessible task distributions, independently and
identically distributed tasks, and clear task delineations. However, real-world
physical tasks frequently violate these assumptions, resulting in performance
degradation. This paper proposes a continual online model-based reinforcement
learning approach that does not require pre-training to solve task-agnostic
problems with unknown task boundaries. We maintain a mixture of experts to
handle nonstationarity, and represent each different type of dynamics with a
Gaussian Process to efficiently leverage collected data and expressively model
uncertainty. We propose a transition prior to account for the temporal
dependencies in streaming data and update the mixture online via sequential
variational inference. Our approach reliably handles the task distribution
shift by generating new models for never-before-seen dynamics and reusing old
models for previously seen dynamics. In experiments, our approach outperforms
alternative methods in non-stationary tasks, including classic control with
changing dynamics and decision making in different driving scenarios. | [
"cs.LG",
"cs.AI",
"cs.RO",
"stat.ML"
] |
Understanding the behaviors and intentions of pedestrians is still one of the
main challenges for vehicle autonomy, as accurate predictions of their
intentions can help guarantee their safety and the driving comfort of vehicles. In this
paper, we address pedestrian crossing prediction in urban traffic environments
by linking the dynamics of a pedestrian's skeleton to a binary crossing
intention. We introduce TrouSPI-Net: a context-free, lightweight, multi-branch
predictor. TrouSPI-Net extracts spatio-temporal features for different time
resolutions by encoding pseudo-image sequences of skeletal joints' positions
and processes them with parallel attention modules and atrous convolutions. The
proposed approach is then enhanced by processing features such as relative
distances of skeletal joints, bounding box positions, or ego-vehicle speed with
U-GRUs. Using the newly proposed evaluation procedures for two large public
naturalistic data sets for studying pedestrian behavior in traffic: JAAD and
PIE, we evaluate TrouSPI-Net and analyze its performance. Experimental results
show that TrouSPI-Net achieved 0.76 F1 score on JAAD and 0.80 F1 score on PIE,
therefore outperforming current state-of-the-art while being lightweight and
context-free. | [
"cs.CV",
"cs.AI"
] |
We study learners (computable devices) inferring formal languages, a setting
referred to as language learning in the limit or inductive inference. In
particular, we require the learners we investigate to be witness-based, that
is, to justify each of their mind changes. Besides being a natural requirement
for a learning task, this restriction deserves special attention as it is a
specialization of various important learning paradigms. In particular, with the
help of witness-based learning, explanatory learners are shown to be equally
powerful under these seemingly incomparable paradigms. Nonetheless, until now,
witness-based learners have only been studied sparsely.
In this work, we conduct a thorough study of these learners both when
requiring syntactic and semantic convergence and obtain normal forms thereof.
In the former setting, we extend known results such that they include
witness-based learning and generalize these to hold for a variety of learners.
Transitioning to behaviourally correct learning, we also provide normal forms
for semantically witness-based learners. Most notably, we show that set-driven
globally semantically witness-based learners are equally powerful as their
Gold-style semantically conservative counterpart. Such results are key to
understanding the yet-undiscovered mutual relation between various important
learning paradigms when learning behaviourally correctly. | [
"cs.LG",
"cs.FL"
] |
Synthetic visual data can provide practically infinite diversity and rich
labels, while avoiding ethical issues with privacy and bias. However, for many
tasks, current models trained on synthetic data generalize poorly to real data.
The task of 3D human pose estimation is a particularly interesting example of
this sim2real problem, because learning-based approaches perform reasonably
well given real training data, yet labeled 3D poses are extremely difficult to
obtain in the wild, limiting scalability. In this paper, we show that standard
neural-network approaches, which perform poorly when trained on synthetic RGB
images, can perform well when the data is pre-processed to extract cues about
the person's motion, notably as optical flow and the motion of 2D keypoints.
Therefore, our results suggest that motion can be a simple way to bridge a
sim2real gap when video is available. We evaluate on the 3D Poses in the Wild
dataset, the most challenging modern benchmark for 3D pose estimation, where we
show full 3D mesh recovery that is on par with state-of-the-art methods trained
on real 3D sequences, despite training only on synthetic humans from the
SURREAL dataset. | [
"cs.CV"
] |
Multi-Instance Learning (MIL) aims to learn the mapping between a bag of
instances and the bag-level label. Therefore, the relationships among instances
are very important for learning the mapping. In this paper, we propose an MIL
algorithm based on a graph built by structural relationship among instances
within a bag. Then, a Graph Convolutional Network (GCN) and the graph-attention
mechanism are used to learn bag-embedding. In the task of medical image
classification, our GCN-based MIL algorithm makes full use of the structural
relationships among patches (instances) in an original image space domain, and
experimental results verify that our method is more suitable for handling
medical high-resolution images. We also verify experimentally that the proposed
method achieves better results than previous methods on five benchmark MIL
datasets and four medical image datasets. | [
"cs.LG",
"cs.CV"
] |
Autonomous driving is becoming one of the leading industrial research areas.
Therefore many automobile companies are coming up with semi to fully autonomous
driving solutions. Among these solutions, lane detection is one of the vital
driver-assist features that play a crucial role in the decision-making process
of the autonomous vehicle. A variety of solutions have been proposed to detect
lanes on the road, which range from using hand-crafted features to the
state-of-the-art end-to-end trainable deep learning architectures. Most of
these architectures are trained in a traffic constrained environment. In this
paper, we propose a novel solution to multi-lane detection, which outperforms
state-of-the-art methods in terms of both accuracy and speed. To achieve this,
we also offer a dataset with a more intuitive labeling scheme as compared to
other benchmark datasets. Using our approach, we are able to obtain a lane
segmentation accuracy of 99.87% running at 54.53 fps (average). | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
In this paper, we propose a novel loss function for training Generative
Adversarial Networks (GANs) aiming towards deeper theoretical understanding as
well as improved stability and performance for the underlying optimization
problem. The new loss function is based on cumulant generating functions giving
rise to \emph{Cumulant GAN}. Relying on a recently-derived variational formula,
we show that the corresponding optimization problem is equivalent to R{\'e}nyi
divergence minimization, thus offering a (partially) unified perspective of GAN
losses: the R{\'e}nyi family encompasses Kullback-Leibler divergence (KLD),
reverse KLD, Hellinger distance and $\chi^2$-divergence. Wasserstein GAN is
also a member of the cumulant GAN family. In terms of stability, we rigorously prove the
linear convergence of cumulant GAN to the Nash equilibrium for a linear
discriminator, Gaussian distributions and the standard gradient descent ascent
algorithm. Finally, we experimentally demonstrate that image generation is more
robust relative to Wasserstein GAN and it is substantially improved in terms of
both inception score and Fr\'echet inception distance when both weaker and
stronger discriminators are considered. | [
"cs.LG",
"cs.IT",
"math.IT",
"stat.ML"
] |
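A sketch of a cumulant-based GAN value function built from empirical cumulant generating functions of the discriminator scores. The exact signs and the (β, γ) parameterization, including which choices recover which divergence, follow the paper and are assumptions here.

```python
import torch

def cumulant(scores: torch.Tensor, t: float) -> torch.Tensor:
    """Empirical cumulant generating function (1/t) log E[exp(t * s)],
    computed with logsumexp for numerical stability; as t -> 0 it
    reduces to the plain mean of the scores."""
    if t == 0:
        return scores.mean()
    n = torch.tensor(float(scores.numel()))
    return (torch.logsumexp(t * scores, dim=0) - torch.log(n)) / t

def cumulant_gan_value(d_real: torch.Tensor, d_fake: torch.Tensor,
                       beta: float, gamma: float) -> torch.Tensor:
    """One plausible arrangement of a cumulant-GAN-style value function:
    CGFs of the discriminator scores replace the plain expectations of a
    standard GAN loss, with (beta, gamma) selecting the divergence."""
    return -cumulant(-d_real, beta) - cumulant(d_fake, gamma)
```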
Generative adversarial networks (GANs) have emerged as a powerful
unsupervised method to model the statistical patterns of real-world data sets,
such as natural images. These networks are trained to map random inputs in
their latent space to new samples representative of the learned data. However,
the structure of the latent space is hard to intuit due to its high
dimensionality and the non-linearity of the generator, which limits the
usefulness of the models. Understanding the latent space requires a way to
identify input codes for existing real-world images (inversion), and a way to
identify directions with known image transformations (interpretability). Here,
we use a geometric framework to address both issues simultaneously. We develop
an architecture-agnostic method to compute the Riemannian metric of the image
manifold created by GANs. The eigen-decomposition of the metric isolates axes
that account for different levels of image variability. An empirical analysis
of several pretrained GANs shows that image variation around each position is
concentrated along surprisingly few major axes (the space is highly
anisotropic) and the directions that create this large variation are similar at
different positions in the space (the space is homogeneous). We show that many
of the top eigenvectors correspond to interpretable transforms in the image
space, with a substantial part of the eigenspace corresponding to minor transforms
which could be compressed out. This geometric understanding unifies key
previous results related to GAN interpretability. We show that the use of this
metric allows for more efficient optimization in the latent space (e.g. GAN
inversion) and facilitates unsupervised discovery of interpretable axes. Our
results illustrate that defining the geometry of the GAN image manifold can
serve as a general framework for understanding GANs. | [
"cs.LG",
"cs.NA",
"cs.NE",
"math.NA",
"I.2.10; I.3.3; I.3.5; G.1.4"
] |
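To make the geometric machinery concrete, here is a minimal sketch (in PyTorch, assuming only that `generator` is a differentiable latent-to-image map; this is an illustrative interface, not the paper's code) of computing the pull-back metric and its eigen-decomposition at a latent code:

```python
import torch

def image_manifold_metric(generator, z):
    """Pull-back Riemannian metric of the GAN image manifold at latent code z:
    M = J^T J, where J is the Jacobian of the flattened generator output with
    respect to z. Eigenvectors of M with the largest eigenvalues are the major
    axes of image variation around G(z)."""
    def flat_g(latent):
        return generator(latent.unsqueeze(0)).reshape(-1)

    J = torch.autograd.functional.jacobian(flat_g, z)  # (n_pixels, latent_dim)
    metric = J.T @ J                                   # (latent_dim, latent_dim)
    eigvals, eigvecs = torch.linalg.eigh(metric)       # ascending eigenvalues
    return eigvals.flip(0), eigvecs.flip(1)            # largest-variation axes first
```

Anisotropy then shows up as a rapidly decaying eigenvalue spectrum, and homogeneity as similar top eigenvectors across different choices of z.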
The most significant barrier to the advancement of Neural Architecture Search
(NAS) is its demand for large computational resources, which hinders
scientifically sound empirical evaluations. As a remedy, several tabular NAS
benchmarks were proposed to simulate runs of NAS methods in seconds. However,
all existing tabular NAS benchmarks are limited to extremely small
architectural spaces since they rely on exhaustive evaluations of the space.
This leads to unrealistic results that do not transfer to larger search spaces.
To overcome this fundamental limitation, we propose NAS-Bench-301, the first
surrogate NAS benchmark, using a search space containing $10^{18}$
architectures, many orders of magnitude larger than any previous tabular NAS
benchmark. After motivating the benefits of a surrogate benchmark over a
tabular one, we fit various regression models on our dataset, which consists of
$\sim$60k architecture evaluations, and build surrogates via deep ensembles to
also model uncertainty. We benchmark a wide range of NAS algorithms using
NAS-Bench-301 and obtain comparable results to the true benchmark at a fraction
of the real cost. Finally, we show how NAS-Bench-301 can be used to generate
new scientific insights. | [
"cs.LG"
] |
Multivariate time series (MTS) data are becoming increasingly ubiquitous in
diverse domains, e.g., IoT systems, health informatics, and 5G networks. To
obtain an effective representation of MTS data, it is not only essential to
consider unpredictable dynamics and highly variable lengths of these data but
also important to address the irregularities in the sampling rates of MTS.
Existing parametric approaches rely on manual hyperparameter tuning and may
require substantial labor. Therefore, it is desirable to learn the
representation automatically and efficiently. To this end, we propose an
autonomous representation learning approach for multivariate time series
(TimeAutoML) with irregular sampling rates and variable lengths. As opposed to
previous works, we first present a representation learning pipeline in which
the configuration and hyperparameter optimization are fully automatic and can
be tailored for various tasks, e.g., anomaly detection, clustering, etc. Next,
a negative sample generation approach and an auxiliary classification task are
developed and integrated within TimeAutoML to enhance its representation
capability. Extensive empirical studies on real-world datasets demonstrate that
the proposed TimeAutoML outperforms competing approaches on various tasks by a
large margin. In fact, it achieves the best anomaly detection performance among
all comparison algorithms on 78 out of 85 UCR datasets, achieving up to a 20%
performance improvement in terms of AUC score. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
The study of adversarial examples and their activation has attracted
significant attention for secure and robust learning with deep neural networks
(DNNs). Different from existing works, in this paper, we highlight two new
characteristics of adversarial examples from the channel-wise activation
perspective: 1) the activation magnitudes of adversarial examples are higher
than that of natural examples; and 2) the channels are activated more uniformly
by adversarial examples than natural examples. We find that the
state-of-the-art defense adversarial training has addressed the first issue of
high activation magnitudes via training on adversarial examples, while the
second issue of uniform activation remains. This motivates us to suppress
redundant channels from being activated by adversarial perturbations via a
Channel-wise Activation Suppressing (CAS) strategy. We show that CAS can train
a model that inherently suppresses adversarial activation, and can be easily
applied to existing defense methods to further improve their robustness. Our
work provides a simple but generic training strategy for robustifying the
intermediate layer activation of DNNs. | [
"cs.LG"
] |
Despite the rapid progress of generative adversarial networks (GANs) in image
synthesis in recent years, existing image synthesis approaches work in either
the geometry domain or the appearance domain alone, which often introduces
various synthesis artifacts. This paper presents an innovative Hierarchical
Composition GAN (HIC-GAN) that incorporates image synthesis in geometry and
appearance domains into an end-to-end trainable network and achieves superior
synthesis realism in both domains simultaneously. We design an innovative
hierarchical composition mechanism that is capable of learning realistic
composition geometry and handling occlusions while multiple foreground objects
are involved in image composition. In addition, we introduce a novel attention
mask mechanism that guides the adaptation of foreground object appearance,
which also helps to provide a better training reference for learning in the
geometry domain. Extensive experiments on scene text image synthesis, portrait editing
and indoor rendering tasks show that the proposed HIC-GAN achieves superior
synthesis performance qualitatively and quantitatively. | [
"cs.CV"
] |
Understanding the 3D world from 2D projected natural images is a fundamental
challenge in computer vision and graphics. Recently, an unsupervised learning
approach has garnered considerable attention owing to its advantages in data
collection. However, to mitigate training limitations, typical methods need to
impose assumptions for viewpoint distribution (e.g., a dataset containing
various viewpoint images) or object shape (e.g., symmetric objects). These
assumptions often restrict applications; for instance, the application to
non-rigid objects or images captured from similar viewpoints (e.g., flower or
bird images) remains a challenge. To complement these approaches, we propose
aperture rendering generative adversarial networks (AR-GANs), which equip
aperture rendering on top of GANs, and adopt focus cues to learn the depth and
depth-of-field (DoF) effect of unlabeled natural images. To address the
ambiguities triggered by the unsupervised setting (i.e., ambiguities between smooth
texture and out-of-focus blurs, and between foreground and background blurs),
we develop DoF mixture learning, which enables the generator to learn real
image distribution while generating diverse DoF images. In addition, we devise
a center focus prior to guide the learning direction. In the experiments, we
demonstrate the effectiveness of AR-GANs on various datasets, such as flower,
bird, and face images, demonstrate their portability by incorporating them into
other 3D representation learning GANs, and validate their applicability in
shallow DoF rendering. | [
"cs.CV",
"cs.LG",
"eess.IV",
"stat.ML"
] |
Graph classification has recently received a lot of attention from various
fields of machine learning, e.g., kernel methods, sequential modeling, or graph
embedding. All these approaches offer promising results with different
respective strengths and weaknesses. However, most of them rely on complex
mathematics and require heavy computational power to achieve their best
performance. We propose a simple and fast algorithm based on the spectral
decomposition of graph Laplacian to perform graph classification and get a
first reference score for a dataset. We show that this method obtains
competitive results compared to state-of-the-art algorithms. | [
"cs.LG",
"stat.ML"
] |
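A minimal sketch of this recipe (names and classifier choice are illustrative, not the paper's exact pipeline): embed each graph as the smallest eigenvalues of its normalized Laplacian and feed the fixed-length descriptors to an off-the-shelf classifier.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.ensemble import RandomForestClassifier

def spectral_descriptor(adjacency, k=20):
    # Normalized graph Laplacian of the (dense) adjacency matrix.
    L = laplacian(np.asarray(adjacency, dtype=float), normed=True)
    eigvals = np.sort(np.linalg.eigvalsh(L))   # real, since L is symmetric
    descriptor = np.zeros(k)                   # zero-pad graphs with < k nodes
    n = min(k, len(eigvals))
    descriptor[:n] = eigvals[:n]
    return descriptor

# Usage sketch: any standard classifier on the fixed-length descriptors.
# X = np.stack([spectral_descriptor(A) for A in adjacency_matrices])
# clf = RandomForestClassifier().fit(X, graph_labels)
```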
As a widely deployed security scheme, text-based CAPTCHAs have become more
and more difficult to resist machine learning-based attacks. So far, many
researchers have conducted attacking research on text-based CAPTCHAs deployed
by different companies (such as Microsoft, Amazon, and Apple) and achieved
certain results. However, most of these attacks have some shortcomings, such as
poor portability of attack methods, requiring a series of data preprocessing
steps, and relying on large amounts of labeled CAPTCHAs. In this paper, we
propose an efficient and simple end-to-end attack method based on
cycle-consistent generative adversarial networks. Compared with previous
studies, our method greatly reduces the cost of data labeling. In addition,
this method has high portability. It can attack common text-based CAPTCHA
schemes only by modifying a few configuration parameters, which makes the
attack easier. Firstly, we train CAPTCHA synthesizers based on the cycle-GAN to
generate some fake samples. Basic recognizers based on the convolutional
recurrent neural network are trained with the fake data. Subsequently, an
active transfer learning method is employed to optimize the basic recognizer
utilizing tiny amounts of labeled real-world CAPTCHA samples. Our approach
efficiently cracked the CAPTCHA schemes deployed by 10 popular websites,
indicating that our attack is likely very general. Additionally, we analyzed
the current most popular anti-recognition mechanisms. The results show that the
combination of more anti-recognition mechanisms can improve the security of
CAPTCHA, but the improvement is limited. Conversely, generating more complex
CAPTCHAs may cost more resources and reduce the availability of CAPTCHAs. | [
"cs.CV"
] |
We analyse multimodal time-series data corresponding to weight, sleep and
steps measurements. We focus on predicting whether a user will successfully
achieve his/her weight objective. For this, we design several deep long
short-term memory (LSTM) architectures, including a novel cross-modal LSTM
(X-LSTM), and demonstrate their superiority over baseline approaches. The
X-LSTM improves parameter efficiency by processing each modality separately and
allowing for information flow between them by way of recurrent
cross-connections. We present a general hyperparameter optimisation technique
for X-LSTMs, which allows us to significantly improve on the LSTM and a prior
state-of-the-art cross-modal approach, using a comparable number of parameters.
Finally, we visualise the model's predictions, revealing implications about
latent variables in this task. | [
"stat.ML",
"cs.AI",
"cs.LG",
"q-bio.QM"
] |
Learning powerful data embeddings has become a centerpiece in machine
learning, especially in the natural language processing and computer vision
domains. The crux of these embeddings is that they are pretrained on huge
corpora of data in an unsupervised fashion, sometimes aided with transfer
learning. However, currently in the graph learning domain, embeddings learned
through existing graph neural networks (GNNs) are task dependent and thus
cannot be shared across different datasets. In this paper, we present a first
powerful and theoretically guaranteed graph neural network that is designed to
learn task-independent graph embeddings, thereafter referred to as deep
universal graph embedding (DUGNN). Our DUGNN model incorporates a novel graph
neural network (as a universal graph encoder) and leverages rich Graph Kernels
(as a multi-task graph decoder) for both unsupervised learning and
(task-specific) adaptive supervised learning. By learning task-independent
graph embeddings across diverse datasets, DUGNN also reaps the benefits of
transfer learning. Through extensive experiments and ablation studies, we show
that the proposed DUGNN model consistently outperforms both the existing
state-of-the-art GNN models and Graph Kernels by an increased accuracy of 3%-8%
on graph classification benchmark datasets. | [
"cs.LG",
"stat.ML"
] |
Purpose: Lesion segmentation in medical imaging is key to evaluating
treatment response. We have recently shown that reinforcement learning can be
applied to radiological images for lesion localization. Furthermore, we
demonstrated that reinforcement learning addresses important limitations of
supervised deep learning; namely, it can eliminate the requirement for large
amounts of annotated training data and can provide valuable intuition lacking
in supervised approaches. However, we did not address the fundamental task of
lesion/structure-of-interest segmentation. Here we introduce a method combining
unsupervised deep learning clustering with reinforcement learning to segment
brain lesions on MRI.
Materials and Methods: We initially clustered images using unsupervised deep
learning clustering to generate candidate lesion masks for each MRI image. The
user then selected the best mask for each of 10 training images. We then
trained a reinforcement learning algorithm to select the masks. We tested the
corresponding trained deep Q network on a separate testing set of 10 images.
For comparison, we also trained and tested a U-net supervised deep learning
network on the same set of training/testing images.
Results: Whereas the supervised approach quickly overfit the training data
and predictably performed poorly on the testing set (16% average Dice score),
the unsupervised deep clustering and reinforcement learning achieved an average
Dice score of 83%.
Conclusion: We have demonstrated a proof-of-principle application of
unsupervised deep clustering and reinforcement learning to segment brain
tumors. The approach represents human-allied AI that requires minimal input
from the radiologist without the need for hand-traced annotation. | [
"cs.CV",
"cs.AI"
] |
The explosive growth in video streaming gives rise to challenges on
performing video understanding at high accuracy and low computation cost.
Conventional 2D CNNs are computationally cheap but cannot capture temporal
relationships; 3D CNN based methods can achieve good performance but are
computationally intensive, making it expensive to deploy. In this paper, we
propose a generic and effective Temporal Shift Module (TSM) that enjoys both
high efficiency and high performance. Specifically, it can achieve the
performance of 3D CNN but maintain 2D CNN's complexity. TSM shifts part of the
channels along the temporal dimension, thus facilitating information exchange
among neighboring frames. It can be inserted into 2D CNNs to achieve temporal
modeling at zero computation and zero parameters. We also extend TSM to the
online setting, which enables real-time low-latency online video recognition
and video object detection. TSM is accurate and efficient: it ranked first
on the Something-Something leaderboard upon publication; on Jetson Nano
and Galaxy Note8, it achieves a low latency of 13ms and 35ms for online video
recognition. The code is available at:
https://github.com/mit-han-lab/temporal-shift-module. | [
"cs.CV"
] |
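The shift operation itself is a few lines; below is a minimal sketch following the paper's description (the released code at the URL above is the authoritative version). A fraction of channels is shifted one frame back, another fraction one frame forward, and the rest stay put:

```python
import torch

def temporal_shift(x, n_segments, fold_div=8):
    """x: (N*T, C, H, W) activations with T = n_segments frames per clip.
    Shifts 1/fold_div of the channels backward in time, 1/fold_div forward,
    and leaves the remaining channels unshifted -- zero parameters, zero FLOPs
    beyond memory movement."""
    nt, c, h, w = x.shape
    x = x.view(nt // n_segments, n_segments, c, h, w)
    fold = c // fold_div
    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                  # shift towards the past
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]  # shift towards the future
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]             # unshifted channels
    return out.view(nt, c, h, w)
```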
In many real-world decision making problems, reaching an optimal decision
requires taking into account a variable number of objects around the agent.
Autonomous driving is a domain in which this is especially relevant, since the
number of cars surrounding the agent varies considerably over time and affects
the optimal action to be taken. Classical methods that process object lists can
deal with this requirement. However, to take advantage of recent
high-performing methods based on deep reinforcement learning in modular
pipelines, special architectures are necessary. For these, a number of options
exist, but a thorough comparison of the different possibilities is missing. In
this paper, we elaborate limitations of fully-connected neural networks and
other established approaches like convolutional and recurrent neural networks
in the context of reinforcement learning problems that have to deal with
variable sized inputs. We employ the structure of Deep Sets in off-policy
reinforcement learning for high-level decision making, highlight their
capabilities to alleviate these limitations, and show that Deep Sets not only
yield the best overall performance but also offer better generalization to
unseen situations than the other approaches. | [
"cs.LG",
"cs.RO",
"stat.ML"
] |
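The core of a Deep Sets state encoder is a shared per-object network whose outputs are sum-pooled, making the representation invariant to object ordering and agnostic to the number of objects. A minimal sketch (layer sizes are illustrative, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class DeepSetEncoder(nn.Module):
    def __init__(self, obj_dim, hidden=64, out_dim=64):
        super().__init__()
        # phi embeds each surrounding object independently.
        self.phi = nn.Sequential(nn.Linear(obj_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        # rho maps the pooled embedding to a fixed-size state representation.
        self.rho = nn.Sequential(nn.Linear(hidden, out_dim), nn.ReLU())

    def forward(self, objects):                  # (batch, n_objects, obj_dim)
        pooled = self.phi(objects).sum(dim=1)    # sum-pooling: permutation-invariant
        return self.rho(pooled)                  # feed this to the Q-network/policy
```

Because the sum runs over however many objects are present, the same encoder handles a varying number of surrounding vehicles without padding tricks or ordering assumptions.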
3D object detection is a common function within the perception system of an
autonomous vehicle and outputs a list of 3D bounding boxes around objects of
interest. Various 3D object detection methods have relied on fusion of
different sensor modalities to overcome limitations of individual sensors.
However, occlusion, limited field-of-view and low-point density of the sensor
data cannot be reliably and cost-effectively addressed by multi-modal sensing
from a single point of view. Alternatively, cooperative perception incorporates
information from spatially diverse sensors distributed around the environment
as a way to mitigate these limitations. This article proposes two schemes for
cooperative 3D object detection using single modality sensors. The early fusion
scheme combines point clouds from multiple spatially diverse sensing points of
view before detection. In contrast, the late fusion scheme fuses the
independently detected bounding boxes from multiple spatially diverse sensors.
We evaluate the performance of both schemes, and their hybrid combination,
using a synthetic cooperative dataset created in two complex driving scenarios,
a T-junction and a roundabout. The evaluation shows that the early fusion
approach outperforms late fusion by a significant margin at the cost of higher
communication bandwidth. The results demonstrate that cooperative perception
can recall more than 95% of the objects as opposed to 30% for single-point
sensing in the most challenging scenario. To provide practical insights into
the deployment of such system, we report how the number of sensors and their
configuration impact the detection performance of the system. | [
"cs.CV",
"cs.LG",
"cs.MA",
"cs.RO",
"stat.ML"
] |
Image inpainting techniques have shown promising improvement with the
assistance of generative adversarial networks (GANs) recently. However, most of
them often suffer from completed results with implausible structure or
blurriness. To mitigate this problem, in this paper, we present a one-stage
model that utilizes dense combinations of dilated convolutions to obtain larger
and more effective receptive fields. Benefiting from this property of the
network, we can more easily recover large regions in an incomplete image. To
better train this efficient generator, in addition to the frequently used VGG
feature matching loss, we design a novel self-guided regression loss that
concentrates on uncertain areas and enhances the semantic details. Besides, we
devise a geometrical alignment constraint term to compensate for the pixel-based
distance between prediction features and ground-truth ones. We also employ a
discriminator with local and global branches to ensure local-global contents
consistency. To further improve the quality of generated images, discriminator
feature matching on the local branch is introduced, which dynamically minimizes
the discrepancy between intermediate features of synthetic and ground-truth
patches. Extensive experiments on several public datasets demonstrate that our
approach outperforms current state-of-the-art methods. Code is available at
https://github.com/Zheng222/DMFN. | [
"cs.CV",
"cs.MM"
] |
The empirical results suggest that the learnability of a neural network is
directly related to its size. To mathematically prove this, we borrow a tool
from algebraic topology, Betti numbers, to measure the topological geometric
complexity of the input data and the neural network. By characterizing the
expressive capacity of a neural network with its topological complexity, we
conduct a thorough analysis and show that the network's expressive capacity is
limited by the scale of its layers. Further, we derive the upper bounds of the
Betti numbers on each layer within the network. As a result, the problem of
architecture selection of a neural network is transformed to determining the
scale of the network that can represent the input data complexity. With the
presented results, the architecture selection of a fully connected network
boils down to choosing a suitable size of the network such that its Betti
numbers are not smaller than those of the input data. We
perform the experiments on a real-world dataset MNIST and the results verify
our analysis and conclusion. The code will be publicly available. | [
"cs.LG",
"cs.NE"
] |
Gradient Boosting Machines (GBM) are hugely popular for solving tabular data
problems. However, practitioners are not only interested in point predictions,
but also in probabilistic predictions in order to quantify the uncertainty of
the predictions. Creating such probabilistic predictions is difficult with
existing GBM-based solutions: they either require training multiple models or
they become too computationally expensive to be useful for large-scale
settings. We propose Probabilistic Gradient Boosting Machines (PGBM), a method
to create probabilistic predictions with a single ensemble of decision trees in
a computationally efficient manner. PGBM approximates the leaf weights in a
decision tree as a random variable, and approximates the mean and variance of
each sample in a dataset via stochastic tree ensemble update equations. These
learned moments allow us to subsequently sample from a specified distribution
after training. We empirically demonstrate the advantages of PGBM compared to
existing state-of-the-art methods: (i) PGBM enables probabilistic estimates
without compromising on point performance in a single model, (ii) PGBM learns
probabilistic estimates via a single model only (and without requiring
multi-parameter boosting), and thereby offers a speedup of up to several orders
of magnitude over existing state-of-the-art methods on large datasets, and
(iii) PGBM achieves accurate probabilistic estimates in tasks with complex
differentiable loss functions, such as hierarchical time series problems, where
we observed up to 10% improvement in point forecasting performance and up to
300% improvement in probabilistic forecasting performance. | [
"cs.LG",
"stat.ML",
"I.2"
] |
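To illustrate the final step, here is a minimal sketch of drawing probabilistic predictions from learned per-sample moments (the moments themselves come from the stochastic tree-ensemble updates, not shown; names and the set of supported distributions are illustrative):

```python
import numpy as np

def sample_predictions(mu, sigma2, n_samples=100, dist="normal", rng=None):
    """Draw samples from a user-specified distribution matched to the
    per-sample mean `mu` and variance `sigma2` produced by the ensemble."""
    rng = rng or np.random.default_rng()
    mu, sigma2 = np.asarray(mu), np.asarray(sigma2)
    if dist == "normal":
        return rng.normal(mu, np.sqrt(sigma2), size=(n_samples,) + mu.shape)
    if dist == "lognormal":                    # moment-matched; requires mu > 0
        var_ln = np.log1p(sigma2 / mu**2)
        mu_ln = np.log(mu) - 0.5 * var_ln
        return rng.lognormal(mu_ln, np.sqrt(var_ln),
                             size=(n_samples,) + mu.shape)
    raise ValueError(f"unsupported distribution: {dist}")
```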
Weakly supervised temporal action localization aims to detect and localize
actions in untrimmed videos with only video-level labels during training.
However, without frame-level annotations, it is challenging to achieve
localization completeness and relieve background interference. In this paper,
we present an Action Unit Memory Network (AUMN) for weakly supervised temporal
action localization, which can mitigate the above two challenges by learning an
action unit memory bank. In the proposed AUMN, two attention modules are
designed to update the memory bank adaptively and learn action-unit-specific
classifiers. Furthermore, three effective mechanisms (diversity, homogeneity
and sparsity) are designed to guide the updating of the memory network. To the
best of our knowledge, this is the first work to explicitly model the action
units with a memory network. Extensive experimental results on two standard
benchmarks (THUMOS14 and ActivityNet) demonstrate that our AUMN performs
favorably against state-of-the-art methods. Specifically, the average mAP of
IoU thresholds from 0.1 to 0.5 on the THUMOS14 dataset is significantly
improved from 47.0% to 52.1%. | [
"cs.CV"
] |
Recent advances in both machine learning and Internet-of-Things have
attracted attention to automatic Activity Recognition, where users wear a
device with sensors and their outputs are mapped to a predefined set of
activities. However, few studies have considered the balance between wearable
power consumption and activity recognition accuracy. This is particularly
important when part of the computational load happens on the wearable device.
In this paper, we present a new methodology to perform feature selection on the
device based on Reinforcement Learning (RL) to find the optimum balance between
power consumption and accuracy. To accelerate the learning speed, we extend the
RL algorithm to address multiple sources of feedback, and use them to tailor
the policy in conjunction with estimating the feedback accuracy. We evaluated
our system on the SPHERE challenge dataset, a publicly available research
dataset. The results show that our proposed method achieves a good trade-off
between wearable power consumption and activity recognition accuracy. | [
"cs.LG",
"stat.ML"
] |
We provide the first global optimization landscape analysis of
$Neural\;Collapse$ -- an intriguing empirical phenomenon that arises in the
last-layer classifiers and features of neural networks during the terminal
phase of training. As recently reported by Papyan et al., this phenomenon
implies that ($i$) the class means and the last-layer classifiers all collapse
to the vertices of a Simplex Equiangular Tight Frame (ETF) up to scaling, and
($ii$) cross-example within-class variability of last-layer activations
collapses to zero. We study the problem based on a simplified
$unconstrained\;feature\;model$, which isolates the topmost layers from the
classifier of the neural network. In this context, we show that the classical
cross-entropy loss with weight decay has a benign global landscape, in the
sense that the only global minimizers are the Simplex ETFs while all other
critical points are strict saddles whose Hessians exhibit negative curvature
directions. In contrast to existing landscape analysis for deep neural networks
which is often disconnected from practice, our analysis of the simplified model
not only explains what kind of features are learned in the last layer,
but also shows why they can be efficiently optimized in the simplified
settings, matching the empirical observations in practical deep network
architectures. These findings could have profound implications for
optimization, generalization, and robustness of broad interest. For example,
our experiments demonstrate that one may set the feature dimension equal to the
number of classes and fix the last-layer classifier to be a Simplex ETF for
network training, which reduces memory cost by over $20\%$ on ResNet18 without
sacrificing the generalization performance. | [
"cs.LG",
"cs.AI",
"cs.IT",
"math.IT",
"math.OC",
"stat.ML"
] |
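For concreteness, a Simplex ETF with K vertices consists of K unit vectors with pairwise inner products exactly -1/(K-1). A minimal sketch of constructing one (the standard formula; fixing the last-layer classifier to these columns is the memory-saving trick mentioned above):

```python
import numpy as np

def simplex_etf(num_classes, feature_dim=None, seed=0):
    """Return a (feature_dim x K) matrix whose K columns form a Simplex ETF:
    unit norm, pairwise inner product -1/(K-1). Requires feature_dim >= K
    for this QR-based construction."""
    K = num_classes
    d = feature_dim if feature_dim is not None else K
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((d, K)))       # orthonormal columns
    M = np.sqrt(K / (K - 1)) * U @ (np.eye(K) - np.ones((K, K)) / K)
    return M

M = simplex_etf(10)
G = M.T @ M    # ~1.0 on the diagonal, ~-1/9 off-diagonal
```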
With the ongoing pandemic, virtual concerts and live events using digitized
performances of musicians are getting traction on massive multiplayer online
worlds. However, well-choreographed dance movements are extremely complex to
animate and would involve an expensive and tedious production process. In
addition to the use of complex motion capture systems, it typically requires a
collaborative effort between animators, dancers, and choreographers. We
introduce a complete system for dance motion synthesis, which can generate
complex and highly diverse dance sequences given an input music sequence. As
motion capture data is limited for the range of dance motions and styles, we
introduce a massive dance motion data set that is created from YouTube videos.
We also present a novel two-stream motion transformer generative model, which
can generate motion sequences with high flexibility. We also introduce new
evaluation metrics for the quality of synthesized dance motions, and
demonstrate that our system can outperform state-of-the-art methods. Our system
provides high-quality animations suitable for large crowds for virtual concerts
and can also be used as a reference for professional animation pipelines. Most
importantly, we show that vast online videos can be effective in training dance
motion models. | [
"cs.CV",
"cs.GR"
] |
Over the past few years, we have seen fundamental breakthroughs in core
problems in machine learning, largely driven by advances in deep neural
networks. At the same time, the amount of data collected in a wide array of
scientific domains is dramatically increasing in both size and complexity.
Taken together, this suggests many exciting opportunities for deep learning
applications in scientific settings. But a significant challenge to this is
simply knowing where to start. The sheer breadth and diversity of different
deep learning techniques makes it difficult to determine what scientific
problems might be most amenable to these methods, or which specific combination
of methods might offer the most promising first approach. In this survey, we
focus on addressing this central issue, providing an overview of many widely
used deep learning models, spanning visual, sequential and graph structured
data, associated tasks and different training methods, along with techniques to
use deep learning with less data and better interpret these complex models ---
two central considerations for many scientific use cases. We also include
overviews of the full design process, implementation tips, and links to a
plethora of tutorials, research summaries and open-sourced deep learning
pipelines and pretrained models, developed by the community. We hope that this
survey will help accelerate the use of deep learning across different
scientific domains. | [
"cs.LG",
"stat.ML"
] |
The paper proposes a Dynamic ResBlock Generative Adversarial Network
(DRB-GAN) for artistic style transfer. The style code is modeled as the shared
parameters for Dynamic ResBlocks connecting both the style encoding network and
the style transfer network. In the style encoding network, a style class-aware
attention mechanism is used to attend to the style feature representation for
generating the style codes. In the style transfer network, multiple Dynamic
ResBlocks are designed to integrate the style code and the extracted CNN
semantic features, which are then fed into the spatial window Layer-Instance
Normalization (SW-LIN) decoder, which enables high-quality synthetic images
with artistic style transfer. Moreover, the style collection conditional
discriminator is designed to equip our DRB-GAN model with abilities for both
arbitrary style transfer and collection style transfer during the training
stage. For both arbitrary and collection style transfer, extensive experiments
demonstrate that our proposed DRB-GAN outperforms state-of-the-art methods
and exhibits superior performance in
terms of visual quality and efficiency. Our source code is available at
\color{magenta}{\url{https://github.com/xuwenju123/DRB-GAN}}. | [
"cs.CV",
"eess.IV"
] |
Video segmentation for the human head and shoulders is essential in creating
elegant media for videoconferencing and virtual reality applications. The main
challenge is to process high-quality background subtraction in a real-time
manner and address the segmentation issues under motion blurs, e.g., shaking
the head or waving hands during conference video. To overcome the motion blur
problem in video segmentation, we propose a novel flow-based encoder-decoder
network (FUNet) that combines both traditional Horn-Schunck optical-flow
estimation technique and convolutional neural networks to perform robust
real-time video segmentation. We also introduce a video and image segmentation
dataset: ConferenceVideoSegmentationDataset. Code and pre-trained models are
available on our GitHub repository:
\url{https://github.com/kuangzijian/Flow-Based-Video-Matting}. | [
"cs.CV"
] |
Sign language is the primary language for people with a hearing loss. Sign
language recognition (SLR) is the automatic recognition of sign language, which
represents a challenging problem for computers, though some progress has been
made recently using deep learning. Huge amounts of data are generally required
to train deep learning models. However, corresponding datasets are missing for
the majority of sign languages. Transfer learning is a technique to utilize a
related task with an abundance of data available to help solve a target task
lacking sufficient data. Transfer learning has been applied highly successfully
in computer vision and natural language processing. However, much less research
has been conducted in the field of SLR. This paper investigates how effectively
transfer learning can be applied to isolated SLR using an inflated 3D
convolutional neural network as the deep learning architecture. Transfer
learning is implemented by pre-training a network on the American Sign Language
dataset MS-ASL and subsequently fine-tuning it separately on three different
sizes of the German Sign Language dataset SIGNUM. The results of the
experiments give clear empirical evidence that transfer learning can be
effectively applied to isolated SLR. The accuracy performances of the networks
applying transfer learning increased substantially by up to 21% as compared to
the baseline models that were not pre-trained on the MS-ASL dataset. | [
"cs.CV",
"cs.LG"
] |
One problem found when working with satellite images is radiometric
variation within and across images. Intending to improve remote
sensing models for the classification of burnt areas, we set two objectives.
The first is to understand the relationship between feature spaces and the
predictive ability of the models, allowing us to explain the differences
between learning and generalization when training and testing in different
datasets. We find that training on datasets built from more than one image
provides models that generalize better. These results are explained by
visualizing the dispersion of values on the feature space. The second objective
is to evolve hyper-features that improve the performance of different
classifiers on a variety of test sets. We find the hyper-features to be
beneficial, and obtain the best models with XGBoost, even if the hyper-features
are optimized for a different method. | [
"cs.LG",
"eess.IV",
"stat.ML"
] |
Pyramidal feature representation is the common practice to address the
challenge of scale variation in object detection. However, the inconsistency
across different feature scales is a primary limitation for the single-shot
detectors based on feature pyramids. In this work, we propose a novel,
data-driven strategy for pyramidal feature fusion, referred to as adaptively
spatial feature fusion (ASFF). It learns to spatially filter conflicting
information to suppress the inconsistency, thus improving the scale-invariance
of features, and introduces nearly free inference overhead. With the ASFF
strategy and a solid baseline of YOLOv3, we achieve the best speed-accuracy
trade-off on the MS COCO dataset, reporting 38.1% AP at 60 FPS, 42.4% AP at 45
FPS and 43.9% AP at 29 FPS. The code is available at
https://github.com/ruinmessi/ASFF | [
"cs.CV"
] |
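A minimal sketch of the fusion step for one pyramid level (assuming the three level features have already been resized to a common resolution and channel count; layer shapes are illustrative, and the full method adds rescaling and detection heads):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASFFFuse(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # One 1x1 conv per level predicts a per-pixel fusion logit.
        self.weight_convs = nn.ModuleList(
            [nn.Conv2d(channels, 1, kernel_size=1) for _ in range(3)])

    def forward(self, feats):  # feats: list of 3 tensors, each (N, C, H, W)
        logits = torch.cat(
            [conv(f) for conv, f in zip(self.weight_convs, feats)], dim=1)
        weights = F.softmax(logits, dim=1)       # (N, 3, H, W), sums to 1
        # Spatially adaptive blend: conflicting levels get low weight per pixel.
        return sum(weights[:, i:i + 1] * feats[i] for i in range(3))
```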
Example-guided image synthesis has been recently attempted to synthesize an
image from a semantic label map and an exemplary image. In the task, the
additional exemplary image serves to provide style guidance that controls the
appearance of the synthesized output. Despite the controllability advantage,
the previous models are designed on datasets with specific and roughly aligned
objects. In this paper, we tackle a more challenging and general task, where
the exemplar is an arbitrary scene image that is semantically unaligned to the
given label map. To this end, we first propose a new Masked Spatial-Channel
Attention (MSCA) module which models the correspondence between two
unstructured scenes via cross-attention. Next, we propose an end-to-end network
for joint global and local feature alignment and synthesis. In addition, we
propose a novel patch-based self-supervision scheme to enable training.
Experiments on the large-scale COCO-Stuff dataset show significant improvements
over existing methods. Moreover, our approach provides interpretability and can
be readily extended to other tasks including style and spatial interpolation or
extrapolation, as well as other content manipulation. | [
"cs.CV"
] |
Medical imaging AI systems such as disease classification and segmentation
are increasingly inspired and transformed from computer vision based AI
systems. Although an array of adversarial training and/or loss function based
defense techniques have been developed and proved to be effective in computer
vision, defending against adversarial attacks on medical images remains largely
an uncharted territory due to the following unique challenges: 1) label
scarcity in medical images significantly limits adversarial generalizability of
the AI system; 2) vastly similar and dominant foregrounds and backgrounds in
medical images make it hard to learn discriminative features between
different disease classes; and 3) crafted adversarial noise added to the
entire medical image, as opposed to the focused organ target, can make clean
and adversarial examples more separable than examples from different disease
classes. In this paper, we propose a novel robust medical imaging AI framework
based on Semi-Supervised Adversarial Training (SSAT) and Unsupervised
Adversarial Detection (UAD), followed by designing a new measure for assessing
a system's adversarial risk. We systematically demonstrate the advantages of our
robust medical imaging AI system over the existing adversarial defense
techniques under diverse real-world settings of adversarial attacks using a
benchmark OCT imaging data set. | [
"cs.LG",
"cs.CV",
"eess.IV"
] |
In this article, we present a Shell Language Preprocessing (SLP) library,
which implements tokenization and encoding tailored to the parsing of Unix and
Linux shell commands. We describe the rationale behind the need for a new
approach with specific examples when conventional Natural Language Processing
(NLP) pipelines fail. Furthermore, we evaluate our methodology on a security
classification task against widely accepted information and communications
technology (ICT) tokenization techniques and achieve significant improvement of
an F1-score from 0.392 to 0.874. | [
"cs.LG",
"cs.PL"
] |
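To see why shell-aware tokenization matters, consider quoting and `--key=value` flags, which generic NLP tokenizers mangle. A minimal sketch using Python's standard `shlex` (the actual SLP library implements a richer scheme; this only illustrates the idea):

```python
import shlex

def tokenize_shell(command):
    """Split a command the way a POSIX shell would (respecting quotes and
    escapes), then separate --key=value flags into key and value tokens."""
    tokens = []
    for tok in shlex.split(command, posix=True):
        if tok.startswith("--") and "=" in tok:
            key, value = tok.split("=", 1)
            tokens.extend([key, value])
        else:
            tokens.append(tok)
    return tokens

print(tokenize_shell('tar -czf "my archive.tar.gz" --exclude=.git ./src'))
# ['tar', '-czf', 'my archive.tar.gz', '--exclude', '.git', './src']
```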
In this paper, we propose new problem-independent lower bounds on the sample
complexity and regret in episodic MDPs, with a particular focus on the
non-stationary case in which the transition kernel is allowed to change in each
stage of the episode. Our main contribution is a novel lower bound of
$\Omega((H^3SA/\varepsilon^2)\log(1/\delta))$ on the sample complexity of an
$(\varepsilon,\delta)$-PAC algorithm for best policy identification in a
non-stationary MDP. This lower bound relies on a construction of "hard MDPs"
which is different from the ones previously used in the literature. Using this
same class of MDPs, we also provide a rigorous proof of the
$\Omega(\sqrt{H^3SAT})$ regret bound for non-stationary MDPs. Finally, we
discuss connections to PAC-MDP lower bounds. | [
"cs.LG",
"stat.ML"
] |
Recent research has made the surprising finding that state-of-the-art deep
learning models sometimes fail to generalize to small variations of the input.
Adversarial training has been shown to be an effective approach to overcome
this problem. However, its application has been limited to enforcing invariance
to analytically defined transformations like $\ell_p$-norm bounded
perturbations. Such perturbations do not necessarily cover plausible real-world
variations that preserve the semantics of the input (such as a change in
lighting conditions). In this paper, we propose a novel approach to express and
formalize robustness to these kinds of real-world transformations of the input.
The two key ideas underlying our formulation are (1) leveraging disentangled
representations of the input to define different factors of variations, and (2)
generating new input images by adversarially composing the representations of
different images. We use a StyleGAN model to demonstrate the efficacy of this
framework. Specifically, we leverage the disentangled latent representations
computed by a StyleGAN model to generate perturbations of an image that are
similar to real-world variations (like adding make-up, or changing the
skin-tone of a person) and train models to be invariant to these perturbations.
Extensive experiments show that our method improves generalization and reduces
the effect of spurious correlations (reducing the error rate of a "smile"
detector by 21% for example). | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
Autonomous vehicles navigate in dynamically changing environments under a
wide variety of conditions, being continuously influenced by surrounding
objects. Modelling interactions among agents is essential for accurately
forecasting other agents' behaviour and achieving safe and comfortable motion
planning. In this work, we propose SCOUT, a novel Attention-based Graph Neural
Network that uses a flexible and generic representation of the scene as a graph
for modelling interactions, and predicts socially-consistent trajectories of
vehicles and Vulnerable Road Users (VRUs) under mixed traffic conditions. We
explore three different attention mechanisms and test our scheme with both
bird's-eye-view and on-vehicle urban data, outperforming existing
state-of-the-art approaches on the inD and ApolloScape Trajectory
benchmarks. Additionally, we evaluate our model's flexibility and
transferability by testing it under completely new scenarios on the rounD dataset.
The importance and influence of each interaction in the final prediction is
explored by means of the Integrated Gradients technique and the visualization of
the attention learned. | [
"cs.LG",
"cs.AI"
] |
In recent years, virtual makeup applications have become more and more
popular. However, it is still challenging to propose a robust makeup transfer
method in the real-world environment. Current makeup transfer methods mostly
work well on good-conditioned clean makeup images, but transferring makeup that
exhibits shadow and occlusion is not satisfying. To alleviate this, we propose a
novel makeup transfer method, called 3D-Aware Shadow and Occlusion Robust GAN
(SOGAN). Given the source and the reference faces, we first fit a 3D face model
and then disentangle the faces into shape and texture. In the texture branch,
we map the texture to the UV space and design a UV texture generator to
transfer the makeup. Since human faces are symmetrical in the UV space, we can
conveniently remove the undesired shadow and occlusion from the reference image
by carefully designing a Flip Attention Module (FAM). After obtaining cleaner
makeup features from the reference image, a Makeup Transfer Module (MTM) is
introduced to perform accurate makeup transfer. The qualitative and
quantitative experiments demonstrate that our SOGAN not only achieves superior
results in shadow and occlusion situations but also performs well in large pose
and expression variations. | [
"cs.CV",
"cs.AI"
] |
Unsupervised cross-domain person re-identification (Re-ID) faces two key
issues. One is the data distribution discrepancy between source and target
domains, and the other is the lack of labelling information in the target domain.
They are addressed in this paper from the perspective of representation
learning. For the first issue, we highlight the presence of camera-level
sub-domains as a unique characteristic of person Re-ID, and develop
camera-aware domain adaptation to reduce the discrepancy not only between
source and target domains but also across these sub-domains. For the second
issue, we exploit the temporal continuity in each camera of target domain to
create discriminative information. This is implemented by dynamically
generating online triplets within each batch, in order to maximally take
advantage of the steadily improved feature representation in training process.
Together, the above two methods give rise to a novel unsupervised deep domain
adaptation framework for person Re-ID. Experiments and ablation studies on
benchmark datasets demonstrate its superiority and interesting properties. | [
"cs.CV"
] |
Methods for object detection and segmentation rely on large scale
instance-level annotations for training, which are difficult and time-consuming
to collect. Efforts to alleviate this look at varying degrees and quality of
supervision. Weakly-supervised approaches draw on image-level labels to build
detectors/segmentors, while zero/few-shot methods assume abundant
instance-level data for a set of base classes, and none to a few examples for
novel classes. This taxonomy has largely siloed algorithmic designs. In this
work, we aim to bridge this divide by proposing an intuitive and unified
semi-supervised model that is applicable to a range of supervision: from zero
to a few instance-level samples per novel class. For base classes, our model
learns a mapping from weakly-supervised to fully-supervised
detectors/segmentors. By learning and leveraging visual and lingual
similarities between the novel and base classes, we transfer those mappings to
obtain detectors/segmentors for novel classes; refining them with a few novel
class instance-level annotated samples, if available. The overall model is
end-to-end trainable and highly flexible. Through extensive experiments on
MS-COCO and Pascal VOC benchmark datasets we show improved performance in a
variety of settings. | [
"cs.CV"
] |
Most current scene flow methods choose to model scene flow as a per-point
translation vector without differentiating between static and dynamic
components of 3D motion. In this work we present an alternative method for
end-to-end scene flow learning by joint estimation of non-rigid residual flow
and ego-motion flow for dynamic 3D scenes. We propose to learn the relative
rigid transformation from a pair of point clouds followed by an iterative
refinement. We then learn the non-rigid flow from the transformed inputs after
deducting the rigid part of the flow. Furthermore, we extend the supervised
framework with self-supervisory signals based on the temporal consistency
property of a point cloud sequence. Our solution allows both training in a
supervised mode complemented by self-supervisory loss terms as well as training
in a fully self-supervised mode. We demonstrate that decomposition of scene
flow into non-rigid flow and ego-motion flow along with an introduction of the
self-supervisory signals allowed us to outperform the current state-of-the-art
supervised methods. | [
"cs.CV",
"cs.LG"
] |
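The decomposition can be written compactly: for a point $p$ in the first frame, with estimated rigid ego-motion $(R, t)$ and learned non-rigid residual $\Delta(p)$ (notation ours, for illustration):

```latex
f(p) = \underbrace{(R\,p + t) - p}_{\text{ego-motion (rigid) flow}}
     + \underbrace{\Delta(p)}_{\text{non-rigid residual flow}}
```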
This paper presents results on the detection and identification of mango fruits
from colour images of trees. We evaluate the behaviour and performance of
the Faster R-CNN network to determine whether it is robust enough to "detect
and classify" fruits under particularly heterogeneous conditions in terms of
plant cultivars, plantation scheme, and visual information acquisition
contexts. The network is trained to distinguish the 'Kent', 'Keitt', and
"Boucodiekhal" mango cultivars from 3,000 representative labelled fruit
annotations. The validation set composed of about 7,000 annotations was then
tested with a confidence threshold of 0.7 and a Non-Maximal-Suppression
threshold of 0.25. With an F1-score of 0.90, the Faster R-CNN is well suited
to simple fruit detection in tiles of 500x500 pixels. We then combine a
multi-tiling approach with a Jaccard matrix to merge the different parts of
objects detected several times, and thus report the detections made at the tile
scale to the native 6,000x4,000 pixel images. Nonetheless, with an F1-score
of 0.56, the cultivar identification Faster R-CNN network presents some
limitations for simultaneously detecting the mango fruits and identifying their
respective cultivars. Despite the proven errors in fruit detection, the
cultivar identification rates of the detected mango fruits are on the order of
80%. The ideal solution could combine a Mask R-CNN for the image
pre-segmentation of trees and a double-stream Faster R-CNN for detecting the
mango fruits and identifying their respective cultivar to provide predictions
more relevant to users' expectations. | [
"cs.CV",
"eess.IV",
"eess.SP"
] |
Time series research has attracted considerable interest over the last decade,
especially for Time Series Classification (TSC) and Time Series Forecasting
(TSF). Research in TSC has greatly benefited from the University of California
Riverside and University of East Anglia (UCR/UEA) Time Series Archives. On the
other hand, the advancement in Time Series Forecasting relies on time series
forecasting competitions such as the Makridakis competitions, NN3 and NN5
Neural Network competitions, and a few Kaggle competitions. Each year,
thousands of papers proposing new algorithms for TSC and TSF have utilized
these benchmarking archives. These algorithms are designed for these specific
problems, but may not be useful for tasks such as predicting the heart rate of
a person using photoplethysmogram (PPG) and accelerometer data. We refer to
this problem as Time Series Extrinsic Regression (TSER), where we are
interested in a more general methodology of predicting a single continuous
value, from univariate or multivariate time series. This prediction can be from
the same time series or not directly related to the predictor time series and
does not necessarily need to be a future value or depend heavily on recent
values. To the best of our knowledge, research into TSER has received much less
attention in the time series research community and there are no models
developed for general time series extrinsic regression problems. Most models
are developed for a specific problem. Therefore, we aim to motivate and support
the research into TSER by introducing the first TSER benchmarking archive. This
archive contains 19 datasets from different domains, with varying numbers of
dimensions, unequal-length dimensions, and missing values. In this paper, we
introduce the datasets in this archive and conduct an initial benchmark of
existing models. | [
"cs.LG",
"stat.ML"
] |
Initial demand response (DR) studies mainly adopt model predictive control and thus require
accurate models of the control problem (e.g., a customer behavior model), which
are to a large extent uncertain for the EV scenario. Hence, model-free
approaches, especially based on reinforcement learning (RL) are an attractive
alternative. In this paper, we propose a new Markov decision process (MDP)
formulation in the RL framework, to jointly coordinate a set of EV charging
stations. State-of-the-art algorithms either focus on a single EV, or perform
the control of an aggregate of EVs in multiple steps (e.g., aggregate load
decisions in one step, then a step translating the aggregate decision to
individual connected EVs). On the contrary, we propose an RL approach to
jointly control the whole set of EVs at once. We contribute a new MDP
formulation, with a scalable state representation that is independent of the
number of EV charging stations. Further, we use a batch reinforcement learning
algorithm, i.e., an instance of fitted Q-iteration, to learn the optimal
charging policy. We analyze its performance using simulation experiments based
on real-world EV charging data. More specifically, we (i) explore the various
settings in training the RL policy (e.g., duration of the period with training
data), (ii) compare its performance to an oracle all-knowing benchmark (which
provides an upper bound for performance, relying on information that is not
available or at least imperfect in practice), (iii) analyze performance over
time, over the course of a full year to evaluate possible performance
fluctuations (e.g, across different seasons), and (iv) demonstrate the
generalization capacity of a learned control policy to larger sets of charging
stations. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
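As a sketch of the learning component, below is generic batch fitted Q-iteration with a tree-ensemble regressor (in the style of Ernst et al.; the state/action encoding, regressor choice, and hyperparameters here are illustrative, not the paper's exact setup):

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(transitions, n_actions, n_iters=50, gamma=0.95):
    """transitions: list of (state, action, reward, next_state) tuples from a
    fixed batch of historical charging data. Returns a regressor estimating
    Q(s, a) from the concatenated state-action vector."""
    s, a, r, s_next = (np.asarray(x) for x in zip(*transitions))
    X = np.column_stack([s, a])
    q = ExtraTreesRegressor(n_estimators=50).fit(X, r)   # Q_1 = immediate reward
    for _ in range(n_iters - 1):
        # Bootstrap targets: r + gamma * max over actions of Q_k(s', a').
        q_next = np.max(
            [q.predict(np.column_stack([s_next, np.full(len(s_next), act)]))
             for act in range(n_actions)], axis=0)
        q = ExtraTreesRegressor(n_estimators=50).fit(X, r + gamma * q_next)
    return q
```

The greedy policy then charges according to the action maximizing the learned Q at the current aggregate state.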
Graph matching consists of aligning the vertices of two unlabeled graphs in
order to maximize the shared structure across networks; when the graphs are
unipartite, this is commonly formulated as minimizing their edge disagreements.
In this paper, we address the common setting in which one of the graphs to
match is a bipartite network and one is unipartite. Commonly, the bipartite
networks are collapsed or projected into a unipartite graph, and graph matching
proceeds as in the classical setting. This potentially leads to noisy edge
estimates and loss of information. We formulate the graph matching problem
between a bipartite and a unipartite graph using an undirected graphical model,
and introduce methods to find the alignment with this model without collapsing.
We theoretically demonstrate that our methodology is consistent, and provide
non-asymptotic conditions that ensure exact recovery of the matching solution.
In simulations and real data examples, including a co-authorship-citation
network pair and brain structural and functional data, we show that our
methods yield more accurate matchings than the naive approach of collapsing
the bipartite network into a unipartite one. | [
"stat.ML",
"cs.LG",
"stat.ME"
] |
This paper generalizes the Maurer--Pontil framework of finite-dimensional
lossy coding schemes to the setting where a high-dimensional random vector is
mapped to an element of a compact set of latent representations in a
lower-dimensional Euclidean space, and the reconstruction map belongs to a
given class of nonlinear maps. Under this setup, which encompasses a broad
class of unsupervised representation learning problems, we establish a
connection to approximate generative modeling under structural constraints
using tools from the theory of optimal transportation. Next, we consider the
problem of learning a coding scheme on the basis of a finite collection of
training samples and present generalization bounds that hold with high
probability. We then illustrate the general theory in the setting where the
reconstruction maps are implemented by deep neural nets. | [
"stat.ML",
"cs.LG"
] |
Although machine learning is increasingly applied in control approaches, only
few methods guarantee certifiable safety, which is necessary for real world
applications. These approaches typically rely on well-understood learning
algorithms, which allow formal theoretical analysis. Gaussian process
regression is a prominent example among those methods, which attracts growing
attention due to its strong Bayesian foundations. Even though many problems
regarding the analysis of Gaussian processes have a similar structure, specific
approaches are typically tailored for them individually, without strong focus
on computational efficiency. Thereby, the practical applicability and
performance of these approaches is limited. In order to overcome this issue, we
propose a novel framework called GP3, general purpose computation on graphics
processing units for Gaussian processes, which allows many of the existing
problems to be solved efficiently. By employing interval analysis, local Lipschitz
constants are computed in order to extend properties verified on a grid to
continuous state spaces. Since the computation is completely parallelizable,
the computational benefits of GPU processing are exploited in combination with
multi-resolution sampling in order to allow high resolution analysis. | [
"cs.LG",
"cs.SY",
"eess.SY",
"stat.ML"
] |
The motion-and-time analysis has been a popular research topic in operations
research, especially for analyzing work performances in manufacturing and
service operations. It is regaining attention as a continuous improvement tool
for lean manufacturing and smart factories. This paper develops a framework for
data-driven analysis of work motions and studies their correlations to work
speeds or execution rates, using data collected from modern motion sensors. The
past analyses largely relied on manual steps involving time-consuming
stop-watching and video-taping, followed by manual data analysis. While modern
sensing devices have automated the collection of motion data, the motion
analytics that transform the new data into knowledge are largely
underdeveloped. Unsolved technical questions include: How the motion and time
information can be extracted from the motion sensor data, how work motions and
execution rates are statistically modeled and compared, and what are the
statistical correlations of motions to the rates? In this paper, we develop a
novel mathematical framework for motion and time analysis with motion sensor
data, by defining new mathematical representation spaces of human motions and
execution rates and by developing statistical tools on these new spaces. This
methodological research is demonstrated using five use cases applied to
manufacturing motion data. | [
"cs.CV",
"cs.LG",
"math.OC"
] |
Plant root research can provide a way to attain stress-tolerant crops that
produce greater yield in a diverse array of conditions. Phenotyping roots in
soil is often challenging due to the roots being difficult to access and the
use of time-consuming manual methods. Rhizotrons allow visual inspection of
root growth through transparent surfaces. Agronomists currently manually label
photographs of roots obtained from rhizotrons using a line-intersect method to
obtain root length density and rooting depth measurements which are essential
for their experiments. We investigate the effectiveness of an automated image
segmentation method based on the U-Net Convolutional Neural Network (CNN)
architecture to enable such measurements. We construct a dataset of 50 annotated
Chicory (Cichorium intybus L.) root images, which we use to train, validate and
test the system and compare against a baseline built using the Frangi
vesselness filter. We obtain metrics using manual annotations and
line-intersect counts. Our results on the held out data show our proposed
automated segmentation system to be a viable solution for detecting and
quantifying roots. We evaluate our system using 867 images for which we have
obtained line-intersect counts, attaining a Spearman rank correlation of 0.9748
and an $r^2$ of 0.9217. We also achieve an $F_1$ of 0.7 when comparing the
automated segmentation to the manual annotations, with our system producing
higher-quality segmentations than the manual annotations over large portions
of the images. | [
"cs.CV"
] |
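For readers unfamiliar with the architecture, here is a minimal U-Net-style encoder-decoder in PyTorch; this is a generic sketch with assumed channel sizes, not the trained system from the paper:

import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3x3 convolutions with ReLU, as in the original U-Net.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc1, self.enc2 = conv_block(3, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)  # 64 = 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)              # full-resolution features
        e2 = self.enc2(self.pool(e1))  # downsampled features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)           # per-pixel class logits

logits = TinyUNet()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 2, 64, 64])

The skip connection is what lets the decoder recover thin structures such as roots at full resolution.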
Ultra-high resolution image segmentation has attracted increasing interest in
recent years due to its real-world applications. In this paper, we improve upon
the widely used high-resolution image segmentation pipeline, in which an ultra-high
resolution image is partitioned into regular patches for local segmentation and
then the local results are merged into a high-resolution semantic mask. In
particular, we introduce a novel locality-aware contextual correlation based
segmentation model to process local patches, where the relevance between a local
patch and its various contexts is jointly and complementarily utilized to
handle the semantic regions with large variations. Additionally, we present a
contextual semantics refinement network that associates the local segmentation
result with its contextual semantics, and thus is endowed with the ability of
reducing boundary artifacts and refining mask contours during the generation of
final high-resolution mask. Furthermore, in comprehensive experiments, we
demonstrate that our model outperforms other state-of-the-art methods on public
benchmarks. Our released codes are available at
https://github.com/liqiokkk/FCtL. | [
"cs.CV"
] |
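The partition-and-merge pipeline that the paper builds on reduces to a few lines; the sketch below assumes a hypothetical segment_patch callable and non-overlapping tiles, a simplification of the locality-aware model described above:

import numpy as np

def segment_ultra_high_res(image, segment_patch, patch=512):
    # Partition the image into regular patches, run local segmentation on
    # each, and merge the local results into one high-resolution mask.
    h, w = image.shape[:2]
    mask = np.zeros((h, w), dtype=np.int64)
    for top in range(0, h, patch):
        for left in range(0, w, patch):
            tile = image[top:top + patch, left:left + patch]
            # In the paper, the local model also sees surrounding context
            # here; we call a plain per-patch segmenter instead.
            mask[top:top + patch, left:left + patch] = segment_patch(tile)
    return mask

# Toy usage with a dummy per-patch segmenter.
dummy = lambda tile: (tile[..., 0] > 128).astype(np.int64)
result = segment_ultra_high_res(
    np.random.randint(0, 256, (1024, 1536, 3)), dummy)
print(result.shape)  # (1024, 1536)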
The elementary operation of cropping underpins nearly every computer vision
system, ranging from data augmentation and translation invariance to
computational photography and representation learning. This paper investigates
the subtle traces introduced by this operation. For example, despite
refinements to camera optics, lenses will leave behind certain clues, notably
chromatic aberration and vignetting. Photographers also leave behind other
clues relating to image aesthetics and scene composition. We study how to
detect these traces, and investigate the impact that cropping has on the image
distribution. While our aim is to dissect the fundamental impact of spatial
crops, there are also a number of practical implications to our work, such as
revealing faulty photojournalism and equipping neural network researchers with
a better understanding of shortcut learning. Code is available at
https://github.com/basilevh/dissecting-image-crops. | [
"cs.CV"
] |
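As a toy illustration of how supervision for such traces can be generated, one can sample random crops and label each by the grid cell of its center in the original frame; vignetting and chromatic aberration make that cell predictable. The setup below is hypothetical, not the paper's exact training recipe:

import random

def crop_with_location_label(image_w, image_h, crop, grid=4):
    # Sample a random square crop and label it by which grid cell of the
    # original frame its center falls into.
    x = random.randint(0, image_w - crop)
    y = random.randint(0, image_h - crop)
    cx, cy = x + crop // 2, y + crop // 2
    label = (cy * grid // image_h) * grid + (cx * grid // image_w)
    return (x, y, crop, crop), label  # crop box and grid-cell class in [0, 15]

box, label = crop_with_location_label(4000, 3000, 512)
print(box, label)

A classifier trained on such pairs succeeds only to the extent that cropping leaves recoverable traces, which is precisely what the paper measures.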
Cross-resolution face recognition (CRFR), which is important in intelligent
surveillance and biometric forensics, refers to the problem of matching a
low-resolution (LR) probe face image against high-resolution (HR) gallery face
images. Existing shallow learning-based and deep learning-based methods focus
on mapping the HR-LR face pairs into a joint feature space where the resolution
discrepancy is mitigated. However, few works consider how to extract and
utilize the intermediate discriminative features from the noisy LR query faces
to further mitigate the resolution discrepancy due to the resolution
limitations. In this study, we aim to fully exploit the multi-level deep
convolutional neural network (CNN) feature set for robust CRFR. In particular,
our contributions are threefold. (i) To learn more robust and discriminative
features, we adaptively fuse the contextual features from different
layers. (ii) To fully exploit these contextual features, we design a feature
set-based representation learning (FSRL) scheme to collaboratively represent
the hierarchical features for more accurate recognition. Moreover, FSRL
utilizes the primitive form of feature maps to keep the latent structural
information, especially in noisy cases. (iii) To further promote the
recognition performance, we fuse the hierarchical recognition outputs
from different stages. Meanwhile, the discriminability from different scales
can also be fully integrated. By exploiting these advantages, the proposed
method delivers both accurate and efficient recognition. Experimental results on several face
datasets have verified the superiority of the presented algorithm to the other
competitive CRFR approaches. | [
"cs.CV"
] |
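Contribution (i), adaptive fusion of features from different layers, can be sketched with learnable softmax weights over projected and resized feature maps; the mechanism below is a generic assumption for illustration, not the exact FSRL formulation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveLayerFusion(nn.Module):
    # Learn a convex combination of feature maps taken from several CNN
    # layers, after projecting them to a common width and spatial size.
    def __init__(self, in_channels, out_channels=128):
        super().__init__()
        self.proj = nn.ModuleList(
            [nn.Conv2d(c, out_channels, 1) for c in in_channels])
        self.logits = nn.Parameter(torch.zeros(len(in_channels)))

    def forward(self, feats, size=(7, 7)):
        w = torch.softmax(self.logits, dim=0)  # adaptive per-layer weights
        maps = [F.interpolate(p(f), size=size, mode='bilinear',
                              align_corners=False)
                for p, f in zip(self.proj, feats)]
        return sum(wi * m for wi, m in zip(w, maps))

fusion = AdaptiveLayerFusion([64, 128, 256])
feats = [torch.randn(1, c, s, s) for c, s in [(64, 28), (128, 14), (256, 7)]]
print(fusion(feats).shape)  # torch.Size([1, 128, 7, 7])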
Recent progress of self-supervised visual representation learning has
achieved remarkable success on many challenging computer vision benchmarks.
However, whether these techniques can be used for domain adaptation has not
been explored. In this work, we propose a generic method for self-supervised
domain adaptation, using object recognition and semantic segmentation of urban
scenes as use cases. Focusing on simple pretext/auxiliary tasks (e.g. image
rotation prediction), we assess different learning strategies to improve domain
adaptation effectiveness by self-supervision. Additionally, we propose two
complementary strategies to further boost the domain adaptation accuracy on
semantic segmentation within our method, consisting of prediction layer
alignment and batch normalization calibration. The experimental results show
adaptation levels comparable to the most studied domain adaptation methods,
thus establishing self-supervision as a new alternative for domain adaptation.
The code is available at https://github.com/Jiaolong/self-supervised-da. | [
"cs.CV",
"cs.LG"
] |
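The rotation-prediction pretext task mentioned above is straightforward to reproduce; here is a minimal PyTorch sketch with an assumed toy backbone, not the paper's training code:

import torch
import torch.nn as nn

def rotation_batch(images):
    # Rotate each image by 0/90/180/270 degrees; the rotation index is the
    # free self-supervised label.
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

backbone = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
rot_head = nn.Linear(16, 4)  # auxiliary 4-way rotation classifier

images = torch.randn(8, 3, 64, 64)
x, y = rotation_batch(images)
loss = nn.functional.cross_entropy(rot_head(backbone(x)), y)
loss.backward()  # gradients train the shared backbone on unlabeled data

Because the labels come for free, the same loss can be computed on unlabeled target-domain images, which is what makes the task usable for domain adaptation.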
Unsupervised domain adaptation for object detection is a challenging problem
with many real-world applications. Unfortunately, it has received much less
attention than supervised object detection. Models that try to address this
task tend to suffer from a shortage of annotated training samples. Moreover,
existing feature alignment methods are not sufficient to learn
domain-invariant representations. To address these limitations, we propose a
novel augmented feature alignment network (AFAN) which integrates intermediate
domain image generation and domain-adversarial training into a unified
framework. An intermediate domain image generator is proposed to enhance
feature alignments by domain-adversarial training with automatically generated
soft domain labels. The synthetic intermediate domain images progressively
bridge the domain divergence and augment the annotated source domain training
data. A feature pyramid alignment is designed and the corresponding feature
discriminator is used to align multi-scale convolutional features of different
semantic levels. Last but not least, we introduce a region feature alignment
and an instance discriminator to learn domain-invariant features for object
proposals. Our approach significantly outperforms the state-of-the-art methods
on standard benchmarks for both similar and dissimilar domain adaptations.
Further extensive experiments verify the effectiveness of each component and
demonstrate that the proposed network can learn domain-invariant
representations. | [
"cs.CV"
] |
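Domain-adversarial training of the kind AFAN integrates is commonly implemented with a gradient reversal layer; the sketch below shows that standard mechanism in PyTorch and is not AFAN itself:

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; multiplies gradients by -lambda in the
    # backward pass, so the feature extractor learns to fool the
    # domain discriminator while the discriminator learns to separate domains.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

features = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
domain_clf = nn.Linear(32, 2)  # source vs. target discriminator

x = torch.randn(4, 10)
domain = torch.tensor([0, 0, 1, 1])  # soft domain labels could be used here
f = GradReverse.apply(features(x), 1.0)
loss = nn.functional.cross_entropy(domain_clf(f), domain)
loss.backward()  # discriminator descends, feature extractor ascends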
Generative adversarial networks (GAN) approximate a target data distribution
by jointly optimizing an objective function through a "two-player game" between
a generator and a discriminator. Despite their empirical success, however, two
very basic questions on how well they can approximate the target distribution
remain unanswered. First, it is not known how restricting the discriminator
family affects the approximation quality. Second, while a number of different
objective functions have been proposed, we do not understand when convergence
to the global minima of the objective function leads to convergence to the
target distribution under various notions of distributional convergence.
In this paper, we address these questions in a broad and unified setting by
defining a notion of adversarial divergences that includes a number of recently
proposed objective functions. We show that if the objective function is an
adversarial divergence with some additional conditions, then using a restricted
discriminator family has a moment-matching effect. Additionally, we show that
for objective functions that are strict adversarial divergences, convergence in
the objective function implies weak convergence, thus generalizing previous
results. | [
"cs.LG",
"stat.ML"
] |
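To make the notion concrete, the class of objectives being unified can be written roughly as follows; this is the standard formulation from the adversarial-divergence literature, and the notation may differ from the paper's:

\[
  \tau(\mu \,\|\, \nu) \;=\; \sup_{f \in \mathcal{F}}
  \mathbb{E}_{(x, y) \sim \mu \otimes \nu}\bigl[ f(x, y) \bigr],
\]

where $\mu$ is the target distribution, $\nu$ the generator's distribution, and $\mathcal{F}$ a function class determined jointly by the discriminator family and the objective. Restricting the discriminator family shrinks $\mathcal{F}$, which is the source of the moment-matching effect discussed above.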
Super-resolution aims at increasing the resolution and level of detail within
an image. The current state of the art in general single-image super-resolution
is held by NESRGAN+, which injects Gaussian noise after each residual layer
at training time. In this paper, we harness evolutionary methods to improve
NESRGAN+ by optimizing the noise injection at inference time. More precisely,
we use Diagonal CMA to optimize the injected noise according to a novel
criterion combining quality assessment and realism. Our results are validated
by the PIRM perceptual score and a human study. Our method outperforms NESRGAN+
on several standard super-resolution datasets. More generally, our approach can
be used to optimize any method based on noise injection. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
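The inference-time search can be pictured with a simple diagonal-Gaussian evolution strategy over the injected noise; the numpy toy below stands in for Diagonal CMA, and the scoring function is a placeholder for the quality-plus-realism criterion:

import numpy as np

def optimize_noise(score, dim, iters=50, pop=16, sigma=0.3, lr=0.5):
    # Maximize score(noise) with a diagonal Gaussian search distribution,
    # a simplified stand-in for Diagonal CMA.
    mean, std = np.zeros(dim), np.full(dim, sigma)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        cand = mean + std * rng.standard_normal((pop, dim))
        scores = np.array([score(z) for z in cand])
        elite = cand[np.argsort(scores)[-pop // 4:]]  # keep best quarter
        mean = (1 - lr) * mean + lr * elite.mean(axis=0)
        std = (1 - lr) * std + lr * elite.std(axis=0)
    return mean

# Toy criterion standing in for "quality + realism" of the SR output.
target = np.linspace(-1, 1, 32)
best = optimize_noise(lambda z: -np.sum((z - target) ** 2), dim=32)
print(np.abs(best - target).max())  # residual error of the optimized noise

Because only the noise vector is searched, the generator weights stay fixed, so the method plugs into any noise-injection network.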
Generative adversarial networks (GANs) studies have grown exponentially in
the past few years. Their impact has been seen mainly in the computer vision
field with realistic image and video manipulation, especially generation,
making significant advancements. While these computer vision advances have
garnered much attention, GAN applications have diversified across disciplines
such as time series and sequence generation. Time series generation is a
relatively new niche for GANs, and work is ongoing to develop high-quality,
diverse, and private time series data. In this paper, we review GAN variants designed for time series
related applications. We propose a taxonomy of discrete-variant GANs and
continuous-variant GANs, in which GANs deal with discrete time series and
continuous time series data. Here we showcase the latest and most popular
literature in this field; their architectures, results, and applications. We
also provide a list of the most popular evaluation metrics and their
suitability across applications. We also discuss privacy measures for these
GANs, along with further protections and directions for handling sensitive
data. We aim to clearly and concisely frame the latest state-of-the-art
research in this area and its applications to real-world
technologies. | [
"cs.LG",
"cs.AI"
] |
Among representation learning methods, low-rank representation (LRR) is one
of the most active research topics in many fields, especially in image processing and
pattern recognition. Although LRR can capture the global structure, the ability
of local structure preservation is limited because LRR lacks dictionary
learning. In this paper, we propose a novel multi-focus image fusion method
based on dictionary learning and LRR to achieve better performance in preserving
both global and local structure. Firstly, the source images are divided into several
patches by a sliding-window technique. Then, the patches are classified according
to the Histogram of Oriented Gradient (HOG) features. And the sub-dictionaries
of each class are learned by K-singular value decomposition (K-SVD) algorithm.
Secondly, a global dictionary is constructed by combining these
sub-dictionaries. Then, we use the global dictionary in LRR to obtain the LRR
coefficient vector for each patch. Finally, an l_1-norm and choose-max fusion
strategy is applied to each coefficient vector to reconstruct the fused image
from the fused LRR coefficients and the global dictionary. Experimental results
demonstrate that the proposed method can obtain state-of-the-art performance in
both qualitative and quantitative evaluations compared with several classical
and recent methods. The code of our fusion method is available at
https://github.com/hli1221/imagefusion_dllrr | [
"cs.CV"
] |
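The first stage, sliding-window patch extraction followed by HOG-based grouping, looks roughly as follows; skimage's hog supplies the descriptor, and the grouping is simplified to k-means here, whereas the paper classifies patches and then learns per-class K-SVD sub-dictionaries:

import numpy as np
from skimage.feature import hog
from scipy.cluster.vq import kmeans2

def extract_patches(img, size=16, stride=8):
    # Slide a window over the image and collect overlapping patches.
    return np.array([img[r:r + size, c:c + size]
                     for r in range(0, img.shape[0] - size + 1, stride)
                     for c in range(0, img.shape[1] - size + 1, stride)])

img = np.random.rand(64, 64)
patches = extract_patches(img)
descs = np.array([hog(p, pixels_per_cell=(8, 8), cells_per_block=(1, 1))
                  for p in patches])
# Group patches by HOG descriptor; each group gets its own sub-dictionary
# (learned with K-SVD in the paper, omitted here).
_, classes = kmeans2(descs, k=4, minit='++')
print(patches.shape, np.bincount(classes))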