text (string, 29 to 3.31k characters) | label (sequence of 1 to 11 category tags)
---|---|
Compression is at the heart of effective representation learning. However,
lossy compression is typically achieved through simple parametric models like
Gaussian noise to preserve analytic tractability, and the limitations this
imposes on learning are largely unexplored. Further, the Gaussian prior
assumptions in models such as variational autoencoders (VAEs) provide only an
upper bound on the compression rate in general. We introduce a new noise
channel, \emph{Echo noise}, that admits a simple, exact expression for mutual
information for arbitrary input distributions. The noise is constructed in a
data-driven fashion that does not require restrictive distributional
assumptions. With its complex encoding mechanism and exact rate regularization,
Echo leads to improved bounds on log-likelihood and dominates $\beta$-VAEs
across the achievable range of rate-distortion trade-offs. Further, we show
that Echo noise can outperform flow-based methods without the need to train
additional distributional transformations. | [
"cs.LG",
"cs.IT",
"math.IT",
"stat.ML"
] |
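The Echo construction can be made concrete with a short sketch. Below is a minimal, illustrative version (our reading of the abstract, not the authors' reference code) that builds the noise by composing encodings of other samples in the batch with geometrically shrinking diagonal scales, and computes the exact rate that a diagonal channel admits:

    import torch

    def echo_noise(fx, sx, steps=16):
        # fx: encoder outputs f(x), shape (B, D); sx: diagonal scales s(x) in (0, 1)
        # Compose encodings of other batch samples with geometrically shrinking
        # scales; `steps` truncates the infinite composition.
        eps = torch.zeros_like(fx)
        scale = torch.ones_like(sx)
        for _ in range(steps):
            perm = torch.randperm(fx.size(0))
            eps = eps + scale * fx[perm]
            scale = scale * sx[perm]
        return fx + sx * eps  # z = f(x) + s(x) * echo noise

    def echo_rate(sx):
        # For a diagonal channel the mutual information is exact:
        # I(x; z) = -E[sum_i log |s_i(x)|], with no distributional assumptions.
        return -torch.log(sx.abs() + 1e-8).sum(dim=1).mean()

The rate term plays the role that the KL regularizer plays in a $\beta$-VAE objective, but it is exact rather than an upper bound.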
While humans easily recognize relations between data from different domains
without any supervision, learning to automatically discover them is in general
very challenging and needs many ground-truth pairs that illustrate the
relations. To avoid costly pairing, we address the task of discovering
cross-domain relations given unpaired data. We propose a method based on
generative adversarial networks that learns to discover relations between
different domains (DiscoGAN). Using the discovered relations, our proposed
network successfully transfers style from one domain to another while
preserving key attributes such as orientation and face identity. Source code
for the official implementation is publicly available at
https://github.com/SKTBrain/DiscoGAN | [
"cs.CV"
] |
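The mechanism behind DiscoGAN is compact enough to sketch. A minimal, illustrative rendering of the training losses, assuming two generators (G_ab, G_ba) and two sigmoid-output discriminators (D_a, D_b) as in a standard GAN setup:

    import torch
    import torch.nn.functional as F

    def discogan_losses(x_a, x_b, G_ab, G_ba, D_a, D_b):
        fake_b, fake_a = G_ab(x_a), G_ba(x_b)
        # reconstruction: mapping A -> B -> A should recover the original image,
        # which is what lets the relation be discovered without paired data
        recon_a, recon_b = G_ba(fake_b), G_ab(fake_a)
        l_recon = F.mse_loss(recon_a, x_a) + F.mse_loss(recon_b, x_b)
        # adversarial: each translated image must fool the target-domain critic
        l_gan = F.binary_cross_entropy(D_b(fake_b), torch.ones_like(D_b(fake_b))) \
              + F.binary_cross_entropy(D_a(fake_a), torch.ones_like(D_a(fake_a)))
        return l_recon + l_gan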
Despite significant progress in a variety of vision-and-language problems,
developing a method capable of asking intelligent, goal-oriented questions
about images has proven to be an elusive challenge. Towards this end, we
propose a Deep Reinforcement Learning framework based on three new intermediate
rewards, namely goal-achieved, progressive and informativeness that encourage
the generation of succinct questions, which in turn uncover valuable
information towards the overall goal. By directly optimizing for questions that
work quickly towards fulfilling the overall goal, we avoid the tendency of
existing methods to generate long series of inane queries that add little
value. We evaluate our model on the GuessWhat?! dataset and show that the
resulting questions can help a standard Guesser identify a specific object in
an image at a much higher success rate. | [
"cs.CV",
"cs.AI",
"cs.CL"
] |
Aggregating features in terms of different convolutional blocks or contextual
embeddings has been proven to be an effective way to strengthen feature
representations for semantic segmentation. However, most of the current popular
network architectures tend to ignore the misalignment issues during the feature
aggregation process caused by 1) step-by-step downsampling operations, and 2)
indiscriminate contextual information fusion. In this paper, we explore the
principles of addressing such feature misalignment issues and propose
Feature-Aligned Segmentation Networks (AlignSeg). AlignSeg consists of
two primary modules, i.e., the Aligned Feature Aggregation (AlignFA) module and
the Aligned Context Modeling (AlignCM) module. First, AlignFA adopts a simple
learnable interpolation strategy to learn transformation offsets of pixels,
which can effectively relieve the feature misalignment issue caused by
multiresolution feature aggregation. Second, with the contextual embeddings in
hand, AlignCM enables each pixel to choose its own contextual information in
an adaptive manner, making the contextual embeddings better aligned to provide
appropriate guidance. We validate the effectiveness of our
AlignSeg network with extensive experiments on Cityscapes and ADE20K, achieving
new state-of-the-art mIoU scores of 82.6% and 45.95%, respectively. Our source
code will be made available. | [
"cs.CV",
"eess.IV"
] |
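The "simple learnable interpolation strategy" of AlignFA can be pictured as predicting a 2D offset per pixel and resampling the upsampled feature map accordingly. A minimal sketch under that assumption (illustrative, not the authors' code):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AlignedUpsample(nn.Module):
        def __init__(self, channels):
            super().__init__()
            # predict a per-pixel 2D offset from the concatenated features
            self.offset = nn.Conv2d(2 * channels, 2, kernel_size=3, padding=1)

        def forward(self, low, high):
            # low: high-resolution shallow feature; high: low-resolution deep feature
            up = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                               align_corners=False)
            delta = self.offset(torch.cat([low, up], dim=1))       # (B, 2, H, W)
            b, _, h, w = delta.shape
            ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                                    torch.linspace(-1, 1, w), indexing="ij")
            base = torch.stack([xs, ys], dim=-1).expand(b, h, w, 2).to(delta.device)
            # shift each sampling location by the learned offset before resampling
            aligned = F.grid_sample(up, base + delta.permute(0, 2, 3, 1),
                                    align_corners=False)
            return aligned + low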
3D generative shape modeling is a fundamental research area in computer
vision and interactive computer graphics, with many real-world applications.
This paper investigates the novel problem of generating 3D shape point cloud
geometry from a symbolic part tree representation. In order to learn such a
conditional shape generation procedure in an end-to-end fashion, we propose a
conditional GAN "part tree"-to-"point cloud" model (PT2PC) that disentangles
the structural and geometric factors. The proposed model incorporates the part
tree condition into the architecture design by passing messages top-down and
bottom-up along the part tree hierarchy. Experimental results and user study
demonstrate the strengths of our method in generating perceptually plausible
and diverse 3D point clouds, given the part tree condition. We also propose a
novel structural measure for evaluating if the generated shape point clouds
satisfy the part tree conditions. | [
"cs.CV",
"cs.CG",
"cs.GR"
] |
Recently deep convolutional neural networks have achieved significant success
in salient object detection. However, existing state-of-the-art methods require
high-end GPUs to achieve real-time performance, which makes them hard to adapt
to low-cost or portable devices. Although generic network architectures have
been proposed to speed up inference on mobile devices, they are tailored to the
task of image classification or semantic segmentation, and struggle to capture
intra-channel and inter-channel correlations that are essential for contrast
modeling in salient object detection. Motivated by the above observations, we
design a new deep learning algorithm for fast salient object detection. The
proposed algorithm for the first time achieves competitive accuracy and high
inference efficiency simultaneously with a single CPU thread. Specifically, we
propose a novel depthwise non-local module (DNL), which implicitly models
contrast via harvesting intra-channel and inter-channel correlations in a
self-attention manner. In addition, we introduce a depthwise non-local network
architecture that incorporates both depthwise non-local modules and inverted
residual blocks. Experimental results show that our proposed network attains
very competitive accuracy on a wide range of salient object detection datasets
while achieving state-of-the-art efficiency among all existing deep learning
based algorithms. | [
"cs.CV"
] |
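To make "harvesting intra-channel correlations in a self-attention manner" concrete, here is a deliberately simplified per-channel non-local operation; the actual DNL module is engineered to avoid the N x N memory cost that this direct sketch incurs:

    import torch

    def depthwise_nonlocal(x):
        # x: (B, C, H, W); each channel's H*W positions attend to one another
        b, c, h, w = x.shape
        f = x.view(b, c, h * w)                                          # (B, C, N)
        attn = torch.softmax(f.unsqueeze(-1) * f.unsqueeze(-2), dim=-1)  # (B, C, N, N)
        out = torch.einsum("bcmn,bcn->bcm", attn, f)  # aggregate per channel
        return out.view(b, c, h, w) + x               # residual connection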
We propose a novel representation for dense pixel-wise estimation tasks using
CNNs that boosts accuracy and reduces training time, by explicitly exploiting
joint coarse-and-fine reasoning. The coarse reasoning is performed over a
discrete classification space to obtain a general rough solution, while the
fine details of the solution are obtained over a continuous regression space.
In our approach both components are jointly estimated, which proved to be
beneficial for improving estimation accuracy. Additionally, we propose a new
network architecture, which combines coarse and fine components by treating the
fine estimation as a refinement built on top of the coarse solution, and
therefore adding details to the general prediction. We apply our approach to
the challenging problem of optical flow estimation and empirically validate it
against state-of-the-art CNN-based solutions trained from scratch and tested on
large optical flow datasets. | [
"cs.CV"
] |
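The joint coarse-and-fine idea reduces to a simple combination rule per pixel: a soft classification over discrete bins yields the rough solution, and a continuous residual refines it. A minimal sketch for one scalar output channel (e.g., one flow component); the bin parameterization is an assumption:

    import torch

    def coarse_to_fine(logits, residual, bin_centers):
        # logits: (B, K, H, W) over K discrete bins; residual: (B, 1, H, W),
        # expressed in bin widths; bin_centers: (K,) uniformly spaced values
        probs = torch.softmax(logits, dim=1)
        coarse = (probs * bin_centers.view(1, -1, 1, 1)).sum(dim=1, keepdim=True)
        bin_width = bin_centers[1] - bin_centers[0]
        return coarse + residual * bin_width  # fine regression adds the details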
Occlusions, complex backgrounds, scale variations and non-uniform
distributions present great challenges for crowd counting in practical
applications. In this paper, we propose a novel method using an attention model
to exploit head locations which are the most important cue for crowd counting.
The attention model estimates a probability map in which high probabilities
indicate locations where heads are likely to be present. The estimated
probability map is used to suppress non-head regions in feature maps from
several multi-scale feature extraction branches of a convolution neural network
for crowd density estimation, which makes our method robust to complex
backgrounds, scale variations and non-uniform distributions. In addition, we
introduce a relative deviation loss to complement the commonly used Euclidean
loss, improving the accuracy of sparse crowd density
estimation. Experiments on Shanghai-Tech, UCF_CC_50 and World-Expo'10 data sets
demonstrate the effectiveness of our method. | [
"cs.CV"
] |
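A relative deviation term of the kind described can be sketched as follows; the weighting and the exact form are assumptions, since the abstract does not give them:

    import torch

    def counting_loss(pred, gt, lam=0.1, eps=1.0):
        # pred, gt: (B, 1, H, W) density maps
        l2 = ((pred - gt) ** 2).mean()  # the commonly used Euclidean loss
        # relative deviation of the total count emphasizes sparse crowds,
        # where absolute errors are small but relatively large
        rel = (torch.abs(pred.sum(dim=(1, 2, 3)) - gt.sum(dim=(1, 2, 3)))
               / (gt.sum(dim=(1, 2, 3)) + eps)).mean()
        return l2 + lam * rel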
In this paper, we present novel sharp attention networks by adaptively
sampling feature maps from convolutional neural networks (CNNs) for person
re-identification (re-ID) problem. Due to the introduction of sampling-based
attention models, the proposed approach can adaptively generate sharper
attention-aware feature masks. This greatly differs from the gating-based
attention mechanism that relies on soft gating functions to select the relevant
features for person re-ID. In contrast, the proposed sampling-based attention
mechanism allows us to effectively trim irrelevant features by enforcing the
resultant feature masks to focus on the most discriminative features. It can
produce sharper attentions that are more assertive in localizing subtle
features relevant to re-identifying people across cameras. For this purpose, a
differentiable Gumbel-Softmax sampler is employed to approximate the Bernoulli
sampling to train the sharp attention networks. Extensive experimental
evaluations demonstrate the superiority of this new sharp attention model for
person re-ID over the other state-of-the-art methods on three challenging
benchmarks including CUHK03, Market-1501, and DukeMTMC-reID. | [
"cs.CV"
] |
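The Gumbel-Softmax approximation of Bernoulli sampling mentioned above is directly available in PyTorch; a minimal sketch of drawing a sharp, near-binary attention mask (illustrative parameterization):

    import torch
    import torch.nn.functional as F

    def sharp_attention_mask(logits, tau=0.5):
        # logits: (B, N) unnormalized keep/drop scores for N feature locations
        two_class = torch.stack([logits, -logits], dim=-1)  # [keep, drop]
        # hard=True returns one-hot samples with straight-through gradients,
        # which lets the Bernoulli-like mask be trained end to end
        sample = F.gumbel_softmax(two_class, tau=tau, hard=True)
        return sample[..., 0]  # 1 = keep the feature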
The pipelines of digital cameras contain a part for computational color
constancy, which aims to remove the influence of the illumination on the scene
colors. One of the best known and most widely used benchmark datasets for this
problem is the Color Checker dataset. However, due to the improper handling of
the black level in its images, this dataset has been widely misused and while
some recent publications tried to alleviate the problem, they nevertheless
erred and created additional wrong data. This paper gives a history of the
Color Checker dataset usage, it describes the origins and reasons for its
misuses, and it explains the old and new mistakes introduced in the most recent
publications that tried to handle the issue. This should, hopefully, help to
prevent similar future misuses. | [
"cs.CV"
] |
Learning on 3D scene-based point clouds has received extensive attention due
to its promising applications in many fields, and well-annotated, multisource
datasets can catalyze the development of those data-driven approaches. To
facilitate the research of this area, we present a richly-annotated 3D point
cloud dataset for multiple outdoor scene understanding tasks and also an
effective learning framework for its hierarchical segmentation task. The
dataset was generated via the photogrammetric processing on unmanned aerial
vehicle (UAV) images of the National University of Singapore (NUS) campus, and
has been annotated point-wise with both hierarchical and instance-based
labels. Based on it, we formulate a hierarchical learning problem for 3D point
cloud segmentation and propose a measure for evaluating consistency across
various hierarchies. To solve this problem, a two-stage method including
multi-task (MT) learning and hierarchical ensemble (HE) with consistency
consideration is proposed. Experimental results demonstrate the superiority of
the proposed method and potential advantages of our hierarchical annotations.
In addition, we benchmark results of semantic and instance segmentation, which
are accessible online at https://3d.dataset.site with the dataset and all
source code. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
We propose an approach to instance-level image segmentation that is built on
top of category-level segmentation. Specifically, for each pixel in a semantic
category mask, its corresponding instance bounding box is predicted using a
deep fully convolutional regression network. Thus it follows a different
pipeline from the popular detect-then-segment approaches that first predict
instances' bounding boxes, which are the current state-of-the-art in instance
segmentation. We show that, by leveraging the strength of our state-of-the-art
semantic segmentation models, the proposed method can achieve comparable or
even better results to detect-then-segment approaches. We make the following
contributions. (i) First, we propose a simple yet effective approach to
semantic instance segmentation. (ii) Second, we propose an online bootstrapping
method during training, which is critically important for achieving good
performance for both semantic category segmentation and instance-level
segmentation. (iii) As the performance of semantic category segmentation has a
significant impact on the instance-level segmentation, which is the second step
of our approach, we train fully convolutional residual networks to achieve the
best semantic category segmentation accuracy. On the PASCAL VOC 2012 dataset,
we obtain the currently best mean intersection-over-union score of 79.1%. (iv)
We also achieve state-of-the-art results for instance-level segmentation. | [
"cs.CV"
] |
Post-hoc explanation methods are gaining popularity for interpreting,
understanding, and debugging neural networks. Most analyses using such methods
explain decisions in response to inputs drawn from the test set. However, the
test set may have few examples that trigger some model behaviors, such as
high-confidence failures or ambiguous classifications. To address these
challenges, we introduce a flexible model inspection framework: Bayes-TrEx.
Given a data distribution, Bayes-TrEx finds in-distribution examples with a
specified prediction confidence. We demonstrate several use cases of
Bayes-TrEx, including revealing highly confident (mis)classifications,
visualizing class boundaries via ambiguous examples, understanding novel-class
extrapolation behavior, and exposing neural network overconfidence. We use
Bayes-TrEx to study classifiers trained on CLEVR, MNIST, and Fashion-MNIST, and
we show that this framework enables more flexible holistic model analysis than
just inspecting the test set. Code is available at
https://github.com/serenabooth/Bayes-TrEx. | [
"cs.LG",
"stat.ML"
] |
In this paper, we propose an efficient saliency map generation method, called
Group score-weighted Class Activation Mapping (Group-CAM), which adopts the
"split-transform-merge" strategy to generate saliency maps. Specifically, for
an input image, the class activations are firstly split into groups. In each
group, the sub-activations are summed and de-noised as an initial mask. After
that, the initial masks are transformed with meaningful perturbations and then
applied to preserve sub-pixels of the input (i.e., masked inputs), which are
then fed into the network to calculate the confidence scores. Finally, the
initial masks are combined in a weighted sum to form the final saliency map, where the
weights are confidence scores produced by the masked inputs. Group-CAM is
efficient yet effective, which only requires dozens of queries to the network
while producing target-related saliency maps. As a result, Group-CAM can
serve as an effective data augmentation trick for fine-tuning the networks. We
comprehensively evaluate the performance of Group-CAM on commonly used
benchmarks, including deletion and insertion tests on ImageNet-1k, and pointing
game tests on COCO2017. Extensive experimental results demonstrate that
Group-CAM achieves better visual performance than the current state-of-the-art
explanation approaches. The code is available at
https://github.com/wofmanaf/Group-CAM. | [
"cs.CV",
"cs.AI"
] |
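The split-transform-merge procedure is concrete enough to sketch end to end. Below is a minimal, illustrative rendering (the de-noising step is omitted, and the blur-based perturbation and group count are assumptions):

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def group_cam(model, image, activations, grads, target, groups=32):
        # split: class activations (weighted by pooled gradients) go into groups
        b, c, h, w = activations.shape
        weighted = grads.mean(dim=(2, 3), keepdim=True) * activations
        masks = F.relu(weighted.view(b, groups, c // groups, h, w).sum(dim=2))
        lo = masks.amin(dim=(2, 3), keepdim=True)
        hi = masks.amax(dim=(2, 3), keepdim=True)
        masks = (masks - lo) / (hi - lo + 1e-8)
        masks = F.interpolate(masks, size=image.shape[-2:], mode="bilinear",
                              align_corners=False)
        # transform: perturb the input, keeping the sub-pixels each mask selects
        blurred = F.avg_pool2d(image, 9, stride=1, padding=4)
        scores = []
        for g in range(groups):
            m = masks[:, g:g + 1]
            scores.append(model(image * m + blurred * (1 - m))[:, target])
        # merge: weight each initial mask by the confidence its masked input earns
        weights = torch.softmax(torch.stack(scores, dim=1), dim=1)
        return (weights.unsqueeze(-1).unsqueeze(-1) * masks).sum(dim=1)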
Many complex cyber-physical systems can be modeled as heterogeneous
components interacting with each other in real-time. We assume that the
correctness of each component can be specified as a requirement satisfied by
the output signals produced by the component, and that such an output guarantee
is expressed in a real-time temporal logic such as Signal Temporal Logic (STL).
In this paper, we hypothesize that a large subset of input signals for which
the corresponding output signals satisfy the output requirement can also be
compactly described using an STL formula that we call the environment
assumption. We propose an algorithm to mine such an environment assumption
using a supervised learning technique. Essentially, our algorithm treats the
environment assumption as a classifier that labels input signals as good if the
corresponding output signal satisfies the output requirement, and as bad
otherwise. Our learning method simultaneously learns the structure of the STL
formula as well as the values of the numeric constants appearing in the
formula. To achieve this, we combine a procedure to systematically enumerate
candidate Parametric STL (PSTL) formulas, with a decision-tree based approach
to learn parameter values. We demonstrate experimental results on real world
data from several domains including transportation and health care. | [
"cs.LG",
"stat.ML"
] |
Recently, deep neural networks have made significant progress and been
successfully applied in various fields, but they are found to be vulnerable to
attack instances, e.g., adversarial examples. State-of-the-art attack methods
can generate attack images by adding small perturbations to the source image.
These attack images can fool the classifier but have little impact on human
perception. Therefore, such attack instances are difficult to generate by
searching the feature space, and how to design an effective and robust
generation method has become a focus of attention.
Inspired by adversarial examples, we propose two novel generative models to
produce adaptive attack instances directly, in which conditional generative
adversarial network is adopted and distinctive strategy is designed for
training. Compared with common methods such as the Fast Gradient Sign Method,
our models reduce the generation cost and improve robustness, requiring about
one fifth of the running time to produce an attack instance. | [
"cs.LG",
"stat.ML"
] |
Credit investigation is critical for financial services. However, traditional
methods are often limited because the data they employ hardly provide
sufficient, timely, and reliable information. With the prevalence of smart
mobile devices, people's geographic footprints can now be automatically and
constantly collected, which provides an unprecedented opportunity for credit
investigation. Inspired by the observation that locations are related to
people's credit level, this research aims to enhance credit investigation
with users' geographic footprints. To this end, a two-stage credit
investigation framework is designed, namely CreditPrint. In the first stage,
CreditPrint explores regions' credit characteristics and learns a credit-aware
embedding for each region by considering both each region's individual
characteristics and cross-region relationships with graph convolutional
networks. In the second stage, a hierarchical attention-based credit assessment
network is proposed to aggregate the credit indications from a user's multiple
trajectories covering diverse regions. The results on real-life user mobility
datasets show that CreditPrint can increase the credit investigation accuracy
by up to 10% compared to baseline methods. | [
"cs.LG",
"cs.CY"
] |
Many problems in computer vision require dealing with sparse, unordered data
in the form of point clouds. Permutation-equivariant networks have become a
popular solution: they operate on individual data points with simple perceptrons
and extract contextual information with global pooling. This can be achieved
with a simple normalization of the feature maps, a global operation that is
unaffected by the order. In this paper, we propose Attentive Context
Normalization (ACN), a simple yet effective technique to build
permutation-equivariant networks robust to outliers. Specifically, we show how
to normalize the feature maps with weights that are estimated within the
network, excluding outliers from this normalization. We use this mechanism to
leverage two types of attention: local and global; by combining them, our method
is able to find the essential data points in high-dimensional space to solve a
given task. We demonstrate through extensive experiments that our approach,
which we call Attentive Context Networks (ACNe), provides a significant leap in
performance compared to the state-of-the-art on camera pose estimation, robust
fitting, and point cloud classification under noise and outliers. Source code:
https://github.com/vcg-uvic/acne. | [
"cs.CV"
] |
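Attentive Context Normalization itself is compact: normalize with a weighted mean and variance so that suspected outliers contribute little to the statistics. A minimal sketch (illustrative; ACNe combines local and global attention to produce the weights):

    import torch

    def attentive_context_norm(x, attn, eps=1e-5):
        # x: (B, N, C) per-point features; attn: (B, N, 1) nonnegative weights
        w = attn / (attn.sum(dim=1, keepdim=True) + eps)
        mean = (w * x).sum(dim=1, keepdim=True)
        var = (w * (x - mean) ** 2).sum(dim=1, keepdim=True)
        return (x - mean) / torch.sqrt(var + eps)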
To counter the Deepfake phenomenon, new Deepfake detection algorithms must be
designed; the misuse of this formidable A.I. technology brings serious
consequences to the private life of every person involved. The state of the
art abounds with solutions using deep neural networks to detect fake
multimedia content, but unfortunately these algorithms appear to be neither
generalizable nor explainable. However, traces left by Generative
Adversarial Network (GAN) engines during the creation of the Deepfakes can be
detected by analyzing ad-hoc frequencies. For this reason, in this paper we
propose a new pipeline able to detect the so-called GAN Specific Frequencies
(GSF) representing a unique fingerprint of the different generative
architectures. By employing the Discrete Cosine Transform (DCT), anomalous
frequencies are detected. The $\beta$ statistics inferred from the AC
coefficient distributions are the key to recognizing GAN-generated data.
Robustness tests were also carried out in order to demonstrate the
effectiveness of the technique using different attacks on images such as JPEG
Compression, mirroring, rotation, scaling, and the addition of random-sized
rectangles. Experiments demonstrated that the method is innovative, exceeds
the state of the art, and gives many insights in terms of explainability. | [
"cs.CV"
] |
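The frequency analysis can be sketched in a few lines: take blockwise DCTs and summarize the AC coefficient distributions. Modeling each AC frequency as a zero-centered Laplacian whose scale plays the role of the $\beta$ statistic is our reading of the abstract; the paper's estimator may differ:

    import numpy as np
    from scipy.fft import dctn

    def block_dct_betas(gray, block=8):
        # gray: 2D float image; crop to a multiple of the block size
        h, w = (d - d % block for d in gray.shape)
        blocks = gray[:h, :w].reshape(h // block, block, w // block, block)
        blocks = blocks.transpose(0, 2, 1, 3).reshape(-1, block, block)
        coeffs = np.stack([dctn(b, norm="ortho") for b in blocks])  # (N, 8, 8)
        # Laplacian scale per frequency, estimated via the mean absolute deviation
        betas = np.abs(coeffs).mean(axis=0)
        betas[0, 0] = 0.0  # ignore the DC coefficient
        return betas       # 63 AC statistics usable as a GAN fingerprint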
Transferring knowledge from a teacher neural network pretrained on the same
or a similar task to a student neural network can significantly improve the
performance of the student neural network. Existing knowledge transfer
approaches match the activations or the corresponding hand-crafted features of
the teacher and the student networks. We propose an information-theoretic
framework for knowledge transfer which formulates knowledge transfer as
maximizing the mutual information between the teacher and the student networks.
We compare our method with existing knowledge transfer methods on both
knowledge distillation and transfer learning tasks and show that our method
consistently outperforms existing methods. We further demonstrate the strength
of our method on knowledge transfer across heterogeneous network architectures
by transferring knowledge from a convolutional neural network (CNN) to a
multi-layer perceptron (MLP) on CIFAR-10. The resulting MLP significantly
outperforms the state-of-the-art methods and achieves similar performance to
the CNN with a single convolutional layer. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
In Bayesian learning of Gaussian graphical model structure, it is common to
restrict attention to certain classes of graphs and approximate the posterior
distribution by repeatedly moving from one graph to another, using MCMC or
methods such as stochastic shotgun search (SSS). I give two corrected versions
of an algorithm for non-decomposable graphs and discuss random graph
distributions, in particular as prior distributions. The main topic of the
thesis is Bayesian structure-learning with forests or trees. Restricting
attention to these graphs can be justified using theorems on random graphs. I
describe how to use the Chow–Liu algorithm and the Matrix Tree
Theorem to find the MAP forest and certain quantities in the posterior
distribution on trees. I give adapted versions of MCMC and SSS for
approximating the posterior distribution for forests and trees, and systems for
storing these graphs so that it is easy to choose moves to neighbouring graphs.
Experiments show that SSS with trees does well when the true graph is a tree or
sparse graph. SSS with trees or forests does better than SSS with decomposable
graphs in certain cases. Graph priors improve detection of hubs but need large
ranges of probabilities. MCMC on forests fails to mix well and MCMC on trees is
slower than SSS. (For a longer abstract see the thesis.) | [
"stat.ML",
"cs.LG"
] |
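For readers unfamiliar with the Chow–Liu step mentioned above: the MAP tree is a maximum-weight spanning tree under pairwise mutual-information edge weights. A minimal sketch, assuming Gaussian data so that mutual information follows from correlations (illustrative, not the thesis code):

    import numpy as np
    import networkx as nx

    def chow_liu_tree(data):
        # data: (n_samples, d) observations; Gaussian MI from pairwise correlations
        corr = np.corrcoef(data, rowvar=False)
        g = nx.Graph()
        d = corr.shape[0]
        for i in range(d):
            for j in range(i + 1, d):
                # Gaussian mutual information, guarded against perfect correlation
                mi = -0.5 * np.log(1.0 - corr[i, j] ** 2 + 1e-12)
                g.add_edge(i, j, weight=mi)
        return nx.maximum_spanning_tree(g)  # the MAP tree under a uniform prior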
Data augmentation refers to a wide range of techniques for improving model
generalization by augmenting training examples. Oftentimes such methods require
domain knowledge about the dataset at hand, spawning a plethora of recent
literature surrounding automated techniques for data augmentation. In this work
we apply one such method, bilevel optimization, to tackle the problem of graph
classification on the ogbg-molhiv dataset. Our best performing augmentation
achieved a test ROCAUC score of 77.77% with a GIN+virtual classifier, which
makes it the most effective augmenter for this classifier on the leaderboard.
This framework combines a GIN layer augmentation generator with a bias
transformation and outperforms the same classifier augmented using the
state-of-the-art FLAG augmentation. | [
"cs.LG",
"cs.AI"
] |
In this study, we demonstrate that the linear combination of atomic orbitals
(LCAO), an approximation of quantum physics introduced by Pauling and
Lennard-Jones in the 1920s, corresponds to graph convolutional networks (GCNs)
for molecules. However, GCNs involve unnecessary nonlinearity and deep
architecture. We also verify that molecular GCNs are based on a poor basis
function set compared with the standard one used in theoretical calculations or
quantum chemical simulations. From these observations, we describe the quantum
deep field (QDF), a machine learning (ML) model based on an underlying quantum
physics, in particular the density functional theory (DFT). We believe that the
QDF model can be easily understood because it can be regarded as a single
linear layer GCN. Moreover, it uses two vanilla feedforward neural networks to
learn an energy functional and a Hohenberg--Kohn map that have nonlinearities
inherent in quantum physics and the DFT. For molecular energy prediction tasks,
we demonstrated the viability of an ``extrapolation,'' in which we trained a
QDF model with small molecules, tested it with large molecules, and achieved
high extrapolation performance. This will lead to reliable and practical
applications for discovering effective materials. The implementation is
available at https://github.com/masashitsubaki/QuantumDeepField_molecule. | [
"cs.LG",
"cond-mat.mtrl-sci",
"physics.chem-ph"
] |
Traditionally, community detection in graphs can be solved using spectral
methods or posterior inference under probabilistic graphical models. Focusing
on random graph families such as the stochastic block model, recent research
has unified both approaches and identified both statistical and computational
detection thresholds in terms of the signal-to-noise ratio. By recasting
community detection as a node-wise classification problem on graphs, we can
also study it from a learning perspective. We present a novel family of Graph
Neural Networks (GNNs) for solving community detection problems in a supervised
learning setting. We show that, in a data-driven manner and without access to
the underlying generative models, they can match or even surpass the
performance of the belief propagation algorithm on binary and multi-class
stochastic block models, which is believed to reach the computational
threshold. In particular, we propose to augment GNNs with the non-backtracking
operator defined on the line graph of edge adjacencies. Our models also achieve
good performance on real-world datasets. In addition, we perform the first
analysis of the optimization landscape of training linear GNNs for community
detection problems, demonstrating that under certain simplifications and
assumptions, the loss values at local and global minima are not far apart. | [
"stat.ML"
] |
Reinforcement learning (RL) algorithms have been around for decades and
employed to solve various sequential decision-making problems. These algorithms
however have faced great challenges when dealing with high-dimensional
environments. The recent development of deep learning has enabled RL methods to
derive optimal policies for sophisticated and capable agents, which can perform
efficiently in these challenging environments. This paper addresses an
important aspect of deep RL related to situations that require multiple agents
to communicate and cooperate to solve complex tasks. A survey of different
approaches to problems related to multi-agent deep RL (MADRL) is presented,
including non-stationarity, partial observability, continuous state and action
spaces, multi-agent training schemes, and multi-agent transfer learning. The merits
and demerits of the reviewed methods will be analyzed and discussed, with their
corresponding applications explored. It is envisaged that this review provides
insights about various MADRL methods and can lead to future development of more
robust and highly useful multi-agent learning methods for solving real-world
problems. | [
"cs.LG",
"cs.AI",
"cs.MA",
"stat.ML"
] |
Detecting objects in 3D LiDAR data is a core technology for autonomous
driving and other robotics applications. Although LiDAR data is acquired over
time, most of the 3D object detection algorithms propose object bounding boxes
independently for each frame and neglect the useful information available in
the temporal domain. To address this problem, in this paper we propose a sparse
LSTM-based multi-frame 3D object detection algorithm. We use a U-Net style 3D
sparse convolution network to extract features for each frame's LiDAR
point-cloud. These features are fed to the LSTM module together with the hidden
and memory features from the last frame to predict the 3D objects in the current
frame as well as hidden and memory features that are passed to the next frame.
Experiments on the Waymo Open Dataset show that our algorithm outperforms the
traditional frame-by-frame approach by 7.5% mAP@0.7 and other multi-frame
approaches by 1.2% while using less memory and computation per frame. To the
best of our knowledge, this is the first work to use an LSTM for 3D object
detection in sparse point clouds. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Recent work in image processing suggests that operating on (overlapping)
patches in an image may lead to state-of-the-art results. This has been
demonstrated for a variety of problems including denoising, inpainting,
deblurring, and super-resolution. The work reported in [1,2] takes an extra
step forward by showing that ordering these patches to form an approximate
shortest path can be leveraged for better processing. The core idea is to apply
a simple filter on the resulting 1D smoothed signal obtained after the
patch-permutation. This idea has been also explored in combination with a
wavelet pyramid, leading eventually to a sophisticated and highly effective
regularizer for inverse problems in imaging. In this work we further study the
patch-permutation concept, and harness it to propose a new simple yet effective
regularization for image restoration problems. Our approach builds on the
classic Maximum A Posteriori probability (MAP), with a penalty function
consisting of a regular log-likelihood term and a novel permutation-based
regularization term. Using a plain 1D Laplacian, the proposed regularization
forces robust smoothness (L1) on the permuted pixels. Since the permutation
originates from patch-ordering, we propose to accumulate the smoothness terms
over all the patches' pixels. Furthermore, we take into account the found
distances between adjacent patches in the ordering, by weighting the Laplacian
outcome. We demonstrate the proposed scheme on a diverse set of problems: (i)
severe Poisson image denoising, (ii) Gaussian image denoising, (iii) image
deblurring, and (iv) single image super-resolution. In all these cases, we use
recent methods that handle these problems as initialization to our scheme. This
is followed by an L-BFGS optimization of the above-described penalty function,
leading to state-of-the-art results, and especially so for highly ill-posed
cases. | [
"cs.CV",
"62H35, 68U10, 94A08"
] |
In the era of big data, a large number of text data generated by the Internet
has given birth to a variety of text representation methods. In natural
language processing (NLP), text representation transforms text into vectors
that can be processed by computer without losing the original semantic
information. However, these methods struggle to effectively extract the
semantic features among words and to distinguish polysemy in language. Therefore,
a text feature representation model based on convolutional neural network (CNN)
and variational autoencoder (VAE) is proposed to extract the text features and
apply the obtained text feature representation on the text classification
tasks. CNN is used to extract the features of text vector to get the semantics
among words and VAE is introduced to make the text feature space more
consistent with Gaussian distribution. In addition, the output of the improved
word2vec model is employed as the input of the proposed model to distinguish
different meanings of the same word in different contexts. The experimental
results show that the proposed model outperforms alternatives under k-nearest
neighbor (KNN), random forest (RF), and support vector machine (SVM)
classification algorithms. | [
"cs.LG",
"stat.ML"
] |
For a considerable time, deep convolutional neural networks (DCNNs) have
reached human benchmark performance in object recognition. On that account,
computational neuroscience and the field of machine learning have started to
attribute numerous similarities and differences to artificial and biological
vision. This study aims towards a behavioral comparison of visual core object
recognition performance between humans and feedforward neural networks in a
classification learning paradigm on an ImageNet data set. For this purpose,
human participants (n = 65) competed in an online experiment against different
feedforward DCNNs. The designed approach, based on a typical learning process
over seven different monkey categories, included a training and validation
phase with natural examples, as well as a testing phase with novel, previously
unseen shape
and color manipulations. Analyses of accuracy revealed that humans not only
outperform DCNNs on all conditions, but also display significantly greater
robustness towards shape and most notably color alterations. Furthermore, a
precise examination of behavioral patterns highlights these findings by
revealing independent classification errors between the groups. The obtained
results show that humans contrast strongly with artificial feedforward
architectures when it comes to visual core object recognition of manipulated
images. In general, these findings are in line with a growing body of
literature, that hints towards recurrence as a crucial factor for adequate
generalization abilities. | [
"cs.CV",
"cs.LG",
"eess.IV",
"q-bio.NC"
] |
Action recognition and detection in the context of long untrimmed video
sequences has seen an increased attention from the research community. However,
annotation of complex activities is usually time consuming and challenging in
practice. Therefore, recent works started to tackle the problem of unsupervised
learning of sub-actions in complex activities. This paper proposes a novel
approach for unsupervised sub-action learning in complex activities. The
proposed method maps both visual and temporal representations to a latent space
where the sub-actions are learnt discriminatively in an end-to-end fashion. To
this end, we propose to learn sub-actions as latent concepts and a novel
discriminative latent concept learning (DLCL) module aids in learning
sub-actions. The proposed DLCL module builds on the idea of latent concepts to
learn compact representations in the latent embedding space in an unsupervised
way. The result is a set of latent vectors that can be interpreted as cluster
centers in the embedding space. The latent space itself is formed by a joint
visual and temporal embedding capturing the visual similarity and temporal
ordering of the data. Our joint learning with the discriminative latent
concept module is novel in that it eliminates the need for explicit
clustering. We validate
our approach on three benchmark datasets and show that the proposed combination
of visual-temporal embedding and discriminative latent concepts allows learning
robust action representations in an unsupervised setting. | [
"cs.CV"
] |
We propose a novel attentive sequence to sequence translator (ASST) for clip
localization in videos by natural language descriptions. We make two
contributions. First, we propose a bi-directional Recurrent Neural Network
(RNN) with a finely calibrated vision-language attentive mechanism to
comprehensively understand the free-formed natural language descriptions. The
RNN parses natural language descriptions in two directions, and the attentive
model attends every meaningful word or phrase to each frame, thereby resulting
in a more detailed understanding of video content and description semantics.
Second, we design a hierarchical architecture for the network to jointly model
language descriptions and video content. Given a video-description pair, the
network generates a matrix representation, i.e., a sequence of vectors. Each
vector in the matrix represents a video frame conditioned by the description.
The 2D representation not only preserves the temporal dependencies of frames
but also provides an effective way to perform frame-level video-language
matching. The hierarchical architecture exploits video content with multiple
granularities, ranging from subtle details to global context. Integration of
the multiple granularities yields a robust representation for multi-level
video-language abstraction. We validate the effectiveness of our ASST on two
large-scale datasets. Our ASST outperforms the state-of-the-art by $4.28\%$ in
Rank$@1$ on the DiDeMo dataset. On the Charades-STA dataset, we significantly
improve the state-of-the-art by $13.41\%$ in Rank$@1,IoU=0.5$. | [
"cs.CV"
] |
Graph Neural Networks (GNNs) have achieved a lot of success on
graph-structured data. However, it is observed that the performance of graph
neural networks does not improve as the number of layers increases. This
effect, known as over-smoothing, has been analyzed mostly in linear cases. In
this paper, we build upon previous results \cite{oono2019graph} to further
analyze the over-smoothing effect in the general graph neural network
architecture. We show when the weight matrix satisfies the conditions
determined by the spectrum of augmented normalized Laplacian, the Dirichlet
energy of embeddings will converge to zero, resulting in the loss of
discriminative power. Using Dirichlet energy to measure "expressiveness" of
embedding is conceptually clean; it leads to simpler proofs than
\cite{oono2019graph} and can handle more non-linearities. | [
"cs.LG",
"stat.ML"
] |
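For reference, the Dirichlet energy used to measure expressiveness admits a compact definition; in our notation (the paper's exact normalization, via the augmented normalized Laplacian $\tilde{\Delta}$ with degrees $d_i$ and adjacency $a_{ij}$, may differ in details):

    E(X) = \operatorname{tr}(X^\top \tilde{\Delta} X)
         = \frac{1}{2} \sum_{i,j} a_{ij}
           \Big\| \frac{x_i}{\sqrt{1 + d_i}} - \frac{x_j}{\sqrt{1 + d_j}} \Big\|_2^2

so $E(X) \to 0$ as layers stack means that neighboring embeddings collapse onto each other and the discriminative power of the representation vanishes.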
Traffic accident anticipation is a vital function of Automated Driving
Systems (ADSs) for providing a safety-guaranteed driving experience. An
accident anticipation model aims to predict accidents promptly and accurately
before they occur. Existing Artificial Intelligence (AI) models of accident
anticipation lack a human-interpretable explanation of their decision-making.
Although these models perform well, they remain a black box to ADS users,
making it difficult to gain their trust. To this end, this paper presents a Gated
Recurrent Unit (GRU) network that learns spatio-temporal relational features
for the early anticipation of traffic accidents from dashcam video data. A
post-hoc attention mechanism named Grad-CAM is integrated into the network to
generate saliency maps as the visual explanation of the accident anticipation
decision. An eye tracker captures human eye fixation points for generating
human attention maps. The explainability of network-generated saliency maps is
evaluated in comparison to human attention maps. Qualitative and quantitative
results on a public crash dataset confirm that the proposed explainable network
can anticipate an accident on average 4.57 seconds before it occurs, with
94.02% average precision. Furthermore, various post-hoc attention-based XAI
methods are evaluated and compared. It confirms that the Grad-CAM chosen by
this study can generate high-quality, human-interpretable saliency maps (with
1.42 Normalized Scanpath Saliency) for explaining the crash anticipation
decision. Importantly, results confirm that the proposed AI model, with a
human-inspired design, can outperform humans in accident anticipation. | [
"cs.CV"
] |
Recently, the expectation-maximization (EM) algorithm has been introduced as
an effective means to solve the multi-view registration problem. Most previous
methods assume that each data point is drawn from a Gaussian Mixture Model
(GMM), which makes it difficult to deal with heavy-tailed noise or outliers.
Accordingly, this paper proposes an effective registration method based on the
Student's t Mixture Model (StMM). More specifically, we assume that each data
point is drawn from one unique StMM, where its nearest neighbors (NNs) in other
point sets are regarded as the t-distribution centroids with equal covariances,
membership probabilities, and fixed degrees of freedom. Based on this
assumption, the multi-view registration problem is formulated into the
maximization of the likelihood function including all rigid transformations.
Subsequently, the EM algorithm is utilized to optimize rigid transformations as
well as the only t-distribution covariance for multi-view registration. Since
only a few model parameters need to be optimized, the proposed method is
more likely to obtain the desired registration results. Moreover, since all
t-distribution centroids can be obtained by the NN search method, multi-view
registration is very efficient. Furthermore, the t-distribution
takes heavy-tailed noise into consideration, which makes the proposed method
inherently robust to noise and outliers. Experimental results on benchmark
data sets illustrate its superior performance in robustness and
accuracy over state-of-the-art methods. | [
"cs.CV"
] |
Image segmentation aims to extract meaningful objects from a given image. For
degraded images due to occlusions, obscurities or noises, the accuracy of the
segmentation result can be severely affected. To alleviate this problem, prior
information about the target object is usually introduced. In [10], a
topology-preserving registration-based segmentation model was proposed, which
is restricted to segment 2D images only. In this paper, we propose a novel 3D
topology-preserving registration-based segmentation model with the hyperelastic
regularization, which can handle both 2D and 3D images. The existence of the
solution of the proposed model is established. We also propose a converging
iterative scheme to solve the proposed model. Numerical experiments have been
carried out on the synthetic and real images, which demonstrate the
effectiveness of our proposed model. | [
"cs.CV"
] |
Domain adaptation (DA) aims to transfer knowledge from a label-rich but
heterogeneous domain to a label-scarce domain, which alleviates the labeling
efforts and attracts considerable attention. Different from previous methods
focusing on learning domain-invariant feature representations, some recent
methods present generic semi-supervised learning (SSL) techniques and directly
apply them to DA tasks, even achieving competitive performance. One of the most
popular SSL techniques is pseudo-labeling that assigns pseudo labels for each
unlabeled data via the classifier trained by labeled data. However, it ignores
the distribution shift in DA problems and is inevitably biased to source data.
To address this issue, we propose a new pseudo-labeling framework called
Auxiliary Target Domain-Oriented Classifier (ATDOC). ATDOC alleviates the
classifier bias by introducing an auxiliary classifier for target data only, to
improve the quality of pseudo labels. Specifically, we employ the memory
mechanism and develop two types of non-parametric classifiers, i.e. the nearest
centroid classifier and neighborhood aggregation, without introducing any
additional network parameters. Despite its simplicity in a pseudo
classification objective, ATDOC with neighborhood aggregation significantly
outperforms domain alignment techniques and prior SSL techniques on a large
variety of DA benchmarks and even scarcely labeled SSL tasks. | [
"cs.CV",
"cs.LG"
] |
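The nearest centroid variant can be sketched compactly: a memory bank of target features and soft predictions yields class centroids, and pseudo labels follow from cosine similarity. A minimal, illustrative version (not the authors' implementation):

    import torch
    import torch.nn.functional as F

    def centroid_pseudo_labels(target_feats, memory_feats, memory_probs):
        # memory_feats: (M, D) stored target features; memory_probs: (M, K) their
        # soft predictions; the auxiliary classifier never sees source data
        centroids = F.normalize(memory_probs.t() @ memory_feats, dim=1)  # (K, D)
        sims = F.normalize(target_feats, dim=1) @ centroids.t()          # (N, K)
        return sims.argmax(dim=1)  # pseudo labels for the target batch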
Few-shot classification studies the problem of quickly adapting a deep
learner to understanding novel classes based on few support images. In this
context, recent research efforts have been aimed at designing more and more
complex classifiers that measure similarities between query and support images,
but have left the importance of feature embeddings seldom explored. We show that the
reliance on a sophisticated classifier is not necessary and a simple classifier
applied directly to improved feature embeddings can outperform state-of-the-art
methods. To this end, we present a new method named \textbf{DCAP} in which we
investigate how one can improve the quality of embeddings by leveraging
\textbf{D}ense \textbf{C}lassification and \textbf{A}ttentive \textbf{P}ooling.
Specifically, we propose to pre-train a learner on base classes with abundant
samples to solve dense classification problem first and then fine-tune the
learner on a bunch of randomly sampled few-shot tasks to adapt it to few-shot
scenario or the test-time scenario. We suggest pooling feature maps by applying
attentive pooling instead of the widely used global average pooling (GAP) to
prepare embeddings for few-shot classification during meta-finetuning.
Attentive pooling learns to reweight local descriptors, explaining what the
learner is looking for as evidence for decision making. Experiments on two
benchmark datasets show the proposed method to be superior in multiple few-shot
settings while being simpler and more explainable. Code is available at:
\url{https://github.com/Ukeyboard/dcap/}. | [
"cs.CV"
] |
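Attentive pooling as a drop-in replacement for GAP is simple to sketch; the parameterization below (a 1x1 convolution producing per-location weights) is an assumption for illustration:

    import torch
    import torch.nn as nn

    class AttentivePool(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.score = nn.Conv2d(channels, 1, kernel_size=1)  # per-location weight

        def forward(self, fmap):  # fmap: (B, C, H, W)
            b, c, h, w = fmap.shape
            weights = torch.softmax(self.score(fmap).view(b, 1, h * w), dim=-1)
            # reweight local descriptors instead of averaging them uniformly
            return (fmap.view(b, c, h * w) * weights).sum(dim=-1)  # (B, C)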
Generalization is one of the critical issues in machine learning. However,
traditional methods like uniform convergence are not powerful enough to fully
explain generalization because they may yield vacuous bounds even in
overparameterized linear regression regimes. An alternative solution is to
analyze the generalization dynamics to derive algorithm-dependent bounds, e.g.,
stability. Unfortunately, the stability-based bound is still far from
explaining the remarkable generalization ability of neural networks due to the
coarse-grained analysis of the signal and noise. Inspired by the observation
that neural networks show a slow convergence rate when fitting noise, we
propose decomposing the excess risk dynamics and applying stability-based bound
only on the variance part (which measures how the model performs on pure
noise). We provide two applications for the framework, including a linear case
(overparameterized linear regression with gradient descent) and a non-linear
case (matrix recovery with gradient flow). Under the decomposition framework,
the new bound accords better with the theoretical and empirical evidence
compared to the stability-based bound and uniform convergence bound. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
This paper presents an overview of the evolution of local features from
handcrafted to deep-learning-based methods, followed by a discussion of several
benchmarks and papers evaluating such local features. Our investigations are
motivated by 3D reconstruction problems, where the precise location of the
features is important. As we describe these methods, we highlight and explain
the challenges of feature extraction and potential ways to overcome them. We
first present handcrafted methods, followed by methods based on classical
machine learning and finally we discuss methods based on deep-learning. This
largely chronologically-ordered presentation will help the reader to fully
understand the topic of image and region description in order to make best use
of it in modern computer vision applications. In particular, understanding
handcrafted methods and their motivation can help to understand modern
approaches and how machine learning is used to improve the results. We also
provide references to most of the relevant literature and code. | [
"cs.CV"
] |
Caricature generation is an interesting yet challenging task. The primary
goal is to generate plausible caricatures with reasonable exaggerations given
face images. Conventional caricature generation approaches mainly use low-level
geometric transformations such as image warping to generate exaggerated images,
which lack richness and diversity in terms of content and style. The recent
progress in generative adversarial networks (GANs) makes it possible to learn
an image-to-image transformation from data, so that richer contents and styles
can be generated. However, directly applying the GAN-based models to this task
leads to unsatisfactory results because there is a large variance in the
caricature distribution. Moreover, some models require strictly paired training
data which largely limits their usage scenarios. In this paper, we propose
CariGAN to overcome these problems. Instead of training on paired data, CariGAN
learns transformations only from weakly paired images. Specifically, to enforce
reasonable exaggeration and facial deformation, facial landmarks are adopted as
an additional condition to constrain the generated image. Furthermore, an
attention mechanism is introduced to encourage our model to focus on the key
facial parts so that more vivid details in these regions can be generated.
Finally, a Diversity Loss is proposed to encourage the model to produce diverse
results to help alleviate the `mode collapse' problem of the conventional
GAN-based models. Extensive experiments on a new large-scale `WebCaricature'
dataset show that the proposed CariGAN can generate more plausible caricatures
with larger diversity compared with the state-of-the-art models. | [
"cs.CV"
] |
Real world applications such as economics and policy making often involve
solving multi-agent games with two unique features: (1) The agents are
inherently asymmetric and partitioned into leaders and followers; (2) The
agents have different reward functions, thus the game is general-sum. The
majority of existing results in this field focuses on either symmetric solution
concepts (e.g. Nash equilibrium) or zero-sum games. It remains vastly open how
to learn the Stackelberg equilibrium -- an asymmetric analog of the Nash
equilibrium -- in general-sum games efficiently from samples.
This paper initiates the theoretical study of sample-efficient learning of
the Stackelberg equilibrium, in the bandit feedback setting where we only
observe noisy samples of the reward. We consider three representative
two-player general-sum games: bandit games, bandit-reinforcement learning
(bandit-RL) games, and linear bandit games. In all these games, we identify a
fundamental gap between the exact value of the Stackelberg equilibrium and its
estimated version using finitely many noisy samples, which can not be closed
information-theoretically regardless of the algorithm. We then establish sharp
positive results on sample-efficient learning of Stackelberg equilibrium with
value optimal up to the gap identified above, with matching lower bounds in the
dependency on the gap, error tolerance, and the size of the action spaces.
Overall, our results unveil unique challenges in learning Stackelberg
equilibria under noisy bandit feedback, which we hope could shed light on
future research on this topic. | [
"cs.LG",
"cs.AI",
"cs.GT",
"stat.ML"
] |
We propose PD-GAN, a probabilistic diverse GAN for image inpainting. Given an
input image with arbitrary hole regions, PD-GAN produces multiple inpainting
results with diverse and visually realistic content. Our PD-GAN is built upon a
vanilla GAN which generates images based on random noise. During image
generation, we modulate deep features of input random noise from coarse-to-fine
by injecting an initially restored image and the hole regions in multiple
scales. We argue that during hole filling, the pixels near the hole boundary
should be more deterministic (i.e., with higher probability trusting the
context and initially restored image to create natural inpainting boundary),
while the pixels lying in the center of the hole should enjoy more degrees of
freedom (i.e., more likely to depend on the random noise for enhancing
diversity). To this end, we propose spatially probabilistic diversity
normalization (SPDNorm) inside the modulation to model the probability of
generating a pixel conditioned on the context information. SPDNorm dynamically
balances the realism and diversity inside the hole region, making the generated
content more diverse towards the hole center and resemble neighboring image
content more towards the hole boundary. Meanwhile, we propose a perceptual
diversity loss to further empower PD-GAN for diverse content generation.
Experiments on benchmark datasets including CelebA-HQ, Places2 and Paris Street
View indicate that PD-GAN is effective for diverse and visually realistic image
restoration. | [
"cs.CV"
] |
High-order interactive features capture the correlation between different
columns and thus are promising to enhance various learning tasks on ubiquitous
tabular data. To automate the generation of interactive features, existing
works either explicitly traverse the feature space or implicitly express the
interactions via intermediate activations of some designed models. These two
kinds of methods show that there is essentially a trade-off between feature
interpretability and search efficiency. To possess both of their merits, we
propose a novel method named Feature Interaction Via Edge Search (FIVES), which
formulates the task of interactive feature generation as searching for edges on
the defined feature graph. Specifically, we first present our theoretical
evidence that motivates us to search for useful interactive features with
increasing order. Then we instantiate this search strategy by optimizing both a
dedicated graph neural network (GNN) and the adjacency tensor associated with
the defined feature graph. In this way, the proposed FIVES method simplifies
the time-consuming traversal as a typical training course of GNN and enables
explicit feature generation according to the learned adjacency tensor.
Experimental results on both benchmark and real-world datasets show the
advantages of FIVES over several state-of-the-art methods. Moreover, the
interactive features identified by FIVES are deployed on the recommender system
of Taobao, a worldwide leading e-commerce platform. Results of an online A/B
test further verify the effectiveness of the proposed method FIVES, and we
further provide FIVES as AI utilities for the customers of Alibaba Cloud. | [
"cs.LG",
"stat.ML"
] |
Bayesian optimization (BO) is an approach to globally optimizing black-box
objective functions that are expensive to evaluate. BO-powered experimental
design has found wide application in materials science, chemistry, experimental
physics, drug development, etc. This work aims to bring attention to the
benefits of applying BO in designing experiments and to provide a BO manual,
covering both methodology and software, for the convenience of anyone who wants
to apply or learn BO. In particular, we briefly explain the BO technique,
review all the applications of BO in additive manufacturing, compare and
exemplify the features of different open BO libraries, unlock new potential
applications of BO to other types of data (e.g., preferential output). This
article is aimed at readers with some understanding of Bayesian methods, but
not necessarily with knowledge of additive manufacturing; the software
performance overview and implementation instructions are instrumental for any
experimental-design practitioner. Moreover, our review in the field of additive
manufacturing highlights the current knowledge and technological trends of BO. | [
"cs.LG",
"cs.CE"
] |
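To make the manual concrete, here is a minimal BO loop with scikit-optimize, one of the open libraries of the kind compared in the article; the objective and search space are illustrative placeholders for an additive-manufacturing experiment, not taken from the paper:

```python
from skopt import gp_minimize
from skopt.space import Real

def print_quality(params):
    """Hypothetical experiment: lower is better (e.g., measured porosity)."""
    laser_power, scan_speed = params
    return (laser_power - 0.3) ** 2 + (scan_speed - 0.7) ** 2

space = [Real(0.0, 1.0, name="laser_power"),
         Real(0.0, 1.0, name="scan_speed")]

result = gp_minimize(print_quality, space,
                     n_calls=25,          # total experiment budget
                     n_initial_points=5,  # random warm-up evaluations
                     random_state=0)
print(result.x, result.fun)              # best parameters and objective value
```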
View synthesis is usually done by an autoencoder, in which the encoder maps a
source view image into a latent content code, and the decoder transforms it
into a target view image according to the condition. However, the source
contents are often not well kept in this setting, which leads to unnecessary
changes during the view translation. Although adding skip connections, as in
U-Net, alleviates the problem, it often causes failures in view
conformity. This paper proposes a new architecture by performing the
source-to-target deformation in an iterative way. Instead of simply
incorporating the features from multiple layers of the encoder, we design soft
and hard deformation modules, which warp the encoder features to the target
view at different resolutions, and feed the results to the decoder to complement
the details. Particularly, the current warping flow is not only used to align
the feature of the same resolution, but also as an approximation to coarsely
deform the high-resolution feature. Then the residual flow is estimated and
applied at the high resolution, so that the deformation is built up in a
coarse-to-fine fashion. To better constrain the model, we synthesize a rough
target view image based on the intermediate flows and their warped features.
The extensive ablation studies and the final results on two different data sets
show the effectiveness of the proposed model. | [
"cs.CV"
] |
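A sketch of the coarse-to-fine warping step described above, assuming PyTorch (torch >= 1.10 for the `indexing` argument) and flow fields given in pixel offsets; the function names are ours:

```python
import torch
import torch.nn.functional as F

def warp(feat, flow):
    """Warp features towards the target view with a flow field.
    feat: (B, C, H, W); flow: (B, 2, H, W) pixel offsets (x, y)."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys)).float().to(feat.device)      # (2, H, W)
    coords = grid.unsqueeze(0) + flow
    coords[:, 0] = 2 * coords[:, 0] / (w - 1) - 1             # normalize x
    coords[:, 1] = 2 * coords[:, 1] / (h - 1) - 1             # normalize y
    return F.grid_sample(feat, coords.permute(0, 2, 3, 1), align_corners=True)

def coarse_to_fine(flow_coarse, feat_fine):
    """Upsampled coarse flow coarsely deforms the finer feature; a
    residual flow predicted at this scale would then refine it."""
    flow_up = 2.0 * F.interpolate(flow_coarse, scale_factor=2,
                                  mode="bilinear", align_corners=True)
    return warp(feat_fine, flow_up)
```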
Point cloud learning has lately attracted increasing attention due to its
wide applications in many areas, such as computer vision, autonomous driving,
and robotics. As a dominating technique in AI, deep learning has been
successfully used to solve various 2D vision problems. However, deep learning
on point clouds is still in its infancy due to the unique challenges faced by
the processing of point clouds with deep neural networks. Recently, deep
learning on point clouds has been thriving, with numerous methods being
proposed to address different problems in this area. To stimulate future
research, this paper presents a comprehensive review of recent progress in deep
learning methods for point clouds. It covers three major tasks, including 3D
shape classification, 3D object detection and tracking, and 3D point cloud
segmentation. It also presents comparative results on several publicly
available datasets, together with insightful observations and inspiring future
research directions. | [
"cs.CV",
"cs.LG",
"cs.RO",
"eess.IV"
] |
Phishing is the simplest form of cybercrime with the objective of baiting
people into giving away delicate information such as individually recognizable
data, banking and credit card details, or even credentials and passwords. This
type of simple yet most effective cyber-attack is usually launched through
emails, phone calls, or instant messages. The credential or private data stolen
are then used to get access to critical records of the victims and can result
in extensive fraud and monetary loss. Hence, sending malicious messages to
victims is a stepping stone of the phishing procedure. A \textit{phisher}
usually sets up a deceptive website, where the victims are conned into entering
credentials and sensitive information. It is therefore important to detect
these types of malicious websites before they cause any harm to
victims. Inspired by the evolving nature of phishing websites, this paper
introduces a novel approach based on deep reinforcement learning to model and
detect malicious URLs. The proposed model is capable of adapting to the dynamic
behavior of phishing websites and thus learns the features associated with
phishing website detection. | [
"cs.LG",
"cs.CR",
"stat.ML"
] |
Automated anatomical labeling plays a vital role in coronary artery disease
diagnosing procedure. The main challenge in this problem is the large
individual variability inherited in human anatomy. Existing methods usually
rely on the position information and the prior knowledge of the topology of the
coronary artery tree, which may lead to unsatisfactory performance when the
main branches are confusing. Motivated by the wide application of the graph
neural network in structured data, in this paper, we propose a conditional
partial-residual graph convolutional network (CPR-GCN), which takes both
position and CT image into consideration, since CT image contains abundant
information such as branch size and spanning direction. Two major parts, a
Partial-Residual GCN and a conditions extractor, are included in CPR-GCN. The
conditions extractor is a hybrid model containing the 3D CNN and the LSTM,
which can extract 3D spatial image features along the branches. On the
technical side, the Partial-Residual GCN takes the position features of the
branches, with the 3D spatial image features as conditions, to predict the
label for each branch. On the mathematical side, our approach recasts
the partial differential equation (PDE) formulation into graph modeling. A dataset with
511 subjects is collected from the clinic and annotated by two experts with a
two-phase annotation process. According to the five-fold cross-validation, our
CPR-GCN yields 95.8% meanRecall, 95.4% meanPrecision and 0.955 meanF1, which
outperforms state-of-the-art approaches. | [
"cs.CV",
"eess.IV"
] |
We present a conceptually simple, flexible and effective framework for weight
generating networks. Our approach is general in that it unifies two distinct
and extremely effective modules, SENet and CondConv, into the same framework on weight
space. The method, called WeightNet, generalizes the two methods by simply
adding one more grouped fully-connected layer to the attention activation
layer. We use the WeightNet, composed entirely of (grouped) fully-connected
layers, to directly output the convolutional weight. WeightNet is easy to train
and memory-efficient, operating on the kernel space instead of the feature space.
Because of the flexibility, our method outperforms existing approaches on both
ImageNet and COCO detection tasks, achieving better Accuracy-FLOPs and
Accuracy-Parameter trade-offs. The framework on the flexible weight space has
the potential to further improve the performance. Code is available at
https://github.com/megvii-model/WeightNet. | [
"cs.CV"
] |
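A sketch of the mechanism: a pooled attention vector is pushed through a grouped fully-connected layer (implemented here as a grouped 1x1 convolution) that directly emits the convolution kernel, which is then applied per sample via grouped convolution. The reduction ratio and group count are illustrative, not the paper's settings:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightNetConvSketch(nn.Module):
    def __init__(self, cin, cout, k=3, reduction=16, groups=4):
        super().__init__()
        self.cin, self.cout, self.k = cin, cout, k
        hidden = max(cin // reduction, groups)
        hidden -= hidden % groups                      # keep divisible by groups
        self.fc1 = nn.Linear(cin, hidden)
        # grouped FC as a grouped 1x1 conv; needs cout*cin*k*k % groups == 0
        self.fc2 = nn.Conv1d(hidden, cout * cin * k * k, 1, groups=groups)

    def forward(self, x):
        b, c, h, w = x.shape
        a = torch.sigmoid(self.fc1(x.mean(dim=(2, 3))))          # (B, hidden)
        wgt = self.fc2(a.unsqueeze(-1))                          # per-sample kernels
        wgt = wgt.view(b * self.cout, self.cin, self.k, self.k)
        out = F.conv2d(x.reshape(1, b * c, h, w), wgt,
                       padding=self.k // 2, groups=b)            # batched dynamic conv
        return out.view(b, self.cout, h, w)

# y = WeightNetConvSketch(64, 64)(torch.randn(2, 64, 16, 16))  # -> (2, 64, 16, 16)
```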
Reinforcement learning algorithms are typically geared towards optimizing the
expected return of an agent. However, in many practical applications, low
variance in the return is desired to ensure the reliability of an algorithm. In
this paper, we propose on-policy and off-policy actor-critic algorithms that
optimize a performance criterion involving both mean and variance in the
return. Previous work uses the second moment of return to estimate the variance
indirectly. Instead, we use a much simpler recently proposed direct variance
estimator which updates the estimates incrementally using temporal difference
methods. Using the variance-penalized criterion, we guarantee the convergence
of our algorithm to locally optimal policies for finite state-action Markov
decision processes. We demonstrate the utility of our algorithm in tabular and
continuous MuJoCo domains. Our approach not only performs on par with
actor-critic and prior variance-penalization baselines in terms of expected
return, but also generates trajectories which have lower variance in the
return. | [
"cs.LG",
"cs.AI"
] |
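The direct variance estimator can be viewed as a second TD learner whose "reward" is the squared TD error and whose discount is gamma squared. Below is a minimal tabular sketch of that idea; step sizes and the exact per-step form in the paper may differ:

```python
import numpy as np

n_states, alpha, gamma = 10, 0.1, 0.99
V = np.zeros(n_states)   # estimates of E[G | s]
M = np.zeros(n_states)   # direct estimates of Var[G | s]

def td_update(s, r, s_next, done):
    target = r + (0.0 if done else gamma * V[s_next])
    delta = target - V[s]                       # ordinary TD error
    # Variance TD: the squared TD error acts as the reward of a second
    # value function discounted by gamma**2.
    var_target = delta ** 2 + (0.0 if done else gamma ** 2 * M[s_next])
    M[s] += alpha * (var_target - M[s])
    V[s] += alpha * delta
# The variance-penalized criterion then trades off the two estimates,
# e.g. J = V - lam * M for a penalty coefficient lam.
```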
Score-based methods represented as stochastic differential equations on a
continuous time domain have recently proven successful as a non-adversarial
generative model. Training such models relies on denoising score matching,
which can be seen as multi-scale denoising autoencoders. Here, we augment the
denoising score-matching framework to enable representation learning without
any supervised signal. GANs and VAEs learn representations by directly
transforming latent codes to data samples. In contrast, score-based
representation learning relies on a new formulation of the denoising
score-matching objective and thus encodes information needed for denoising. We
show how this difference allows for manual control of the level of detail
encoded in the representation. | [
"cs.LG",
"cs.CV"
] |
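The backbone objective here is denoising score matching; a minimal multi-scale version in PyTorch is sketched below. The paper's representation-learning variant additionally conditions the score network on an encoder output, which this sketch omits:

```python
import torch

def dsm_loss(score_net, x, sigmas):
    """Multi-scale denoising score matching (sketch).
    score_net(x_noisy, sigma) approximates the score of the perturbed data."""
    idx = torch.randint(len(sigmas), (x.shape[0],), device=x.device)
    sigma = sigmas[idx].view(-1, *([1] * (x.dim() - 1)))
    noise = torch.randn_like(x)
    x_noisy = x + sigma * noise
    target = -noise / sigma          # score of the Gaussian perturbation kernel
    pred = score_net(x_noisy, sigma)
    per_example = ((pred - target) ** 2).flatten(1).sum(dim=1)
    return (sigma.flatten() ** 2 * per_example).mean()   # lambda(sigma) = sigma^2
```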
Image inpainting is the task of filling in missing regions of a damaged or
incomplete image. In this work we tackle this problem not only by using the
available visual data but also by incorporating image semantics through the use
of generative models. Our contribution is twofold: First, we learn a data
latent space by training an improved version of the Wasserstein generative
adversarial network, for which we incorporate a new generator and discriminator
architecture. Second, the learned semantic information is combined with a new
optimization loss for inpainting whose minimization infers the missing content
conditioned by the available data. It takes into account powerful contextual
and perceptual content inherent in the image itself. The benefits include the
ability to recover large regions by accumulating semantic information even when it
is not fully present in the damaged image. Experiments show that the presented
method obtains qualitative and quantitative top-tier results in different
experimental situations and also achieves accurate photo-realism comparable to
state-of-the-art works. | [
"cs.CV"
] |
An online resource scheduling framework is proposed for minimizing the sum of
weighted task latency for all the Internet of things (IoT) users, by optimizing
offloading decision, transmission power and resource allocation in the
large-scale mobile edge computing (MEC) system. Towards this end, a deep
reinforcement learning (DRL) based solution is proposed, which includes the
following components. Firstly, a related and regularized stacked auto-encoder
(2r-SAE) with unsupervised learning is applied to perform data compression and
representation for high dimensional channel quality information (CQI) data,
which can reduce the state space for DRL. Secondly, we present an adaptive
simulated annealing based approach (ASA) as the action search method of DRL, in
which an adaptive h-mutation is used to guide the search direction and an
adaptive iteration is proposed to enhance the search efficiency during the DRL
process. Thirdly, a preserved and prioritized experience replay (2p-ER) is
introduced to assist the DRL to train the policy network and find the optimal
offloading policy. Numerical results are provided to demonstrate that the
proposed algorithm can achieve near-optimal performance while significantly
decreasing the computational time compared with existing benchmarks. | [
"cs.LG",
"cs.NI",
"stat.ML"
] |
Explaining the behaviors of deep neural networks, usually considered as black
boxes, is critical especially when they are now being adopted over diverse
aspects of human life. Taking advantage of interpretable machine learning
(interpretable ML), this paper proposes a novel tool called Catastrophic
Forgetting Dissector (or CFD) to explain catastrophic forgetting in continual
learning settings. We also introduce a new method called Critical Freezing
based on the observations of our tool. Experiments on ResNet articulate how
catastrophic forgetting happens, particularly showing which components of this
famous network are forgetting. Our new continual learning algorithm outperforms
various recent techniques by a significant margin, demonstrating the value of
the investigation. Critical Freezing not only mitigates catastrophic forgetting
but also provides explainability.
"cs.LG",
"cs.CV"
] |
We propose a planning and perception mechanism for a robot (agent), that can
only observe the underlying environment partially, in order to solve an image
classification problem. A three-layer architecture is suggested that consists
of a meta-layer that decides the intermediate goals, an action-layer that
selects local actions as the agent navigates towards a goal, and a
classification-layer that evaluates the reward and makes a prediction. We
design and implement these layers using deep reinforcement learning. A
generalized policy gradient algorithm is utilized to learn the parameters of
these layers to maximize the expected reward. Our proposed methodology is
tested on the MNIST dataset of handwritten digits, which provides us with a
level of explainability while interpreting the agent's intermediate goals and
course of action. | [
"cs.LG",
"cs.AI",
"cs.RO",
"cs.SY",
"eess.SY",
"stat.ML"
] |
Machine learning algorithms have been successfully used to approximate
nonlinear maps under weak assumptions on the structure and properties of the
maps. We present deep neural networks using dense and convolutional layers to
solve an inverse problem, where we seek to estimate parameters of a
FitzHugh-Nagumo model, which consists of a nonlinear system of ordinary
differential equations (ODEs). We employ the neural networks to approximate
reconstruction maps for model parameter estimation from observational data,
where the data comes from the solution of the ODE and takes the form of a time
series representing dynamically spiking membrane potential of a biological
neuron. We target this dynamical model because of the computational challenges
it poses in an inference setting, namely, having a highly nonlinear and
nonconvex data misfit term and permitting only weakly informative priors on
parameters. These challenges cause traditional optimization to fail and
alternative algorithms to exhibit large computational costs. We quantify the
prediction errors of model parameters obtained from the neural networks and
investigate the effects of network architectures with and without the presence
of noise in observational data. We generalize our framework for neural
network-based reconstruction maps to simultaneously estimate ODE parameters and
parameters of autocorrelated observational noise. Our results demonstrate that
deep neural networks have the potential to estimate parameters in dynamical
models and stochastic processes, and they are capable of predicting parameters
accurately for the FitzHugh-Nagumo model. | [
"stat.ML",
"cs.LG",
"math.DS"
] |
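For reference, the forward model behind this inverse problem is the FitzHugh-Nagumo system; a minimal SciPy integration with illustrative parameter values and observation noise:

```python
import numpy as np
from scipy.integrate import solve_ivp

def fhn(t, y, a, b, tau, I):
    v, w = y
    dv = v - v ** 3 / 3 - w + I        # membrane potential
    dw = (v + a - b * w) / tau         # recovery variable
    return [dv, dw]

params = (0.7, 0.8, 12.5, 0.5)         # (a, b, tau, I), illustrative values
sol = solve_ivp(fhn, (0.0, 200.0), y0=[-1.0, 1.0], args=params,
                t_eval=np.linspace(0.0, 200.0, 2000))
observed = sol.y[0] + 0.05 * np.random.randn(sol.y.shape[1])  # noisy spike train
```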
Proximal Policy Optimization (PPO) is a popular deep policy gradient
algorithm. In standard implementations, PPO regularizes policy updates with
clipped probability ratios, and parameterizes policies with either continuous
Gaussian distributions or discrete Softmax distributions. These design choices
are widely accepted, and motivated by empirical performance comparisons on
MuJoCo and Atari benchmarks.
We revisit these practices outside the regime of current benchmarks, and
expose three failure modes of standard PPO. We explain why standard design
choices are problematic in these cases, and show that alternative choices of
surrogate objectives and policy parameterizations can prevent the failure
modes. We hope that our work serves as a reminder that many algorithmic design
choices in reinforcement learning are tied to specific simulation environments.
We should not implicitly accept these choices as a standard part of a more
general algorithm. | [
"cs.LG",
"stat.ML"
] |
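For context, the standard clipped surrogate that the paper revisits can be written in a few lines (a sketch; entropy and value-function terms omitted):

```python
import torch

def ppo_clip_loss(logp, logp_old, adv, eps=0.2):
    """Clipped PPO surrogate. logp/logp_old: log-probabilities of the
    taken actions under the new/old policy; adv: advantage estimates."""
    ratio = torch.exp(logp - logp_old)
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * adv
    return -torch.min(unclipped, clipped).mean()
```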
We introduce a new model of membership query (MQ) learning, where the
learning algorithm is restricted to query points that are \emph{close} to
random examples drawn from the underlying distribution. The learning model is
intermediate between the PAC model (Valiant, 1984) and the PAC+MQ model (where
the queries are allowed to be arbitrary points).
Membership query algorithms are not popular among machine learning
practitioners. Apart from the obvious difficulty of adaptively querying
labelers, it has also been observed that querying \emph{unnatural} points leads
to increased noise from human labelers (Lang and Baum, 1992). This motivates
our study of learning algorithms that make queries that are close to examples
generated from the data distribution.
We restrict our attention to functions defined on the $n$-dimensional Boolean
hypercube and say that a membership query is local if its Hamming distance from
some example in the (random) training data is at most $O(\log(n))$. We show the
following results in this model:
(i) The class of sparse polynomials (with coefficients in $\mathbb{R}$) over $\{0,1\}^n$
is polynomial time learnable under a large class of \emph{locally smooth}
distributions using $O(\log(n))$-local queries. This class also includes the
class of $O(\log(n))$-depth decision trees.
(ii) The class of polynomial-sized decision trees is polynomial time
learnable under product distributions using $O(\log(n))$-local queries.
(iii) The class of polynomial size DNF formulas is learnable under the
uniform distribution using $O(\log(n))$-local queries in time
$n^{O(\log(\log(n)))}$.
(iv) In addition we prove a number of results relating the proposed model to
the traditional PAC model and the PAC+MQ model. | [
"cs.LG",
"cs.AI"
] |
Graph Convolutional Networks (GCNs) are increasingly adopted in large-scale
graph-based recommender systems. Training GCN requires the minibatch generator
traversing graphs and sampling the sparsely located neighboring nodes to obtain
their features. Since real-world graphs often exceed the capacity of GPU
memory, current GCN training systems keep the feature table in host memory and
rely on the CPU to collect sparse features before sending them to the GPUs.
This approach, however, puts tremendous pressure on host memory bandwidth and
the CPU. This is because the CPU needs to (1) read sparse features from memory,
(2) write features into memory as a dense format, and (3) transfer the features
from memory to the GPUs. In this work, we propose a novel GPU-oriented data
communication approach for GCN training, where GPU threads directly access
sparse features in host memory through zero-copy accesses without much CPU
help. By removing the CPU gathering stage, our method significantly reduces the
consumption of the host resources and data access latency. We further present
two important techniques to achieve high host memory access efficiency by the
GPU: (1) automatic data access address alignment to maximize PCIe packet
efficiency, and (2) asynchronous zero-copy access and kernel execution to fully
overlap data transfer with training. We incorporate our method into PyTorch and
evaluate its effectiveness using several graphs with sizes up to 111 million
nodes and 1.6 billion edges. In a multi-GPU training setup, our method is
65-92% faster than the conventional data transfer method, and can even match
the performance of all-in-GPU-memory training for some graphs that fit in GPU
memory. | [
"cs.LG"
] |
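For contrast, the conventional CPU-gather pipeline that this work replaces looks roughly as follows (a sketch; sizes are illustrative). The paper instead pins the whole table and lets GPU kernels read the sparse rows directly via zero-copy accesses, skipping all three CPU passes; stock PyTorch indexing, as shown here, still goes through the CPU:

```python
import torch

num_nodes, feat_dim = 1_000_000, 128
features = torch.randn(num_nodes, feat_dim)   # host-resident feature table
features = features.pin_memory()              # pinning is what enables zero-copy

def gather_conventional(node_ids):
    dense = features[node_ids]   # (1) CPU reads sparse rows, (2) writes dense copy
    return dense.to("cuda")      # (3) transfer over PCIe to the GPU
```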
Pseudoprogression (PsP) occurs in 20-30% of patients with glioblastoma
multiforme (GBM) after receiving the standard treatment. In the course of
post-treatment magnetic resonance imaging (MRI), PsP exhibits similarities in
shape and intensity to the true tumor progression (TTP) of GBM. These
similarities pose challenges for the differentiation of these types of
progression and hence for the selection of the appropriate clinical treatment
strategy. In this paper, we introduce DC-AL GAN, a novel feature learning
method based on deep convolutional generative adversarial network (DCGAN) and
AlexNet, to discriminate between PsP and TTP in MRI images. Due to the
adversarial relationship between the generator and the discriminator of DCGAN,
high-level discriminative features of PsP and TTP can be derived for the
discriminator with AlexNet. Also, a feature fusion scheme is used to combine
higher-layer features with lower-layer information, leading to more powerful
features that are used for effectively discriminating between PsP and TTP. The
experimental results show that DC-AL GAN achieves desirable PsP and TTP
classification performance that is superior to other state-of-the-art methods. | [
"cs.CV",
"eess.IV"
] |
Zero-shot learning is a new paradigm to classify objects from classes that
are not available at training time. Zero-shot learning (ZSL) methods have
attracted considerable attention in recent years because of their ability to
classify unseen/novel class examples. Most existing approaches to ZSL
work when all the samples from seen classes are available to train the model,
which does not suit real-life settings. In this paper, we tackle this hindrance by
developing a generative replay-based continual ZSL (GRCZSL). The proposed
method endows traditional ZSL to learn from streaming data and acquire new
knowledge without forgetting the previous tasks' gained experience. We handle
catastrophic forgetting in GRCZSL by replaying the synthetic samples of seen
classes, which have appeared in the earlier tasks. These synthetic samples are
synthesized using the trained conditional variational autoencoder (VAE) over
the immediate past task. Moreover, we only require the current and immediate
previous VAE at any time for training and testing. The proposed GRCZSL method is
developed for a single-head setting of continual learning, simulating a
real-world problem setting. In this setting, task identity is given during
training but unavailable during testing. GRCZSL performance is evaluated on
five benchmark datasets for the generalized setup of ZSL with fixed and dynamic
(incremental class) settings of continual learning. The existing class setting
presented recently in the literature is not suitable for a class-incremental
setting. Therefore, this paper proposes a new setting to address this issue.
Experimental results show that the proposed method significantly outperforms
the baseline and the state-of-the-art method and makes it more suitable for
real-world applications. | [
"cs.CV",
"cs.LG"
] |
We propose a novel framework for efficient parallelization of deep
reinforcement learning algorithms, enabling these algorithms to learn from
multiple actors on a single machine. The framework is algorithm agnostic and
can be applied to on-policy, off-policy, value based and policy gradient based
algorithms. Given its inherent parallelism, the framework can be efficiently
implemented on a GPU, allowing the usage of powerful models while significantly
reducing training time. We demonstrate the effectiveness of our framework by
implementing an advantage actor-critic algorithm on a GPU, using on-policy
experiences and employing synchronous updates. Our algorithm achieves
state-of-the-art performance on the Atari domain after only a few hours of
training. Our framework thus opens the door for much faster experimentation on
demanding problem domains. Our implementation is open-source and is made public
at https://github.com/alfredvc/paac | [
"cs.LG"
] |
Collecting large-scale medical datasets with fine-grained annotations is
time-consuming and requires experts. For this reason, weakly supervised
learning aims at optimising machine learning models using weaker forms of
annotations, such as scribbles, which are easier and faster to collect.
Unfortunately, training with weak labels is challenging and needs
regularisation. Herein, we introduce a novel self-supervised multi-scale
consistency loss, which, coupled with an attention mechanism, encourages the
segmentor to learn multi-scale relationships between objects and improves
performance. We show state-of-the-art performance on several medical and
non-medical datasets. The code used for the experiments is available at
https://vios-s.github.io/multiscale-pyag. | [
"cs.CV"
] |
Depth estimation features are helpful for 3D recognition. Commodity-grade
depth cameras are able to capture depth and color image in real-time. However,
glossy, transparent or distant surface cannot be scanned properly by the
sensor. As a result, enhancement and restoration from sensing depth is an
important task. Depth completion aims at filling the holes that sensors fail to
detect, which is still a complex task for machines to learn. Traditional
hand-tuned methods have reached their limits, while neural network based
methods tend to copy and interpolate the output from surrounding depth values.
This leads to blurred boundaries, and structures of the depth map are lost.
Consequently, our main work is to design an end-to-end network that improves
completed depth maps while maintaining edge clarity. We utilize a self-attention
mechanism, previously used in the image inpainting field, to extract more useful
information in each layer of convolution so that the complete depth map is
enhanced. In addition, we propose a boundary consistency concept to enhance the
depth map quality and structure. Experimental results validate the
effectiveness of our self-attention and boundary consistency scheme, which
outperforms previous state-of-the-art depth completion work on Matterport3D
dataset. Our code is publicly available at
https://github.com/patrickwu2/Depth-Completion | [
"cs.CV"
] |
In this paper, we focus on the semi-supervised person re-identification
(Re-ID) case, which only has the intra-camera (within-camera) labels but not
inter-camera (cross-camera) labels. In real-world applications, these
intra-camera labels can be readily captured by tracking algorithms or few
manual annotations, when compared with cross-camera labels. In this case, it is
very difficult to explore the relationships between cross-camera persons in the
training stage due to the lack of cross-camera label information. To deal with
this issue, we propose a novel Progressive Cross-camera Soft-label Learning
(PCSL) framework for the semi-supervised person Re-ID task, which can generate
cross-camera soft-labels and utilize them to optimize the network. Concretely,
we calculate an affinity matrix based on person-level features and adapt it
to produce the similarities between cross-camera persons (i.e., cross-camera
soft-labels). To exploit these soft-labels to train the network, we investigate
the weighted cross-entropy loss and the weighted triplet loss from the
classification and discrimination perspectives, respectively. Particularly, the
proposed framework alternately generates progressive cross-camera soft-labels
and gradually improves feature representations in the whole learning course.
Extensive experiments on five large-scale benchmark datasets show that PCSL
significantly outperforms the state-of-the-art unsupervised methods that employ
labeled source domains or the images generated by the GAN-based models.
Furthermore, the proposed method even has a competitive performance with
respect to deep supervised Re-ID methods. | [
"cs.CV"
] |
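The weighted cross-entropy over cross-camera soft-labels can be sketched in a few lines (our notation; the paper's exact weighting may differ):

```python
import torch
import torch.nn.functional as F

def weighted_soft_cross_entropy(logits, soft_labels):
    """logits: (B, K) classifier outputs; soft_labels: (B, K) non-negative
    cross-camera similarities, each row normalized to sum to one."""
    log_probs = F.log_softmax(logits, dim=1)
    return -(soft_labels * log_probs).sum(dim=1).mean()
```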
The extraction of a scene graph with objects as nodes and mutual
relationships as edges is the basis for a deep understanding of image content.
Despite recent advances, such as message passing and joint classification, the
detection of visual relationships remains a challenging task due to sub-optimal
exploration of the mutual interaction among the visual objects. In this work,
we propose a novel transformer formulation for scene graph generation and
relation prediction. We leverage the encoder-decoder architecture of the
transformer for rich feature embedding of nodes and edges. Specifically, we
model the node-to-node interaction with the self-attention of the transformer
encoder and the edge-to-node interaction with the cross-attention of the
transformer decoder. Further, we introduce a novel positional embedding
suitable to handle edges in the decoder. Finally, our relation prediction
module classifies the directed relation from the learned node and edge
embedding. We name this architecture the Relation Transformer Network (RTN). On
the Visual Genome and GQA datasets, we achieve overall mean improvements of 4.85%
and 3.1% points, respectively, in comparison with state-of-the-art methods. Our
experiments show that Relation Transformer can efficiently model context across
various datasets with small, medium, and large-scale relation classification. | [
"cs.CV"
] |
In Machine Learning, White Box Adversarial Attacks rely on knowing underlying
knowledge about the model attributes. This work focuses on discovering two
distinct pieces of model information: the underlying architecture and primary
training dataset. With the process in this paper, a structured set of input
probes and the output of the model become the training data for a deep
classifier. Two subdomains in Machine Learning are explored: image based
classifiers and text transformers with GPT-2. With image classification, the
focus is on exploring commonly deployed architectures and datasets available in
popular public libraries. Using a single transformer architecture with multiple
parameter scales, text generation is explored by fine-tuning on different
datasets. Each dataset explored in image and text is distinguishable from one
another. Diversity in text transformer outputs implies further research is
needed to successfully classify architecture attribution in text domain. | [
"cs.LG",
"stat.ML"
] |
Many problems in computer vision require dealing with sparse, unordered data
in the form of point clouds. Permutation-equivariant networks have become a
popular solution: they operate on individual data points with simple perceptrons
and extract contextual information with global pooling. This can be achieved
with a simple normalization of the feature maps, a global operation that is
unaffected by the order. In this paper, we propose Attentive Context
Normalization (ACN), a simple yet effective technique to build
permutation-equivariant networks robust to outliers. Specifically, we show how
to normalize the feature maps with weights that are estimated within the
network, excluding outliers from this normalization. We use this mechanism to
leverage two types of attention, local and global; by combining them, our method
is able to find the essential data points in high-dimensional space to solve a
given task. We demonstrate through extensive experiments that our approach,
which we call Attentive Context Networks (ACNe), provides a significant leap in
performance compared to the state-of-the-art on camera pose estimation, robust
fitting, and point cloud classification under noise and outliers. Source code:
https://github.com/vcg-uvic/acne. | [
"cs.CV"
] |
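The core operation, normalizing feature maps with attention-estimated weights so that outliers barely influence the statistics, can be sketched as follows (our notation):

```python
import torch

def attentive_context_norm(f, w, eps=1e-5):
    """f: (B, N, C) per-point features; w: (B, N) non-negative attention
    weights (e.g., predicted inlier scores)."""
    w = w / (w.sum(dim=1, keepdim=True) + eps)              # normalize weights
    mu = (w.unsqueeze(-1) * f).sum(dim=1, keepdim=True)     # weighted mean
    var = (w.unsqueeze(-1) * (f - mu) ** 2).sum(dim=1, keepdim=True)
    return (f - mu) / torch.sqrt(var + eps)                 # outlier-robust norm
```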
Model-free deep reinforcement learning (RL) agents can learn an effective
policy directly from repeated interactions with a black-box environment.
However in practice, the algorithms often require large amounts of training
experience to learn and generalize well. In addition, classic model-free
learning ignores the domain information contained in the state transition
tuples. Model-based RL, on the other hand, attempts to learn a model of the
environment from experience and is substantially more sample efficient, but
suffers from significantly large asymptotic bias owing to the imperfect
dynamics model. In this paper, we propose a gradient matching algorithm to
improve sample efficiency by utilizing target slope information from the
dynamics predictor to aid the model-free learner. We demonstrate this by
presenting a technique for matching the gradient information from the
model-based learner with the model-free component in an abstract
low-dimensional space and validate the proposed technique through experimental
results that demonstrate the efficacy of this approach. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Robust real-time detection and motion forecasting of traffic participants is
necessary for autonomous vehicles to safely navigate urban environments. In
this paper, we present RV-FuseNet, a novel end-to-end approach for joint
detection and trajectory estimation directly from time-series LiDAR data.
Instead of the widely used bird's eye view (BEV) representation, we utilize the
native range view (RV) representation of LiDAR data. The RV preserves the full
resolution of the sensor by avoiding the voxelization used in the BEV.
Furthermore, RV can be processed efficiently due to its compactness. Previous
approaches project time-series data to a common viewpoint for temporal fusion,
and often this viewpoint is different from where it was captured. This is
sufficient for BEV methods, but for RV methods, this can lead to loss of
information and data distortion which has an adverse impact on performance. To
address this challenge we propose a simple yet effective novel architecture,
\textit{Incremental Fusion}, that minimizes the information loss by
sequentially projecting each RV sweep into the viewpoint of the next sweep in
time. We show that our approach significantly improves motion forecasting
performance over the existing state-of-the-art. Furthermore, we demonstrate
that our sequential fusion approach is superior to alternative RV based fusion
methods on multiple datasets. | [
"cs.CV",
"cs.RO"
] |
The advancement of the Artificial Intelligence (AI) technologies makes it
possible to learn stylistic design criteria from existing maps or other visual
art and transfer these styles to make new digital maps. In this paper, we
propose a novel framework using AI for map style transfer applicable across
multiple map scales. Specifically, we identify and transfer the stylistic
elements from a target group of visual examples, including Google Maps,
OpenStreetMap, and artistic paintings, to unstylized GIS vector data through
two generative adversarial network (GAN) models. We then train a binary
classifier based on a deep convolutional neural network to evaluate whether the
style-transferred map images preserve the original map design characteristics.
Our experimental results show that GANs have great potential for multiscale map
style transfer, but many challenges remain that require future research. | [
"cs.CV",
"cs.LG",
"eess.IV",
"I.2.1; I.4.9"
] |
Lip reading has received increasing attention in recent years. This paper
focuses on the synergy of multilingual lip reading. There are as many as
7,000 languages in the world, which implies that it is impractical to train
separate lip reading models with large-scale data for each language. Although
each language has its own linguistic and pronunciation rules, the lip movements
of all languages share similar patterns due to the common structures of human
organs. Based on this idea, we try to explore the synergized learning of
multilingual lip reading in this paper, and further propose a synchronous
bidirectional learning (SBL) framework for effective synergy of multilingual
lip reading. We firstly introduce phonemes as our modeling units for the
multilingual setting here. Phonemes are more closely related with the lip
movements than the alphabet letters. At the same time, similar phonemes always
lead to similar visual patterns regardless of the target language.
Then, a novel SBL block is proposed to learn the rules for each language in a
fill-in-the-blank way. Specifically, the model has to learn to infer the target
unit given its bidirectional context, which could represent the composition
rules of phonemes for each language. To make the learning process more targeted
at each particular language, an extra task of predicting the language identity
is introduced in the learning process. Finally, a thorough comparison on LRW
(English) and LRW-1000 (Mandarin) is performed, which shows the promising
benefits from the synergized learning of different languages and also reports a
new state-of-the-art result on both datasets. | [
"cs.CV",
"cs.CL"
] |
We introduce Segment-Phrase Table (SPT), a large collection of bijective
associations between textual phrases and their corresponding segmentations.
Leveraging recent progress in object recognition and natural language
semantics, we show how we can successfully build a high-quality segment-phrase
table using minimal human supervision. More importantly, we demonstrate the
unique value unleashed by this rich bimodal resource, for both vision as well
as natural language understanding. First, we show that fine-grained textual
labels facilitate contextual reasoning that helps in satisfying semantic
constraints across image segments. This feature enables us to achieve
state-of-the-art segmentation results on benchmark datasets. Next, we show that
the association of high-quality segmentations to textual phrases aids in richer
semantic understanding and reasoning of these textual phrases. Leveraging this
feature, we motivate the problem of visual entailment and visual paraphrasing,
and demonstrate its utility on a large dataset. | [
"cs.CV"
] |
We study the problem of weakly supervised grounded image captioning. That is,
given an image, the goal is to automatically generate a sentence describing the
context of the image with each noun word grounded to the corresponding region
in the image. This task is challenging due to the lack of explicit fine-grained
region word alignments as supervision. Previous weakly supervised methods
mainly explore various kinds of regularization schemes to improve attention
accuracy. However, their performances are still far from the fully supervised
ones. One main issue that has been ignored is that the attention for generating
visually groundable words may only focus on the most discriminative parts and
cannot cover the whole object. To this end, we propose a simple yet effective
method to alleviate the issue, termed as partial grounding problem in our
paper. Specifically, we design a distributed attention mechanism to enforce the
network to aggregate information from multiple spatially different regions with
consistent semantics while generating the words. Therefore, the union of the
focused region proposals should form a visual region that encloses the object
of interest completely. Extensive experiments have demonstrated the superiority
of our proposed method compared with the state-of-the-arts. | [
"cs.CV",
"cs.MM"
] |
Minimal paths are regarded as a powerful and efficient tool for boundary
detection and image segmentation due to their global optimality and
well-established numerical solutions such as the fast marching method. In this
paper, we introduce a flexible interactive image segmentation model based on
the Eikonal partial differential equation (PDE) framework in conjunction with
region-based homogeneity enhancement. A key ingredient in the introduced model
is the construction of local geodesic metrics, which are capable of integrating
anisotropic and asymmetric edge features, implicit region-based homogeneity
features and/or curvature regularization. The incorporation of the region-based
homogeneity features into the metrics considered relies on an implicit
representation of these features, which is one of the contributions of this
work. Moreover, we also introduce a way to build simple closed contours as the
concatenation of two disjoint open curves. Experimental results prove that the
proposed model indeed outperforms state-of-the-art minimal paths-based image
segmentation approaches. | [
"cs.CV",
"cs.CG"
] |
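The isotropic special case of this Eikonal machinery is available off the shelf; below is a minimal fast-marching example with scikit-fmm (the paper's metrics are anisotropic and asymmetric, which this sketch does not capture):

```python
import numpy as np
import skfmm

speed = np.ones((128, 128))
speed[40:90, 40:90] = 0.2           # slow region: minimal paths will avoid it

phi = np.ones((128, 128))
phi[10, 10] = -1.0                  # zero level set surrounds the seed point
arrival = skfmm.travel_time(phi, speed)   # geodesic (arrival-time) map
# Backtracking the gradient of `arrival` from any point yields a minimal path.
```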
Improving sample efficiency is a key research problem in reinforcement
learning (RL), and CURL, which uses contrastive learning to extract high-level
features from raw pixels of individual video frames, is an efficient
algorithm~\citep{srinivas2020curl}. We observe that consecutive video frames in
a game are highly correlated but CURL deals with them independently. To further
improve data efficiency, we propose a new algorithm, masked contrastive
representation learning for RL, that takes the correlation among consecutive
inputs into consideration. In addition to the CNN encoder and the policy
network in CURL, our method introduces an auxiliary Transformer module to
leverage the correlations among video frames. During training, we randomly mask
the features of several frames, and use the CNN encoder and Transformer to
reconstruct them based on the context frames. The CNN encoder and Transformer
are jointly trained via contrastive learning where the reconstructed features
should be similar to the ground-truth ones while dissimilar to others. During
inference, the CNN encoder and the policy network are used to take actions, and
the Transformer module is discarded. Our method achieves consistent
improvements over CURL on $14$ out of $16$ environments from DMControl suite
and $21$ out of $26$ environments from Atari 2600 Games. The code is available
at https://github.com/teslacool/m-curl. | [
"cs.LG"
] |
In this paper, we propose a novel color constancy approach, called Bag of
Color Features (BoCF), building upon Bag-of-Features pooling. The proposed
method substantially reduces the number of parameters needed for illumination
estimation. At the same time, the proposed method is consistent with the color
constancy assumption stating that global spatial information is not relevant
for illumination estimation and local information (edges, etc.) is sufficient.
Furthermore, BoCF is consistent with color constancy statistical approaches and
can be interpreted as a learning-based generalization of many statistical
approaches. To further improve the illumination estimation accuracy, we propose
a novel attention mechanism for the BoCF model with two variants based on
self-attention. The BoCF approach and its variants achieve results competitive
with the state of the art while requiring far fewer parameters on three
benchmark datasets: ColorChecker RECommended, INTEL-TUT version 2, and NUS8. | [
"cs.CV"
] |
Recent work introduced deep kernel processes as an entirely kernel-based
alternative to NNs (Aitchison et al. 2020). Deep kernel processes flexibly
learn good top-layer representations by alternately sampling the kernel from a
distribution over positive semi-definite matrices and performing nonlinear
transformations. One deep kernel process, the deep Wishart process
(DWP), is of particular interest because its prior is equivalent to deep
Gaussian process (DGP) priors. However, inference in DWPs has not yet been
possible due to the lack of sufficiently flexible distributions over positive
semi-definite matrices. Here, we give a novel approach to obtaining flexible
distributions over positive semi-definite matrices by generalising the Bartlett
decomposition of the Wishart probability density. We use this new distribution
to develop an approximate posterior for the DWP that includes dependency across
layers. We develop a doubly-stochastic inducing-point inference scheme for the
DWP and show experimentally that inference in the DWP gives improved
performance over doing inference in a DGP with the equivalent prior. | [
"stat.ML",
"cs.LG"
] |
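For reference, the standard Bartlett decomposition being generalised here: for $W \sim \mathcal{W}_p(n, \Sigma)$ with $\Sigma = L L^\top$,

```latex
\[
  W = L A A^\top L^\top, \qquad
  A_{ii} = \sqrt{c_i}, \; c_i \sim \chi^2_{\,n-i+1}, \qquad
  A_{ij} \sim \mathcal{N}(0,1) \;\; (i > j), \qquad
  A_{ij} = 0 \;\; (i < j),
\]
```

so a flexible distribution over positive semi-definite matrices can be obtained by relaxing the fixed laws of the entries of $A$; the precise relaxation is the paper's contribution and is not reproduced here.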
Local Interpretable Model-Agnostic Explanations (LIME) is a popular method to
perform interpretability of any kind of Machine Learning (ML) model. It
explains one ML prediction at a time, by learning a simple linear model around
the prediction. The model is trained on randomly generated data points, sampled
from the training dataset distribution and weighted according to the distance
from the reference point - the one being explained by LIME. Feature selection
is applied to keep only the most important variables. LIME is widespread across
different domains, although its instability - a single prediction may obtain
different explanations - is one of its major shortcomings. This is due to the
randomness in the sampling step, as well as to the flexibility in tuning the
weights, and it leads to a lack of reliability in the retrieved explanations,
making LIME adoption problematic. In Medicine especially, clinical
professionals' trust is mandatory to determine the acceptance of an explainable
algorithm, considering the importance of the decisions at stake and the related
legal issues. In this paper, we highlight a trade-off between the explanation's
stability and adherence, namely how much it resembles the ML model. Exploiting
our innovative discovery, we propose a framework to maximise stability, while
retaining a predefined level of adherence. OptiLIME provides freedom to choose
the best adherence-stability trade-off level and more importantly, it clearly
highlights the mathematical properties of the retrieved explanation. As a
result, the practitioner is provided with tools to decide whether the
explanation is reliable, according to the problem at hand. We extensively test
OptiLIME on a toy dataset - to present visually the geometrical findings - and
a medical dataset. In the latter, we show how the method comes up with
meaningful explanations both from a medical and mathematical standpoint. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
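The instability being addressed is easy to reproduce with the reference `lime` package: two runs on the same instance can return different feature lists because the local dataset is re-sampled each time. The model and data below are placeholders:

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(data.data,
                                 feature_names=list(data.feature_names),
                                 class_names=list(data.target_names),
                                 mode="classification")
exp1 = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
exp2 = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(exp1.as_list())   # often differs from exp2.as_list() run to run
print(exp2.as_list())
```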
This paper proposes to model chaos in the ATM cash withdrawal time series of
a big Indian bank and forecast the withdrawals using deep learning methods. It
also considers the importance of day-of-the-week and includes it as a dummy
exogenous variable. We first modelled the chaos present in the withdrawal time
series by reconstructing the state space of each series using the lag and
embedding dimension found via the auto-correlation function and Cao's method,
respectively. This process converts the uni-variate time series into a multivariate time
series. The "day-of-the-week" is converted into seven features with the help of
one-hot encoding. Then these seven features are augmented to the multivariate
time series. For forecasting the future cash withdrawals, we use the following
algorithms: ARIMA, random forest (RF), support vector regressor (SVR), multi-layer
perceptron (MLP), group method of data handling (GMDH), general regression
neural network (GRNN), long short-term memory (LSTM) neural network, and 1-dimensional
convolutional neural network (1D CNN). We considered a daily cash withdrawal data set
from an Indian commercial bank. After modelling chaos and adding exogenous
features to the data set, we observed improvements in the forecasting for all
models. Even though the random forest (RF) yielded better Symmetric Mean
Absolute Percentage Error (SMAPE) value, deep learning algorithms, namely LSTM
and 1D CNN, showed similar performance compared to RF, based on t-test. | [
"cs.LG",
"cs.AI",
"stat.ML",
"68T07",
"I.2; I.2.m"
] |
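A sketch of the feature construction described above: a lag (Takens) embedding of the withdrawal series plus one-hot day-of-week dummies. The lag and embedding dimension are placeholders for the values found via the auto-correlation function and Cao's method:

```python
import numpy as np
import pandas as pd

def build_features(series, dates, lag=1, emb_dim=5):
    """series: pd.Series of daily withdrawals; dates: matching datetimes."""
    X = np.column_stack([series.shift(lag * i) for i in range(emb_dim)])
    dow = pd.get_dummies(pd.Series(dates).dt.dayofweek, prefix="dow")
    X = np.hstack([X, dow.to_numpy(dtype=float)])   # append 7 one-hot columns
    y = series.shift(-1).to_numpy()                 # next-day withdrawal target
    keep = ~np.isnan(X).any(axis=1) & ~np.isnan(y)
    return X[keep], y[keep]
```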
Low precision networks in the reinforcement learning (RL) setting are
relatively unexplored because of the limitations of binary activations for
function approximation. Here, in the discrete action ATARI domain, we
demonstrate, for the first time, that low precision policy distillation from a
high precision network provides a principled, practical way to train an RL
agent. As an application, on 10 different ATARI games, we demonstrate real-time
end-to-end game playing on low-power neuromorphic hardware by converting a
sequence of game frames into discrete actions. | [
"cs.LG",
"cs.NE",
"stat.ML"
] |
In this paper, we propose a novel video super-resolution method that aims at
generating high-fidelity high-resolution (HR) videos from low-resolution (LR)
ones. Previous methods predominantly leverage temporal neighbor frames to
assist the super-resolution of the current frame. Those methods achieve limited
performance as they suffer from the challenge of spatial frame alignment and
the lack of useful information from similar LR neighbor frames. In contrast, we
devise a cross-frame non-local attention mechanism that allows video
super-resolution without frame alignment, making the model more robust to large
motions in the video. In addition, to acquire information beyond neighbor
frames, we design a novel memory-augmented attention module to memorize general
video details during the super-resolution training. Experimental results
indicate that our method can achieve superior performance on large motion
videos comparing to the state-of-the-art methods without aligning frames. Our
source code will be released. | [
"cs.CV"
] |
Pancreatic ductal adenocarcinoma (PDAC) is the third most common cause of
cancer death in the United States. Predicting tumors like PDACs (including both
classification and segmentation) from medical images by deep learning is
becoming a growing trend, but usually a large number of annotated data are
required for training, which is very labor-intensive and time-consuming. In
this paper, we consider a partially supervised setting, where cheap image-level
annotations are provided for all the training data, and the costly per-voxel
annotations are only available for a subset of them. We propose an Inductive
Attention Guidance Network (IAG-Net) to jointly learn a global image-level
classifier for normal/PDAC classification and a local voxel-level classifier
for semi-supervised PDAC segmentation. We instantiate both the global and the
local classifiers by multiple instance learning (MIL), where the attention
guidance, indicating roughly where the PDAC regions are, is the key to bridging
them: For global MIL based normal/PDAC classification, attention serves as a
weight for each instance (voxel) during MIL pooling, which eliminates the
distraction from the background; For local MIL based semi-supervised PDAC
segmentation, the attention guidance is inductive, which not only provides
bag-level pseudo-labels to training data without per-voxel annotations for MIL
training, but also acts as a proxy of an instance-level classifier.
Experimental results show that our IAG-Net boosts PDAC segmentation accuracy by
more than 5% compared with the state-of-the-art. | [
"cs.CV"
] |
We propose a Deep Variational Clustering (DVC) framework for unsupervised
representation learning and clustering of large-scale medical images. DVC
simultaneously learns the multivariate Gaussian posterior through the
probabilistic convolutional encoder and the likelihood distribution with the
probabilistic convolutional decoder, and optimizes cluster label assignment.
Here, the learned multivariate Gaussian posterior captures the latent
distribution of a large set of unlabeled images. Then, we perform unsupervised
clustering on top of the variational latent space using a clustering loss. In
this approach, the probabilistic decoder helps to prevent the distortion of
data points in the latent space and to preserve the local structure of data
generating distribution. The training process can be considered as a
self-training process that refines the latent space while iteratively
optimizing cluster assignments. We evaluated our proposed framework on three
public datasets that represented different medical imaging modalities. Our
experimental results show that our proposed framework generalizes better across
different datasets. It achieves compelling results on several medical imaging
benchmarks. Thus, our approach offers potential advantages over conventional
deep unsupervised learning in real-world applications. The source code of the
method and all the experiments are available publicly at:
https://github.com/csfarzin/DVC | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Normalizing flows transform a simple base distribution into a complex target
distribution and have proved to be powerful models for data generation and
density estimation. In this work, we propose a novel type of normalizing flow
driven by a differential deformation of the Wiener process. As a result, we
obtain a rich time series model whose observable process inherits many of the
appealing properties of its base process, such as efficient computation of
likelihoods and marginals. Furthermore, our continuous treatment provides a
natural framework for irregular time series with an independent arrival
process, including straightforward interpolation. We illustrate the desirable
properties of the proposed model on popular stochastic processes and
demonstrate its superior flexibility to variational RNN and latent ODE
baselines in a series of experiments on synthetic and real-world data. | [
"cs.LG",
"stat.ML"
] |
Recently, Deep-Neural-Network (DNN) based edge prediction has been progressing
fast. Although the DNN based schemes outperform the traditional edge detectors,
they have much higher computational complexity. One reason may be that DNN based
edge detectors often adopt neural net structures designed for high-level
computer vision tasks, such as image segmentation and object recognition. Edge
detection is a rather local and simple job, so the over-complicated architectures
and massive parameter counts may be unnecessary. Therefore, we propose a
traditional-method-inspired framework to produce good edges with minimal complexity. We
simplify the network architecture to include Feature Extractor, Enrichment, and
Summarizer, which roughly correspond to gradient, low pass filter, and pixel
connection in the traditional edge detection schemes. The proposed structure
can effectively reduce the complexity and retain the edge prediction quality.
Our TIN2 (Traditional Inspired Network) model has an accuracy higher than the
recent BDCN2 (Bi-Directional Cascade Network) but with a smaller model. | [
"cs.CV"
] |
Generative Adversarial Networks (GANs) have the capability of synthesizing
images, which have been successfully applied to medical image synthesis tasks.
However, most existing methods merely consider the global contextual
information and ignore the fine foreground structures, e.g., vessel, skeleton,
which may contain diagnostic indicators for medical image analysis. Inspired by
human painting procedure, which is composed of stroking and color rendering
steps, we propose a Sketching-rendering Unconditional Generative Adversarial
Network (SkrGAN) to introduce a sketch prior constraint to guide the medical
image generation. In our SkrGAN, a sketch guidance module is utilized to
generate a high quality structural sketch from random noise, then a color
render mapping is used to embed the sketch-based representations and resemble
the background appearances. Experimental results show that the proposed SkrGAN
achieves the state-of-the-art results in synthesizing images for various image
modalities, including retinal color fundus, X-Ray, Computed Tomography (CT) and
Magnetic Resonance Imaging (MRI). In addition, we also show that the
performances of medical image segmentation method have been improved by using
our synthesized images as data augmentation. | [
"cs.CV",
"eess.IV"
] |
While deep neural networks exhibit state-of-the-art results in the task of
image super-resolution (SR) with a fixed known acquisition process (e.g., a
bicubic downscaling kernel), they experience a huge performance loss when the
real observation model mismatches the one used in training. Recently, two
different techniques have been suggested to mitigate this deficiency, i.e., to enjoy the
advantages of deep learning without being restricted by the training phase. The
first one follows the plug-and-play (P&P) approach that solves general inverse
problems (e.g., SR) by using Gaussian denoisers for handling the prior term in
model-based optimization schemes. The second builds on internal recurrence of
information inside a single image, and trains a super-resolver network at test
time on examples synthesized from the low-resolution image. Our work
incorporates these two independent strategies, enjoying the impressive
generalization capabilities of deep learning, captured by the first, and
further improving it through internal learning at test time. First, we apply a
recent P&P strategy to SR. Then, we show how it may become image-adaptive in
test time. This technique outperforms the above two strategies on popular
datasets and gives better results than other state-of-the-art methods in
practical cases where the observation model is inexact or unknown in advance. | [
"cs.CV",
"cs.LG"
] |
We introduce a novel regularization approach for deep learning that
incorporates and respects the underlying graphical structure of the neural
network. Existing regularization methods often focus on dropping/penalizing
weights in a global manner that ignores the connectivity structure of the
neural network. We propose to use the Fiedler value of the neural network's
underlying graph as a tool for regularization. We provide theoretical support
for this approach via spectral graph theory. We list several useful properties
of the Fiedler value that make it suitable for regularization. We provide an
approximate, variational approach for fast computation in practical training of
neural networks. We provide bounds on such approximations. We provide an
alternative but equivalent formulation of this framework in the form of a
structurally weighted L1 penalty, thus linking our approach to sparsity
induction. We performed experiments on datasets that compare Fiedler
regularization with traditional regularization methods such as dropout and
weight decay. Results demonstrate the efficacy of Fiedler regularization. | [
"stat.ML",
"cs.LG"
] |
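Concretely, the Fiedler value is the second-smallest eigenvalue of the graph Laplacian (the algebraic connectivity). For a small graph it can be computed directly, whereas the paper uses a variational approximation at neural-network scale:

```python
import numpy as np
from scipy.sparse.csgraph import laplacian

# adjacency of a 4-node path graph (illustrative)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = laplacian(A)
eigvals = np.linalg.eigvalsh(L)   # ascending; the smallest is ~0
fiedler_value = eigvals[1]        # 2 - sqrt(2) ~ 0.586 for this path graph
```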
Depth completion aims to recover a dense depth map from the sparse depth data
and the corresponding single RGB image. The observed pixels provide
significant guidance for the recovery of the unobserved pixels' depth. However,
due to the sparsity of the depth data, the standard convolution operation,
exploited by most existing methods, is not effective at modeling the observed
contexts with depth values. To address this issue, we propose to adopt graph
propagation to capture the observed spatial contexts. Specifically, we
first construct multiple graphs at different scales from observed pixels. Since
the graph structure varies from sample to sample, we then apply the attention
mechanism on the propagation, which encourages the network to model the
contextual information adaptively. Furthermore, considering the multi-modality
of the input data, we apply graph propagation to the two modalities separately
to extract multi-modal representations. Finally, we introduce a
symmetric gated fusion strategy to exploit the extracted multi-modal features
effectively. The proposed strategy preserves the original information for one
modality and also absorbs complementary information from the other through
learning the adaptive gating weights. Our model, named Adaptive Context-Aware
Multi-Modal Network (ACMNet), achieves the state-of-the-art performance on two
benchmarks, {\it i.e.}, KITTI and NYU-v2, and at the same time has fewer
parameters than latest models. Our code is available at:
\url{https://github.com/sshan-zhao/ACMNet}. | [
"cs.CV"
] |
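The symmetric gated fusion strategy admits a short illustration. Below is a
minimal sketch in which each modality preserves its own features and absorbs
gated complementary information from the other; the layer shapes and the
1x1-convolution gates are illustrative assumptions, not ACMNet's exact design.

```python
import torch
import torch.nn as nn

class SymmetricGatedFusion(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.gate_d = nn.Conv2d(2 * c, c, 1)  # gate for the depth branch
        self.gate_r = nn.Conv2d(2 * c, c, 1)  # gate for the RGB branch

    def forward(self, f_depth, f_rgb):
        cat = torch.cat([f_depth, f_rgb], dim=1)
        g_d = torch.sigmoid(self.gate_d(cat))  # adaptive gating weights
        g_r = torch.sigmoid(self.gate_r(cat))
        # Each branch keeps its original features and adds gated
        # complementary information from the other modality.
        out_d = f_depth + g_d * f_rgb
        out_r = f_rgb + g_r * f_depth
        return out_d, out_r

f_d, f_r = torch.randn(2, 32, 16, 16), torch.randn(2, 32, 16, 16)
out_d, out_r = SymmetricGatedFusion(32)(f_d, f_r)
```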
In this work, we first apply Tensor-Train (TT) networks to construct a
compact representation of the classical Multilayer Perceptron, achieving a
reduction of up to 95% in the number of coefficients. A comparative analysis
between the tensor model and standard multilayer neural networks is also
carried out in the context of predicting the noisy chaotic Mackey-Glass time
series and the NASDAQ index. We show that the weights of a multidimensional
regression model can be learned by means of a TT network, and that the
optimization of TT weights is more robust to coefficient initialization and
hyper-parameter settings. Furthermore, an efficient algorithm based on
alternating least squares is proposed for approximating the weights in
TT-format at reduced computational cost, providing much faster convergence
than the well-known adaptive learning-rate algorithms widely applied for
optimizing neural networks. | [
"cs.LG"
] |
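To make the TT idea concrete, the following is a minimal sketch of a two-core
tensor-train parameterization of a 16x64 weight matrix, applied to an input
without ever materializing the dense matrix. The mode split (4*4 outputs,
8*8 inputs) and TT rank r=3 are illustrative assumptions; a full TT-MLP would
use more cores and fit them with the proposed ALS procedure.

```python
import numpy as np

r = 3
G1 = np.random.randn(4, 8, r)      # core 1: (out_1, in_1, rank)
G2 = np.random.randn(r, 4, 8)      # core 2: (rank, out_2, in_2)

def tt_matvec(x):
    """Apply the TT-format weight to x without materializing W."""
    X = x.reshape(8, 8)             # tensorize the 64-dim input
    y = np.einsum('ijr,rkl,jl->ik', G1, G2, X)
    return y.reshape(16)

# The TT form stores 4*8*3 + 3*4*8 = 192 numbers versus 16*64 = 1024 for
# the dense matrix; deeper tensorizations compress far more.
x = np.random.randn(64)
W = np.einsum('ijr,rkl->ikjl', G1, G2).reshape(16, 64)  # dense check
assert np.allclose(tt_matvec(x), W @ x)
```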
Future communication systems are faced with increased demand for high
capacity, dynamic bandwidth, reliability and heterogeneous traffic. To meet
these requirements, networks have become more complex and thus require new
design methods and monitoring techniques, as they evolve towards becoming
autonomous. Machine learning has come to the forefront in recent years as a
promising technology to aid in this evolution. Optical fiber communications can
already provide the high capacity required for most applications, however,
there is a need for increased scalability and adaptability to changing user
demands and link conditions. Accurate performance monitoring is an integral
part of this transformation. In this paper, we review optical performance
monitoring (OPM) techniques where machine learning algorithms have been
applied. Moreover, since many OPM techniques depend on knowledge of the signal
type, we also review work on modulation format recognition and bitrate
identification. We
additionally briefly introduce a neuromorphic approach to OPM as an emerging
technique that has only recently been applied to this domain. | [
"cs.LG",
"eess.SP"
] |
In this paper, the flexibility, versatility and predictive power of kernel
regression are combined with increasingly available network data to create
regression models with greater predictive performance. Building on previous
work featuring generalized linear models built in the presence of network
cohesion data, we construct a kernelized extension that captures subtler
nonlinearities in extremely high-dimensional spaces and also yields
substantially better predictive performance. Applications to simulated and
real-life data demonstrate the strength of our approach. | [
"stat.ML",
"cs.LG",
"47B34"
] |
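Since the abstract leaves the estimator unspecified, the following is only a
hedged sketch of one natural form of kernelized regression with network
cohesion: fitted values f = K @ alpha are encouraged to vary smoothly over the
observed graph through the Laplacian quadratic form. The RBF kernel, the
penalty structure, and the function names are assumptions for illustration.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def network_kernel_fit(X, y, A, lam=0.1, mu=0.1):
    """A: adjacency matrix of the network linking the n observations."""
    n = len(y)
    K = rbf_kernel(X)
    L = np.diag(A.sum(1)) - A                 # graph Laplacian
    # Stationarity condition of ||y - K a||^2 + lam a'K a + mu a'K L K a.
    alpha = np.linalg.solve(K + lam * np.eye(n) + mu * L @ K, y)
    return K @ alpha                          # fitted values

n = 20
X = np.random.randn(n, 3)
A = (np.random.rand(n, n) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T                # symmetric, no self-loops
y = np.random.randn(n)
f_hat = network_kernel_fit(X, y, A)
```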
Generative Adversarial Networks (GANs) have been used in many different
applications to generate realistic synthetic data. We introduce a novel GAN
with Autoencoder (GAN-AE) architecture to generate synthetic samples for
variable length, multi-feature sequence datasets. In this model, we develop a
GAN architecture with an additional autoencoder component, where recurrent
neural networks (RNNs) are used for each component of the model in order to
generate synthetic data to improve classification accuracy for a highly
imbalanced medical device dataset. In addition to the medical device dataset,
we also evaluate the GAN-AE performance on two additional datasets and
demonstrate the application of GAN-AE to a sequence-to-sequence task where both
synthetic sequence inputs and sequence outputs must be generated. To evaluate
the quality of the synthetic data, we train encoder-decoder models both with
and without the synthetic data and compare the classification model
performance. We show that a model trained with GAN-AE generated synthetic data
outperforms models trained with synthetic data generated both with standard
oversampling techniques such as SMOTE and Autoencoders, as well as with
state-of-the-art GAN-based models. | [
"cs.LG",
"stat.ML"
] |
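A minimal sketch of the GAN-with-Autoencoder wiring for variable-length
sequences follows: GRU modules play the roles of encoder, generator/decoder,
and discriminator. All sizes and the exact connections are illustrative
assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SeqEncoder(nn.Module):
    def __init__(self, feat=6, hid=32):
        super().__init__()
        self.rnn = nn.GRU(feat, hid, batch_first=True)

    def forward(self, x):                 # x: (batch, time, feat)
        _, h = self.rnn(x)
        return h[-1]                      # one latent code per sequence

class SeqGenerator(nn.Module):
    def __init__(self, feat=6, hid=32):
        super().__init__()
        self.rnn = nn.GRU(hid, hid, batch_first=True)
        self.out = nn.Linear(hid, feat)

    def forward(self, z, steps):
        # Feed the latent/noise vector as input at every time step.
        inp = z.unsqueeze(1).repeat(1, steps, 1)
        h, _ = self.rnn(inp)
        return self.out(h)                # synthetic (batch, time, feat)

class SeqDiscriminator(nn.Module):
    def __init__(self, feat=6, hid=32):
        super().__init__()
        self.rnn = nn.GRU(feat, hid, batch_first=True)
        self.out = nn.Linear(hid, 1)

    def forward(self, x):
        _, h = self.rnn(x)
        return self.out(h[-1])            # real/fake logit

enc, gen, disc = SeqEncoder(), SeqGenerator(), SeqDiscriminator()
real = torch.randn(8, 20, 6)              # a batch of length-20 sequences
fake = gen(torch.randn(8, 32), steps=20)  # GAN path: noise -> sequence
recon = gen(enc(real), steps=20)          # AE path: encode, then decode
d_real, d_fake = disc(real), disc(fake)
```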
To make efficient use of limited spectral resources, we in this work propose
a deep actor-critic reinforcement learning based framework for dynamic
multichannel access. We consider both a single-user case and a scenario in
which multiple users attempt to access channels simultaneously. We employ the
proposed framework as a single agent in the single-user case, and extend it to
a decentralized multi-agent framework in the multi-user scenario. In both
cases, we develop algorithms for the actor-critic deep reinforcement learning
and evaluate the proposed learning policies via experiments and numerical
results. In the single-user model, in order to evaluate the performance of the
proposed channel access policy and the framework's tolerance against
uncertainty, we explore different channel switching patterns and different
switching probabilities. In the case of multiple users, we analyze the
probabilities of each user accessing channels with favorable channel conditions
and the probability of collision. We also consider a time-varying environment
to assess the adaptability of the proposed framework. Additionally, we
provide comparisons (in terms of both the average reward and time efficiency)
between the proposed actor-critic deep reinforcement learning framework, a
Deep Q-network (DQN) based approach, random access, and the optimal policy
when the
channel dynamics are known. | [
"cs.LG",
"cs.IT",
"math.IT",
"stat.ML"
] |
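A minimal single-agent sketch of the actor-critic loop for channel selection
follows; the two-layer networks, the one-step TD target, and the reward
convention (1.0 for a successful transmission) are illustrative assumptions.

```python
import torch
import torch.nn as nn

n_channels, obs_dim = 4, 8
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                      nn.Linear(64, n_channels))
critic = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()),
                       lr=1e-3)

def select_channel(obs):
    dist = torch.distributions.Categorical(logits=actor(obs))
    action = dist.sample()                       # channel index to access
    return action, dist.log_prob(action)

def update(log_prob, obs, reward, next_obs, gamma=0.99):
    with torch.no_grad():
        target = reward + gamma * critic(next_obs)   # one-step TD target
    value = critic(obs)
    advantage = (target - value).detach()
    # Policy-gradient term for the actor plus TD error for the critic.
    loss = -log_prob * advantage + (target - value).pow(2)
    opt.zero_grad(); loss.sum().backward(); opt.step()

obs = torch.randn(obs_dim)                       # observed channel history
action, logp = select_channel(obs)
reward = 1.0                                     # e.g., transmission succeeded
update(logp, obs, reward, next_obs=torch.randn(obs_dim))
```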
Aided by recent advances in Deep Learning, Image Caption Generation has seen
tremendous progress over the last few years. Most methods use transfer learning
to extract visual information, in the form of image features, with the help of
pre-trained Convolutional Neural Network models followed by transformation of
the visual information using a Caption Generator module to generate the output
sentences. Different methods have used different Convolutional Neural Network
Architectures and, to the best of our knowledge, there is no systematic study
which compares the relative efficacy of different Convolutional Neural Network
architectures for extracting the visual information. In this work, we have
evaluated 17 different Convolutional Neural Networks on two popular Image
Caption Generation frameworks: the first based on the Neural Image Caption
(NIC) generation model and the second based on the Soft-Attention framework.
We observe that the model complexity of a Convolutional Neural Network, as
measured by the number of parameters, and the accuracy of the model on the
object recognition task do not necessarily correlate with its efficacy at
feature extraction for the Image Caption Generation task. | [
"cs.CV",
"cs.AI",
"cs.LG",
"cs.MM",
"cs.NE"
] |
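The feature-extraction step that the study varies can be illustrated briefly.
The sketch below truncates a pretrained CNN before its classifier and uses it
to embed images for the caption generator; ResNet-18 stands in for any of the
17 architectures compared, and the `weights=` API assumes a recent
torchvision.

```python
import torch
import torchvision.models as models

# Load a pretrained backbone (downloads weights on first use) and drop
# its classification head so it outputs pooled image features.
cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
cnn.fc = torch.nn.Identity()
cnn.eval()

with torch.no_grad():
    images = torch.randn(4, 3, 224, 224)   # a batch of preprocessed images
    feats = cnn(images)                     # (4, 512) image features
# feats would then be fed to the NIC- or attention-based caption decoder.
```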
Machine learning pipelines often rely on optimization procedures to make
discrete decisions (e.g., sorting, picking closest neighbors, or shortest
paths). Although these discrete decisions are easily computed, they break the
back-propagation of computational graphs. In order to expand the scope of
learning problems that can be solved in an end-to-end fashion, we propose a
systematic method to transform optimizers into operations that are
differentiable and never locally constant. Our approach relies on
stochastically perturbed optimizers, and can be used readily together with
existing solvers. Their derivatives can be evaluated efficiently, and
smoothness tuned via the chosen noise amplitude. We also show how this
framework can be connected to a family of losses developed in structured
prediction, and give theoretical guarantees for their use in learning tasks. We
demonstrate experimentally the performance of our approach on various tasks. | [
"cs.LG",
"math.OC",
"stat.ML"
] |
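A minimal sketch of the perturbed-optimizer idea for the simplest case, an
argmax: the hard one-hot solution is replaced by its expectation under
Gaussian perturbations of the scores, estimated by Monte Carlo. The Gaussian
noise, the sample count, and the sigma value are illustrative choices.

```python
import numpy as np

def one_hot_argmax(theta):
    e = np.zeros_like(theta)
    e[np.argmax(theta)] = 1.0
    return e

def perturbed_argmax(theta, sigma=0.5, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n_samples, theta.size))
    # Monte Carlo estimate of E[argmax(theta + sigma * Z)]: averaging the
    # hard one-hot solutions yields an output that is smooth in theta.
    return np.mean([one_hot_argmax(theta + sigma * z) for z in Z], axis=0)

theta = np.array([1.0, 1.2, 0.3])
print(one_hot_argmax(theta))     # hard, piecewise-constant in theta
print(perturbed_argmax(theta))   # soft probabilities, never locally constant
```

Because the averaged output moves smoothly with the scores, its derivative is
well defined and never locally constant, which is what restores useful
gradients to the surrounding computational graph.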
Page segmentation is considered to be the crucial stage for the automatic
analysis of documents with complex layouts. This has traditionally been carried
out in uncompressed documents, although most of the documents in real life
exist in a compressed form warranted by the requirement to make storage and
transfer efficient. However, carrying out page segmentation directly in
compressed documents without going through the stage of decompression is a
challenging goal. This paper demonstrates the possibility of carrying out a
page segmentation operation directly on the run-length data of CCITT Group-3
compressed text documents, which may be single- or multi-column and may even
contain some text regions in inverted text color mode. Therefore, before
segmenting the text document into
columns, each column into paragraphs, each paragraph into text lines, each line
into words, and, finally, each word into characters, a pre-processing of the
text document needs to be carried out. The pre-processing stage identifies the
normal text regions and inverted text regions, and the inverted text regions
are toggled to the normal mode. Then, to initiate column separation, a new
strategy of incrementally assimilating white-space runs in the vertical
direction, together with auto-estimation of certain related parameters, is
proposed. A procedure to realize column segmentation employing these extracted
parameters is devised. Subsequently, a two-level horizontal row separation
process segments every column into paragraphs and, in turn, into text lines;
then a two-level vertical column separation process completes the separation
into words and characters. | [
"cs.CV"
] |
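The incremental white-run assimilation step can be illustrated on toy
run-length data. In the sketch below, each row is given as alternating
white/black run lengths, white coverage is accumulated column-wise, and
positions that stay white across (here) 90% of rows are flagged as gutter
candidates; the data, the threshold, and the function names are illustrative
assumptions.

```python
import numpy as np

def white_mask_from_runs(runs, width):
    """runs: [w0, b0, w1, b1, ...] alternating white/black run lengths."""
    mask, pos, is_white = np.zeros(width, bool), 0, True
    for r in runs:
        if is_white:
            mask[pos:pos + r] = True
        pos += r
        is_white = not is_white
    return mask

def column_separators(rows_of_runs, width, frac=0.9):
    acc = np.zeros(width)
    for runs in rows_of_runs:                  # assimilate rows one by one
        acc += white_mask_from_runs(runs, width)
    return np.where(acc >= frac * len(rows_of_runs))[0]  # gutter pixels

# Two text columns (black) separated by a white gutter at pixels 40-59.
page = [[0, 40, 20, 40]] * 100                 # 100 identical toy rows
print(column_separators(page, width=100))
```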
We present a simple yet efficient Hybrid Classifier based on Deep Learning
and Reinforcement Learning. Q-Learning is used with two Q-states and four
actions. Conventional techniques use feature maps extracted from Convolutional
Neural Networks (CNNs) and include them in the Q-states along with past
history. This leads to difficulties, as the number of states becomes very
large due to the high dimensionality of the feature maps. Since our method
uses only two Q-states, it is simple, has far fewer parameters to optimize,
and thus admits a straightforward reward function. The approach also uses
image-processing actions unexplored by other
contemporary techniques. Three datasets have been used for benchmarking of the
approach. These are the MNIST Digit Image Dataset, the USPS Digit Image Dataset
and the MATLAB Digit Image Dataset. The performance of the proposed hybrid
classifier has been compared with other contemporary techniques like a
well-established Reinforcement Learning technique, AlexNet, a CNN-Nearest
Neighbor Classifier and a CNN-Support Vector Machine Classifier. Our approach
outperforms these contemporary hybrid classifiers on all the three datasets
used. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
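Given the small state-action space described above, a tabular sketch fits in a
few lines. The reward and transition below are toy assumptions; in the paper
the four actions correspond to image-processing operations and the reward
reflects classification outcomes.

```python
import numpy as np

n_states, n_actions = 2, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1       # illustrative hyper-parameters
rng = np.random.default_rng(0)

s = 0
for _ in range(1000):
    # Epsilon-greedy action selection over the tiny Q-table.
    a = rng.integers(n_actions) if rng.random() < eps else Q[s].argmax()
    r = 1.0 if (s, a) == (0, 2) else 0.0    # toy reward
    s_next = rng.integers(n_states)         # toy transition
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
print(Q)   # action 2 in state 0 should dominate after training
```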
3D hand shape and pose estimation from a single depth map is a new and
challenging computer vision problem with many applications. Existing methods
addressing it directly regress hand meshes via 2D convolutional neural
networks, which leads to artifacts due to perspective distortions in the
images. To address the limitations of the existing methods, we develop
HandVoxNet++, i.e., a voxel-based deep network with 3D and graph convolutions
trained in a fully supervised manner. The input to our network is a 3D
voxelized depth map based on the truncated signed distance function (TSDF).
HandVoxNet++ relies on two hand shape representations. The first one is the 3D
voxelized grid of hand shape, which does not preserve the mesh topology and
which is the most accurate representation. The second representation is the
hand surface that preserves the mesh topology. We combine the advantages of
both representations by aligning the hand surface to the voxelized hand shape
either with a new neural Graph-Convolutions-based Mesh Registration
(GCN-MeshReg) or classical segment-wise Non-Rigid Gravitational Approach
(NRGA++) which does not rely on training data. In extensive evaluations on
three public benchmarks, i.e., SynHand5M, depth-based HANDS19 challenge and
HO-3D, the proposed HandVoxNet++ achieves the state-of-the-art performance. In
this journal extension of our previous approach presented at CVPR 2020, we gain
41.09% and 13.7% higher shape alignment accuracy on SynHand5M and HANDS19
datasets, respectively. Our method is ranked first on the HANDS19 challenge
dataset (Task 1: Depth-Based 3D Hand Pose Estimation) at the moment of the
submission of our results to the portal in August 2020. | [
"cs.CV"
] |
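The TSDF input representation can be sketched compactly. The code below builds
a truncated (unsigned) distance grid from a toy hand point cloud; a real TSDF
additionally carries a sign obtained from the camera ray, which this
illustration omits, and the grid resolution and truncation distance are
assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def tsdf_grid(points, res=32, trunc=0.1):
    """points: (N, 3) in [0, 1]^3; returns a (res, res, res) distance grid."""
    grid = np.linspace(0, 1, res)
    gx, gy, gz = np.meshgrid(grid, grid, grid, indexing='ij')
    centers = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
    # Distance from each voxel center to the nearest surface point,
    # truncated and normalized to [0, 1]; the sign is omitted here.
    d, _ = cKDTree(points).query(centers)
    tsdf = np.clip(d, 0, trunc) / trunc
    return tsdf.reshape(res, res, res)

hand_points = np.random.rand(200, 3)    # stand-in for a depth map's
voxels = tsdf_grid(hand_points)         # back-projected hand points
print(voxels.shape)                     # (32, 32, 32)
```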