text | label
---|---|
We propose a computationally efficient $G$-invariant neural network that
approximates functions invariant to the action of a given permutation subgroup
$G \leq S_n$ of the symmetric group on input data. The key element of the
proposed network architecture is a new $G$-invariant transformation module,
which produces a $G$-invariant latent representation of the input data.
Theoretical considerations are supported by numerical experiments, which
demonstrate the effectiveness and strong generalization properties of the
proposed method in comparison to other $G$-invariant neural networks. | [
"cs.LG",
"cs.AI",
"I.2.6"
] |
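As an illustration of the invariance property in the preceding abstract, here is a minimal sketch, assuming a cyclic subgroup G of S_n and a toy base network (this is not the paper's transformation module): averaging a base network's outputs over all permutations in G yields an exactly G-invariant function.

```python
import torch
import torch.nn as nn

# Minimal sketch: averaging over the elements of a permutation subgroup G
# yields an exactly G-invariant function (here G = cyclic shifts of n inputs).
n = 4
G = [tuple((i + s) % n for i in range(n)) for s in range(n)]  # cyclic subgroup of S_n

base = nn.Sequential(nn.Linear(n, 16), nn.ReLU(), nn.Linear(16, 1))

def g_invariant(x):  # x: (batch, n)
    # f(x) = (1/|G|) * sum_{g in G} base(g . x) is invariant to every g in G.
    outs = [base(x[:, list(g)]) for g in G]
    return torch.stack(outs, dim=0).mean(dim=0)

x = torch.randn(2, n)
shifted = x[:, [1, 2, 3, 0]]  # apply one group element
assert torch.allclose(g_invariant(x), g_invariant(shifted), atol=1e-6)
```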
In this paper, we propose two distinct solutions to the problem of Diabetic
Retinopathy (DR) classification. In the first approach, we introduce a shallow
neural network architecture. This model performs well on the most frequent
classes but fails at classifying the less frequent ones. In the
second approach, we use transfer learning to re-train the last modified layer
of a very deep neural network to improve the generalization ability of the
model to the less frequent classes. Our results demonstrate superior abilities
of transfer learning in DR classification of less frequent classes compared to
the shallow neural network. | [
"cs.CV",
"cs.LG",
"eess.IV",
"I.4.6; I.4.9"
] |
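A minimal sketch of the second, transfer-learning approach described above; the ResNet-50 backbone, the five-grade DR labeling and the torchvision weights enum are assumptions of the sketch, not the paper's exact setup:

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer-learning sketch: freeze a pretrained deep backbone and re-train
# only a modified final layer for the (assumed) 5 DR severity grades.
NUM_DR_CLASSES = 5  # assumed grading: 0 = no DR ... 4 = proliferative DR

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False  # keep pretrained features fixed

model.fc = nn.Linear(model.fc.in_features, NUM_DR_CLASSES)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```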
In this work, we consider the problem of model selection for deep
reinforcement learning (RL) in real-world environments. Typically, the
performance of deep RL algorithms is evaluated via on-policy interactions with
the target environment. However, comparing models in a real-world environment
for the purposes of early stopping or hyperparameter tuning is costly and often
practically infeasible. This leads us to examine off-policy policy evaluation
(OPE) in such settings. We focus on OPE for value-based methods, which are of
particular interest in deep RL, with applications like robotics, where
off-policy algorithms based on Q-function estimation can often attain better
sample complexity than direct policy optimization. Existing OPE metrics either
rely on a model of the environment or on the use of importance sampling (IS) to
correct for the data being off-policy. However, for high-dimensional
observations, such as images, models of the environment can be difficult to fit
and value-based methods can make IS hard to use or even ill-conditioned,
especially when dealing with continuous action spaces. In this paper, we focus
on the specific case of MDPs with continuous action spaces and sparse binary
rewards, which is representative of many important real-world applications. We
propose an alternative metric that relies on neither models nor IS, by framing
OPE as a positive-unlabeled (PU) classification problem with the Q-function as
the decision function. We experimentally show that this metric outperforms
baselines on a number of tasks. Most importantly, it can reliably predict the
relative performance of different policies in a number of generalization
scenarios, including the transfer to the real-world of policies trained in
simulation for an image-based robotic manipulation task. | [
"cs.LG",
"cs.AI",
"cs.RO",
"stat.ML"
] |
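To make the idea above concrete, here is a hedged sketch of scoring a Q-function by how well it separates state-action pairs from successful episodes (positives) from unlabeled logged pairs; this is an illustrative ranking statistic in the spirit of the PU-classification view, not the paper's exact metric:

```python
import numpy as np

def pu_ranking_score(q_positive, q_unlabeled):
    """Fraction of (positive, unlabeled) pairs ranked correctly by Q.

    q_positive: Q(s, a) on transitions from episodes that achieved the sparse
        binary reward; q_unlabeled: Q(s, a) on arbitrary logged data.
    Higher is better: a good Q-function, used as a decision function, should
    assign higher values to positives.
    """
    q_positive = np.asarray(q_positive)[:, None]
    q_unlabeled = np.asarray(q_unlabeled)[None, :]
    return np.mean(q_positive > q_unlabeled)

# Toy usage: compare two candidate policies' Q-functions on the same data.
score_a = pu_ranking_score([0.9, 0.8, 0.7], [0.2, 0.6, 0.1, 0.5])
score_b = pu_ranking_score([0.4, 0.5, 0.3], [0.2, 0.6, 0.1, 0.5])
print(score_a, score_b)  # pick the policy whose Q separates better
```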
3D image segmentation plays an important role in biomedical image analysis.
Many 2D and 3D deep learning models have achieved state-of-the-art segmentation
performance on 3D biomedical image datasets. Yet, 2D and 3D models have their
own strengths and weaknesses, and by unifying them together, one may be able to
achieve more accurate results. In this paper, we propose a new ensemble
learning framework for 3D biomedical image segmentation that combines the
merits of 2D and 3D models. First, we develop a fully convolutional network
based meta-learner to learn how to improve the results from 2D and 3D models
(base-learners). Then, to minimize over-fitting for our sophisticated
meta-learner, we devise a new training method that uses the results of the
base-learners as multiple versions of "ground truths". Furthermore, since our
new meta-learner training scheme does not depend on manual annotation, it can
utilize abundant unlabeled 3D image data to further improve the model.
Extensive experiments on two public datasets (the HVSMR 2016 Challenge dataset
and the mouse piriform cortex dataset) show that our approach is effective
under fully-supervised, semi-supervised, and transductive settings, and attains
superior performance over state-of-the-art image segmentation methods. | [
"cs.CV"
] |
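A hedged sketch of the core fusion idea above: a small fully convolutional meta-learner consumes per-voxel probability maps from the 2D and 3D base-learners stacked as channels (channel counts and depth are assumptions, not the paper's exact FCN):

```python
import torch
import torch.nn as nn

class FusionMetaLearner(nn.Module):
    """Fuse 2D and 3D base-learner probability maps into one prediction."""
    def __init__(self, n_base_learners=2):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv3d(n_base_learners, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=1),  # fused foreground logit
        )

    def forward(self, prob_2d, prob_3d):  # each: (B, 1, D, H, W)
        return self.fuse(torch.cat([prob_2d, prob_3d], dim=1))

meta = FusionMetaLearner()
out = meta(torch.rand(1, 1, 8, 32, 32), torch.rand(1, 1, 8, 32, 32))
print(out.shape)  # torch.Size([1, 1, 8, 32, 32])
```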
Salient object detection has achieved great improvement by using the Fully
Convolutional Network (FCN). However, the FCN-based U-shape architecture may
dilute high-level semantic information during the up-sampling operations in the
top-down pathway, which can weaken the ability to localize salient objects and
produce degraded boundaries. To overcome this limitation, we propose a novel
pyramid self-attention module (PSAM) and adopt an independent
feature-complementing strategy. In PSAM, self-attention layers are appended
after multi-scale pyramid
features to capture richer high-level features and bring larger receptive
fields to the model. In addition, a channel-wise attention module is also
employed to reduce the redundant features of the FPN and provide refined
results. Experimental analysis shows that the proposed PSAM effectively
contributes to the whole model, which outperforms state-of-the-art methods
on five challenging datasets. Finally, quantitative results show that PSAM
generates clear and integral saliency maps, which can provide further help to
other computer vision tasks, such as object detection and semantic
segmentation. | [
"cs.CV"
] |
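For illustration, a compact sketch of applying a self-attention layer to one pyramid feature level: this is a standard non-local-style self-attention block, and the paper's PSAM may differ in detail:

```python
import torch
import torch.nn as nn

class SpatialSelfAttention(nn.Module):
    """Self-attention over the spatial positions of a (B, C, H, W) feature map."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C//8)
        k = self.key(x).flatten(2)                    # (B, C//8, HW)
        v = self.value(x).flatten(2)                  # (B, C, HW)
        attn = torch.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)  # (B, HW, HW)
        out = (v @ attn.transpose(1, 2)).reshape(b, c, h, w)
        return out + x  # residual connection, enlarges the receptive field

feat = torch.randn(1, 64, 16, 16)  # one level of the feature pyramid
print(SpatialSelfAttention(64)(feat).shape)
```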
A generalized two-dimensional quaternion principal component analysis
(G2DQPCA) approach with weighting is presented for color image analysis. As a
general framework of 2DQPCA, G2DQPCA is flexible enough to adapt to different
constraints or requirements by imposing $L_{p}$ norms on both the constraint
function and the objective function. The gradient operator of quaternion vector
functions is redefined via the structure-preserving gradient operator of real
vector functions. Under the framework of minorization-maximization (MM), an
iterative algorithm is developed to obtain the optimal closed-form solution of
G2DQPCA. The projection vectors generated by the deflating scheme are required
to be orthogonal to each other. A weighting matrix is defined to magnify the
effect of the main features. The weighted projection bases keep the face
recognition accuracy unchanged, or within a tight range, as the number of
features increases. The numerical results based on real face databases validate
that
the newly proposed method performs better than the state-of-the-art algorithms. | [
"cs.CV"
] |
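Schematically, and only as an assumed illustration of an $L_p$-constrained deflation objective of this kind (not the paper's exact quaternion formulation), one projection vector $\mathbf{w}$ solves

```latex
\mathbf{w}^{\star} \;=\; \arg\max_{\mathbf{w}} \sum_{i=1}^{N}
  \bigl\| \mathbf{X}_i \mathbf{w} \bigr\|_{p}^{p}
  \quad \text{s.t.} \quad \|\mathbf{w}\|_{q} = 1,
  \qquad \mathbf{w} \perp \mathbf{w}_1, \dots, \mathbf{w}_{k-1},
```

with the weighting matrix then rescaling the contribution of the leading features.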
We propose a Transformer-based framework for 3D human texture estimation from
a single image. The proposed Transformer is able to effectively exploit the
global information of the input image, overcoming the limitations of existing
methods that are solely based on convolutional neural networks. In addition, we
propose a mask-fusion strategy to combine the advantages of the RGB-based
and texture-flow-based models. We further introduce a part-style loss to help
reconstruct high-fidelity colors without introducing unpleasant artifacts.
Extensive experiments demonstrate the effectiveness of the proposed method
against state-of-the-art 3D human texture estimation approaches both
quantitatively and qualitatively. | [
"cs.CV",
"cs.AI",
"cs.GR",
"cs.LG",
"cs.MM"
] |
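A hedged sketch of the mask-fusion idea above: blend the RGB-based texture estimate with the texture-flow-based estimate using a predicted confidence mask (function names and shapes are assumptions for illustration):

```python
import torch

def fuse_textures(rgb_texture, flow_texture, mask):
    """Combine two texture estimates with a per-texel confidence mask.

    rgb_texture, flow_texture: (B, 3, H, W) UV-space texture maps from the
        RGB-based and texture-flow-based branches; mask: (B, 1, H, W) in [0, 1]
        indicating where the flow-based (directly sampled) colors are reliable.
    """
    return mask * flow_texture + (1.0 - mask) * rgb_texture

fused = fuse_textures(torch.rand(1, 3, 256, 256),
                      torch.rand(1, 3, 256, 256),
                      torch.rand(1, 1, 256, 256))
print(fused.shape)
```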
Face representation learning using datasets with a massive number of identities
requires appropriate training methods. The softmax-based approach, currently the
state-of-the-art in face recognition, is in its usual "full softmax" form not
suitable for datasets with millions of persons. Several methods based on the
"sampled softmax" approach were proposed to remove this limitation. These
methods, however, have a set of disadvantages. One of them is the problem of
"prototype obsolescence": classifier weights (prototypes) of the rarely sampled
classes receive too scarce gradients and become outdated and detached from the
current encoder state, resulting in incorrect training signals. This problem
is especially serious in ultra-large-scale datasets. In this paper, we propose
a novel face representation learning model called Prototype Memory, which
alleviates this problem and allows training on a dataset of any size. Prototype
Memory consists of a limited-size memory module for storing recent class
prototypes and employs a set of algorithms to update it in an appropriate way.
New class prototypes are generated on the fly using exemplar embeddings in the
current mini-batch. These prototypes are enqueued to the memory and used in the
role of classifier weights for the usual softmax classification-based training.
To prevent obsolescence and keep the memory closely connected to the encoder,
prototypes are regularly refreshed, and the oldest ones are dequeued and
disposed of.
Prototype Memory is computationally efficient and independent of dataset size.
It can be used with various loss functions, hard example mining algorithms and
encoder architectures. We prove the effectiveness of the proposed model by
extensive experiments on popular face recognition benchmarks. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
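A hedged, minimal sketch of the queue-style memory described above; the data structures and update policy are simplified for illustration, and the paper's algorithms are more involved:

```python
from collections import OrderedDict
import torch

class PrototypeMemory:
    """FIFO memory of class prototypes used as softmax classifier weights."""
    def __init__(self, max_size, dim):
        self.max_size, self.dim = max_size, dim
        self.protos = OrderedDict()  # class_id -> prototype vector

    def update(self, embeddings, labels):
        # Generate prototypes on the fly from exemplar embeddings in the batch.
        for c in labels.unique():
            proto = embeddings[labels == c].mean(dim=0)
            proto = proto / proto.norm()
            self.protos.pop(int(c), None)  # refresh: re-enqueue as newest
            self.protos[int(c)] = proto
        while len(self.protos) > self.max_size:  # dequeue and dispose of oldest
            self.protos.popitem(last=False)

    def weights(self):
        return torch.stack(list(self.protos.values()))  # (n_classes, dim)

mem = PrototypeMemory(max_size=10_000, dim=512)
emb, y = torch.randn(32, 512), torch.randint(0, 100, (32,))
mem.update(emb, y)
logits = torch.nn.functional.normalize(emb, dim=1) @ mem.weights().T
```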
Wireless signal-based gesture recognition has promoted the development of VR
games, smart homes, etc. However, traditional approaches suffer from the
influence of the domain gap: recognition accuracy drops when the recognition
model is trained in one domain but used in another. Though some solutions, such
as adversarial learning, transfer learning and body-coordinate velocity
profiles, have been proposed to achieve cross-domain recognition, each of these
solutions has its flaws. In this paper, we define the concept of the domain gap
and then propose a more promising solution, namely DI, to eliminate the domain
gap and thus achieve domain-independent gesture recognition. DI leverages the
sign map of the gradient map as the domain-gap eliminator to improve recognition
accuracy. We conduct experiments with ten domains and ten gestures. The
experimental results show that DI achieves recognition accuracies of 87.13%,
90.12% and 94.45% with KNN, SVM and CNN classifiers, outperforming existing
solutions. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
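A hedged numpy sketch of the DI eliminator as described above: take the gradient map of the input signal map and keep only its sign (the axis layout and the CSI-style input are assumptions):

```python
import numpy as np

def domain_gap_eliminator(signal_map):
    """Sign of the gradient map, used as a domain-independent representation.

    signal_map: 2D array, e.g. a wireless feature map (time x subcarrier).
    Returns sign maps (+1/0/-1) of the gradients along each axis, which
    discard domain-specific magnitudes while keeping gesture-induced trends.
    """
    grad_t, grad_f = np.gradient(signal_map.astype(float))
    return np.sign(grad_t), np.sign(grad_f)

sign_t, sign_f = domain_gap_eliminator(np.random.rand(100, 30))
features = np.stack([sign_t, sign_f])  # fed to KNN / SVM / CNN classifiers
```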
Structured weight pruning is a representative model compression technique of
DNNs to reduce the storage and computation requirements and accelerate
inference. An automatic hyperparameter determination process is necessary due
to the large number of flexible hyperparameters. This work proposes
AutoCompress, an automatic structured pruning framework with the following key
performance improvements: (i) effectively incorporate the combination of
structured pruning schemes in the automatic process; (ii) adopt the
state-of-the-art ADMM-based structured weight pruning as the core algorithm, and
propose an innovative additional purification step for further weight reduction
without accuracy loss; and (iii) develop an effective heuristic search method
enhanced by experience-based guided search, replacing the prior deep
reinforcement learning technique, which has an underlying incompatibility with
the target pruning problem. Extensive experiments on the CIFAR-10 and ImageNet
datasets demonstrate that AutoCompress achieves ultra-high pruning rates in the
number of weights and FLOPs that could not be achieved before. As an example,
AutoCompress outperforms the prior work on automatic model compression by up to
33x in pruning rate (120x reduction in the actual parameter count) under the
same accuracy. Significant inference speedup has been observed from the
AutoCompress framework in actual measurements on a smartphone. We release all
models of this work at anonymous link: http://bit.ly/2VZ63dS. | [
"cs.LG",
"cs.AI",
"cs.CV",
"cs.NE",
"stat.ML"
] |
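For background, a hedged sketch of one structured pruning primitive that such a framework can search over, filter pruning by L2 norm; this illustrates the pruning operation only, not AutoCompress's ADMM algorithm or its automatic search:

```python
import torch
import torch.nn as nn

def prune_filters_by_norm(conv: nn.Conv2d, keep_ratio: float) -> nn.Conv2d:
    """Keep the `keep_ratio` fraction of filters with the largest L2 norm."""
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    norms = conv.weight.detach().flatten(1).norm(dim=1)  # one norm per filter
    keep = norms.topk(n_keep).indices.sort().values
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       conv.stride, conv.padding, bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned  # downstream layers must be re-wired to n_keep channels

conv = nn.Conv2d(64, 128, 3, padding=1)
print(prune_filters_by_norm(conv, keep_ratio=0.25).out_channels)  # 32
```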
The output of text-to-image synthesis systems should be coherent, clear,
photo-realistic scenes with high semantic fidelity to their conditioned text
descriptions. Our Cross-Modal Contrastive Generative Adversarial Network
(XMC-GAN) addresses this challenge by maximizing the mutual information between
image and text. It does this via multiple contrastive losses which capture
inter-modality and intra-modality correspondences. XMC-GAN uses an attentional
self-modulation generator, which enforces strong text-image correspondence, and
a contrastive discriminator, which acts as a critic as well as a feature
encoder for contrastive learning. The quality of XMC-GAN's output is a major
step up from previous models, as we show on three challenging datasets. On
MS-COCO, not only does XMC-GAN improve state-of-the-art FID from 24.70 to 9.33,
but--more importantly--people prefer XMC-GAN by 77.3% for image quality and
74.1% for image-text alignment, compared to three other recent models. XMC-GAN also
generalizes to the challenging Localized Narratives dataset (which has longer,
more detailed descriptions), improving state-of-the-art FID from 48.70 to
14.12. Lastly, we train and evaluate XMC-GAN on the challenging Open Images
data, establishing a strong benchmark FID score of 26.91. | [
"cs.CV"
] |
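To illustrate the inter-modality term above, a hedged sketch of a symmetric contrastive (InfoNCE-style) loss between image and sentence embeddings, with matched pairs along the batch diagonal; the temperature and symmetric form are assumptions, and XMC-GAN additionally uses intra-modality and region-word terms:

```python
import torch
import torch.nn.functional as F

def image_text_contrastive_loss(img_emb, txt_emb, temperature=0.1):
    """Symmetric InfoNCE loss; row i of each tensor is a matched pair."""
    img = F.normalize(img_emb, dim=1)
    txt = F.normalize(txt_emb, dim=1)
    logits = img @ txt.T / temperature  # (B, B) scaled cosine similarities
    targets = torch.arange(img.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

loss = image_text_contrastive_loss(torch.randn(8, 256, requires_grad=True),
                                   torch.randn(8, 256, requires_grad=True))
loss.backward()
```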
[Context.] The success of deep learning makes its usage more and more
tempting in safety-critical applications. However, such applications have
historical standards (e.g., DO178, ISO26262) which typically do not envision
the usage of machine learning. We focus in particular on \emph{requirements
traceability} of software artifacts, i.e., code modules, functions, or
statements (depending on the desired granularity).
[Problem.] Both code and requirements are a problem when dealing with deep
neural networks: code constituting the network is not comparable to classical
code; furthermore, requirements for applications where neural networks are
required are typically very hard to specify: even though high-level
requirements can be defined, it is very hard to make such requirements concrete
enough that one can qualify them as low-level requirements. An additional
problem is that deep learning is in practice very much based on
trial-and-error, which makes the final result hard to explain without the
previous iterations.
[Proposed solution.] We investigate which artifacts could play a similar role
to code or low-level requirements in neural network development and propose
various traces which one could possibly consider as a replacement for classical
notions. We also propose a form of traceability (and new artifacts) in order to
deal with the particular trial-and-error development process for deep learning. | [
"cs.LG"
] |
We propose a class of kernel-based two-sample tests, which aim to determine
whether two sets of samples are drawn from the same distribution. Our tests are
constructed from kernels parameterized by deep neural nets, trained to maximize
test power. These tests adapt to variations in distribution smoothness and
shape over space, and are especially suited to high dimensions and complex
data. By contrast, the simpler kernels used in prior kernel testing work are
spatially homogeneous, and adaptive only in lengthscale. We explain how this
scheme includes popular classifier-based two-sample tests as a special case,
but improves on them in general. We provide the first proof of consistency for
the proposed adaptation method, which applies both to kernels on deep features
and to simpler radial basis kernels or multiple kernel learning. In
experiments, we establish the superior performance of our deep kernels in
hypothesis testing on benchmark and real-world data. The code of our
deep-kernel-based two-sample tests is available at
https://github.com/fengliu90/DK-for-TST. | [
"stat.ML",
"cs.LG",
"stat.ME"
] |
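A hedged sketch of the core quantity above: an MMD estimate with a Gaussian kernel applied to learned deep features. This is a simplified version; the paper's deep kernel and its trained, test-power-maximizing form differ in detail:

```python
import torch
import torch.nn as nn

def gaussian_kernel(a, b, sigma=1.0):
    return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))

def mmd2(x, y, featurizer, sigma=1.0):
    """Biased MMD^2 estimate with kernel k(x, y) = exp(-||f(x)-f(y)||^2 / 2s^2)."""
    fx, fy = featurizer(x), featurizer(y)
    return (gaussian_kernel(fx, fx, sigma).mean()
            + gaussian_kernel(fy, fy, sigma).mean()
            - 2 * gaussian_kernel(fx, fy, sigma).mean())

# The featurizer is a small deep net whose parameters can be trained to
# maximize test power (e.g., via an MMD-based criterion).
featurizer = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 32))
x, y = torch.randn(128, 10), torch.randn(128, 10) + 0.5
print(mmd2(x, y, featurizer).item())
```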
Driven by successes in deep learning, computer vision research has begun to
move beyond object detection and image classification to more sophisticated
tasks like image captioning or visual question answering. Motivating such
endeavors is the desire for models to capture not only objects present in an
image, but more fine-grained aspects of a scene such as relationships between
objects and their attributes. Scene graphs provide a formal construct for
capturing these aspects of an image. Despite this, there have been only a few
recent efforts to generate scene graphs from imagery. Previous works limit
themselves to settings where bounding box information is available at train
time and do not attempt to generate scene graphs with attributes. In this paper
we propose a method, based on recent advancements in Generative Adversarial
Networks, to overcome these deficiencies. We take the approach of first
generating small subgraphs, each describing a single statement about a scene
from a specific region of the input image chosen using an attention mechanism.
By doing so, our method is able to produce portions of the scene graphs with
attribute information without the need for bounding box labels. Then, the
complete scene graph is constructed from these subgraphs. We show that our
model improves upon prior work in scene graph generation on state-of-the-art
datasets and accepted metrics. Further, we demonstrate that our model is
capable of handling a larger vocabulary size than prior work has attempted. | [
"cs.CV"
] |
In this work, we propose a novel method for generating 3D point clouds that
leverages properties of hypernetworks. Contrary to the existing methods that
learn only the representation of a 3D object, our approach simultaneously finds
a representation of the object and its 3D surface. The main idea of our
HyperCloud method is to build a hypernetwork that returns the weights of a
particular neural network (target network) trained to map points from a uniform
unit ball distribution into a 3D shape. As a consequence, a particular 3D shape
can be generated using point-by-point sampling from the assumed prior
distribution and transforming sampled points with the target network. Since the
hypernetwork is based on an auto-encoder architecture trained to reconstruct
realistic 3D shapes, the target network weights can be considered a
parametrization of the surface of a 3D shape, rather than a standard
representation of a point cloud usually returned by competing approaches. The
proposed architecture allows finding a mesh-based representation of 3D objects
in a generative manner, while providing point clouds on par in quality with the
state-of-the-art methods. | [
"cs.CV"
] |
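A hedged, minimal sketch of the hypernetwork idea above: a network emits the weights of a small target MLP, which then maps points sampled from the unit ball onto the shape. All sizes are assumptions, and the actual HyperCloud architecture is an auto-encoder and is more elaborate:

```python
import torch
import torch.nn as nn

HIDDEN = 64
# Target network: 3 -> HIDDEN -> 3, with weights supplied by the hypernetwork.
N_TARGET_PARAMS = 3 * HIDDEN + HIDDEN + HIDDEN * 3 + 3

class HyperNet(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, N_TARGET_PARAMS))

    def forward(self, z):  # z: (latent_dim,) shape code
        return self.mlp(z)

def target_forward(points, theta):  # points: (N, 3) from the unit ball
    i = 0
    w1 = theta[i:i + 3 * HIDDEN].view(HIDDEN, 3); i += 3 * HIDDEN
    b1 = theta[i:i + HIDDEN]; i += HIDDEN
    w2 = theta[i:i + HIDDEN * 3].view(3, HIDDEN); i += HIDDEN * 3
    b2 = theta[i:i + 3]
    return torch.relu(points @ w1.T + b1) @ w2.T + b2  # (N, 3) shape points

z = torch.randn(128)  # latent code for one shape
ball = torch.randn(2048, 3)
ball = ball / ball.norm(dim=1, keepdim=True) * torch.rand(2048, 1) ** (1 / 3)
cloud = target_forward(ball, HyperNet()(z))  # sampled 3D point cloud
print(cloud.shape)  # torch.Size([2048, 3])
```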
The minimal path model based on the Eikonal partial differential equation
(PDE) has served as a fundamental tool for the applications of image
segmentation and boundary detection in the past three decades. However, the
existing minimal paths-based image segmentation approaches commonly rely on the
image boundary features, potentially limiting their performance in some
situations. In this paper, we introduce a new variational image segmentation
model based on the minimal path framework and the Eikonal PDE, where the
region-based functional that defines the homogeneity criteria can be taken into
account for estimating the associated geodesic paths. This is done by
establishing a geodesic curve interpretation to the region-based active contour
evolution problem. The image segmentation processing is carried out in an
iterative manner in our approach. A crucial ingredient in each iteration is to
construct an asymmetric Randers geodesic metric using a sufficiently small
vector field, such that a set of geodesic paths can be tracked from the
geodesic distance map which is the solution to an Eikonal PDE. The object
boundary can be delineated by the concatenation of the final geodesic paths. We
invoke the Finsler variant of the fast marching method to estimate the geodesic
distance map, yielding an efficient implementation of the proposed Eikonal
region-based active contour model. Experimental results on both synthetic and
real images show that our model indeed achieves encouraging
segmentation performance. | [
"cs.CV",
"cs.CG"
] |
The signature is an infinite graded sequence of statistics known to
characterise a stream of data up to a negligible equivalence class. It is a
transform which has previously been treated as a fixed feature transformation,
on top of which a model may be built. We propose a novel approach which
combines the advantages of the signature transform with modern deep learning
frameworks. By learning an augmentation of the stream prior to the signature
transform, the terms of the signature may be selected in a data-dependent way.
More generally, we describe how the signature transform may be used as a layer
anywhere within a neural network. In this context it may be interpreted as a
pooling operation. We present the results of empirical experiments to back up
the theoretical justification. Code available at
https://github.com/patrick-kidger/Deep-Signature-Transforms. | [
"cs.LG",
"stat.ML",
"68T01"
] |
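For concreteness, a hedged numpy sketch computing the depth-2 signature of a piecewise-linear path, i.e. the kind of fixed transform that the paper turns into a differentiable layer; the truncation depth and example path are illustrative:

```python
import numpy as np

def signature_depth2(path):
    """Depth-2 signature of the piecewise-linear path through `path` (T, d).

    Level 1 is the total increment; level 2 collects the iterated integrals
    S^{ij} = int int_{s<t} dX^i_s dX^j_t, computed exactly for a
    piecewise-linear path via Chen's relation.
    """
    increments = np.diff(path, axis=0)  # (T-1, d)
    level1 = increments.sum(axis=0)     # (d,)
    running = np.zeros(path.shape[1])
    level2 = np.zeros((path.shape[1], path.shape[1]))
    for d in increments:
        level2 += np.outer(running, d) + 0.5 * np.outer(d, d)
        running += d
    return level1, level2

t = np.linspace(0, 1, 100)
path = np.stack([t, np.sin(2 * np.pi * t)], axis=1)  # a 2D stream
lvl1, lvl2 = signature_depth2(path)
# The antisymmetric part of level 2 encodes the Levy area of the stream.
print(lvl1, lvl2[0, 1] - lvl2[1, 0])
```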
We present a new technique for deep reinforcement learning that automatically
detects moving objects and uses the relevant information for action selection.
The detection of moving objects is done in an unsupervised way by exploiting
structure from motion. Instead of directly learning a policy from raw images,
the agent first learns to detect and segment moving objects by exploiting flow
information in video sequences. The learned representation is then used to
focus the policy of the agent on the moving objects. Over time, the agent
identifies which objects are critical for decision making and gradually builds
a policy based on relevant moving objects. This approach, which we call
Motion-Oriented REinforcement Learning (MOREL), is demonstrated on a suite of
Atari games where the ability to detect moving objects reduces the amount of
interaction needed with the environment to obtain a good policy. Furthermore,
the resulting policy is more interpretable than policies that directly map
images to actions or values with a black box neural network. We can gain
insight into the policy by inspecting the segmentation and motion of each
object detected by the agent. This allows practitioners to confirm whether a
policy is making decisions based on sensible information. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Several papers argue that wide minima generalize better than narrow minima.
In this paper, through detailed experiments, we not only corroborate the
generalization properties of wide minima but also provide empirical evidence
for a new hypothesis: the density of wide minima is likely lower than the
density of narrow minima. Further, motivated by this hypothesis, we design a
novel explore-exploit learning rate schedule. On a variety of image and natural
language datasets, compared to their original hand-tuned learning rate
baselines, we show that our explore-exploit schedule can result in either up to
0.84% higher absolute accuracy using the original training budget or up to 57%
reduced training time while achieving the original reported accuracy. For
example, we achieve state-of-the-art (SOTA) accuracy for IWSLT'14 (DE-EN)
dataset by just modifying the learning rate schedule of a high performing
model. | [
"cs.LG",
"stat.ML"
] |
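A hedged sketch of an explore-exploit schedule of the kind described above: hold a high learning rate for an explore phase, then decay during exploitation. The phase lengths, values and decay form are assumptions, not the paper's tuned schedule:

```python
def explore_exploit_lr(step, total_steps, lr_high=0.1, lr_low=1e-4,
                       explore_frac=0.5):
    """High constant LR to explore (escape narrow minima), then linear decay."""
    explore_steps = int(total_steps * explore_frac)
    if step < explore_steps:
        return lr_high
    progress = (step - explore_steps) / max(1, total_steps - explore_steps)
    return lr_high + (lr_low - lr_high) * progress  # linear decay to lr_low

# Usage with any optimizer: set the LR at the start of each step.
total = 10_000
for step in [0, 4_999, 5_000, 9_999]:
    print(step, round(explore_exploit_lr(step, total), 5))
```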
Human activity recognition in videos is a challenging problem that has drawn
a lot of interest, particularly when the goal requires the analysis of a large
video database. The AOLME project provides a collaborative learning environment
for
middle school students to explore mathematics, computer science, and
engineering by processing digital images and videos. As part of this project,
around 2200 hours of video data was collected for analysis. Because of the size
of the dataset, it is hard to analyze all the videos of the dataset manually.
Thus, there is a huge need for reliable computer-based methods that can detect
activities of interest. My thesis is focused on the development of accurate
methods for detecting and tracking objects in long videos. All the models are
validated on videos from 7 different sessions, ranging from 45 minutes to 90
minutes. The keyboard detector achieved a very high average precision (AP) of
92% at 0.5 intersection over union (IoU). Furthermore, a combined system of the
detector with a fast tracker KCF (159fps) was developed so that the algorithm
runs significantly faster without sacrificing accuracy. For a 23-minute video
with resolution 858x480 @ 30 fps, detection alone runs at 4.7x real-time, and
the combined algorithm runs at 21x real-time, with average IoUs of 0.84 and
0.82, respectively. The hand detector achieved an average
precision (AP) of 72% at 0.5 IoU. The detection results were improved to 81%
using optimal data augmentation parameters. The hand detector runs at 4.7x
real-time with an AP of 81% at 0.5 IoU. The hand detection method was integrated
with projections and clustering for accurate proposal generation. This approach
reduced the number of false-positive hand detections by 80%. The overall hand
detection system runs at 4x real-time, capturing all the activity regions of
the current collaborative group. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
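A hedged sketch of the detector-plus-tracker combination described above, assuming opencv-contrib's KCF tracker and a user-supplied `detect_keyboard` function; the tracker constructor name varies across OpenCV versions:

```python
import cv2

DETECT_EVERY = 30  # re-run the slow detector once per N frames

def detect_and_track(frames, detect_keyboard):
    """Run the detector periodically and the fast KCF tracker in between."""
    tracker, boxes = None, []
    for i, frame in enumerate(frames):
        if i % DETECT_EVERY == 0 or tracker is None:
            box = detect_keyboard(frame)           # (x, y, w, h) or None
            if box is not None:
                tracker = cv2.TrackerKCF_create()  # cv2.legacy.* in newer builds
                tracker.init(frame, box)
        else:
            ok, box = tracker.update(frame)
            if not ok:
                tracker, box = None, None          # force re-detection
        boxes.append(box)
    return boxes
```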
A Graph Neural Network (GNN) aggregates the neighborhood of each node into the
node embedding and shows its powerful capability for graph representation
learning. However, most existing GNN variants aggregate the neighborhood
information in a fixed non-injective fashion, which may map different graphs or
nodes to the same embedding, reducing the model expressiveness. We present a
theoretical framework to design a continuous injective set function for
neighborhood aggregation in GNNs. Using the framework, we propose an expressive
GNN that aggregates the neighborhood of each node with a continuous injective
set function, so that a GNN layer maps similar nodes with similar neighborhoods
to similar embeddings, different nodes to different embeddings, and equivalent
nodes or isomorphic graphs to the same embeddings. Moreover, the proposed
expressive GNN can naturally learn expressive representations for graphs with
continuous node attributes. We validate the proposed expressive GNN (ExpGNN)
for graph classification on multiple benchmark datasets including simple graphs
and attributed graphs. The experimental results demonstrate that our model
achieves state-of-the-art performance on most of the benchmarks. | [
"cs.LG",
"cs.AI",
"cs.IT",
"math.IT"
] |
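For illustration, a hedged sketch of an injective-style neighborhood aggregation: sum-pool transformed neighbor features, since the sum (unlike mean or max) can distinguish multisets. The paper's construction for continuous attributes is more general than this GIN-style layer:

```python
import torch
import torch.nn as nn

class InjectiveAggregation(nn.Module):
    """One GNN layer: h_v' = MLP2( h_v + sum_{u in N(v)} MLP1(h_u) )."""
    def __init__(self, dim):
        super().__init__()
        self.pre = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.post = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())

    def forward(self, h, adj):  # h: (N, dim), adj: (N, N) 0/1 matrix
        neighbor_sum = adj @ self.pre(h)  # sum-pooling keeps multiset info
        return self.post(h + neighbor_sum)

h = torch.randn(5, 16)
adj = torch.tensor([[0, 1, 1, 0, 0], [1, 0, 0, 1, 0], [1, 0, 0, 0, 1],
                    [0, 1, 0, 0, 0], [0, 0, 1, 0, 0]], dtype=torch.float)
print(InjectiveAggregation(16)(h, adj).shape)
```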
Text-to-Face (TTF) synthesis is a challenging task with great potential for
diverse computer vision applications. Compared to Text-to-Image (TTI) synthesis
tasks, the textual description of faces can be much more complicated and
detailed due to the variety of facial attributes and the parsing of high
dimensional abstract natural language. In this paper, we propose a Text-to-Face
model that not only produces images in high resolution (1024x1024) with
text-to-image consistency, but also outputs multiple diverse faces to cover a
wide range of unspecified facial features in a natural way. By fine-tuning the
multi-label classifier and image encoder, our model obtains the vectors and
image embeddings which are used to transform the input noise vector sampled
from the normal distribution. Afterwards, the transformed noise vector is fed
into a pre-trained high-resolution image generator to produce a set of faces
with the desired facial attributes. We refer to our model as TTF-HD.
Experimental results show that TTF-HD generates high-quality faces with
state-of-the-art performance. | [
"cs.CV"
] |
We introduce a method for constructing skills capable of solving tasks drawn
from a distribution of parameterized reinforcement learning problems. The
method draws example tasks from a distribution of interest and uses the
corresponding learned policies to estimate the topology of the
lower-dimensional piecewise-smooth manifold on which the skill policies lie.
This manifold models how policy parameters change as task parameters vary. The
method identifies the number of charts that compose the manifold and then
applies non-linear regression in each chart to construct a parameterized skill
by predicting policy parameters from task parameters. We evaluate our method on
an underactuated simulated robotic arm tasked with learning to accurately throw
darts at a parameterized target location. | [
"cs.LG",
"stat.ML"
] |
A serious problem in image classification is that a trained model might
perform well for input data that originates from the same distribution as the
data available for model training, but performs much worse for
out-of-distribution (OOD) samples. In real-world safety-critical applications,
in particular, it is important to be aware if a new data point is OOD. To date,
OOD detection is typically addressed using confidence scores,
auto-encoder-based reconstruction, or contrastive learning. However, the
global image context has not yet been explored to discriminate the non-local
objectness between in-distribution and OOD samples. This paper proposes a
first-of-its-kind OOD detection architecture named OODformer that leverages the
contextualization capabilities of the transformer. Incorporating the
transformer as the principal feature extractor allows us to exploit the
object concepts and their discriminative attributes along with their
co-occurrence via visual attention. Using the contextualised embedding, we
demonstrate OOD detection using both class-conditioned latent space similarity
and a network confidence score. Our approach shows improved generalizability
across various datasets. We have achieved a new state-of-the-art result on
CIFAR-10/-100 and ImageNet30. | [
"cs.CV"
] |
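A hedged sketch of the class-conditioned latent-similarity score described above: compare a test embedding to per-class means of in-distribution embeddings. The distance choice and thresholding are assumptions of the sketch:

```python
import torch

def class_conditioned_ood_score(test_emb, train_embs, train_labels):
    """OOD score = distance to the nearest class-mean embedding (higher = more OOD)."""
    means = torch.stack([train_embs[train_labels == c].mean(dim=0)
                         for c in train_labels.unique()])
    dists = torch.cdist(test_emb, means)  # (n_test, n_classes)
    return dists.min(dim=1).values        # distance to the closest class mean

train_embs = torch.randn(1000, 768)  # e.g. transformer [CLS] features
train_labels = torch.randint(0, 10, (1000,))
scores = class_conditioned_ood_score(torch.randn(4, 768), train_embs, train_labels)
# Flag samples whose score exceeds a threshold calibrated on validation data.
```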
Plant diseases are one of the main threats to food security and crop
production. It is thus valuable to exploit recent advances in artificial
intelligence to assist plant disease diagnosis. One popular approach is to
cast this problem as a leaf image classification task, which can then be
addressed by powerful convolutional neural networks (CNNs). However, the
performance of CNN-based classification approaches depends on a large amount of
high-quality manually labeled training data, which inevitably contain label
noise in practice, leading to model overfitting and performance
degradation. To overcome this problem, we propose a novel framework that
incorporates rectified meta-learning module into common CNN paradigm to train a
noise-robust deep network without using extra supervision information. The
proposed method enjoys the following merits: i) A rectified meta-learning is
designed to pay more attention to unbiased samples, leading to accelerated
convergence and improved classification accuracy. ii) Our method makes no
assumption about the label noise distribution, and thus works well on various
kinds of noise. iii) Our method serves as a plug-and-play module, which can be
embedded into any deep model optimized by a gradient-descent-based method.
Extensive experiments are conducted to demonstrate the superior performance of
our algorithm over state-of-the-art methods. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Tabular data is a crucial form of information expression, which can organize
data in a standard structure for easy information retrieval and comparison.
However, in the financial industry and many other fields, tables are often
disclosed in unstructured digital files, e.g., Portable Document Format (PDF)
and images, from which they are difficult to extract directly. In this paper, to
facilitate deep
learning based table extraction from unstructured digital files, we publish a
standard Chinese dataset named FinTab, which contains more than 1,600 financial
tables of diverse kinds and their corresponding structure representation in
JSON. In addition, we propose a novel graph-based convolutional neural network
model named GFTE as a baseline for future comparison. GFTE integrates image,
position and textual features for precise edge prediction and achieves good
overall results. | [
"cs.CV"
] |
Deep learning models are sensitive to domain shift phenomena. A model trained
on images from one domain cannot generalise well when tested on images from a
different domain, despite capturing similar anatomical structures. It is mainly
because the data distribution between the two domains is different. Moreover,
creating annotation for every new modality is a tedious and time-consuming
task, which also suffers from high inter- and intra-observer variability.
Unsupervised domain adaptation (UDA) methods intend to reduce the gap between
source and target domains by leveraging source domain labelled data to generate
labels for the target domain. However, current state-of-the-art (SOTA) UDA
methods demonstrate degraded performance when there is insufficient data in
source and target domains. In this paper, we present a novel UDA method for
multi-modal cardiac image segmentation. The proposed method is based on
adversarial learning and adapts network features between source and target
domain in different spaces. The paper introduces an end-to-end framework that
integrates: a) entropy minimisation, b) output feature space alignment and c) a
novel point-cloud shape adaptation based on the latent features learned by the
segmentation model. We validated our method on two cardiac datasets by adapting
from the annotated source domain, bSSFP-MRI (balanced Steady-State Free
Precession MRI), to the unannotated target domain, LGE-MRI (Late Gadolinium
Enhancement MRI), for the multi-sequence dataset; and from MRI (source) to CT
(target) for the cross-modality dataset. The results highlighted that by
enforcing adversarial learning in different parts of the network, the proposed
method delivered promising performance, compared to other SOTA methods. | [
"cs.CV"
] |
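To make one ingredient of the framework concrete, a hedged sketch of the entropy-minimisation term computed on the segmentation network's output for unlabelled target-domain images; the reduction and normalisation details are assumptions:

```python
import torch

def entropy_minimisation_loss(logits, eps=1e-8):
    """Mean per-pixel entropy of softmax predictions; minimising it pushes the
    model towards confident predictions on unlabelled target-domain images.

    logits: (B, n_classes, H, W) segmentation outputs.
    """
    probs = torch.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + eps)).sum(dim=1)  # (B, H, W)
    return entropy.mean()

loss = entropy_minimisation_loss(torch.randn(2, 4, 128, 128, requires_grad=True))
loss.backward()
```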
State-of-the-art image captioning methods mostly focus on improving visual
features, while less attention has been paid to utilizing the inherent
properties of language to boost captioning performance. In this paper, we show
that vocabulary coherence between words and the syntactic paradigm of sentences
are also important for generating high-quality image captions. Following the
conventional
encoder-decoder framework, we propose the Reflective Decoding Network (RDN) for
image captioning, which enhances both the long-sequence dependency and position
perception of words in a caption decoder. Our model learns to collaboratively
attend on both visual and textual features and meanwhile perceive each word's
relative position in the sentence to maximize the information delivered in the
generated caption. We evaluate the effectiveness of our RDN on the COCO image
captioning datasets and achieve superior performance over the previous methods.
Further experiments reveal that our approach is particularly advantageous for
hard cases with complex scenes to describe by captions. | [
"cs.CV"
] |
When related learning tasks are naturally arranged in a hierarchy, an
appealing approach for coping with scarcity of instances is that of transfer
learning using a hierarchical Bayes framework. As fully Bayesian computations
can be difficult and computationally demanding, it is often desirable to use
posterior point estimates that facilitate (relatively) efficient prediction.
However, the hierarchical Bayes framework does not always lend itself naturally
to this maximum a posteriori goal. In this work we propose an undirected
reformulation of hierarchical Bayes that relies on priors in the form of
similarity measures. We introduce the notion of "degree of transfer" weights on
components of these similarity measures, and show how they can be automatically
learned within a joint probabilistic framework. Importantly, our reformulation
results in a convex objective for many learning problems, thus facilitating
optimal posterior point estimation using standard optimization techniques. In
addition, we no longer require proper priors, allowing for flexible and
straightforward specification of joint distributions over transfer hierarchies.
We show that our framework is effective for learning models that are part of
transfer hierarchies for two real-life tasks: object shape modeling using
Gaussian density estimation and document classification. | [
"cs.LG",
"stat.ML"
] |
Most researchers base vehicle re-identification on classification. This
always requires an update with the new vehicle models on the market. In this
paper, two types of vehicle re-identification are presented. The first is the
standard method, which needs an image of the searched vehicle. The VRIC and
VehicleID datasets are suitable for training this module. We explain in detail
how to improve the performance of this method using a trained network designed
for classification. The second method takes as input a representative image
with a make/model, release year and colour similar to the searched vehicle. It
is very useful when an image of the searched vehicle is not available. It
outputs shape and colour features, which can be matched across a database to
re-identify vehicles that look similar to the searched vehicle. To obtain a
robust re-identification module, a fine-grained classifier was trained whose
classes consist of four elements: the make of a vehicle, referring to the
vehicle's manufacturer, e.g. Mercedes-Benz; the model of a vehicle, referring
to the type of model within that manufacturer's portfolio, e.g. C Class; the
year, referring to the iteration of the model, which may receive progressive
alterations and upgrades by its manufacturer; and the perspective of the
vehicle. Thus, all four elements describe the vehicle at an increasing degree
of specificity. The aim of the vehicle shape classification is to classify the
combination of these four elements. The colour classification was trained
separately. The results of vehicle re-identification are shown. Using a
developed tool, the re-identification of vehicles on video images and on a
controlled dataset is demonstrated. This work was partially funded under the
grant. | [
"cs.CV"
] |
Scaling machine learning methods to very large datasets has attracted
considerable attention in recent years, thanks to easy access to ubiquitous
sensing and data from the web. We study face recognition and show that three
distinct properties have surprising effects on the transferability of deep
convolutional networks (CNN): (1) The bottleneck of the network serves as an
important transfer learning regularizer, and (2) in contrast to the common
wisdom, performance saturation may exist in CNNs (as the number of training
samples grows); we propose a solution for alleviating this by replacing the
naive random subsampling of the training set with a bootstrapping process.
Moreover, (3) we find a link between the representation norm and the ability to
discriminate in a target domain, which sheds light on how such networks
represent faces. Based on these discoveries, we are able to improve face
recognition accuracy on the widely used LFW benchmark, both in the verification
(1:1) and identification (1:N) protocols, and directly compare, for the first
time, with the state of the art Commercially-Off-The-Shelf system and show a
sizable leap in performance. | [
"cs.CV"
] |
The medical field stands to see significant benefits from the recent advances
in deep learning. Knowing the uncertainty in the decision made by any machine
learning algorithm is of utmost importance for medical practitioners. This
study demonstrates the utility of using Bayesian LSTMs for classification of
medical time series. Four medical time series datasets are used to show the
accuracy improvement Bayesian LSTMs provide over standard LSTMs. Moreover, we
show cherry-picked examples of confident and uncertain classifications of the
medical time series. With simple modifications of the common practice for deep
learning, significant improvements can be made for the medical practitioner and
patient. | [
"stat.ML",
"cs.LG",
"stat.AP"
] |
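A hedged sketch of one common way to obtain such uncertainty estimates, Monte Carlo dropout over an LSTM classifier; whether the paper uses exactly this Bayesian approximation is an assumption of the sketch:

```python
import torch
import torch.nn as nn

class DropoutLSTM(nn.Module):
    def __init__(self, n_features, n_classes, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            dropout=0.3, batch_first=True)
        self.drop = nn.Dropout(0.3)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):  # x: (B, T, n_features)
        out, _ = self.lstm(x)
        return self.head(self.drop(out[:, -1]))

model = DropoutLSTM(n_features=8, n_classes=3)
model.train()  # keep dropout active at test time for MC sampling
x = torch.randn(4, 100, 8)
with torch.no_grad():
    samples = torch.stack([model(x).softmax(dim=1) for _ in range(50)])
mean_prob = samples.mean(dim=0)   # predictive class probabilities
uncertainty = samples.std(dim=0)  # spread = predictive uncertainty
```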
Photo retouching aims at enhancing the aesthetic visual quality of images
that suffer from photographic defects such as over/under exposure, poor
contrast and inharmonious saturation. Practically, photo retouching can be
accomplished by a series of image processing operations. In this paper, we
investigate some commonly-used retouching operations and mathematically find
that these pixel-independent operations can be approximated or formulated by
multi-layer perceptrons (MLPs). Based on this analysis, we propose an extremely
light-weight framework - Conditional Sequential Retouching Network (CSRNet) -
for efficient global image retouching. CSRNet consists of a base network and a
condition network. The base network acts like an MLP that processes each pixel
independently and the condition network extracts the global features of the
input image to generate a condition vector. To realize retouching operations,
we modulate the intermediate features using Global Feature Modulation (GFM),
whose parameters are transformed from the condition vector. Benefiting from the
utilization of $1\times1$ convolutions, CSRNet contains less than 37k
trainable parameters, which is orders of magnitude smaller than existing
learning-based methods. Extensive experiments show that our method achieves
state-of-the-art performance on the benchmark MIT-Adobe FiveK dataset
quantitatively and qualitatively. Code is available at
https://github.com/hejingwenhejingwen/CSRNet. | [
"cs.CV"
] |
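A hedged sketch of Global Feature Modulation as described above: scale and shift intermediate features with parameters derived from the condition vector (the layer sizes are assumptions):

```python
import torch
import torch.nn as nn

class GFM(nn.Module):
    """Global Feature Modulation: y = gamma * x + beta, per channel."""
    def __init__(self, cond_dim, channels):
        super().__init__()
        self.to_gamma = nn.Linear(cond_dim, channels)
        self.to_beta = nn.Linear(cond_dim, channels)

    def forward(self, x, cond):  # x: (B, C, H, W), cond: (B, cond_dim)
        gamma = self.to_gamma(cond)[:, :, None, None]
        beta = self.to_beta(cond)[:, :, None, None]
        return gamma * x + beta

gfm = GFM(cond_dim=32, channels=64)
out = gfm(torch.randn(1, 64, 128, 128), torch.randn(1, 32))
print(out.shape)
```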
In recent years, spiking neural networks (SNNs) have emerged as an alternative
to deep neural networks (DNNs). SNNs offer higher computational efficiency
using low-power neuromorphic hardware and require less labeled data for
training using local and unsupervised learning rules such as spike
timing-dependent plasticity (STDP). SNNs have proven effective in
image classification on simple datasets such as MNIST. However, to process
natural images, a pre-processing step is required. Difference-of-Gaussians
(DoG) filtering is typically used together with on-center/off-center coding,
but it results in a loss of information that is detrimental to the
classification performance. In this paper, we propose to use whitening as a
pre-processing step before learning features with STDP. Experiments on CIFAR-10
show that whitening allows STDP to learn visual features that are closer to the
ones learned with standard neural networks, with a significantly increased
classification performance as compared to DoG filtering. We also propose an
approximation of whitening as convolution kernels that is computationally
cheaper to learn and more suited to be implemented on neuromorphic hardware.
Experiments on CIFAR-10 show that it performs similarly to regular whitening.
Cross-dataset experiments on CIFAR-10 and STL-10 also show that it is fairly
stable across datasets, making it possible to learn a single whitening
transformation to process different datasets. | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
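For reference, a hedged numpy sketch of exact ZCA whitening fitted on image patches; the paper's contribution additionally approximates this transform with convolution kernels suited to neuromorphic hardware:

```python
import numpy as np

def fit_zca(patches, eps=1e-5):
    """Fit a ZCA whitening transform on flattened patches of shape (N, D)."""
    mean = patches.mean(axis=0)
    cov = np.cov(patches - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    w_zca = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return mean, w_zca

def whiten(patches, mean, w_zca):
    return (patches - mean) @ w_zca  # decorrelated, unit-variance features

patches = np.random.rand(10_000, 5 * 5 * 3)  # e.g. 5x5 RGB patches
mean, w_zca = fit_zca(patches)
white = whiten(patches, mean, w_zca)
print(np.round(np.cov(white, rowvar=False)[:3, :3], 2))  # approx identity
```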
Satellite image classification is a challenging problem that lies at the
crossroads of remote sensing, computer vision, and machine learning. Due to the
high variability inherent in satellite data, most of the current object
classification approaches are not suitable for handling satellite datasets. The
progress of satellite image analytics has also been inhibited by the lack of a
single labeled high-resolution dataset with multiple class labels. The
contributions of this paper are twofold - (1) first, we present two new
satellite datasets called SAT-4 and SAT-6, and (2) then, we propose a
classification framework that extracts features from an input image, normalizes
them and feeds the normalized feature vectors to a Deep Belief Network for
classification. On the SAT-4 dataset, our best network produces a
classification accuracy of 97.95% and outperforms three state-of-the-art object
recognition algorithms, namely - Deep Belief Networks, Convolutional Neural
Networks and Stacked Denoising Autoencoders by ~11%. On SAT-6, it produces a
classification accuracy of 93.9% and outperforms the other algorithms by ~15%.
Comparative studies with a Random Forest classifier show the advantage of an
unsupervised learning approach over traditional supervised learning techniques.
A statistical analysis based on Distribution Separability Criterion and
Intrinsic Dimensionality Estimation substantiates the effectiveness of our
approach in learning better representations for satellite imagery. | [
"cs.CV"
] |
In this paper, we propose PointRCNN for 3D object detection from raw point
cloud. The whole framework is composed of two stages: stage-1 for the bottom-up
3D proposal generation and stage-2 for refining proposals in the canonical
coordinates to obtain the final detection results. Instead of generating
proposals from RGB image or projecting point cloud to bird's view or voxels as
previous methods do, our stage-1 sub-network directly generates a small number
of high-quality 3D proposals from point cloud in a bottom-up manner via
segmenting the point cloud of the whole scene into foreground points and
background. The stage-2 sub-network transforms the pooled points of each
proposal to canonical coordinates to learn better local spatial features, which
is combined with global semantic features of each point learned in stage-1 for
accurate box refinement and confidence prediction. Extensive experiments on the
3D detection benchmark of KITTI dataset show that our proposed architecture
outperforms state-of-the-art methods with remarkable margins by using only
point cloud as input. The code is available at
https://github.com/sshaoshuai/PointRCNN. | [
"cs.CV"
] |
Precise localization of polyps is crucial for early cancer screening in
gastrointestinal endoscopy. Videos given by endoscopy bring both richer
contextual information as well as more challenges than still images. The
camera-moving situation, instead of the common camera-fixed-object-moving one,
leads to significant background variation between frames. Severe internal
artifacts (e.g. water flow in the human body, specular reflection by tissues)
can make the quality of adjacent frames vary considerably. These factors
hinder a video-based model to effectively aggregate features from neighborhood
frames and give better predictions. In this paper, we present Spatial-Temporal
Feature Transformation (STFT), a multi-frame collaborative framework to address
these issues. Spatially, STFT mitigates inter-frame variations in the
camera-moving situation with feature alignment by proposal-guided deformable
convolutions. Temporally, STFT proposes a channel-aware attention module to
simultaneously estimate the quality and correlation of adjacent frames for
adaptive feature aggregation. Empirical studies and superior results
demonstrate the effectiveness and stability of our method. For example, STFT
improves the still image baseline FCOS by 10.6% and 20.6% on the comprehensive
F1-score of the polyp localization task in CVC-Clinic and ASUMayo datasets,
respectively, and outperforms the state-of-the-art video-based method by 3.6%
and 8.0%, respectively. Code is available at
\url{https://github.com/lingyunwu14/STFT}. | [
"cs.CV"
] |
Deep generative models for graph-structured data offer a new angle on the
problem of chemical synthesis: by optimizing differentiable models that
directly generate molecular graphs, it is possible to side-step expensive
search procedures in the discrete and vast space of chemical structures. We
introduce MolGAN, an implicit, likelihood-free generative model for small
molecular graphs that circumvents the need for expensive graph matching
procedures or node ordering heuristics of previous likelihood-based methods.
Our method adapts generative adversarial networks (GANs) to operate directly on
graph-structured data. We combine our approach with a reinforcement learning
objective to encourage the generation of molecules with specific desired
chemical properties. In experiments on the QM9 chemical database, we
demonstrate that our model is capable of generating close to 100% valid
compounds. MolGAN compares favorably both to recent proposals that use
string-based (SMILES) representations of molecules and to a likelihood-based
method that directly generates graphs, albeit being susceptible to mode
collapse. | [
"stat.ML",
"cs.LG"
] |
Monocular 3D object detection is an important task for autonomous driving
considering its advantage of low cost. It is much more challenging than
conventional 2D cases due to its inherent ill-posed property, which is mainly
reflected in the lack of depth information. Recent progress in 2D detection
offers opportunities to better solve this problem. However, it is non-trivial
to make a generic, adapted 2D detector work for this 3D task. In this paper, we
study this problem with a method built on a fully convolutional single-stage
detector and propose a general framework FCOS3D. Specifically, we first
transform the commonly defined 7-DoF 3D targets to the image domain and
decouple them as 2D and 3D attributes. Then the objects are distributed to
different feature levels with consideration of their 2D scales and assigned
only according to the projected 3D-center for the training procedure.
Furthermore, the center-ness is redefined with a 2D Gaussian distribution based
on the 3D-center to fit the 3D target formulation. All of these make this
framework simple yet effective, getting rid of any 2D detection or 2D-3D
correspondence priors. Our solution achieves 1st place out of all the
vision-only methods in the nuScenes 3D detection challenge of NeurIPS 2020.
Code and models are released at https://github.com/open-mmlab/mmdetection3d. | [
"cs.CV",
"cs.AI",
"cs.RO"
] |
Temporal context is key to the recognition of expressions of emotion.
Existing methods, that rely on recurrent or self-attention models to enforce
temporal consistency, work on the feature level, ignoring the task-specific
temporal dependencies, and fail to model context uncertainty. To alleviate
these issues, we build upon the framework of Neural Processes to propose a
method for apparent emotion recognition with three key novel components: (a)
probabilistic contextual representation with a global latent variable model;
(b) temporal context modelling using task-specific predictions in addition to
features; and (c) smart temporal context selection. We validate our approach on
four databases, two for Valence and Arousal estimation (SEWA and AffWild2), and
two for Action Unit intensity estimation (DISFA and BP4D). Results show a
consistent improvement over a series of strong baselines as well as over
state-of-the-art methods. | [
"cs.CV",
"cs.LG"
] |
Visual attention mechanisms have proven to be integrally important
constituent components of many modern deep neural architectures. They provide
an efficient and effective way to utilize visual information selectively, which
has been shown to be especially valuable in multi-modal learning tasks. However,
all
prior attention frameworks lack the ability to explicitly model structural
dependencies among attention variables, making it difficult to predict
consistent attention masks. In this paper we develop a novel structured spatial
attention mechanism which is end-to-end trainable and can be integrated with
any feed-forward convolutional neural network. This proposed AttentionRNN layer
explicitly enforces structure over the spatial attention variables by
sequentially predicting attention values in the spatial mask in a
bi-directional raster-scan and inverse raster-scan order. As a result, each
attention value depends not only on local image or contextual information, but
also on the previously predicted attention values. Our experiments show
consistent quantitative and qualitative improvements on a variety of
recognition tasks and datasets; including image categorization, question
answering and image generation. | [
"cs.CV"
] |
Lateral connections in the primary visual cortex (V1) have long been
hypothesized to be responsible for several visual processing mechanisms such as
brightness induction, chromatic induction, visual discomfort and bottom-up
visual attention (also named saliency). Many computational models have been
developed to independently predict these and other visual processes, but no
computational model has been able to reproduce all of them simultaneously. In
this work we show that a biologically plausible computational model of lateral
interactions of V1 is able to simultaneously predict saliency and all the
aforementioned visual processes. Our model's (NSWAM) architecture is based on
Penacchio's neurodynamic model of lateral connections of V1. It is defined as a
network of firing rate neurons, sensitive to visual features such as
brightness, color, orientation and scale. We tested NSWAM saliency predictions
using images from several eye tracking datasets. We show that accuracy of
predictions, using shuffled metrics, obtained by our architecture is similar to
other state-of-the-art computational methods, particularly with synthetic
images (CAT2000-Pattern & SID4VAM) which mainly contain low level features.
Moreover, we outperform other biologically-inspired saliency models that are
specifically designed to exclusively reproduce saliency. Hence, we show that
our biologically plausible model of lateral connections can simultaneously
explain different visual processes present in V1 (without applying any type of
training or optimization and keeping the same parametrization for all the
visual processes). This can be useful for the definition of a unified
architecture of the primary visual cortex. | [
"cs.CV"
] |
The multi-modal salient object detection model based on RGB-D information has
better robustness in the real world. However, it remains nontrivial to better
adaptively balance effective multi-modal information in the feature fusion
phase. In this letter, we propose a novel gated recoding network (GRNet) to
evaluate the information validity of the two modalities and balance their
influence. Our framework is divided into three phases: a perception phase, a
recoding mixing phase and a feature integration phase. First, a perception
encoder is adopted to extract multi-level single-modal features, which lays the
foundation for multi-modal semantic comparative analysis. Then, a
modal-adaptive gate unit (MGU) is proposed to suppress the invalid information
and transfer the effective modal features to the recoding mixer and the hybrid
branch decoder. The recoding mixer is responsible for recoding and mixing the
balanced multi-modal information. Finally, the hybrid branch decoder completes
the multi-level feature integration under the guidance of an optional edge
guidance stream (OEGS). Experiments and analysis on eight popular benchmarks
verify that our framework performs favorably against nine state-of-the-art
methods. | [
"cs.CV"
] |
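A hedged sketch of a modal-adaptive gating unit of the kind described above, which weighs RGB against depth features with a learned validity gate; the actual MGU design may differ:

```python
import torch
import torch.nn as nn

class ModalAdaptiveGate(nn.Module):
    """Gate that suppresses the less valid modality at each spatial location."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, feat_rgb, feat_depth):  # each: (B, C, H, W)
        g = self.gate(torch.cat([feat_rgb, feat_depth], dim=1))  # (B, 1, H, W)
        return g * feat_rgb + (1 - g) * feat_depth

mgu = ModalAdaptiveGate(channels=64)
fused = mgu(torch.randn(1, 64, 40, 40), torch.randn(1, 64, 40, 40))
print(fused.shape)
```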
This paper proposes \textit{layer fusion} - a model compression technique
that discovers which weights to combine and then fuses weights of similar
fully-connected, convolutional and attention layers. Layer fusion can
significantly reduce the number of layers of the original network with little
additional computation overhead, while maintaining competitive performance.
From experiments on CIFAR-10, we find that various deep convolutional neural
networks can remain within 2\% accuracy points of the original networks up to a
compression ratio of 3.33 when iteratively retrained with layer fusion. For
experiments on the WikiText-2 language modelling dataset where pretrained
transformer models are used, we achieve compression that leads to a network
that is 20\% of its original size while being within 5 perplexity points of the
original network. We also find that other well-established compression
techniques can achieve competitive performance when compared to their original
networks given a sufficient number of retraining steps. Generally, we observe a
clear inflection point in performance as the amount of compression increases,
suggesting a bound on the amount of compression that can be achieved before an
exponential degradation in performance. | [
"cs.LG",
"stat.ML"
] |
Face image retrieval, which searches for images of the same identity from the
query input face image, is drawing more attention as the size of the image
database increases rapidly. In order to conduct fast and accurate retrieval,
compact hash-code-based methods have been proposed, and recently, deep face
image hashing methods with supervised classification training have shown
outstanding performance. However, the classification-based scheme has the
disadvantage that it cannot incorporate the complex similarities between face
images into hash code learning. In this paper, we attempt to improve the face
image retrieval quality by proposing a Similarity Guided Hashing (SGH) method,
which jointly considers self- and pairwise-similarity. SGH employs
various data augmentations designed to explore elaborate similarities between
face images, addressing both intra- and inter-identity difficulties. Extensive
experimental results on the protocols with existing benchmarks and an
additionally proposed large scale higher resolution face image dataset
demonstrate that our SGH delivers state-of-the-art retrieval performance. | [
"cs.CV",
"cs.IR"
] |
Reinforcement learning has achieved remarkable performance in a wide range of
tasks these days. Nevertheless, some unsolved problems limit its applications
in real-world control. One of them is model misspecification, a situation where
an agent is trained and deployed in environments with different transition
dynamics. We propose an novel framework that utilize history trajectory and
Partial Observable Markov Decision Process Modeling to deal with this dilemma.
Additionally, we put forward an efficient adversarial attack method to assist
robust training. Our experiments in four gym domains validate the effectiveness
of our framework. | [
"cs.LG",
"cs.AI"
] |
Automatic security inspection using computer vision technology is a
challenging task in real-world scenarios due to various factors, including
intra-class variance, class imbalance, and occlusion. Most of the previous
methods rarely address the cases in which prohibited items are deliberately
hidden in messy objects, owing to the lack of large-scale datasets, which
restricts their applications in real-world scenarios. Towards real-world prohibited item
detection, we collect a large-scale dataset, named PIDray, which covers
various cases in real-world scenarios for prohibited item detection, especially
for deliberately hidden items. With an intensive amount of effort, our dataset
contains $12$ categories of prohibited items in $47,677$ X-ray images with
high-quality annotated segmentation masks and bounding boxes. To the best of
our knowledge, it is the largest prohibited items detection dataset to date.
Meanwhile, we design the selective dense attention network (SDANet) to
construct a strong baseline, which consists of the dense attention module and
the dependency refinement module. The dense attention module, formed by
spatial and channel-wise dense attentions, is designed to learn the
discriminative features to boost the performance. The dependency refinement
module is used to exploit the dependencies of multi-scale features. Extensive
experiments conducted on the collected PIDray dataset demonstrate that the
proposed method performs favorably against the state-of-the-art methods,
especially for detecting the deliberately hidden items. | [
"cs.CV"
] |
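As a rough illustration of the channel-wise half of a dense attention module, a squeeze-and-excitation-style block reweights channels by learned importance. This is a stand-in sketch under that assumption; SDANet's actual module combines spatial and channel-wise attention densely and will differ in detail.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style channel attention standing in for a channel-wise branch."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.score = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # squeeze to (B, C, 1, 1)
            nn.Conv2d(channels, channels // reduction, 1),  # bottleneck
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # restore width
            nn.Sigmoid(),                                   # per-channel weight
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.score(x)  # emphasize discriminative channels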
Multi-agent spatiotemporal modeling is a challenging task from both an
algorithmic design and computational complexity perspective. Recent work has
explored the efficacy of traditional deep sequential models in this domain, but
these architectures are slow and cumbersome to train, particularly as model
size increases. Further, prior attempts to model interactions between agents
across time have limitations, such as imposing an order on the agents, or
making assumptions about their relationships. In this paper, we introduce
baller2vec, a multi-entity generalization of the standard Transformer that can,
with minimal assumptions, simultaneously and efficiently integrate information
across entities and time. We test the effectiveness of baller2vec for
multi-agent spatiotemporal modeling by training it to perform two different
basketball-related tasks: (1) simultaneously forecasting the trajectories of
all players on the court and (2) forecasting the trajectory of the ball. Not
only does baller2vec learn to perform these tasks well (outperforming a graph
recurrent neural network with a similar number of parameters by a wide margin),
it also appears to "understand" the game of basketball, encoding idiosyncratic
qualities of players in its embeddings, and performing basketball-relevant
functions with its attention heads. | [
"cs.LG",
"cs.MA"
] |
We propose a fully unsupervised multi-modal deformable image registration
method (UMDIR), which does not require any ground truth deformation fields or
any aligned multi-modal image pairs during training. Multi-modal registration
is a key problem in many medical image analysis applications. It is very
challenging due to complicated and unknown relationships between different
modalities. In this paper, we propose an unsupervised learning approach to
reduce the multi-modal registration problem to a mono-modal one through image
disentangling. In particular, we decompose images of both modalities into a
common latent shape space and separate latent appearance spaces via an
unsupervised multi-modal image-to-image translation approach. The proposed
registration approach is then built on the factorized latent shape code, with
the assumption that the intrinsic shape deformation existing in original image
domain is preserved in this latent space. Specifically, two metrics have been
proposed for training the proposed network: a latent similarity metric defined
in the common shape space and a learning-based image similarity metric based on
an adversarial loss. We examined different variations of our proposed approach
and compared them with conventional state-of-the-art multi-modal registration
methods. Results show that our proposed methods achieve competitive performance
against other methods at substantially reduced computation time. | [
"cs.CV"
] |
Inferring the behavior model of a running software system is quite useful for
several automated software engineering tasks, such as program comprehension,
anomaly detection, and testing. Most existing dynamic model inference
techniques are white-box, i.e., they require source code to be instrumented to
get run-time traces. However, in many systems, instrumenting the entire source
code is not possible (e.g., when using black-box third-party libraries) or
might be very costly. Unfortunately, most black-box techniques that detect
states over time are either univariate, or make assumptions on the data
distribution, or have limited power for learning over a long period of past
behavior. To overcome the above issues, in this paper, we propose a hybrid deep
neural network that accepts as input a set of time series, one per input/output
signal of the system, and applies a set of convolutional and recurrent layers
to learn the non-linear correlations between signals and the patterns, over
time. We have applied our approach on a real UAV auto-pilot solution from our
industry partner with half a million lines of C code. We ran 888 random recent
system-level test cases and inferred states, over time. Our comparison with
several traditional time series change point detection techniques showed that
our approach improves their performance by up to 102%, in terms of finding
state change points, measured by F1 score. We also showed that our state
classification algorithm provides on average 90.45% F1 score, which improves
traditional classification algorithms by up to 17%. | [
"cs.LG",
"cs.SE",
"stat.ML"
] |
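The hybrid architecture described above pairs convolutions (cross-signal correlations) with recurrence (patterns over time). A minimal sketch, with illustrative layer sizes and names that are our assumptions, could look as follows.

```python
import torch
import torch.nn as nn

class SignalStateNet(nn.Module):
    """Hypothetical hybrid model: one time series per input/output signal."""
    def __init__(self, n_signals: int, n_states: int):
        super().__init__()
        # Convolutions capture local correlations across signals ...
        self.conv = nn.Sequential(
            nn.Conv1d(n_signals, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # ... and a recurrent layer models patterns over time.
        self.rnn = nn.LSTM(64, 128, batch_first=True)
        self.head = nn.Linear(128, n_states)

    def forward(self, x):                       # x: (batch, n_signals, T)
        h = self.conv(x)                        # (batch, 64, T)
        h, _ = self.rnn(h.transpose(1, 2))      # (batch, T, 128)
        return self.head(h)                     # per-timestep state logits
```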
Temporal semantic scene understanding is critical for self-driving cars or
robots operating in dynamic environments. In this paper, we propose 4D panoptic
LiDAR segmentation to assign a semantic class and a temporally-consistent
instance ID to a sequence of 3D points. To this end, we present an approach and
a point-centric evaluation metric. Our approach determines a semantic class for
every point while modeling object instances as probability distributions in the
4D spatio-temporal domain. We process multiple point clouds in parallel and
resolve point-to-instance associations, effectively alleviating the need for
explicit temporal data association. Inspired by recent advances in benchmarking
of multi-object tracking, we propose to adopt a new evaluation metric that
separates the semantic and point-to-instance association aspects of the task.
With this work, we aim at paving the road for future developments of temporal
LiDAR panoptic perception. | [
"cs.CV",
"cs.RO"
] |
Recent years have witnessed the popularity and success of graph neural
networks (GNN) in various scenarios. To obtain data-specific GNN architectures,
researchers turn to neural architecture search (NAS), which has achieved impressive
success in discovering effective architectures in convolutional neural
networks. However, it is non-trivial to apply NAS approaches to GNN due to
challenges in search space design and the expensive searching cost of existing
NAS methods. In this work, to obtain the data-specific GNN architectures and
address the computational challenges faced by NAS approaches, we propose a
framework, which tries to Search to Aggregate NEighborhood (SANE), to
automatically design data-specific GNN architectures. By designing a novel and
expressive search space, we propose a differentiable search algorithm, which is
more efficient than previous reinforcement learning based methods. Experimental
results on four tasks and seven real-world datasets demonstrate the superiority
of SANE compared to existing GNN models and NAS approaches in terms of
effectiveness and efficiency. (Code is available at:
https://github.com/AutoML-4Paradigm/SANE). | [
"cs.LG"
] |
How do we formalize the challenge of credit assignment in reinforcement
learning? Common intuition would draw attention to reward sparsity as a key
contributor to difficult credit assignment and traditional heuristics would
look to temporal recency for the solution, calling upon the classic eligibility
trace. We posit that it is not the sparsity of the reward itself that causes
difficulty in credit assignment, but rather the \emph{information sparsity}. We
propose to use information theory to define this notion, which we then use to
characterize when credit assignment is an obstacle to efficient learning. With
this perspective, we outline several information-theoretic mechanisms for
measuring credit under a fixed behavior policy, highlighting the potential of
information theory as a key tool towards provably-efficient credit assignment. | [
"cs.LG",
"cs.IT",
"math.IT"
] |
In batch reinforcement learning (RL), one often constrains a learned policy
to be close to the behavior (data-generating) policy, e.g., by constraining the
learned action distribution to differ from the behavior policy by some maximum
degree that is the same at each state. This can cause batch RL to be overly
conservative, unable to exploit large policy changes at frequently-visited,
high-confidence states without risking poor performance at sparsely-visited
states. To remedy this, we propose residual policies, where the allowable
deviation of the learned policy is state-action-dependent. We derive a new
RL method, BRPO, which learns both the policy and allowable deviation that
jointly maximize a lower bound on policy performance. We show that BRPO
achieves state-of-the-art performance in a number of tasks. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
This paper proposes a nonparametric Bayesian method for exploratory data
analysis and feature construction in continuous time series. Our method focuses
on understanding shared features in a set of time series that exhibit
significant individual variability. It builds on the framework of
latent Dirichlet allocation (LDA) and its extension to hierarchical Dirichlet
processes, which allows us to characterize each series as switching between
latent ``topics'', where each topic is characterized as a distribution over
``words'' that specify the series dynamics. However, unlike standard
applications of LDA, we discover the words as we learn the model. We apply this
model to the task of tracking the physiological signals of premature infants;
our model obtains clinically significant insights as well as useful features
for supervised learning tasks. | [
"stat.ML",
"cs.AI",
"stat.ME"
] |
With the recent developments in neural networks, there has been a resurgence
in algorithms for the automatic generation of simulation-ready electronic
circuits from hand-drawn circuits. However, most of the approaches in
literature were confined to classify different types of electrical components
and only a few of those methods have shown a way to rebuild the circuit
schematic from the scanned image, which is extremely important for further
automation of netlist generation. This paper proposes a real-time algorithm for
the automatic recognition of hand-drawn electrical circuits based on object
detection and circuit node recognition. The proposed approach employs You Only
Look Once version 5 (YOLOv5) for detection of circuit components and a novel
Hough transform based approach for node recognition. Using YOLOv5 object
detection algorithm, a mean average precision (mAP@0.5) of 98.2% is achieved in
detecting the components. The proposed method is also able to rebuild the
circuit schematic with 80% accuracy. | [
"cs.CV",
"eess.IV"
] |
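Hough-based node recognition can be sketched with OpenCV's probabilistic Hough transform: detect wire segments and treat (merged) endpoints as candidate circuit nodes. Parameters and the file name below are placeholders; the paper's node-recognition logic is more involved than this sketch.

```python
import cv2
import numpy as np

# "circuit.png" is a placeholder path to a scanned hand-drawn circuit.
img = cv2.imread("circuit.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=30, maxLineGap=5)
# Endpoints of detected segments are candidate nodes; endpoints that
# (nearly) coincide would be merged into a single circuit node.
if lines is not None:
    endpoints = lines.reshape(-1, 2, 2)   # (N segments, 2 endpoints, xy)
    print(f"{len(lines)} wire segments, {endpoints.shape[0] * 2} endpoints")
```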
Zero-shot learning, the task of learning to recognize new classes not seen
during training, has received considerable attention in the case of 2D image
classification. However, despite the increasing ubiquity of 3D sensors, the
corresponding 3D point cloud classification problem has not been meaningfully
explored and introduces new challenges. In this paper, we identify some of the
challenges and apply 2D Zero-Shot Learning (ZSL) methods in the 3D domain to
analyze the performance of existing models. Then, we propose a novel approach
to address the issues specific to 3D ZSL. We first present an inductive ZSL
process and then extend it to the transductive ZSL and Generalized ZSL (GZSL)
settings for 3D point cloud classification. To this end, a novel loss function
is developed that simultaneously aligns seen semantics with point cloud
features and takes advantage of unlabeled test data to address some known
issues (e.g., the problems of domain adaptation, hubness, and data bias). While
designed for the particularities of 3D point cloud classification, the method
is shown to also be applicable to the more common use-case of 2D image
classification. An extensive set of experiments is carried out, establishing
state-of-the-art for ZSL and GZSL on synthetic (ModelNet40, ModelNet10, McGill)
and real (ScanObjectNN) 3D point cloud datasets. | [
"cs.CV"
] |
Our objective is video retrieval based on natural language queries. In
addition, we consider the analogous problem of retrieving sentences or
generating descriptions given an input video. Recent work has addressed the
problem by embedding visual and textual inputs into a common space where
semantic similarities correlate to distances. We also adopt the embedding
approach, and make the following contributions: First, we utilize web image
search in the sentence embedding process to disambiguate fine-grained visual
concepts. Second, we propose embedding models for sentence, image, and video
inputs whose parameters are learned simultaneously. Finally, we show how the
proposed model can be applied to description generation. Overall, we observe a
clear improvement over the state-of-the-art methods in the video and sentence
retrieval tasks. In description generation, the performance level is comparable
to the current state-of-the-art, although our embeddings were trained for the
retrieval tasks. | [
"cs.CV"
] |
Recently, logo detection has received more and more attention for its wide
applications in the multimedia field, such as intellectual property protection,
product brand management, and logo duration monitoring. Unlike general object
detection, logo detection is a challenging task, especially for small logo
objects and large aspect ratio logo objects in the real-world scenario. In this
paper, we propose a novel approach, named Discriminative Semantic Feature
Pyramid Network with Guided Anchoring (DSFP-GA), which can address these
challenges via aggregating the semantic information and generating different
aspect ratio anchor boxes. More specifically, our approach mainly consists of
Discriminative Semantic Feature Pyramid (DSFP) and Guided Anchoring (GA).
Considering that low-level feature maps that are used to detect small logo
objects lack semantic information, we propose the DSFP, which can enrich more
discriminative semantic features of low-level feature maps and can achieve
better performance on small logo objects. Furthermore, preset anchor boxes are
less efficient for detecting large aspect ratio logo objects. We therefore
integrate the GA into our method to generate large aspect ratio anchor boxes to
mitigate this issue. Extensive experimental results on four benchmarks
demonstrate the effectiveness of our proposed DSFP-GA. Moreover, we further
conduct visual analysis and ablation studies to illustrate the advantage of our
method in detecting small and large aspect ratio logo objects. The code and models
can be found at https://github.com/Zhangbaisong/DSFP-GA. | [
"cs.CV"
] |
Learning powerful data embeddings has become a centerpiece in machine
learning, especially in natural language processing and computer vision
domains. The crux of these embeddings is that they are pretrained on huge
corpora of data in an unsupervised fashion, sometimes aided with transfer
learning. However, currently in the graph learning domain, embeddings learned
through existing graph neural networks (GNNs) are task dependent and thus
cannot be shared across different datasets. In this paper, we present the first
powerful and theoretically guaranteed graph neural network that is designed to
learn task-independent graph embeddings, thereafter referred to as deep
universal graph embedding (DUGNN). Our DUGNN model incorporates a novel graph
neural network (as a universal graph encoder) and leverages rich Graph Kernels
(as a multi-task graph decoder) for both unsupervised learning and
(task-specific) adaptive supervised learning. By learning task-independent
graph embeddings across diverse datasets, DUGNN also reaps the benefits of
transfer learning. Through extensive experiments and ablation studies, we show
that the proposed DUGNN model consistently outperforms both the existing
state-of-the-art GNN models and Graph Kernels by an increased accuracy of 3%-8%
on graph classification benchmark datasets. | [
"cs.LG",
"stat.ML"
] |
Network representation learning (NRL) is a powerful technique for learning
low-dimensional vector representation of high-dimensional and sparse graphs.
Most studies explore the structure and metadata associated with the graph using
random walks and employ unsupervised or semi-supervised learning schemes.
Learning in these methods is context-free, because only a single representation
per node is learned. Recent studies have questioned the sufficiency of a
single representation and proposed a context-sensitive approach that proved to
be highly effective in applications such as link prediction and ranking.
However, most of these methods rely on additional textual features that
require RNNs or CNNs to capture high-level features or rely on a community
detection algorithm to identify multiple contexts of a node.
In this study, without requiring additional features nor a community
detection algorithm, we propose a novel context-sensitive algorithm called GAP
that learns to attend on different parts of a node's neighborhood using
attentive pooling networks. We show the efficacy of GAP using three real-world
datasets on link prediction and node clustering tasks and compare it against 10
popular and state-of-the-art (SOTA) baselines. GAP consistently outperforms
them and achieves up to ~9% and ~20% gain over the best performing methods on
link prediction and clustering tasks, respectively. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
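The attentive-pooling idea GAP builds on can be reduced to a few lines: score a node's neighborhood embeddings against a context vector and take the weighted sum, so each context yields a different representation of the same node. Shapes below are assumptions; GAP's actual model attends mutually between node pairs.

```python
import torch

def attentive_pool(neighbors: torch.Tensor, query: torch.Tensor):
    """neighbors: (N, d) neighborhood embeddings, query: (d,) context vector."""
    scores = neighbors @ query               # attention logits, shape (N,)
    weights = torch.softmax(scores, dim=0)   # attend to the relevant part
    return weights @ neighbors               # context-sensitive summary, (d,)
```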
Deep neural networks achieve unprecedented performance levels over many tasks
and scale well with large quantities of data, but performance in the low-data
regime and tasks like one shot learning still lags behind. While recent work
suggests many hypotheses from better optimization to more complicated network
structures, in this work we hypothesize that having a learnable and more
expressive similarity objective is an essential missing component. Towards
overcoming that, we propose a network design inspired by deep residual networks
that allows the efficient computation of this more expressive pairwise
similarity objective. Further, we argue that regularization is key in learning
with small amounts of data, and propose an additional generator network based
on the Generative Adversarial Networks where the discriminator is our residual
pairwise network. This provides a strong regularizer by leveraging the
generated data samples. The proposed model can generate plausible variations of
exemplars over unseen classes and outperforms strong discriminative baselines
for few shot classification tasks. Notably, our residual pairwise network
design outperforms the previous state-of-the-art on the challenging mini-Imagenet
dataset for one shot learning by getting over 55% accuracy for the 5-way
classification task over unseen classes. | [
"cs.CV",
"cs.NE"
] |
Nonlocal patch-based methods, in particular the Bayes' approach of Lebrun,
Buades and Morel (2013), are considered as state-of-the-art methods for
denoising (color) images corrupted by white Gaussian noise of moderate
variance. This paper is the first attempt to generalize this technique to
manifold-valued images. Such images, for example images with phase or
directional entries or with values in the manifold of symmetric positive
definite matrices, are frequently encountered in real-world applications.
Generalizing the normal law to manifolds is not canonical and different
attempts have been considered. Here we focus on a straightforward intrinsic
model and discuss the relation to other approaches for specific manifolds. We
reinterpret the Bayesian approach of Lebrun et al. (2013) in terms of minimum
mean squared error estimation, which motivates our definition of a
corresponding estimator on the manifold. With this estimator at hand we present
a nonlocal patch-based method for the restoration of manifold-valued images.
Various proof-of-concept examples demonstrate the potential of the proposed
algorithm. | [
"cs.CV",
"math.NA"
] |
While machine learning approaches to visual emotion recognition offer great
promise, current methods consider training and testing models on small scale
datasets covering limited visual emotion concepts. Our analysis identifies an
important but long overlooked issue of existing visual emotion benchmarks in
the form of dataset biases. We design a series of tests to show and measure how
such dataset biases obstruct learning a generalizable emotion recognition
model. Based on our analysis, we propose a webly supervised approach by
leveraging a large quantity of stock image data. Our approach uses a simple yet
effective curriculum guided training strategy for learning discriminative
emotion features. We discover that the models learned using our large scale
stock image dataset exhibit significantly better generalization ability than
the existing datasets without the manual collection of even a single label.
Moreover, visual representation learned using our approach holds a lot of
promise across a variety of tasks on different image and video datasets. | [
"cs.CV"
] |
High-resolution representations are essential for position-sensitive vision
problems, such as human pose estimation, semantic segmentation, and object
detection. Existing state-of-the-art frameworks first encode the input image as
a low-resolution representation through a subnetwork that is formed by
connecting high-to-low resolution convolutions \emph{in series} (e.g., ResNet,
VGGNet), and then recover the high-resolution representation from the encoded
low-resolution representation. Instead, our proposed network, named as
High-Resolution Network (HRNet), maintains high-resolution representations
through the whole process. There are two key characteristics: (i) Connect the
high-to-low resolution convolution streams \emph{in parallel}; (ii) Repeatedly
exchange the information across resolutions. The benefit is that the resulting
representation is semantically richer and spatially more precise. We show the
superiority of the proposed HRNet in a wide range of applications, including
human pose estimation, semantic segmentation, and object detection, suggesting
that the HRNet is a stronger backbone for computer vision problems. All the
codes are available at~{\url{https://github.com/HRNet}}. | [
"cs.CV"
] |
Unscheduled power disturbances cause severe consequences both for customers
and grid operators. To defend against such events, it is necessary to identify
the causes of interruptions in the power distribution network. In this work, we
focus on the power grid of a Norwegian community in the Arctic that experiences
several faults whose sources are unknown. First, we construct a data set
consisting of relevant meteorological data and information about the current
power quality logged by power-quality meters. Then, we adopt machine-learning
techniques to predict the occurrence of faults. Experimental results show that
both linear and non-linear classifiers achieve good classification performance.
This indicates that the considered power-quality and weather variables explain
well the power disturbances. Interpreting the decision process of the
classifiers provides valuable insights to understand the main causes of
disturbances. Traditional feature selection methods can only indicate which
are the variables that, on average, mostly explain the fault occurrences in the
dataset. Besides providing such a global interpretation, it is also important
to identify the specific set of variables that explain each individual fault.
To address this challenge, we adopt a recent technique to interpret the
decision process of a deep learning model, called Integrated Gradients. The
proposed approach allows us to gain detailed insights on the occurrence of a
specific fault, which are valuable for the distribution system operators to
implement strategies to prevent and mitigate power disturbances. | [
"cs.LG"
] |
Google uses continuous streams of data from industry partners in order to
deliver accurate results to users. Unexpected drops in traffic can be an
indication of an underlying issue and may be an early warning that remedial
action may be necessary. Detecting such drops is non-trivial because streams
are variable and noisy, with roughly regular spikes (in many different shapes)
in traffic data. We investigated the question of whether or not we can predict
anomalies in these data streams. Our goal is to utilize Machine Learning and
statistical approaches to classify anomalous drops in periodic, but noisy,
traffic patterns. Since we do not have a large body of labeled examples to
directly apply supervised learning for anomaly classification, we approached
the problem in two parts. First we used TensorFlow to train our various models
including DNNs, RNNs, and LSTMs to perform regression and predict the expected
value in the time series. Secondly we created anomaly detection rules that
compared the actual values to predicted values. Since the problem requires
finding sustained anomalies, rather than just short delays or momentary
inactivity in the data, our two detection methods focused on continuous
sections of activity rather than just single points. We tried multiple
combinations of our models and rules and found that using the intersection of
our two anomaly detection methods proved to be an effective method of detecting
anomalies on almost all of our models. In the process we also found that not
all data fell within our experimental assumptions, as one data stream had no
periodicity, and therefore no time based model could predict it. | [
"stat.ML",
"cs.LG"
] |
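The "sustained anomaly" rule described above can be sketched as follows: flag a drop only when the actual traffic stays below the model's prediction band for k consecutive steps. The tolerance and window length here are illustrative values, not the production thresholds.

```python
import numpy as np

def sustained_anomalies(actual, predicted, tol=0.2, k=5):
    """Return start indices where traffic drops below the band for k steps."""
    below = actual < predicted * (1 - tol)              # pointwise drop test
    run = np.convolve(below.astype(int), np.ones(k, dtype=int), mode="valid")
    return np.where(run == k)[0]                        # sustained drops only

actual = np.array([10, 9, 4, 4, 4, 4, 4, 10.0])
predicted = np.full(8, 10.0)
print(sustained_anomalies(actual, predicted))           # -> [2]
```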
Recently, a race towards the simplification of deep networks has begun,
showing that it is effectively possible to reduce the size of these models with
minimal or no performance loss. However, there is a general lack of
understanding of why these pruning strategies are effective. In this work, we
compare and analyze pruned solutions with two different pruning
approaches, one-shot and gradual, showing the higher effectiveness of the
latter. In particular, we find that gradual pruning allows access to narrow,
well-generalizing minima, which are typically ignored when using one-shot
approaches. We also propose PSP-entropy, a measure to understand
how a given neuron correlates to some specific learned classes. Interestingly,
we observe that the features extracted by iteratively-pruned models are less
correlated to specific classes, potentially making these models a better fit in
transfer learning approaches. | [
"cs.LG",
"cond-mat.dis-nn",
"cs.NE"
] |
We have developed a new data-driven paradigm for the rapid inference,
modeling and simulation of the physics of transport phenomena by deep learning.
Using conditional generative adversarial networks (cGAN), we train models for
the direct generation of solutions to steady state heat conduction and
incompressible fluid flow purely on observation without knowledge of the
underlying governing equations. Rather than using iterative numerical methods
to approximate the solution of the constitutive equations, cGANs learn to
directly generate the solutions to these phenomena, given arbitrary boundary
conditions and domain, with high test accuracy (MAE$<$1\%) and state-of-the-art
computational performance. The cGAN framework can be used to learn causal
models directly from experimental observations where the underlying physical
model is complex or unknown. | [
"cs.LG",
"physics.comp-ph"
] |
The segmentation of digital images is one of the essential steps in image
processing or a computer vision system. It helps in separating the pixels into
different regions according to their intensity level. A large number of
segmentation techniques have been proposed, and a few of them use complex
computational operations. Among all, the most straightforward procedure that
can be easily implemented is thresholding. In this paper, we present a unique
heuristic approach for image segmentation that automatically determines
multilevel thresholds by sampling the histogram of a digital image. Our
approach emphasizes selecting a valley as the optimal threshold value. We
demonstrated that our approach outperforms the popular Otsu's method in terms
of CPU computational time. We observed a maximum
speed-up of 35.58x and a minimum speed-up of 10.21x on popular image processing
benchmarks. To demonstrate the correctness of our approach in determining
threshold values, we compute PSNR, SSIM, and FSIM values to compare with the
values obtained by Otsu's method. This evaluation shows that our approach is
comparable and better in many cases as compared to well known Otsu's method. | [
"cs.CV"
] |
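The core valley-selection idea can be sketched as below: smooth the gray-level histogram and keep bins that are local minima as candidate multilevel thresholds. The paper's sampling heuristic is more elaborate; the smoothing width here is an assumption.

```python
import numpy as np

def valley_thresholds(image: np.ndarray, smooth: int = 5):
    """Candidate multilevel thresholds from valleys of a smoothed histogram."""
    hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
    hist = np.convolve(hist, np.ones(smooth) / smooth, mode="same")
    # A valley is a bin strictly lower than both of its neighbors.
    return [i for i in range(1, 255)
            if hist[i] < hist[i - 1] and hist[i] < hist[i + 1]]
```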
Style transfer aims to combine the content of one image with the artistic
style of another. It was discovered that lower levels of convolutional networks
captured style information, while higher levels captured content information.
The original style transfer formulation used a weighted combination of VGG-16
layer activations to achieve this goal. Later, this was accomplished in
real-time using a feed-forward network to learn the optimal combination of
style and content features from the respective images. The first aim of our
project was to introduce a framework for capturing the style from several
images at once. We propose a method that extends the original real-time style
transfer formulation by combining the features of several style images. This
method successfully captures color information from the separate style images.
The other aim of our project was to improve the temporal style continuity from
frame to frame. Accordingly, we have experimented with the temporal stability
of the output images and discussed the various available techniques that could
be employed as alternatives. | [
"cs.CV",
"cs.GR",
"cs.RO"
] |
A survey of existing methods for stopping active learning (AL) reveals the
needs for methods that are: more widely applicable; more aggressive in saving
annotations; and more stable across changing datasets. A new method for
stopping AL based on stabilizing predictions is presented that addresses these
needs. Furthermore, stopping methods are required to handle a broad range of
different annotation/performance tradeoff valuations. Despite this, the
existing body of work is dominated by conservative methods with little (if any)
attention paid to providing users with control over the behavior of stopping
methods. The proposed method is shown to fill a gap in the level of
aggressiveness available for stopping AL and supports providing users with
control over stopping behavior. | [
"cs.LG",
"cs.CL",
"stat.ML",
"I.2.6; I.2.7; I.5.1; I.5.4; G.3"
] |
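A stabilizing-predictions stopping rule can be sketched as agreement between successive models on a fixed "stop set": stop once Cohen's kappa exceeds a user-chosen threshold, which is also the knob exposing the annotation/performance tradeoff to the user. The threshold value below is illustrative.

```python
from sklearn.metrics import cohen_kappa_score

def should_stop(prev_preds, curr_preds, kappa_threshold=0.99):
    """Stop AL once successive models' predictions on the stop set agree."""
    return cohen_kappa_score(prev_preds, curr_preds) >= kappa_threshold
```

Lowering the threshold stops earlier (saving annotations, more aggressively), while raising it is more conservative.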
Latent representation learned from multi-layered neural networks via
hierarchical feature abstraction enables recent success of deep learning. Under
the deep learning framework, generalization performance highly depends on the
learned latent representation which is obtained from an appropriate training
scenario with a task-specific objective on a designed network model. In this
work, we propose a novel latent space modeling method to learn better latent
representation. We designed a neural network model based on the assumption that
good base representation can be attained by maximizing the total correlation
between the input, latent, and output variables. From the base model, we
introduce a semantic noise modeling method which enables class-conditional
perturbation on latent space to enhance the representational power of learned
latent feature. During training, latent vector representation can be
stochastically perturbed by a modeled class-conditional additive noise while
maintaining its original semantic feature. It implicitly brings the effect of
semantic augmentation on the latent space. The proposed model can be easily
learned by back-propagation with common gradient-based optimization algorithms.
Experimental results show that the proposed method helps to achieve performance
benefits over various previous approaches. We also provide empirical
analyses for the proposed class-conditional perturbation process including
t-SNE visualization. | [
"cs.LG",
"cs.NE"
] |
We consider the problem of building a state representation model for control,
in a continual learning setting. As the environment changes, the aim is to
efficiently compress the sensory state's information without losing past
knowledge, and then use Reinforcement Learning on the resulting features for
efficient policy learning. To this end, we propose S-TRIGGER, a general method
for Continual State Representation Learning applicable to Variational
Auto-Encoders and its many variants. The method is based on Generative Replay,
i.e. the use of generated samples to maintain past knowledge. It comes along
with a statistically sound method for environment change detection, which
self-triggers the Generative Replay. Our experiments on VAEs show that
S-TRIGGER learns state representations that allow fast and high-performing
Reinforcement Learning, while avoiding catastrophic forgetting. The resulting
system is capable of autonomously learning new information without using past
data and with a bounded system size. Code for our experiments is attached in
the Appendix. | [
"cs.LG",
"stat.ML"
] |
In this paper we introduce OperA, a transformer-based model that accurately
predicts surgical phases from long video sequences. A novel attention
regularization loss encourages the model to focus on high-quality frames during
training. Moreover, the attention weights are utilized to identify
characteristic high attention frames for each surgical phase, which could
further be used for surgery summarization. OperA is thoroughly evaluated on two
datasets of laparoscopic cholecystectomy videos, outperforming various
state-of-the-art temporal refinement approaches. | [
"cs.CV",
"cs.AI"
] |
Research on damage detection of road surfaces using image processing
techniques has been actively conducted, achieving considerably high detection
accuracies. Many studies only focus on the detection of the presence or absence
of damage. However, in a real-world scenario, when the road managers from a
governing body need to repair such damage, they need to clearly understand the
type of damage in order to take effective action. In addition, in many of these
previous studies, the researchers acquire their own data using different
methods. Hence, there is no uniform road damage dataset available openly,
leading to the absence of a benchmark for road damage detection. This study
makes three contributions to address these issues. First, to the best of our
knowledge, for the first time, a large-scale road damage dataset is prepared.
This dataset is composed of 9,053 road damage images captured with a smartphone
installed on a car, with 15,435 instances of road surface damage included in
these road images. In order to generate this dataset, we cooperated with 7
municipalities in Japan and acquired road images for more than 40 hours. These
images were captured in a wide variety of weather and illuminance conditions.
In each image, we annotated the bounding box representing the location and type
of damage. Next, we used a state-of-the-art object detection method using
convolutional neural networks to train the damage detection model with our
dataset, and compared the accuracy and runtime speed on both, using a GPU
server and a smartphone. Finally, we demonstrate that the type of damage can be
classified into eight types with high accuracy by applying the proposed object
detection method. The road damage dataset, our experimental results, and the
developed smartphone application used in this study are publicly available
(https://github.com/sekilab/RoadDamageDetector/). | [
"cs.CV",
"cs.CY"
] |
Owing to the development and advancement of artificial intelligence, numerous
works were established in the human facial expression recognition system.
Meanwhile, the detection and classification of micro-expressions have attracted
attention from various research communities in recent years. In this
paper, we first review the processes of a conventional optical-flow-based
recognition system, which comprises facial landmark annotation, optical
flow-guided image computation, feature extraction and emotion class
categorization. Secondly, a few approaches have been proposed to improve the
feature extraction part, such as exploiting GAN to generate more image samples.
Particularly, several variations of optical flow are computed in order to
generate optimal images to lead to high recognition accuracy. Next, GAN, a
combination of Generator and Discriminator, is utilized to generate new "fake"
images to increase the sample size. Thirdly, a modified state-of-the-art
convolutional neural network is proposed. To verify the effectiveness of
the proposed method, the results are evaluated on spontaneous micro-expression
databases, namely SMIC, CASME II and SAMM. Both the F1-score and accuracy
performance metrics are reported in this paper. | [
"cs.CV"
] |
The high dimensionality of images presents architecture and
sampling-efficiency challenges for likelihood-based generative models. Previous
approaches such as VQ-VAE use deep autoencoders to obtain compact
representations, which are more practical as inputs for likelihood-based
models. We present an alternative approach, inspired by common image
compression methods like JPEG, and convert images to quantized discrete cosine
transform (DCT) blocks, which are represented sparsely as a sequence of DCT
channel, spatial location, and DCT coefficient triples. We propose a
Transformer-based autoregressive architecture, which is trained to sequentially
predict the conditional distribution of the next element in such sequences, and
which scales effectively to high resolution images. On a range of image
datasets, we demonstrate that our approach can generate high quality, diverse
images, with sample metric scores competitive with state of the art methods. We
additionally show that simple modifications to our method yield effective image
colorization and super-resolution models. | [
"cs.CV",
"stat.ML"
] |
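The JPEG-inspired preprocessing can be sketched as follows: tile a grayscale image into 8x8 blocks, apply a 2D DCT, and quantize; the nonzero coefficients of each block then yield the sparse (channel, position, value) triples fed to the autoregressive Transformer. The quantization step q is an illustrative placeholder.

```python
import numpy as np
from scipy.fft import dctn

def to_dct_blocks(gray: np.ndarray, q: float = 16.0):
    """Quantized 8x8 DCT blocks of a grayscale image, JPEG-style."""
    h, w = gray.shape
    blocks = []
    for y in range(0, h - 7, 8):
        for x in range(0, w - 7, 8):
            coeffs = dctn(gray[y:y + 8, x:x + 8], norm="ortho")
            blocks.append(np.round(coeffs / q).astype(np.int32))
    # Nonzero entries per block become sparse (channel, position, value) triples.
    return blocks
```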
The prosperity of computer vision (CV) and natural language processing (NLP)
in recent years has spurred the development of deep learning in many other
domains. The advancement in machine learning provides us with an alternative
option besides the computationally expensive density functional theory (DFT).
Kernel method and graph neural networks have been widely studied as two
mainstream methods for property prediction. The promising graph neural networks
have achieved comparable accuracy to the DFT method for specific objects in
recent studies. However, most of the graph neural networks with high precision so
far require fully connected graphs with pairwise distance distribution as edge
information. In this work, we shed light on the Directed Graph Attention Neural
Network (DGANN), which only takes chemical bonds as edges and operates on bonds
and atoms of molecules. DGANN is distinguished from previous models by the
following features: (1) It learns the local chemical environment encoding by graph
attention mechanism on chemical bonds. Every initial edge message only flows
into every message passing trajectory once. (2) The transformer blocks
aggregate the global molecular representation from the local atomic encoding.
(3) The position vectors and coordinates are used as inputs instead of
distances. Our model has matched or outperformed most baseline graph neural
networks on QM9 datasets even without thorough hyper-parameter searching.
Moreover, this work suggests that models directly utilizing 3D coordinates can
still reach high accuracies for molecule representation even without rotational
and translational invariance incorporated. | [
"cs.LG",
"cond-mat.mtrl-sci"
] |
Self-imitation learning is a Reinforcement Learning (RL) method that
encourages actions whose returns were higher than expected, which helps in hard
exploration and sparse reward problems. It was shown to improve the performance
of on-policy actor-critic methods in several discrete control tasks.
Nevertheless, applying self-imitation to the mostly action-value based
off-policy RL methods is not straightforward. We propose SAIL, a novel
generalization of self-imitation learning for off-policy RL, based on a
modification of the Bellman optimality operator that we connect to Advantage
Learning. Crucially, our method mitigates the problem of stale returns by
choosing the most optimistic return estimate between the observed return and
the current action-value for self-imitation. We demonstrate the empirical
effectiveness of SAIL on the Arcade Learning Environment, with a focus on hard
exploration games. | [
"cs.LG"
] |
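SAIL's key modification, as described above, is taking the more optimistic of the observed return and the current action-value as the self-imitation target. A one-line sketch of that target (names are ours):

```python
import torch

def sail_target(observed_return: torch.Tensor, q_value: torch.Tensor):
    # Mitigates stale returns: an old lucky trajectory cannot drag the
    # target below what the current Q-function already believes.
    return torch.max(observed_return, q_value)
```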
Color artifacts of demosaicked images are often found at contours due to
interpolation across edges and cross-channel aliasing. To tackle this problem,
we propose a novel demosaicking method to reliably reconstruct color channels
of a Bayer image based on two different optimized mean-curvature (MC) models.
The missing pixel values in green (G) channel are first estimated by minimizing
a variational MC model. The curvatures of restored G-image surface are
approximated as a linear MC model which guides the initial reconstruction of
red (R) and blue (B) channels. Then a refinement process is performed to
interpolate accurate full-resolution R and B images. Experiments on benchmark
images have testified to the superiority of the proposed method in terms of
both the objective and subjective quality. | [
"cs.CV"
] |
Monocular depth estimation, which plays a crucial role in understanding 3D
scene geometry, is an ill-posed problem. Recent methods have gained significant
improvement by exploring image-level information and hierarchical features from
deep convolutional neural networks (DCNNs). These methods model depth
estimation as a regression problem and train the regression networks by
minimizing mean squared error, which suffers from slow convergence and
unsatisfactory local solutions. Besides, existing depth estimation networks
employ repeated spatial pooling operations, resulting in undesirable
low-resolution feature maps. To obtain high-resolution depth maps,
skip-connections or multi-layer deconvolution networks are required, which
complicates network training and consumes much more computations. To eliminate
or at least largely reduce these problems, we introduce a spacing-increasing
discretization (SID) strategy to discretize depth and recast depth network
learning as an ordinal regression problem. By training the network using an
ordinal regression loss, our method achieves much higher accuracy and
faster convergence. Furthermore, we adopt a multi-scale network
structure which avoids unnecessary spatial pooling and captures multi-scale
information in parallel.
The method described in this paper achieves state-of-the-art results on four
challenging benchmarks, i.e., KITTI [17], ScanNet [9], Make3D [50], and NYU
Depth v2 [42], and won the 1st prize in the Robust Vision Challenge 2018. Code has
been made available at: https://github.com/hufu6371/DORN. | [
"cs.CV"
] |
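The SID strategy can be written down directly: thresholds are uniform in log-depth over [alpha, beta], so bins widen with distance, matching the growing depth uncertainty. A sketch with a KITTI-like range (illustrative values):

```python
import numpy as np

def sid_thresholds(alpha: float, beta: float, K: int):
    """Spacing-increasing discretization: t_i = exp(log(a) + i*log(b/a)/K)."""
    i = np.arange(K + 1)
    return np.exp(np.log(alpha) + i * np.log(beta / alpha) / K)

print(sid_thresholds(1.0, 80.0, 8))  # bins widen as depth grows
```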
The need for accurate yield estimates for viticulture is becoming more
important due to increasing competition in the wine market worldwide. One of
the most promising methods to estimate the harvest is berry counting, as it can
be approached non-destructively, and its process can be automated. In this
article, we present a method that addresses the challenge of berries occluded
by leaves, to obtain a more accurate estimate of the number of berries that
will enable a better estimate of the harvest. We use generative adversarial
networks, a deep learning-based approach that generates a likely scenario
behind the leaves exploiting learned patterns from images with non-occluded
berries. Our experiments show that the estimate of the number of berries after
applying our method is closer to the manually counted reference. In contrast to
applying a factor to the berry count, our approach better adapts to local
conditions by directly involving the appearance of the visible berries.
Furthermore, we show that our approach can identify which areas in the image
should be changed by adding new berries without explicitly requiring
information about hidden areas. | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
Generating long and semantic-coherent reports to describe medical images
poses great challenges towards bridging visual and linguistic modalities,
incorporating medical domain knowledge, and generating realistic and accurate
descriptions. We propose a novel Knowledge-driven Encode, Retrieve, Paraphrase
(KERP) approach which reconciles traditional knowledge- and retrieval-based
methods with modern learning-based methods for accurate and robust medical
report generation. Specifically, KERP decomposes medical report generation into
explicit medical abnormality graph learning and subsequent natural language
modeling. KERP first employs an Encode module that transforms visual features
into a structured abnormality graph by incorporating prior medical knowledge;
then a Retrieve module that retrieves text templates based on the detected
abnormalities; and lastly, a Paraphrase module that rewrites the templates
according to specific cases. The core of KERP is a proposed generic
implementation unit---Graph Transformer (GTR) that dynamically transforms
high-level semantics between graph-structured data of multiple domains such as
knowledge graphs, images and sequences. Experiments show that the proposed
approach generates structured and robust reports supported with accurate
abnormality description and explainable attentive regions, achieving the
state-of-the-art results on two medical report benchmarks, with the best
medical abnormality and disease classification accuracy and improved human
evaluation performance. | [
"cs.CV"
] |
Generative Adversarial Networks (GANs) have been used in several machine
learning tasks such as domain transfer, super resolution, and synthetic data
generation. State-of-the-art GANs often use tens of millions of parameters,
making them expensive to deploy for applications in low SWAP (size, weight, and
power) hardware, such as mobile devices, and for applications with real time
capabilities. There has been no work found to reduce the number of parameters
used in GANs. Therefore, we propose a method to compress GANs using knowledge
distillation techniques, in which a smaller "student" GAN learns to mimic a
larger "teacher" GAN. We show that the distillation methods used on MNIST,
CIFAR-10, and Celeb-A datasets can compress teacher GANs at ratios of 1669:1,
58:1, and 87:1, respectively, while retaining the quality of the generated
image. From our experiments, we observe a qualitative limit for GAN's
compression. Moreover, we observe that, with a fixed parameter budget,
compressed GANs outperform GANs trained using standard training methods. We
conjecture that this is partially owing to the optimization landscape of
over-parameterized GANs which allows efficient training using alternating
gradient descent. Thus, training an over-parameterized GAN followed by our
proposed compression scheme provides a high quality generative model with a
small number of parameters. | [
"cs.LG",
"stat.ML"
] |
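One way to realize the student/teacher setup described above is to match the two generators' outputs on shared latent codes. MSE is one simple choice of mimicry loss; all names and shapes below are placeholders, not the paper's exact objective.

```python
import torch

def distill_step(student, teacher, optimizer, z_dim=128, batch=64):
    """One distillation step: the student mimics the teacher generator."""
    z = torch.randn(batch, z_dim)
    with torch.no_grad():
        target = teacher(z)            # teacher's image for this latent code
    loss = torch.nn.functional.mse_loss(student(z), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```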
This paper presents a vision system and a depth processing algorithm for
DRC-HUBO+, the winner of the DRC finals 2015. Our system is designed to
reliably capture 3D information of a scene and objects robust to challenging
environment conditions. We also propose a depth-map upsampling method that
produces an outliers-free depth map by explicitly handling depth outliers. Our
system is suitable for an interactive robot operating in the real world, which requires
accurate object detection and pose estimation. We evaluate our depth processing
algorithm over state-of-the-art algorithms on several synthetic and real-world
datasets. | [
"cs.CV",
"cs.RO"
] |
In this paper, we solve the sample shortage problem in the human parsing
task. We begin with the self-learning strategy, which generates pseudo-labels
for unlabeled data to retrain the model. However, directly using noisy
pseudo-labels will cause error amplification and accumulation. Considering the
topology structure of human body, we propose a trainable graph reasoning method
that establishes internal structural connections between graph nodes to correct
two typical errors in the pseudo-labels, i.e., the global structural error and
the local consistency error. For the global error, we first transform
category-wise features into a high-level graph model with coarse-grained
structural information, and then decouple the high-level graph to reconstruct
the category features. The reconstructed features have a stronger ability to
represent the topology structure of the human body. Enlarging the receptive
field of features can effectively reduce the local error. We first project
feature pixels into a local graph model to capture pixel-wise relations in a
hierarchical graph manner, then reverse the relation information back to the
pixels. With the global structural and local consistency modules, these errors
are rectified and confident pseudo-labels are generated for retraining.
Extensive experiments on the LIP and the ATR datasets demonstrate the
effectiveness of our global and local rectification modules. Our method
outperforms other state-of-the-art methods in supervised human parsing tasks. | [
"cs.CV"
] |
Local learning of sparse image models has proven to be very effective to
solve inverse problems in many computer vision applications. To learn such
models, the data samples are often clustered using the K-means algorithm with
the Euclidean distance as a dissimilarity metric. However, the Euclidean
distance may not always be a good dissimilarity measure for comparing data
samples lying on a manifold. In this paper, we propose two algorithms for
determining a local subset of training samples from which a good local model
can be computed for reconstructing a given input test sample, where we take
into account the underlying geometry of the data. The first algorithm, called
Adaptive Geometry-driven Nearest Neighbor search (AGNN), is an adaptive scheme
which can be seen as an out-of-sample extension of the replicator graph
clustering method for local model learning. The second method, called
Geometry-driven Overlapping Clusters (GOC), is a less complex nonadaptive
alternative for training subset selection. The proposed AGNN and GOC methods
are evaluated in image super-resolution, deblurring and denoising applications
and shown to outperform spectral clustering, soft clustering, and geodesic
distance based subset selection in most settings. | [
"cs.CV",
"cs.IT",
"math.IT",
"math.OC"
] |
Recommender systems have become an essential instrument in a wide range of
industries to personalize the user experience. A significant issue that has
captured both researchers' and industry experts' attention is the cold start
problem for new items. In this work, we present a graph neural network
recommender system using item hierarchy graphs and a bespoke architecture to
handle the cold start case for items. The experimental study on multiple
datasets and millions of users and interactions indicates that our method
achieves better forecasting quality than the state-of-the-art with a comparable
computational time. | [
"cs.LG",
"stat.ML"
] |
Generating a novel and optimized molecule with desired chemical properties is
an essential part of the drug discovery process. Failure to meet one of the
required properties can frequently lead to failure in a clinical test which is
costly. In addition, optimizing these multiple properties is a challenging task
because the optimization of one property is prone to changing other properties.
In this paper, we pose this multi-property optimization problem as a sequence
translation process and propose a new optimized molecule generator model based
on the Transformer with two constraint networks: property prediction and
similarity prediction. We further improve the model by incorporating score
predictions from these constraint networks in a modified beam search algorithm.
The experiments demonstrate that our proposed model outperforms
state-of-the-art models by a significant margin for optimizing multiple
properties simultaneously. | [
"cs.LG",
"cs.AI"
] |
During deployment, an object detector is expected to operate at a similar
performance level reported on its testing dataset. However, when deployed
onboard mobile robots that operate under varying and complex environmental
conditions, the detector's performance can fluctuate and occasionally degrade
severely without warning. Undetected, this can lead the robot to take unsafe
and risky actions based on low-quality and unreliable object detections. We
address this problem and introduce a cascaded neural network that monitors the
performance of the object detector by predicting the quality of its mean
average precision (mAP) on a sliding window of the input frames. The proposed
cascaded network exploits the internal features from the deep neural network of
the object detector. We evaluate our proposed approach using different
combinations of autonomous driving datasets and object detectors. | [
"cs.CV"
] |
Unsupervised learning algorithms (e.g., self-supervised learning,
auto-encoder, contrastive learning) allow deep learning models to learn
effective image representations from large-scale unlabeled data. In medical
image analysis, even unannotated data can be difficult to obtain for individual
labs. Fortunately, national-level efforts have been made to provide efficient
access to obtain biomedical image data from previous scientific publications.
For instance, NIH has launched the Open-i search engine that provides a
large-scale image database with free access. However, the images in scientific
publications consist of a considerable amount of compound figures with
subplots. To extract and curate individual subplots, many different compound
figure separation approaches have been developed, especially with the recent
advances in deep learning. However, previous approaches typically required
resource-intensive bounding box annotation to train detection models. In this
paper, we propose a simple compound figure separation (SimCFS) framework that
uses weak classification annotations from individual images. Our technical
contribution is three-fold: (1) we introduce a new side loss that is designed
for compound figure separation; (2) we introduce an intra-class image
augmentation method to simulate hard cases; (3) the proposed framework enables
efficient deployment to new classes of images without requiring
resource-intensive bounding box annotations. In our experiments, SimCFS achieves
new state-of-the-art performance on the ImageCLEF 2016 Compound Figure Separation
Database. The source code of SimCFS is made publicly available at
https://github.com/hrlblab/ImageSeperation. | [
"cs.CV"
] |
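The intra-class augmentation idea, simulating compound figures from individually labeled images, can be sketched as below; the grid layout and tile size are assumptions, and the paper's side loss is not reproduced. Because the layout is synthesized, subplot bounding boxes come for free, which is what removes the need for manual box annotation.

import numpy as np

def simulate_compound_figure(images, rows=2, cols=2, tile=64):
    """Paste single-class images into a grid; boxes follow from the layout."""
    canvas = np.full((rows * tile, cols * tile, 3), 255, dtype=np.uint8)
    boxes = []
    for idx, img in enumerate(images[: rows * cols]):
        r, c = divmod(idx, cols)
        y, x = r * tile, c * tile
        canvas[y : y + tile, x : x + tile] = img
        boxes.append((x, y, tile, tile))  # (x, y, width, height)
    return canvas, boxes

subplots = [np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
            for _ in range(4)]
figure, boxes = simulate_compound_figure(subplots)
print(figure.shape, boxes)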
In this paper we propose a model that combines the strengths of RNNs and
SGVB: the Variational Recurrent Auto-Encoder (VRAE). Such a model can be used
for efficient, large-scale unsupervised learning on time series data, mapping
the time series data to a latent vector representation. The model is
generative, such that data can be generated from samples of the latent space.
An important contribution of this work is that the model can make use of
unlabeled data in order to facilitate supervised training of RNNs by
initialising the weights and network state. | [
"stat.ML",
"cs.LG",
"cs.NE"
] |
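A compact PyTorch sketch of this model family, assuming GRU cells and illustrative dimensions: one RNN encodes the whole sequence into a latent vector via the reparameterization trick, and a second RNN decodes from a hidden state initialized from that latent. Teacher forcing and the KL term of the training objective are elided.

import torch
import torch.nn as nn

class VRAE(nn.Module):
    def __init__(self, in_dim=1, hid=64, z_dim=16):
        super().__init__()
        self.enc = nn.GRU(in_dim, hid, batch_first=True)
        self.to_mu = nn.Linear(hid, z_dim)
        self.to_logvar = nn.Linear(hid, z_dim)
        self.z_to_h = nn.Linear(z_dim, hid)
        self.dec = nn.GRU(in_dim, hid, batch_first=True)
        self.out = nn.Linear(hid, in_dim)

    def forward(self, x):                        # x: (B, T, in_dim)
        _, h = self.enc(x)                       # h: (1, B, hid)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        h0 = torch.tanh(self.z_to_h(z)).unsqueeze(0)
        y, _ = self.dec(torch.zeros_like(x), h0)  # decoder inputs elided
        return self.out(y), mu, logvar

x = torch.randn(8, 20, 1)                        # a batch of 8 time series
recon, mu, logvar = VRAE()(x)
print(recon.shape)                               # torch.Size([8, 20, 1])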
Estimating the 3D pose of a hand is an essential part of human-computer
interaction. Estimating 3D pose using depth or multi-view sensors has become
easier with recent advances in computer vision; however, regressing pose from a
single RGB image is much less straightforward. The main difficulty arises from
the fact that 3D pose requires some form of depth estimates, which are
ambiguous given only an RGB image. In this paper we propose a new method for 3D
hand pose estimation from a monocular image through a novel 2.5D pose
representation. Our new representation estimates pose up to a scaling factor,
which can be estimated additionally if a prior of the hand size is given. We
implicitly learn depth maps and heatmap distributions with a novel CNN
architecture. Our system achieves state-of-the-art estimation of 2D and 3D
hand pose on several challenging datasets, even in the presence of severe
occlusions. | [
"cs.CV",
"cs.LG"
] |
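The scale-recovery step implied by a 2.5D representation can be sketched as follows: given scale-normalized joints and a hand-size prior in the form of one known bone length, the global scale is the ratio of the known length to the predicted one. The joint indices and the reference length are toy assumptions.

import numpy as np

def recover_scale(rel_joints, bone, known_length):
    """rel_joints: (J, 3) scale-normalized joints; bone: (index_a, index_b)."""
    a, b = bone
    predicted = np.linalg.norm(rel_joints[a] - rel_joints[b])
    return known_length / predicted

joints = np.random.randn(21, 3)                  # 21 hand joints, up to scale
scale = recover_scale(joints, bone=(0, 9), known_length=9.5)  # e.g. cm
absolute_joints = joints * scale
print(scale, absolute_joints.shape)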
In recent years, several architectures have been proposed to learn complex
self-awareness models for embodied agents. In this paper, dynamic incremental
self-awareness (SA) models are proposed that allow the experiences of an agent
to be modeled in a hierarchical fashion, starting from simple situations and
progressing to more structured ones. Each situation is learned from subsets of
private agent perception data as a model capable of predicting normal behaviors
and detecting abnormalities. Hierarchical SA models have already been proposed
for low-dimensional sensory inputs. In this work, a hierarchical model is
introduced by means of cross-modal Generative Adversarial Networks (GANs)
processing high-dimensional visual data. Different levels of the GANs are
detected in a self-supervised manner using the GAN discriminators' decision
boundaries. Real experiments on semi-autonomous ground vehicles are
presented. | [
"cs.CV",
"cs.MM"
] |
Learning binary representation is essential to large-scale computer vision
tasks. Most existing algorithms require a separate quantization constraint to
learn effective hashing functions. In this work, we present Direct Binary
Embedding (DBE), a simple yet very effective algorithm to learn binary
representation in an end-to-end fashion. By appending an ingeniously designed
DBE layer to the deep convolutional neural network (DCNN), DBE learns binary
code directly from the continuous DBE layer activation without quantization
error. By employing the deep residual network (ResNet) as DCNN component, DBE
captures rich semantics from images. Furthermore, to handle multilabel
images, we design a joint cross entropy loss that includes both
softmax cross entropy and weighted binary cross entropy in consideration of the
correlation and independence of labels, respectively. Extensive experiments
demonstrate the significant superiority of DBE over state-of-the-art methods on
tasks of natural object recognition, image retrieval and image annotation. | [
"cs.CV",
"cs.IR"
] |
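A sketch of a joint loss of the kind described: softmax cross entropy against one dominant label per image (capturing label correlation) plus weighted binary cross entropy over the multi-hot label vector (treating labels independently). The mixing weight and positive-class weight below are assumptions, not the paper's values.

import torch
import torch.nn.functional as F

def joint_cross_entropy(logits, labels, dominant, pos_weight=2.0, lam=0.5):
    # logits: (B, L) DBE-layer outputs; labels: (B, L) multi-hot targets;
    # dominant: (B,) index of one primary label per image.
    ce = F.cross_entropy(logits, dominant)   # softmax CE: label correlation
    bce = F.binary_cross_entropy_with_logits(
        logits, labels,
        pos_weight=torch.full((logits.size(1),), pos_weight),
    )                                        # weighted BCE: label independence
    return ce + lam * bce

logits = torch.randn(4, 10)
labels = (torch.rand(4, 10) > 0.7).float()
dominant = labels.argmax(dim=1)
print(joint_cross_entropy(logits, labels, dominant))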
Deep learning-based coastline detection algorithms have begun to outshine
traditional statistical methods in recent years. However, they are usually
trained only as single-purpose models to either segment land and water or
delineate the coastline. In contrast to this, a human annotator will usually
keep a mental map of both segmentation and delineation when performing manual
coastline detection. To account for this task duality, we devise a single deep
learning model that unites the two approaches. By
taking inspiration from the main building blocks of a semantic segmentation
framework (UNet) and an edge detection framework (HED), both tasks are combined
in a natural way. Training is made efficient by employing deep supervision on
side predictions at multiple resolutions. Finally, a hierarchical attention
mechanism is introduced to adaptively merge these multiscale predictions into
the final model output. The advantages of this approach over other traditional
and deep learning-based methods for coastline detection are demonstrated on a
dataset of Sentinel-1 imagery covering parts of the Antarctic coast, where
coastline detection is notoriously difficult. An implementation of our method
is available at \url{https://github.com/khdlr/HED-UNet}. | [
"cs.CV",
"eess.IV"
] |
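The multiscale merging step can be illustrated with a minimal attention gate, not the exact HED-UNet module: at each level, a learned per-pixel gate blends the upsampled coarse side prediction with the finer one.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveMerge(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        self.gate = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, fine, coarse):
        coarse_up = F.interpolate(coarse, size=fine.shape[-2:],
                                  mode="bilinear", align_corners=False)
        a = torch.sigmoid(self.gate(torch.cat([fine, coarse_up], dim=1)))
        return a * fine + (1 - a) * coarse_up  # per-pixel convex blend

merge = AttentiveMerge()
fine = torch.randn(1, 1, 128, 128)    # high-resolution side prediction
coarse = torch.randn(1, 1, 64, 64)    # low-resolution side prediction
print(merge(fine, coarse).shape)      # torch.Size([1, 1, 128, 128])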
In this paper, we present an extension to LaserNet, an efficient and
state-of-the-art LiDAR based 3D object detector. We propose a method for fusing
image data with the LiDAR data and show that this sensor fusion method improves
the detection performance of the model, especially at long ranges. The addition
of image data is straightforward and does not require image labels.
Furthermore, we expand the capabilities of the model to perform 3D semantic
segmentation in addition to 3D object detection. On a large benchmark dataset,
we demonstrate our approach achieves state-of-the-art performance on both
object detection and semantic segmentation while maintaining a low runtime. | [
"cs.CV",
"cs.LG",
"cs.RO"
] |
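The general fusion pattern can be sketched under placeholder shapes and a toy pinhole projection (not LaserNet's geometry): project each LiDAR point into the camera image, gather the CNN features at the resulting pixel, and concatenate them with the per-point LiDAR features.

import torch

def fuse_lidar_image(points, point_feats, image_feats, proj):
    # points: (N, 3) LiDAR xyz; point_feats: (N, C1);
    # image_feats: (C2, H, W); proj: (3, 4) camera projection matrix.
    n = points.shape[0]
    homo = torch.cat([points, torch.ones(n, 1)], dim=1)  # homogeneous coords
    uvw = homo @ proj.T                                  # project to image plane
    u = (uvw[:, 0] / uvw[:, 2]).long().clamp(0, image_feats.shape[2] - 1)
    v = (uvw[:, 1] / uvw[:, 2]).long().clamp(0, image_feats.shape[1] - 1)
    gathered = image_feats[:, v, u].T                    # (N, C2)
    return torch.cat([point_feats, gathered], dim=1)     # (N, C1 + C2)

# Toy pinhole camera: focal length 10, principal point (32, 24).
proj = torch.tensor([[10.0, 0.0, 32.0, 0.0],
                     [0.0, 10.0, 24.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0]])
points = torch.rand(100, 3) + 1.0  # points safely in front of the camera
fused = fuse_lidar_image(points, torch.randn(100, 8),
                         torch.randn(16, 48, 64), proj)
print(fused.shape)                 # torch.Size([100, 24])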
Feature extraction is a key step in image processing for pattern recognition
and machine learning processes. Its purpose is to reduce the dimensionality
of the input data by computing features that accurately describe the original
information. In this article, a new feature extraction method based on
Discrete Modal Decomposition (DMD) is introduced to extend the family of
space- and frequency-based features. These new features are called modal
features. Originally designed to decompose a signal into a modal basis
derived from a vibration mechanics problem, the DMD projection is here
applied to images to extract modal features, following two approaches. The
first, called full scale DMD, directly uses the resulting decomposition
coordinates as features. The second, called filtering DMD, uses the DMD
modes as filters to obtain features through a local transformation process.
Experiments are performed on image texture classification tasks over several
widely used databases, with comparisons to several classic feature
extraction methods. We show that the DMD approach achieves classification
performance comparable to state-of-the-art techniques, with a lower
extraction time. | [
"cs.CV",
"eess.IV"
] |
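The full scale variant can be pictured with a short sketch in which a random orthonormal basis stands in for the true DMD modes (which are not reproduced here): the flattened image is projected onto the basis, and the projection coordinates form the feature vector.

import numpy as np

rng = np.random.default_rng(0)
image = rng.random((32, 32))

# Stand-in modal basis: the 16 columns of Q are orthonormal "modes".
Q, _ = np.linalg.qr(rng.standard_normal((32 * 32, 16)))

features = Q.T @ image.ravel()  # one modal coordinate per mode
print(features.shape)           # (16,)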
We consider a reinforcement learning setting introduced in (Maillard et al.,
NIPS 2011) where the learner does not have explicit access to the states of the
underlying Markov decision process (MDP). Instead, she has access to several
models that map histories of past interactions to states. Here we improve over
the known regret bounds in this setting and, more importantly, generalize to
the case where the set of models given to the learner does not contain a true
model yielding an MDP representation, but only approximations of it. We also give
improved error bounds for state aggregation. | [
"cs.LG"
] |