text (string, lengths 29 to 3.31k) | label (sequence, lengths 1 to 11)
---|---|
What is the computational model behind a Transformer? Where recurrent neural
networks have direct parallels in finite state machines, allowing clear
discussion and thought around architecture variants or trained models,
Transformers have no such familiar parallel. In this paper we aim to change
that, proposing a computational model for the transformer-encoder in the form
of a programming language. We map the basic components of a transformer-encoder
-- attention and feed-forward computation -- into simple primitives, around
which we form a programming language: the Restricted Access Sequence Processing
Language (RASP). We show how RASP can be used to program solutions to tasks
that could conceivably be learned by a Transformer, and how a Transformer can
be trained to mimic a RASP solution. In particular, we provide RASP programs
for histograms, sorting, and Dyck-languages. We further use our model to relate
their difficulty in terms of the number of required layers and attention heads:
analyzing a RASP program implies a maximum number of heads and layers necessary
to encode a task in a transformer. Finally, we see how insights gained from our
abstraction might be used to explain phenomena seen in recent works. | [
"cs.LG",
"cs.CL"
] |
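To make the primitives concrete, here is a minimal, hypothetical Python sketch of RASP-style `select`/`aggregate` operations. It is a sketch only: the real RASP language is richer, and `selector_width` here mirrors the histogram program mentioned in the abstract above.

```python
# Hypothetical sketch of RASP-style primitives (not the official RASP syntax).

def select(keys, queries, predicate):
    # selector[q][k] is True where position q attends to position k.
    return [[predicate(k, q) for k in keys] for q in queries]

def aggregate(selector, values):
    # Average the values at the selected positions, mimicking uniform attention.
    out = []
    for row in selector:
        picked = [v for v, sel in zip(values, row) if sel]
        out.append(sum(picked) / len(picked) if picked else 0)
    return out

def selector_width(selector):
    # Number of selected positions per query -- enough to express a histogram.
    return [sum(row) for row in selector]

tokens = list("hello")
same = select(tokens, tokens, lambda k, q: k == q)
print(selector_width(same))               # [1, 1, 2, 2, 1]: per-token counts
print(aggregate(same, [1, 2, 3, 4, 5]))   # [1.0, 2.0, 3.5, 3.5, 5.0]
```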
Human pose estimation (HPE) is a central part of understanding the visual
narration and body movements of characters depicted in artwork collections,
such as Greek vase paintings. Unfortunately, existing HPE methods do not
generalise well across domains, resulting in poorly recognized poses. Therefore,
we propose a two-step approach: (1) adapting a dataset of natural images with
known person and pose annotations to the style of Greek vase paintings by means
of image style-transfer. We introduce a perceptually-grounded style transfer
training to enforce perceptual consistency. Then, we fine-tune the base model
with this newly created dataset. We show that using style-transfer learning
significantly improves the SOTA performance on unlabelled data by more than 6%
mean average precision (mAP) as well as mean average recall (mAR). (2) To
improve the already strong results further, we created a small dataset
(ClassArch) consisting of ancient Greek vase paintings from the 6th-5th century
BCE with person and pose annotations. We show that fine-tuning on this data
with a style-transferred model improves the performance further. In a thorough
ablation study, we give a targeted analysis of the influence of style
intensities, revealing that the model learns generic domain styles.
Additionally, we provide a pose-based image retrieval to demonstrate the
effectiveness of our method. | [
"cs.CV"
] |
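As a rough illustration of perceptually-grounded training for style transfer, the sketch below penalizes the distance between deep features of the stylized output and the source image. The choice of a frozen torchvision VGG16 (up to relu3_3) is an assumption for illustration; the paper's exact perceptual network and layers may differ.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen feature extractor (assumed here: VGG16 up to relu3_3).
features = vgg16(pretrained=True).features[:16].eval()
for p in features.parameters():
    p.requires_grad_(False)

def perceptual_loss(stylized, content):
    # Enforce perceptual consistency between output and source content.
    return F.mse_loss(features(stylized), features(content))

# usage (illustrative): loss = style_loss + lambda_p * perceptual_loss(out, img)
```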
Over the past few years, Generative Adversarial Networks (GANs) have garnered
increased interest among researchers in Computer Vision, with applications
including, but not limited to, image generation, translation, imputation, and
super-resolution. Nevertheless, no GAN-based method has been proposed in the
literature that can successfully represent, generate or translate 3D facial
shapes (meshes). This can be primarily attributed to two facts, namely that (a)
publicly available 3D face databases are scarce as well as limited in terms of
sample size and variability (e.g., few subjects, little diversity in race and
gender), and (b) mesh convolutions for deep networks present several challenges
that are not entirely tackled in the literature, leading to operator
approximations and model instability, often failing to preserve high-frequency
components of the distribution. As a result, linear methods such as Principal
Component Analysis (PCA) have been mainly utilized towards 3D shape analysis,
despite being unable to capture non-linearities and high frequency details of
the 3D face - such as eyelid and lip variations. In this work, we present
3DFaceGAN, the first GAN tailored towards modeling the distribution of 3D
facial surfaces, while retaining the high frequency details of 3D face shapes.
We conduct an extensive series of both qualitative and quantitative
experiments, where the merits of 3DFaceGAN are clearly demonstrated against
other, state-of-the-art methods in tasks such as 3D shape representation,
generation, and translation. | [
"cs.CV"
] |
Adversarial data examples have drawn significant attention from the machine
learning and security communities. A line of work on tackling adversarial
examples is certified robustness via randomized smoothing that can provide a
theoretical robustness guarantee. However, such a mechanism usually uses
floating-point arithmetic for calculations in inference and requires large
memory footprints and daunting computational costs. These defensive models
cannot run efficiently on edge devices nor be deployed on integer-only logical
units such as Turing Tensor Cores or integer-only ARM processors. To overcome
these challenges, we propose an integer randomized smoothing approach with
quantization to convert any classifier into a new smoothed classifier, which
uses integer-only arithmetic for certified robustness against adversarial
perturbations. We prove a tight robustness guarantee under L2-norm for the
proposed approach. We show that our approach obtains comparable accuracy and a
4x-5x speedup over floating-point arithmetic certified robust methods on
general-purpose CPUs and mobile devices on two distinct datasets (CIFAR-10 and
Caltech-101). | [
"cs.LG",
"cs.CR",
"cs.CV"
] |
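For orientation, here is a simplified sketch of smoothed prediction in the style of randomized smoothing, with the Gaussian noise rounded onto an integer grid to suggest the integer-only setting. This is not the paper's method: the actual integer arithmetic, quantization, and certification procedure are more involved, and `classifier`, `sigma`, and `scale` are illustrative.

```python
import numpy as np

def smoothed_predict(classifier, x_int, sigma, n_samples=1000, scale=64):
    # x_int: integer-valued input; noise is Gaussian, rounded to integers.
    votes = {}
    for _ in range(n_samples):
        noise = np.round(np.random.randn(*x_int.shape) * sigma * scale)
        y = classifier(x_int + noise.astype(np.int64))
        votes[y] = votes.get(y, 0) + 1
    return max(votes, key=votes.get)  # majority vote defines the smoothed classifier
```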
Federated Distillation (FD) is a popular novel algorithmic paradigm for
Federated Learning, which achieves training performance competitive with prior
parameter-averaging-based methods, while additionally allowing the clients to
train different model architectures, by distilling the client predictions on an
unlabeled auxiliary set of data into a student model. In this work we propose
FedAUX, an extension to FD, which, under the same set of assumptions,
drastically improves performance by deriving maximum utility from the unlabeled
auxiliary data. FedAUX modifies the FD training procedure in two ways: First,
unsupervised pre-training on the auxiliary data is performed to find a model
initialization for the distributed training. Second, $(\varepsilon,
\delta)$-differentially private certainty scoring is used to weight the
ensemble predictions on the auxiliary data according to the certainty of each
client model. Experiments on large-scale convolutional neural networks and
transformer models demonstrate that the training performance of FedAUX exceeds
SOTA FL baseline methods by a substantial margin in both the iid and non-iid
regime, further closing the gap to centralized training performance. Code is
available at github.com/fedl-repo/fedaux. | [
"cs.LG",
"cs.DC",
"stat.ML"
] |
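A hedged sketch of the certainty-weighted ensemble-distillation step: each client's predictions on the auxiliary data are weighted by a per-example certainty score (assumed here to be the $(\varepsilon, \delta)$-DP scores the abstract describes; their computation is omitted).

```python
import torch

def weighted_distillation_targets(client_logits, scores):
    # client_logits: list of [N, C] tensors, one per client, on auxiliary data.
    # scores: [K, N] certainty weight of each of K clients for each example.
    probs = torch.stack([l.softmax(dim=-1) for l in client_logits])  # [K, N, C]
    w = scores.unsqueeze(-1)                                          # [K, N, 1]
    return (w * probs).sum(0) / w.sum(0)  # certainty-weighted soft labels [N, C]

# The student model is then distilled on these soft labels (e.g., with a KL loss).
```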
We present Hindsight Network Credit Assignment (HNCA), a novel learning
method for stochastic neural networks, which works by assigning credit to each
neuron's stochastic output based on how it influences the output of its
immediate children in the network. We prove that HNCA provides unbiased
gradient estimates while reducing variance compared to the REINFORCE estimator.
We also experimentally demonstrate the advantage of HNCA over REINFORCE in a
contextual bandit version of MNIST. The computational complexity of HNCA is
similar to that of backpropagation. We believe that HNCA can help stimulate new
ways of thinking about credit assignment in stochastic compute graphs. | [
"cs.LG",
"cs.AI"
] |
The Transformer architecture has become a dominant choice in many domains,
such as natural language processing and computer vision. Yet, it has not
achieved competitive performance on popular leaderboards of graph-level
prediction compared to mainstream GNN variants. Therefore, it remains a mystery
how Transformers could perform well for graph representation learning. In this
paper, we solve this mystery by presenting Graphormer, which is built upon the
standard Transformer architecture, and could attain excellent results on a
broad range of graph representation learning tasks, especially on the recent
OGB Large-Scale Challenge. Our key insight for utilizing Transformers on
graphs is the necessity of effectively encoding the structural information of a
graph into the model. To this end, we propose several simple yet effective
structural encoding methods to help Graphormer better model graph-structured
data. Besides, we mathematically characterize the expressive power of
Graphormer and exhibit that with our ways of encoding the structural
information of graphs, many popular GNN variants are covered as special
cases of Graphormer. | [
"cs.LG",
"cs.AI"
] |
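One plausible reading of such structural encodings, sketched below: a degree-based centrality embedding added to node features and a shortest-path-distance bias added to attention scores. Names, shapes, and cutoffs are illustrative assumptions, not Graphormer's exact implementation.

```python
import torch
import torch.nn as nn

class StructuralEncoding(nn.Module):
    def __init__(self, dim, max_degree=64, max_dist=32, num_heads=8):
        super().__init__()
        self.centrality = nn.Embedding(max_degree, dim)    # per-node degree
        self.spatial = nn.Embedding(max_dist, num_heads)   # per-pair distance

    def forward(self, x, degrees, spd):
        # x: [N, dim] node features; degrees: [N] (long); spd: [N, N] shortest-path
        # distances (long). Clamp rare large values into the embedding range.
        x = x + self.centrality(degrees.clamp_max(63))
        bias = self.spatial(spd.clamp_max(31))             # [N, N, heads]
        return x, bias.permute(2, 0, 1)  # add bias to each head's attention scores
```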
Many different deep networks have been used to approximate, accelerate or
improve traditional image operators, such as image smoothing, super-resolution
and denoising. Among these traditional operators, many contain parameters that
must be tweaked to obtain satisfactory results, which we refer to as
"parameterized image operators". However, most existing deep networks trained
for these operators are only designed for one specific parameter configuration,
which does not meet the needs of real scenarios that usually require flexible
parameter settings. To overcome this limitation, we propose a new decouple
learning algorithm that learns from the operator parameters to dynamically adjust
the weights of a deep network for image operators, denoted as the base network.
The learned algorithm is formed as another network, namely the weight learning
network, which can be end-to-end jointly trained with the base network.
Experiments demonstrate that the proposed framework can be successfully applied
to many traditional parameterized image operators. We provide more analysis to
better understand the proposed framework, which may inspire more promising
research in this direction. Our code and models have been released at
https://github.com/fqnchina/DecoupleLearning | [
"cs.CV"
] |
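A minimal sketch of the idea, under the assumption that the weight learning network is a small hypernetwork mapping an operator parameter (e.g., a smoothing strength) to a convolution kernel of the base network; all names and sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightLearner(nn.Module):
    def __init__(self, param_dim=1, out_ch=16, in_ch=16, k=3):
        super().__init__()
        self.shape = (out_ch, in_ch, k, k)
        self.fc = nn.Linear(param_dim, out_ch * in_ch * k * k)

    def forward(self, op_params):
        # Map the operator setting to the kernel of one base-network layer.
        return self.fc(op_params).view(self.shape)

weight_net = WeightLearner()
x = torch.randn(1, 16, 32, 32)
w = weight_net(torch.tensor([0.5]))   # operator parameter, e.g. strength 0.5
y = F.conv2d(x, w, padding=1)         # base-network layer with dynamic weights
```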
Cross-modal correlation provides an inherent supervision for video
unsupervised representation learning. Existing methods focus on distinguishing
different video clips by visual and audio representations. Human visual
perception can attend to the regions where sounds are made, and auditory
perception can likewise ground the frequencies of sounding objects, which we
call bidirectional local correspondence. Such supervision is intuitive but not
well explored in the contrastive learning framework. This paper introduces a
pretext task, Cross-Modal Attention Consistency (CMAC), for exploring the
bidirectional local correspondence property. The CMAC approach aims to align
the regional attention generated purely from the visual signal with the target
attention generated under the guidance of the acoustic signal, and to perform a similar
alignment for frequency grounding on the acoustic attention. Accompanied by a
remoulded cross-modal contrastive loss where we consider additional
within-modal interactions, the CMAC approach works effectively for enforcing
the bidirectional alignment. Extensive experiments on six downstream benchmarks
demonstrate that CMAC can improve the state-of-the-art performance on both
visual and audio modalities. | [
"cs.CV"
] |
Hyperalignment has been widely employed in Multivariate Pattern (MVP)
analysis to discover the cognitive states in the human brains based on
multi-subject functional Magnetic Resonance Imaging (fMRI) datasets. Most
existing hyperalignment (HA) methods utilize unsupervised approaches, which only
maximize the correlation between voxels at the same position in the time
series. However, these unsupervised solutions may not be optimum for handling
the functional alignment in the supervised MVP problems. This paper proposes a
Supervised Hyperalignment (SHA) method to ensure better functional alignment
for MVP analysis, where the proposed method provides a supervised shared space
that can maximize the correlation among the stimuli belonging to the same
category and minimize the correlation between distinct categories of stimuli.
Further, SHA employs a generalized optimization solution, which generates the
shared space and calculates the mapped features in a single iteration, hence
with optimum time and space complexities for large datasets. Experiments on
multi-subject datasets demonstrate that the SHA method achieves up to 19% better
performance for multi-class problems over the state-of-the-art HA algorithms. | [
"stat.ML",
"cs.LG",
"q-bio.NC"
] |
The Synthetic Minority Oversampling TEchnique (SMOTE) is widely used for the
analysis of imbalanced datasets. It is known that SMOTE frequently
over-generalizes the minority class, leading to misclassifications for the
majority class, and affecting the overall balance of the model.
In this article, we present an approach that overcomes this limitation of
SMOTE, employing Localized Random Affine Shadowsampling (LoRAS) to oversample
from an approximated data manifold of the minority class.
We benchmarked our algorithm with 14 publicly available imbalanced datasets
using three different Machine Learning (ML) algorithms and compared the
performance of LoRAS, SMOTE and several SMOTE extensions that share the concept
of using convex combinations of minority class data points for oversampling
with LoRAS. We observed that LoRAS, on average, generates better ML models in
terms of F1-score and balanced accuracy. Another key observation is that while
most of the SMOTE extensions we tested improve the F1-score with respect to
SMOTE on average, they compromise the balanced accuracy of the classification
model. LoRAS, on the contrary, improves both F1-score and balanced accuracy,
thus producing better classification models.
Moreover, to explain the success of the algorithm, we have constructed a
mathematical framework to prove that LoRAS oversampling technique provides a
better estimate for the mean of the underlying local data distribution of the
minority class data space. | [
"cs.LG",
"stat.ML"
] |
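A hedged sketch of drawing one LoRAS-style synthetic point: small Gaussian "shadow" perturbations of a minority point's nearest neighbors, combined with random convex (Dirichlet) weights. Parameter defaults are illustrative, not the paper's.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def loras_sample(X_min, k=5, n_shadows=3, noise_std=0.05, rng=np.random):
    # X_min: [n, d] minority-class points.
    nbrs = NearestNeighbors(n_neighbors=k).fit(X_min)
    i = rng.randint(len(X_min))
    _, idx = nbrs.kneighbors(X_min[i:i + 1])
    neighborhood = X_min[idx[0]]                       # local minority patch
    shadows = np.concatenate([
        neighborhood + rng.randn(*neighborhood.shape) * noise_std
        for _ in range(n_shadows)
    ])
    w = rng.dirichlet(np.ones(len(shadows)))           # random convex weights
    return (w[:, None] * shadows).sum(axis=0)          # one synthetic point
```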
We study objective robustness failures, a type of out-of-distribution
robustness failure in reinforcement learning (RL). Objective robustness
failures occur when an RL agent retains its capabilities out-of-distribution
yet pursues the wrong objective. This kind of failure presents different risks
than the robustness problems usually considered in the literature, since it
involves agents that leverage their capabilities to pursue the wrong objective
rather than simply failing to do anything useful. We provide the first explicit
empirical demonstrations of objective robustness failures and present a partial
characterization of their causes. | [
"cs.LG",
"cs.AI"
] |
The sampling of probability distributions specified up to a normalization
constant is an important problem in both machine learning and statistical
mechanics. While classical stochastic sampling methods such as Markov Chain
Monte Carlo (MCMC) or Langevin Dynamics (LD) can suffer from slow mixing times,
there is a growing interest in using normalizing flows in order to learn the
transformation of a simple prior distribution to the given target distribution.
Here we propose a generalized and combined approach to sample target densities:
Stochastic Normalizing Flows (SNF) -- an arbitrary sequence of deterministic
invertible functions and stochastic sampling blocks. We show that stochasticity
overcomes expressivity limitations of normalizing flows resulting from the
invertibility constraint, whereas trainable transformations between sampling
steps improve efficiency of pure MCMC/LD along the flow. By invoking ideas from
non-equilibrium statistical mechanics we derive an efficient training procedure
by which both the sampler's and the flow's parameters can be optimized
end-to-end, and by which we can compute exact importance weights without having
to marginalize out the randomness of the stochastic blocks. We illustrate the
representational power, sampling efficiency and asymptotic correctness of SNFs
on several benchmarks including applications to sampling molecular systems in
equilibrium. | [
"stat.ML",
"cs.LG",
"physics.chem-ph",
"physics.data-an"
] |
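A rough sketch of an SNF sampling pass, assuming generic invertible modules and unadjusted Langevin steps toward an unnormalized target log-density; training and the exact importance-weight computation described in the abstract are omitted.

```python
import torch

def langevin_step(x, log_target, step=1e-2):
    # One unadjusted Langevin move toward the unnormalized target density.
    x = x.detach().requires_grad_(True)
    grad = torch.autograd.grad(log_target(x).sum(), x)[0]
    return x + step * grad + (2 * step) ** 0.5 * torch.randn_like(x)

def snf_sample(x, flows, log_target, mcmc_per_block=5):
    # flows: list of deterministic invertible modules (e.g., coupling layers).
    for flow in flows:
        x = flow(x)                          # deterministic block
        for _ in range(mcmc_per_block):      # stochastic sampling block
            x = langevin_step(x, log_target)
    return x
```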
We study the problem of designing models for machine learning tasks defined
on \emph{sets}. In contrast to the traditional approach of operating on fixed
dimensional vectors, we consider objective functions defined on sets that are
invariant to permutations. Such problems are widespread, ranging from
estimation of population statistics \cite{poczos13aistats}, to anomaly
detection in piezometer data of embankment dams \cite{Jung15Exploration}, to
cosmology \cite{Ntampaka16Dynamical,Ravanbakhsh16ICML1}. Our main theorem
characterizes the permutation invariant functions and provides a family of
functions to which any permutation invariant objective function must belong.
This family of functions has a special structure which enables us to design a
deep network architecture that can operate on sets and which can be deployed on
a variety of scenarios including both unsupervised and supervised learning
tasks. We also derive the necessary and sufficient conditions for permutation
equivariance in deep models. We demonstrate the applicability of our method on
population statistic estimation, point cloud classification, set expansion, and
outlier detection. | [
"cs.LG",
"stat.ML"
] |
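The family characterized by the main theorem has the form f(X) = rho(sum_x phi(x)); any symmetric pooling keeps the network invariant to input ordering. A minimal sketch:

```python
import torch
import torch.nn as nn

class DeepSet(nn.Module):
    def __init__(self, in_dim=3, hid=128, out_dim=10):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU(),
                                 nn.Linear(hid, hid))
        self.rho = nn.Sequential(nn.Linear(hid, hid), nn.ReLU(),
                                 nn.Linear(hid, out_dim))

    def forward(self, x):                         # x: [batch, set_size, in_dim]
        return self.rho(self.phi(x).sum(dim=1))   # sum-pool: permutation invariant
```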
As a certified defensive technique, randomized smoothing has received
considerable attention due to its scalability to large datasets and neural
networks. However, several important questions remain unanswered, such as (i)
whether the Gaussian mechanism is an appropriate option for certifying
$\ell_2$-norm robustness, and (ii) whether there is an appropriate randomized
(smoothing) mechanism to certify $\ell_\infty$-norm robustness. To shed light
on these questions, we argue that the main difficulty is how to assess the
appropriateness of each randomized mechanism. In this paper, we propose a
generic framework that connects the existing frameworks in
\cite{lecuyer2018certified, li2019certified}, to assess randomized mechanisms.
Under our framework, for a randomized mechanism that can certify a certain
extent of robustness, we define the magnitude of its required additive noise as
the metric for assessing its appropriateness. We also prove lower bounds on
this metric for the $\ell_2$-norm and $\ell_\infty$-norm cases as the criteria
for assessment. Based on our framework, we assess the Gaussian and Exponential
mechanisms by comparing the magnitude of additive noise required by these
mechanisms and the lower bounds (criteria). We first conclude that the Gaussian
mechanism is indeed an appropriate option to certify $\ell_2$-norm robustness.
Surprisingly, we show that the Gaussian mechanism is also an appropriate option
for certifying $\ell_\infty$-norm robustness, instead of the Exponential
mechanism. Finally, we generalize our framework to $\ell_p$-norm for any
$p\geq2$. Our theoretical findings are verified by evaluations on CIFAR10 and
ImageNet. | [
"cs.LG",
"cs.CR",
"stat.ML"
] |
Precise 3D segmentation of infant brain tissues is an essential step towards
comprehensive volumetric studies and quantitative analysis of early brain
development. However, computing such segmentations is very challenging,
especially for the 6-month-old infant brain, due to the poor image quality, among other
difficulties inherent to infant brain MRI, e.g., the isointense contrast
between white and gray matter and the severe partial volume effect due to small
brain sizes. This study investigates the problem with an ensemble of semi-dense
fully convolutional neural networks (CNNs), which employs T1-weighted and
T2-weighted MR images as input. We demonstrate that the ensemble agreement is
highly correlated with the segmentation errors. Therefore, our method provides
measures that can guide local user corrections. To the best of our knowledge,
this work is the first ensemble of 3D CNNs for suggesting annotations within
images. Furthermore, inspired by the very recent success of dense networks, we
propose a novel architecture, SemiDenseNet, which connects all convolutional
layers directly to the end of the network. Our architecture allows the
efficient propagation of gradients during training, while limiting the number
of parameters, requiring one order of magnitude fewer parameters than popular
medical image segmentation networks such as 3D U-Net. Another contribution of
our work is the study of the impact that early or late fusions of multiple
image modalities might have on the performances of deep architectures. We
report evaluations of our method on the public data of the MICCAI iSEG-2017
Challenge on 6-month infant brain MRI segmentation, and show very competitive
results among 21 teams, ranking first or second in most metrics. | [
"cs.CV"
] |
Automatic synthesis of high quality 3D shapes is an ongoing and challenging
area of research. While several data-driven methods have been proposed that
make use of neural networks to generate 3D shapes, none of them reach the level
of quality that deep learning synthesis approaches for images provide. In this
work we present a method for a convolutional point cloud decoder/generator that
makes use of recent advances in the domain of image synthesis. Namely, we use
Adaptive Instance Normalization and offer an intuition on why it can improve
training. Furthermore, we propose extensions to the minimization of the
commonly used Chamfer distance for auto-encoding point clouds. In addition, we
show that careful sampling is important both for the input geometry and in our
point cloud generation process to improve results. The results are evaluated in
an auto-encoding setup to offer both qualitative and quantitative analysis. The
proposed decoder is validated by an extensive ablation study and is able to
outperform the current state of the art in a number of experiments. We show
the applicability of our method in the fields of point cloud upsampling, single
view reconstruction, and shape synthesis. | [
"cs.CV",
"cs.GR"
] |
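For reference, a plain (squared) Chamfer distance of the kind the paper proposes extensions to, in a short PyTorch sketch:

```python
import torch

def chamfer_distance(a, b):
    # a: [N, 3] and b: [M, 3] point clouds.
    d = torch.cdist(a, b) ** 2                       # pairwise squared distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
```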
Automated equipment health monitoring from streaming multisensor time-series
data can be used to enable condition-based maintenance, avoid sudden
catastrophic failures, and ensure high operational availability. We note that
most complex machinery has a well-documented and readily accessible underlying
structure capturing the inter-dependencies between sub-systems or modules. Deep
learning models such as those based on recurrent neural networks (RNNs) or
convolutional neural networks (CNNs) fail to explicitly leverage this
potentially rich source of domain-knowledge into the learning procedure. In
this work, we propose to capture the structure of complex equipment in the
form of a graph, and use graph neural networks (GNNs) to model multi-sensor
time-series data. Using remaining useful life estimation as an application
task, we evaluate the advantage of incorporating the graph structure via GNNs
on the publicly available turbofan engine benchmark dataset. We observe that
the proposed GNN-based RUL estimation model compares favorably to several
strong baselines from the literature, such as those based on RNNs and CNNs.
Additionally, we observe that the learned network is able to focus on the
module (node) with impending failure through a simple attention mechanism,
potentially paving the way for actionable diagnosis. | [
"cs.LG",
"stat.ML"
] |
A number of problems in the processing of sound and natural language, as well
as in other areas, can be reduced to simultaneously reading an input sequence
and writing an output sequence of generally different length. There are
well-developed methods that produce the output sequence based on the entirely known
input. However, efficient methods that enable such transformations on-line do
not exist. In this paper we introduce an architecture that learns with
reinforcement to make decisions about whether to read a token or write another
token. This architecture is able to transform potentially infinite sequences
on-line. In an experimental study we compare it with state-of-the-art methods
for neural machine translation. While it produces slightly worse translations
than Transformer, it outperforms the autoencoder with attention, even though
our architecture translates texts on-line, thereby solving a more difficult
problem than both reference methods. | [
"cs.LG",
"cs.CL",
"I.2.6"
] |
Due to the sparsity and irregularity of the point cloud data, methods that
directly consume points have become popular. Among all point-based models,
graph convolutional networks (GCN) lead to notable performance by fully
preserving the data granularity and exploiting point interrelation. However,
point-based networks spend a significant amount of time on data structuring
(e.g., Farthest Point Sampling (FPS) and neighbor point querying), which limits
the speed and scalability. In this paper, we present a method, named Grid-GCN,
for fast and scalable point cloud learning. Grid-GCN uses a novel data
structuring strategy, Coverage-Aware Grid Query (CAGQ). By leveraging the
efficiency of grid space, CAGQ improves spatial coverage while reducing the
theoretical time complexity. Compared with popular sampling methods such as
Farthest Point Sampling (FPS) and Ball Query, CAGQ achieves up to 50X speed-up.
With a Grid Context Aggregation (GCA) module, Grid-GCN achieves
state-of-the-art performance on major point cloud classification and
segmentation benchmarks with significantly faster runtime than previous
studies. Remarkably, Grid-GCN achieves an inference speed of 50 fps on ScanNet
using 81920 points per scene as input. | [
"cs.CV",
"cs.LG"
] |
In this paper, we present InSeGAN, an unsupervised 3D generative adversarial
network (GAN) for segmenting (nearly) identical instances of rigid objects in
depth images. Using an analysis-by-synthesis approach, we design a novel GAN
architecture to synthesize a multiple-instance depth image with independent
control over each instance. InSeGAN takes in a set of code vectors (e.g.,
random noise vectors), each encoding the 3D pose of an object that is
represented by a learned implicit object template. The generator has two
distinct modules. The first module, the instance feature generator, uses each
encoded pose to transform the implicit template into a feature map
representation of each object instance. The second module, the depth image
renderer, aggregates all of the single-instance feature maps output by the
first module and generates a multiple-instance depth image. A discriminator
distinguishes the generated multiple-instance depth images from the
distribution of true depth images. To use our model for instance segmentation,
we propose an instance pose encoder that learns to take in a generated depth
image and reproduce the pose code vectors for all of the object instances. To
evaluate our approach, we introduce a new synthetic dataset, "Insta-10",
consisting of 100,000 depth images, each with 5 instances of an object from one
of 10 classes. Our experiments on Insta-10, as well as on real-world noisy
depth images, show that InSeGAN achieves state-of-the-art performance, often
outperforming prior methods by large margins. | [
"cs.CV",
"cs.AI",
"cs.LG",
"cs.RO"
] |
This paper presents HoughNet, a one-stage, anchor-free, voting-based,
bottom-up object detection method. Inspired by the Generalized Hough Transform,
HoughNet determines the presence of an object at a certain location by the sum
of the votes cast on that location. Votes are collected from both near and
long-distance locations based on a log-polar vote field. Thanks to this voting
mechanism, HoughNet is able to integrate both near and long-range,
class-conditional evidence for visual recognition, thereby generalizing and
enhancing current object detection methodology, which typically relies on only
local evidence. On the COCO dataset, HoughNet's best model achieves $46.4$ $AP$
(and $65.1$ $AP_{50}$), performing on par with the state-of-the-art in
bottom-up object detection and outperforming most major one-stage and two-stage
methods. We further validate the effectiveness of our proposal in other visual
detection tasks, namely, video object detection, instance segmentation, 3D
object detection and keypoint detection for human pose estimation, and an
additional ``labels to photo'' image generation task, where the integration of
our voting module consistently improves performance in all cases. Code is
available at \url{https://github.com/nerminsamet/houghnet}. | [
"cs.CV"
] |
Existing approaches for unsupervised metric learning focus on exploring
self-supervision information within the input image itself. We observe that,
when analyzing images, human eyes often compare images against each other
instead of examining images individually. In addition, they often pay attention
to certain keypoints, image regions, or objects which are discriminative
between image classes but highly consistent within classes. Even if the image
is being transformed, the attention pattern will be consistent. Motivated by
this observation, we develop a new approach to unsupervised deep metric
learning where the network is learned based on self-supervision information
across images instead of within one single image. To characterize the
consistent pattern of human attention during image comparisons, we introduce
the idea of transformed attention consistency. It assumes that visually similar
images, even undergoing different image transforms, should share the same
consistent visual attention map. This consistency leads to a pairwise
self-supervision loss, allowing us to learn a Siamese deep neural network to
encode and compare images against their transformed or matched pairs. To
further enhance the inter-class discriminative power of the feature generated
by this network, we adapt the concept of triplet loss from supervised metric
learning to our unsupervised case and introduce the contrastive clustering
loss. Our extensive experimental results on benchmark datasets demonstrate that
our proposed method outperforms current state-of-the-art methods for
unsupervised metric learning by a large margin. | [
"cs.CV"
] |
We propose a method for efficiently incorporating constraints into a
stochastic gradient Langevin framework for the training of deep neural
networks. Constraints allow direct control of the parameter space of the model.
Appropriately designed, they reduce the vanishing/exploding gradient problem,
control weight magnitudes and stabilize deep neural networks and thus improve
the robustness of training algorithms and the generalization capabilities of
the trained neural network. We present examples of constrained training methods
motivated by orthogonality preservation for weight matrices and explicit weight
normalizations. We describe the methods in the overdamped formulation of
Langevin dynamics and the underdamped form, in which momenta help to improve
sampling efficiency. The methods are explored in test examples in image
classification and natural language processing. | [
"cs.LG",
"stat.ML"
] |
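A hedged sketch of one constrained update in the overdamped formulation, using a QR-based projection back onto orthogonal weight matrices as an example constraint; step sizes and the projection choice are illustrative, not the paper's exact scheme.

```python
import torch

def constrained_sgld_step(W, grad, lr=1e-3, temperature=1e-4):
    # Overdamped Langevin step on a weight matrix W given its gradient.
    noise = torch.randn_like(W) * (2 * lr * temperature) ** 0.5
    W = W - lr * grad + noise
    # Project (retract) back onto the orthogonality constraint via QR.
    Q, R = torch.linalg.qr(W)
    return Q * torch.sign(torch.diagonal(R))  # sign fix makes the factor unique
```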
Scene graph generation aims to identify objects and their relations in
images, providing structured image representations that can facilitate numerous
applications in computer vision. However, scene graph models usually require
supervised learning on large quantities of labeled data with intensive human
annotation. In this work, we propose visual distant supervision, a novel
paradigm of visual relation learning, which can train scene graph models
without any human-labeled data. The intuition is that by aligning commonsense
knowledge bases and images, we can automatically create large-scale labeled
data to provide distant supervision for visual relation learning. To alleviate
the noise in distantly labeled data, we further propose a framework that
iteratively estimates the probabilistic relation labels and eliminates the
noisy ones. Comprehensive experimental results show that our distantly
supervised model outperforms strong weakly supervised and semi-supervised
baselines. By further incorporating human-labeled data in a semi-supervised
fashion, our model outperforms state-of-the-art fully supervised models by a
large margin (e.g., 8.3 micro- and 7.8 macro-recall@50 improvements for
predicate classification in Visual Genome evaluation). We make the data and
code for this paper publicly available at https://github.com/thunlp/VisualDS. | [
"cs.CV"
] |
Veracity is essential in the research and development of innovative
products. Live emotion analysis and verification can nullify deceit toward
complainants on live chat, corroborate messages at both ends of messaging apps,
and promote honest conversation between users. The main concept behind this
artificially intelligent emotion verifier is to accept or reject message
accountability by comparing the varied emotions of chat-app users, recognized
through facial expressions and text prediction. In this paper, the proposed
intelligent live emotion detector acts as an honest arbiter that sorts
facial emotions into labels, namely Happiness, Sadness, Surprise, and Hate.
Further, it separately predicts a label for each message through text
classification. Finally, it compares both labels and declares the message
fraudulent or bona fide. For emotion detection, we deployed a Convolutional
Neural Network (CNN) using a miniXception model; for text prediction, we
selected a Support Vector Machine (SVM) probability classifier, as it achieved
the best accuracy on the training dataset among SVM, Random Forest, Naive
Bayes, and Logistic Regression.
"cs.CV",
"cs.AI",
"cs.CL",
"cs.LG"
] |
This paper addresses the problem of path prediction for multiple interacting
agents in a scene, which is a crucial step for many autonomous platforms such
as self-driving cars and social robots. We present \textit{SoPhie}; an
interpretable framework based on Generative Adversarial Network (GAN), which
leverages two sources of information, the path history of all the agents in a
scene, and the scene context information, using images of the scene. To predict
a future path for an agent, both physical and social information must be
leveraged. Previous work has not been successful in jointly modeling physical
and social interactions. Our approach blends a social attention mechanism with
physical attention that helps the model learn where to look in a large scene
and extract the parts of the image most salient to the path, while the social
attention component aggregates information across the different agent
interactions and extracts the most important trajectory information from the
surrounding neighbors. SoPhie also takes advantage of the GAN to generate more
realistic samples and to capture the uncertain nature of the future paths by
modeling its distribution. All these mechanisms enable our approach to predict
socially and physically plausible paths for the agents and to achieve
state-of-the-art performance on several different trajectory forecasting
benchmarks. | [
"cs.CV"
] |
Variable selection is of significant importance for classification and
regression tasks in machine learning and statistical applications where both
predictability and explainability are needed. In this paper, a Copula Entropy
(CE) based method for variable selection which use CE based ranks to select
variables is proposed. The method is both model-free and tuning-free.
Comparison experiments between the proposed method and traditional variable
selection methods, such as Distance Correlation, Hilbert-Schmidt Independence
Criterion, Stepwise Selection, regularized generalized linear models and
Adaptive LASSO, were conducted on the UCI heart disease data. Experimental
results show that the CE-based method can select the `right' variables more
effectively and derive more interpretable results than traditional methods,
without sacrificing accuracy. It is believed that CE-based variable
selection can help to build more explainable models. | [
"cs.LG",
"stat.ME",
"stat.ML"
] |
Fluorescence microscopy images play a critical role in capturing spatial or
spatiotemporal information of biomedical processes in life sciences. Their
simple structures and semantics provide unique advantages in elucidating
learning behavior of deep neural networks (DNNs). It is generally assumed that
accurate image annotation is required to train DNNs for accurate image
segmentation. In this study, however, we find that DNNs trained by label images
in which nearly half (49%) of the binary pixel labels are randomly flipped
provide largely the same segmentation performance. This suggests that DNNs
learn high-level structures rather than pixel-level labels per se to segment
fluorescence microscopy images. We refer to these structures as
meta-structures. In support of the existence of the meta-structures, when DNNs
are trained by a series of label images with progressively less meta-structure
information, we find progressive degradation in their segmentation performance.
Motivated by the learning behavior of DNNs trained by random labels and the
characteristics of meta-structures, we propose an unsupervised segmentation
model. Experiments show that it achieves remarkably competitive performance in
comparison to supervised segmentation models. | [
"cs.CV"
] |
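The label-corruption experiment can be sketched as follows (a minimal illustration assuming binary masks; the study's exact protocol may differ):

```python
import numpy as np

def flip_binary_labels(mask, frac=0.49, seed=0):
    # Randomly flip `frac` of the binary pixel labels before training.
    rng = np.random.default_rng(seed)
    flat = mask.reshape(-1).copy()
    idx = rng.choice(flat.size, size=int(frac * flat.size), replace=False)
    flat[idx] = 1 - flat[idx]
    return flat.reshape(mask.shape)
```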
This paper presents the experimental study revealing weaker performance of
the automatic iris recognition methods for cataract-affected eyes when compared
to healthy eyes. There is little research on the topic, mostly incorporating
scarce databases that are often deficient in images representing more than one
illness. We built our own database, acquiring 1288 eye images of 37 patients of
the Medical University of Warsaw. Those images represent several common ocular
diseases, such as cataract, along with less ordinary conditions, such as iris
pattern alterations derived from illness or eye trauma. Images were captured in
near-infrared light (used in biometrics) and for selected cases also in visible
light (used in ophthalmological diagnosis). Since cataract is the disorder
most represented by samples in the database, in this paper we focus solely on
this illness. To assess the extent of the performance deterioration we use
three iris recognition methodologies (commercial and academic solutions) to
calculate genuine match scores for healthy eyes and those influenced by
cataract. Results show a significant degradation in iris recognition
reliability, manifested as worsened genuine scores in all three matchers
used in this study (a 12% increase in genuine scores for an academic matcher, up
to a 175% increase obtained for an example commercial matcher).
This increase in genuine scores affected the final false non-match rate in two
matchers. To the best of our knowledge, this is the only study of its kind that
employs more than one iris matcher, and analyzes the iris image segmentation as
a potential source of decreased reliability. | [
"cs.CV"
] |
Visual-semantic embedding enables various tasks such as image-text retrieval,
image captioning, and visual question answering. The key to successful
visual-semantic embedding is to express visual and textual data properly by
accounting for their intricate relationship. While previous studies have
achieved considerable advances by encoding the visual and textual data into a joint
space where similar concepts are closely located, they often represent data by
a single vector ignoring the presence of multiple important components in an
image or text. Thus, in addition to the joint embedding space, we propose a
novel multi-head self-attention network to capture various components of visual
and textual data by attending to important parts in data. Our approach achieves
the new state-of-the-art results in image-text retrieval tasks on MS-COCO and
Flickr30K datasets. Through the visualization of the attention maps that
capture distinct semantic components at multiple positions in the image and the
text, we demonstrate that our method achieves an effective and interpretable
visual-semantic joint space. | [
"cs.CV",
"cs.CL",
"cs.LG"
] |
Variational Autoencoders are powerful models for unsupervised learning.
However, deep models with several layers of dependent stochastic variables are
difficult to train, which limits the improvements obtained using these highly
expressive models. We propose a new inference model, the Ladder Variational
Autoencoder, that recursively corrects the generative distribution by a data
dependent approximate likelihood in a process resembling the recently proposed
Ladder Network. We show that this model provides state-of-the-art predictive
log-likelihood and a tighter log-likelihood lower bound compared to the purely
bottom-up inference in layered Variational Autoencoders and other generative
models. We provide a detailed analysis of the learned hierarchical latent
representation and show that our new inference model is qualitatively different
and utilizes a deeper, more distributed hierarchy of latent variables. Finally,
we observe that batch normalization and deterministic warm-up (gradually
turning on the KL-term) are crucial for training variational models with many
stochastic layers. | [
"stat.ML",
"cs.LG"
] |
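Deterministic warm-up amounts to annealing the coefficient on the KL term; a minimal sketch, with the ramp length as an illustrative choice:

```python
def elbo_with_warmup(recon_loglik, kl, epoch, warmup_epochs=100):
    beta = min(1.0, epoch / warmup_epochs)  # gradually "turn on" the KL term
    return recon_loglik - beta * kl         # maximize this warmed-up ELBO
```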
With the improvement of medical data capturing, vast amounts of continuous
patient monitoring data, e.g., electrocardiogram (ECG), real-time vital signs
and medications, have become available for clinical decision support at intensive
care units (ICUs). However, it becomes increasingly challenging to model such
data, due to the high density of the monitoring data, heterogeneous data types and
the requirement for interpretable models. Integration of these high-density
monitoring data with the discrete clinical events (including diagnosis,
medications, labs) is challenging but potentially rewarding since richness and
granularity in such multimodal data increase the possibilities for accurate
detection of complex problems and predicting outcomes (e.g., length of stay and
mortality). We propose Recurrent Attentive and Intensive Model (RAIM) for
jointly analyzing continuous monitoring data and discrete clinical events. RAIM
introduces an efficient attention mechanism for continuous monitoring data
(e.g., ECG), which is guided by discrete clinical events (e.g, medication
usage). We apply RAIM in predicting physiological decompensation and length of
stay for critically ill patients in the ICU. With evaluations on the MIMIC-III
Waveform Database Matched Subset, we obtain an AUC-ROC score of 90.18% for
predicting decompensation and an accuracy of 86.82% for forecasting length of
stay with our final model, which outperforms our six baseline models. | [
"cs.LG",
"stat.ML"
] |
One of the central challenges faced by a reinforcement learning (RL) agent is
to effectively learn a (near-)optimal policy in environments with large state
spaces having sparse and noisy feedback signals. In real-world applications, an
expert with additional domain knowledge can help in speeding up the learning
process via \emph{shaping the environment}, i.e., making the environment more
learner-friendly. A popular paradigm in literature is \emph{potential-based
reward shaping}, where the environment's reward function is augmented with
additional local rewards using a potential function. However, the applicability
of potential-based reward shaping is limited in settings where (i) the state
space is very large, and it is challenging to compute an appropriate potential
function, (ii) the feedback signals are noisy, and even with shaped rewards the
agent could be trapped in local optima, and (iii) changing the rewards alone is
not sufficient, and effective shaping requires changing the dynamics. We
address these limitations of potential-based shaping methods and propose a
novel framework of \emph{environment shaping using state abstraction}. Our key
idea is to compress the environment's large state space with noisy signals to
an abstracted space, and to use this abstraction in creating smoother and more
effective feedback signals for the agent. We study the theoretical
underpinnings of our abstraction-based environment shaping, and show that the
agent's policy learnt in the shaped environment preserves near-optimal behavior
in the original environment. | [
"cs.LG",
"stat.ML"
] |
Transfer learning is a widely-used paradigm in deep learning, where models
pre-trained on standard datasets can be efficiently adapted to downstream
tasks. Typically, better pre-trained models yield better transfer results,
suggesting that initial accuracy is a key aspect of transfer learning
performance. In this work, we identify another such aspect: we find that
adversarially robust models, while less accurate, often perform better than
their standard-trained counterparts when used for transfer learning.
Specifically, we focus on adversarially robust ImageNet classifiers, and show
that they yield improved accuracy on a standard suite of downstream
classification tasks. Further analysis uncovers more differences between robust
and standard models in the context of transfer learning. Our results are
consistent with (and in fact, add to) recent hypotheses stating that robustness
leads to improved feature representations. Our code and models are available at
https://github.com/Microsoft/robust-models-transfer . | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
3D object recognition accuracy can be improved by learning the multi-scale
spatial features from 3D spatial geometric representations of objects such as
point clouds, 3D models, surfaces, and RGB-D data. Current deep learning
approaches learn such features either using structured data representations
(voxel grids and octrees) or from unstructured representations (graphs and
point clouds). Learning features from such structured representations is
limited by restrictions on resolution and tree depth, while unstructured
representations create a challenge due to non-uniformity among data samples.
In this paper, we propose an end-to-end multi-level learning approach on a
multi-level voxel grid to overcome these drawbacks. To demonstrate the utility
of the proposed multi-level learning, we use a multi-level voxel representation
of 3D objects to perform object recognition. The multi-level voxel
representation consists of a coarse voxel grid that contains volumetric
information of the 3D object. In addition, each voxel in the coarse grid that
contains a portion of the object boundary is subdivided into multiple
fine-level voxel grids. The performance of our multi-level learning algorithm
for object recognition is comparable to dense voxel representations while using
significantly lower memory. | [
"cs.CV",
"stat.ML"
] |
Graph neural networks (GNNs) have achieved state-of-the-art performance for
node classification on graphs. The vast majority of existing works assume that
genuine node labels are always provided for training. However, there has been
very little research effort on how to improve the robustness of GNNs in the
presence of label noise. Learning with label noise has been primarily studied
in the context of image classification, but these techniques cannot be directly
applied to graph-structured data, due to two major challenges -- label sparsity
and label dependency -- faced by learning on graphs. In this paper, we propose
a new framework, UnionNET, for learning with noisy labels on graphs under a
semi-supervised setting. Our approach provides a unified solution for robustly
training GNNs and performing label correction simultaneously. The key idea is
to perform label aggregation to estimate node-level class probability
distributions, which are used to guide sample reweighting and label correction.
Compared with existing works, UnionNET has two appealing advantages. First, it
requires no extra clean supervision, or explicit estimation of the noise
transition matrix. Second, a unified learning framework is proposed to robustly
train GNNs in an end-to-end manner. Experimental results show that our proposed
approach: (1) is effective in improving model robustness against different
types and levels of label noise; (2) yields significant improvements over
state-of-the-art baselines. | [
"cs.LG"
] |
End-to-end reinforcement learning agents learn a state representation and a
policy at the same time. Recurrent neural networks (RNNs) have been trained
successfully as reinforcement learning agents in settings like dialogue that
require structured prediction. In this paper, we investigate the
representations learned by RNN-based agents when trained with both policy
gradient and value-based methods. We show through extensive experiments and
analysis that, when trained with policy gradient, recurrent neural networks
often fail to learn a state representation that leads to an optimal policy in
settings where the same action should be taken at different states. To explain
this failure, we highlight the problem of state aliasing, which entails
conflating two or more distinct states in the representation space. We
demonstrate that state aliasing occurs when several states share the same
optimal action and the agent is trained via policy gradient. We characterize
this phenomenon through experiments on a simple maze setting and a more complex
text-based game, and make recommendations for training RNNs with reinforcement
learning. | [
"cs.LG",
"cs.AI",
"cs.CL"
] |
Clustering techniques attempt to group objects with similar properties into a
cluster. Clustering the nodes of an attributed graph, in which each node is
associated with a set of feature attributes, has attracted significant
attention. Graph convolutional networks (GCNs) represent an effective approach
for integrating the two complementary factors of node attributes and structural
information for attributed graph clustering. However, oversmoothing of GCNs
produces indistinguishable representations of nodes, such that the nodes in a
graph tend to be grouped into fewer clusters, and poses a challenge due to the
resulting performance drop. In this study, we propose a smoothness sensor for
attributed graph clustering based on adaptive smoothness-transition graph
convolutions, which senses the smoothness of a graph and adaptively terminates
the current convolution once the smoothness is saturated to prevent
oversmoothing. Furthermore, as an alternative to graph-level smoothness, a
novel fine-grained node-wise assessment of smoothness is proposed, in
which smoothness is computed in accordance with the neighborhood conditions of
a given node at a certain order of graph convolution. In addition, a
self-supervision criterion is designed considering both the tightness within
clusters and the separation between clusters to guide the whole neural network
training process. Experiments show that the proposed methods significantly
outperform 12 other state-of-the-art baselines in terms of three different
metrics across four benchmark datasets. In addition, an extensive study reveals
the reasons for their effectiveness and efficiency. | [
"cs.CV",
"cs.AI"
] |
Self-attention networks have revolutionized natural language processing and
are making impressive strides in image analysis tasks such as image
classification and object detection. Inspired by this success, we investigate
the application of self-attention networks to 3D point cloud processing. We
design self-attention layers for point clouds and use these to construct
self-attention networks for tasks such as semantic scene segmentation, object
part segmentation, and object classification. Our Point Transformer design
improves upon prior work across domains and tasks. For example, on the
challenging S3DIS dataset for large-scale semantic scene segmentation, the
Point Transformer attains an mIoU of 70.4% on Area 5, outperforming the
strongest prior model by 3.3 absolute percentage points and crossing the 70%
mIoU threshold for the first time. | [
"cs.CV"
] |
The wide-spread adoption of representation learning technologies in clinical
decision making strongly emphasizes the need for characterizing model
reliability and enabling rigorous introspection of model behavior. While the
former need is often addressed by incorporating uncertainty quantification
strategies, the latter challenge is addressed using a broad class of
interpretability techniques. In this paper, we argue that these two objectives
are not necessarily disparate and propose to utilize prediction calibration to
meet both objectives. More specifically, our approach is comprised of a
calibration-driven learning method, which is also used to design an
interpretability technique based on counterfactual reasoning. Furthermore, we
introduce \textit{reliability plots}, a holistic evaluation mechanism for model
reliability. Using a lesion classification problem with dermoscopy images, we
demonstrate the effectiveness of our approach and infer interesting insights
about the model behavior. | [
"cs.LG",
"stat.ML"
] |
A key impediment to reinforcement learning (RL) in real applications with
limited, batch data is defining a reward function that reflects what we
implicitly know about reasonable behaviour for a task and allows for robust
off-policy evaluation. In this work, we develop a method to identify an
admissible set of reward functions for policies that (a) do not diverge too far
from past behaviour, and (b) can be evaluated with high confidence, given only
a collection of past trajectories. Together, these ensure that we propose
policies that we trust to be implemented in high-risk settings. We demonstrate
our approach to reward design on synthetic domains as well as in a critical
care context, for a reward that consolidates clinical objectives to learn a
policy for weaning patients from mechanical ventilation. | [
"cs.LG",
"stat.ML"
] |
Few-shot learning algorithms aim to learn model parameters capable of
adapting to unseen classes with the help of only a few labeled examples. A
recent regularization technique, Manifold Mixup, focuses on learning a
general-purpose representation, robust to small changes in the data
distribution. Since the goal of few-shot learning is closely linked to robust
representation learning, we study Manifold Mixup in this problem setting.
Self-supervised learning is another technique that learns semantically
meaningful features, using only the inherent structure of the data. This work
investigates the role of learning relevant feature manifold for few-shot tasks
using self-supervision and regularization techniques. We observe that
regularizing the feature manifold, enriched via self-supervised techniques,
with Manifold Mixup significantly improves few-shot learning performance. We
show that our proposed method S2M2 beats the current state-of-the-art accuracy
on standard few-shot learning datasets like CIFAR-FS, CUB, mini-ImageNet and
tiered-ImageNet by 3-8%. Through extensive experimentation, we show that the
features learned using our approach generalize to complex few-shot evaluation
tasks, cross-domain scenarios and are robust against slight changes to data
distribution. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
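A minimal sketch of Manifold Mixup applied at one hidden layer (the self-supervised components of S2M2 are omitted; `alpha` is an illustrative choice):

```python
import numpy as np
import torch

def manifold_mixup(h, y_onehot, alpha=2.0):
    # h: [B, D] activations at a randomly chosen layer; y_onehot: [B, C] labels.
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(h.size(0))
    h_mix = lam * h + (1 - lam) * h[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return h_mix, y_mix  # continue the forward pass on h_mix, train against y_mix
```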
Attention modules connecting encoders and decoders have been widely applied in
the fields of object recognition, image captioning, visual question answering
and neural machine translation, and significantly improve performance. In
this paper, we propose a bottom-up gated hierarchical attention (GHA) mechanism
for image captioning. Our proposed model employs a CNN as the decoder, which is
able to learn different concepts at different layers, and these different
concepts correspond to different areas of an image. Therefore, we develop the
GHA in which low-level concepts are merged into high-level concepts and
simultaneously low-level attended features pass to the top to make predictions.
Our GHA significantly improves the performance of the model that only applies
one level attention, for example, the CIDEr score increases from 0.923 to
0.999, which is comparable to the state-of-the-art models that employ
attributes boosting and reinforcement learning (RL). We also conduct extensive
experiments to analyze the CNN decoder and our proposed GHA, and we find that
deeper decoders do not obtain better performance, and that as the convolutional
decoder becomes deeper, the model is more likely to collapse during training. | [
"cs.CV",
"cs.AI"
] |
We provide a bridge between generative modeling in the Machine Learning
community and simulated physical processes in High Energy Particle Physics by
applying a novel Generative Adversarial Network (GAN) architecture to the
production of jet images -- 2D representations of energy depositions from
particles interacting with a calorimeter. We propose a simple architecture, the
Location-Aware Generative Adversarial Network, that learns to produce realistic
radiation patterns from simulated high energy particle collisions. The pixel
intensities of GAN-generated images faithfully span over many orders of
magnitude and exhibit the desired low-dimensional physical properties (i.e.,
jet mass, n-subjettiness, etc.). We shed light on limitations, and provide a
novel empirical validation of image quality and validity of GAN-produced
simulations of the natural world. This work provides a base for further
explorations of GANs for use in faster simulation in High Energy Particle
Physics. | [
"stat.ML",
"hep-ex",
"physics.data-an"
] |
This abstract briefly describes a segmentation algorithm developed for the
ISIC 2017 Skin Lesion Detection Competition hosted at [ref]. The objective of
the competition is to perform a segmentation (in the form of a binary mask
image) of skin lesions in dermoscopic images as close as possible to a
segmentation performed by trained clinicians, which is taken as ground truth.
This project only takes part in the segmentation phase of the challenge. The
other phases of the competition (feature extraction and lesion identification)
are not considered.
The proposed algorithm consists of 4 steps: (1) lesion image preprocessing,
(2) image segmentation using k-means clustering of pixel colors, (3)
calculation of a set of features describing the properties of each segmented
region, and (4) calculation of a final score for each region, representing the
likelihood of corresponding to a suitable lesion segmentation. The scores in
step (4) are obtained by averaging the results of 2 different regression
models, using the features of each region as input. Before using the algorithm,
these
regression models must be trained using the training set of images and ground
truth masks provided by the Competition. Steps 2 to 4 are repeated with an
increasing number of clusters (and therefore the image is segmented into more
regions) until there is no further improvement of the calculated scores. | [
"cs.CV"
] |
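A minimal sketch of step (2) of the pipeline, clustering pixel colors with k-means to produce candidate regions; the function name and the outer search loop are illustrative assumptions, not the competition entry's actual code.

```python
import numpy as np
from sklearn.cluster import KMeans

def candidate_regions(image, n_clusters=3):
    """image: HxWx3 float array; returns an HxW label map of color clusters."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pixels)
    return labels.reshape(h, w)

# The outer loop of the algorithm would re-run this with an increasing number
# of clusters until the regression-based region scores stop improving.
```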
3D ultrasound (US) has become prevalent due to its rich spatial and
diagnostic information not contained in 2D US. Moreover, 3D US can contain
multiple standard planes (SPs) in one shot. Thus, automatically localizing SPs
in 3D US has the potential to improve user-independence and
scanning-efficiency. However, manual SP localization in 3D US is challenging
because of the low image quality, huge search space and large anatomical
variability. In this work, we propose a novel multi-agent reinforcement
learning (MARL) framework to simultaneously localize multiple SPs in 3D US. Our
contribution is four-fold. First, our proposed method is general and it can
accurately localize multiple SPs in different challenging US datasets. Second,
we equip the MARL system with a recurrent neural network (RNN) based
collaborative module, which can strengthen the communication among agents and
learn the spatial relationship among planes effectively. Third, we explore
adopting neural architecture search (NAS) to automatically design the network
architecture of both the agents and the collaborative module. Last, we believe
we are the first to realize automatic SP localization in pelvic US volumes, and
note that our approach can handle both normal and abnormal uterus cases.
Extensively validated on two challenging datasets of the uterus and fetal
brain, our proposed method achieves an average localization accuracy of 7.03
degrees/1.59mm and 9.75 degrees/1.19mm. Experimental results show that our
light-weight MARL model has higher accuracy than state-of-the-art methods. | [
"cs.CV",
"cs.MA",
"eess.IV"
] |
Deep neural networks have become commonplace in the domain of reinforcement
learning, but are often expensive in terms of the number of parameters needed.
While compressing deep neural networks has of late assumed great importance to
overcome this drawback, little work has been done to address this problem in
the context of reinforcement learning agents. This work aims to take the first
steps towards model compression in an RL agent. In particular, we compress
networks to drastically reduce the number of parameters in them (to sizes less
than 3% of their original size), further facilitated by applying a global max
pool after the final convolution layer, and propose using Actor-Mimic in the
context of compression. Finally, we show that this global max-pool allows for
weakly supervised object localization, improving the ability to identify the
agent's points of focus. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
A deepfake is a manipulated video made with a generative deep learning
technique, such as Generative Adversarial Networks (GANs) or autoencoders, that
anyone can utilize. Recently, with the increase of deepfake videos, classifiers
based on convolutional neural networks that can distinguish fake videos, as
well as deepfake datasets, have been actively created. However, previous
studies based on the CNN structure suffer not only from overfitting but also
from considerably misjudging fake videos as real ones. In this paper, we
propose a Vision Transformer model with a distillation methodology for
detecting fake videos. We design the model so that CNN features and a
patch-based positioning module learn to interact with all positions to find
the artifact regions, addressing the false-negative problem. Through
comparative analysis on
Deepfake Detection (DFDC) Dataset, we verify that the proposed scheme with
patch embedding as input outperforms the state-of-the-art using the combined
CNN features. Without any ensemble technique, our model obtains an AUC of
0.978 and an F1 score of 91.9, while the previous SOTA model yields an AUC of
0.972 and an F1 score of 90.6 under the same conditions. | [
"cs.CV",
"cs.AI"
] |
In optical flow estimation task, coarse-to-fine (C2F) warping strategy is
widely used to deal with the large displacement problem and provides efficiency
and speed. However, limited by the small search range between the first image
and the warped second image, current coarse-to-fine optical flow networks fail to
capture small and fast-moving objects which disappear at coarse resolution
levels. To address this problem, we introduce a lightweight but effective
Global Matching Component (GMC) to grab global matching features. We propose a
new Hybrid Matching Optical Flow Network (HMFlow) by integrating GMC into
existing coarse-to-fine networks seamlessly. Besides maintaining high accuracy
and a small model size, our proposed HMFlow can apply global matching features to
guide the network to discover the small and fast-moving objects mismatched by
local matching features. We also build a new dataset, named Small and
Fast-Moving Chairs (SFChairs), for evaluation. The experimental results show
that our proposed network achieves considerable performance, especially at
regions with small and fast-moving objects. | [
"cs.CV"
] |
Voxel-based 3D object classification has been frequently studied in recent
years. The previous methods often directly convert the classic 2D convolution
into a 3D form applied to an object with binary voxel representation. In this
paper, we investigate the reason why binary voxel representation is not very
suitable for 3D convolution and how to simultaneously improve the performance
both in accuracy and speed. We show that by giving each voxel a signed distance
value, the accuracy will gain about 30% promotion compared with binary voxel
representation using a two-layer fully connected network. We then propose a
fast fully connected and convolution hybrid cascade network for voxel-based 3D
object classification. This three-stage cascade network can divide 3D models
into three categories: easy, moderate and hard. Consequently, the mean
inference time (0.3ms) achieves a speedup of about 5x and 2x compared with the
state-of-the-art point cloud and voxel-based methods respectively, while
achieving the highest accuracy in the latter category of methods (92%).
Experiments with ModelNet and MNIST verify the performance of the proposed
hybrid cascade network. | [
"cs.CV",
"cs.LG"
] |
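As a hedged illustration of the signed-distance idea, a binary occupancy grid can be converted to a signed distance volume with two Euclidean distance transforms; the sign convention and the absence of truncation are assumptions here, not necessarily the paper's exact choices.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def binary_voxels_to_sdf(vox):
    """vox: boolean 3D occupancy grid; returns a signed distance volume
    (positive outside the object, negative inside)."""
    outside = distance_transform_edt(~vox)  # distance to nearest occupied voxel
    inside = distance_transform_edt(vox)    # distance to nearest empty voxel
    return outside - inside
```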
An important task in network analysis is the detection of anomalous events in
a network time series. These events could merely be times of interest in the
network timeline or they could be examples of malicious activity or network
malfunction. Hypothesis testing using network statistics to summarize the
behavior of the network provides a robust framework for the anomaly detection
decision process. Unfortunately, choosing network statistics that are dependent
on confounding factors like the total number of nodes or edges can lead to
incorrect conclusions (e.g., false positives and false negatives). In this
dissertation we describe the challenges that face anomaly detection in dynamic
network streams regarding confounding factors. We also provide two solutions to
avoiding error due to confounding factors: the first is a randomization testing
method that controls for confounding factors, and the second is a set of
size-consistent network statistics which avoid confounding due to the most
common factors, edge count and node count. | [
"cs.LG"
] |
This paper proposes the first model-free Reinforcement Learning (RL)
framework to synthesise policies for unknown, continuous-state Markov
Decision Processes (MDPs), such that a given linear temporal property is
satisfied. We convert the given property into a Limit Deterministic Buchi
Automaton (LDBA), namely a finite-state machine expressing the property.
Exploiting the structure of the LDBA, we shape a synchronous reward function
on-the-fly, so that an RL algorithm can synthesise a policy resulting in traces
that probabilistically satisfy the linear temporal property. This probability
(certificate) is also calculated in parallel with policy learning when the
state space of the MDP is finite: as such, the RL algorithm produces a policy
that is certified with respect to the property. Under the assumption of finite
state space, theoretical guarantees are provided on the convergence of the RL
algorithm to an optimal policy, maximising the above probability. We also show
that our method produces ''best available'' control policies when the logical
property cannot be satisfied. In the general case of a continuous state space,
we propose a neural network architecture for RL and we empirically show that
the algorithm finds satisfying policies, if there exist such policies. The
performance of the proposed framework is evaluated via a set of numerical
examples and benchmarks, where we observe an improvement of one order of
magnitude in the number of iterations required for the policy synthesis,
compared to existing approaches whenever available. | [
"cs.LG",
"stat.ML"
] |
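A minimal sketch of the on-the-fly reward shaping, assuming a hypothetical `ldba` object with a `step` transition function and an `accepting` state set; these names are illustrative, not from the paper.

```python
def shaped_reward(ldba, q, label):
    """Advance the (hypothetical) LDBA with the atomic propositions holding in
    the current MDP state, and reward visits to accepting automaton states."""
    q_next = ldba.step(q, label)
    reward = 1.0 if q_next in ldba.accepting else 0.0
    return q_next, reward

# The RL agent then acts on the product state (s, q); maximizing the
# discounted sum of these rewards pushes the policy toward traces that
# satisfy the temporal property.
```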
Metro ridership prediction has always received extensive attention from
governments and researchers. Recent works focus on designing complicated graph
convolutional recurrent network architectures to capture spatial and temporal
patterns. These works extract information along the spatial dimension well,
but limitations along the temporal dimension remain. We extend Neural ODE
algorithms to graph networks and propose the STR-GODEs network, which can
effectively learn spatial, temporal, and ridership correlations without the
limitation of dividing data into equal-sized intervals on the timeline. While
learning the spatial relations and the temporal correlations, we modify the
GODE-RNN cell to obtain the ridership feature and hidden states. Ridership
information and its hidden states are added to the GODESolve to reduce the
error accumulation caused by long time series in prediction. Extensive
experiments on two large-scale datasets demonstrate the efficacy and robustness
of our model. | [
"cs.LG"
] |
Graph convolutional networks (GCNs) achieve promising performance for
skeleton-based action recognition. However, in most GCN-based methods, the
spatial-temporal graph convolution is strictly restricted by the graph topology
while only captures the short-term temporal context, thus lacking the
flexibility of feature extraction. In this work, we present a novel
architecture, named Graph Convolutional skeleton Transformer (GCsT), which
addresses limitations in GCNs by introducing the Transformer. Our GCsT enjoys
all the benefits of the Transformer (i.e. dynamic attention and global context)
while keeping the advantages of GCNs (i.e. hierarchy and local topology structure). In
GCsT, the spatial-temporal GCN forces the capture of local dependencies while
Transformer dynamically extracts global spatial-temporal relationships.
Furthermore, the proposed GCsT shows stronger expressive capability by adding
additional information present in skeleton sequences. Incorporating the
Transformer allows that information to be introduced into the model almost
effortlessly. We validate the proposed GCsT by conducting extensive
experiments, which achieves the state-of-the-art performance on NTU RGB+D, NTU
RGB+D 120 and Northwestern-UCLA datasets. | [
"cs.CV",
"cs.AI"
] |
Understanding the internal representations of deep neural networks (DNNs) is
crucial to explaining their behavior. The interpretation of individual units,
which are neurons in MLPs or convolution kernels in convolutional networks,
has received much attention given their fundamental role. However, recent
research (Morcos et al. 2018) presented a counterintuitive phenomenon,
suggesting that an individual unit with high class selectivity, called an
interpretable unit, contributes poorly to the generalization of DNNs. In this
work, we provide a new perspective for understanding this counterintuitive
phenomenon, which makes sense once we introduce Representative Substitution
(RS). Instead of the class selectivity of individual units, RS refers to the
independence of a unit's representations within the same layer, without
requiring any annotation. Our experiments demonstrate that interpretable units
have high RS and are not critical to the network's generalization. RS provides
new insights into the
interpretation of DNNs and suggests that we need to focus on the independence
and relationship of the representations. | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
For the past 5 years, the ILSVRC competition and the ImageNet dataset have
attracted a lot of interest from the Computer Vision community, allowing for
state-of-the-art accuracy to grow tremendously. This should be credited to the
use of deep artificial neural network designs. As these became more complex,
the storage, bandwidth, and compute requirements increased. This means that
with a non-distributed approach, even when using the most high-density server
available, the training process may take weeks, making it prohibitive.
Furthermore, as datasets grow, the representation learning potential of deep
networks grows as well by using more complex models. This synchronicity
triggers a sharp increase in the computational requirements and motivates us to
explore the scaling behaviour on petaflop scale supercomputers. In this paper
we will describe the challenges and novel solutions needed in order to train
ResNet-50 in this large scale environment. We demonstrate above 90\% scaling
efficiency and a training time of 28 minutes using up to 104K x86 cores. This
is supported by software tools from Intel's ecosystem. Moreover, we show that
with regular 90 - 120 epoch train runs we can achieve a top-1 accuracy as high
as 77\% for the unmodified ResNet-50 topology. We also introduce the novel
Collapsed Ensemble (CE) technique that allows us to obtain a 77.5\% top-1
accuracy, similar to that of a ResNet-152, while training an unmodified
ResNet-50 topology for the same fixed training budget. All ResNet-50 models as
well as the scripts needed to replicate them will be posted shortly. | [
"stat.ML",
"cs.LG"
] |
Robustness of machine learning models to various adversarial and
non-adversarial corruptions continues to be of interest. In this paper, we
introduce the notion of the boundary thickness of a classifier, and we describe
its connection with and usefulness for model robustness. Thick decision
boundaries lead to improved performance, while thin decision boundaries lead to
overfitting (e.g., measured by the robust generalization gap between training
and testing) and lower robustness. We show that a thicker boundary helps
improve robustness against adversarial examples (e.g., improving the robust
test accuracy of adversarial training) as well as so-called out-of-distribution
(OOD) transforms, and we show that many commonly-used regularization and data
augmentation procedures can increase boundary thickness. On the theoretical
side, we establish that maximizing boundary thickness during training is akin
to the so-called mixup training. Using these observations, we show that
noise-augmentation on mixup training further increases boundary thickness,
thereby combating vulnerability to various forms of adversarial attacks and OOD
transforms. We can also show that the performance improvement in several lines
of recent work happens in conjunction with a thicker boundary. | [
"cs.LG",
"stat.ML"
] |
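A hedged sketch of measuring boundary thickness along the segment between two inputs, following the definition summarized above; the interval `(0, 0.75)` and picking the classes i, j from the two endpoints are assumptions for illustration.

```python
import torch

@torch.no_grad()
def boundary_thickness(model, x0, x1, alpha=0.0, beta=0.75, steps=128):
    """Length of the segment between x0 and x1, weighted by the fraction of it
    lying in the band where the posterior gap between the endpoints' classes
    falls in (alpha, beta)."""
    i = model(x0.unsqueeze(0)).argmax(1).item()   # class of the first endpoint
    j = model(x1.unsqueeze(0)).argmax(1).item()   # class of the second endpoint
    ts = torch.linspace(0.0, 1.0, steps)
    xs = torch.stack([(1 - t) * x0 + t * x1 for t in ts])
    probs = torch.softmax(model(xs), dim=1)
    gap = probs[:, i] - probs[:, j]               # posterior gap along the path
    inside = ((gap > alpha) & (gap < beta)).float().mean()
    return (x0 - x1).norm().item() * inside.item()
```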
The ability to accurately identify human activities is essential for
developing automatic rehabilitation and sports training systems. In this paper,
large-scale exercise motion data obtained from a forearm-worn wearable sensor
are classified with a convolutional neural network (CNN). Time-series data
consisting of accelerometer and orientation measurements are formatted as
images, allowing the CNN to automatically extract discriminative features. A
comparative study on the effects of image formatting and different CNN
architectures is also presented. The best performing configuration classifies
50 gym exercises with 92.1% accuracy. | [
"cs.CV",
"cs.LG"
] |
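A minimal sketch of the image-formatting idea: a fixed-length window of accelerometer and orientation channels is treated as a one-channel 2D image for a small CNN. The layer sizes and the 50-class output are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ExerciseCNN(nn.Module):
    def __init__(self, n_classes=50):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((3, 3)),
        )
        self.classifier = nn.Linear(32 * 9, n_classes)

    def forward(self, x):           # x: (B, window_length, n_sensor_channels)
        img = x.unsqueeze(1)        # treat the window as a 1-channel image
        z = self.features(img).flatten(1)
        return self.classifier(z)
```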
It is almost universal to regard attention as the facility that permits an
agent, human or machine, to give priority processing resources to relevant
stimuli while ignoring the irrelevant. The reality of how this might manifest
itself throughout all the forms of perceptual and cognitive processes possessed
by humans, however, is not as clear. Here we examine this reality with a broad
perspective in order to highlight the myriad ways that attentional processes
impact both perception and cognition. The paper concludes by showing two real
world problems that exhibit sufficient complexity to illustrate the ways in
which attention and cognition connect. These then point to new avenues of
research that might illuminate the overall cognitive architecture of spatial
cognition. | [
"cs.CV"
] |
Since acquiring pixel-wise annotations for training convolutional neural
networks for semantic image segmentation is time-consuming, weakly supervised
approaches that only require class tags have been proposed. In this work, we
propose another form of supervision, namely image captions as they can be found
on the Internet. These captions have two advantages. They do not require
additional curation as it is the case for the clean class tags used by current
weakly supervised approaches and they provide textual context for the classes
present in an image. To leverage such textual context, we deploy a multi-modal
network that learns a joint embedding of the visual representation of the image
and the textual representation of the caption. The network estimates text
activation maps (TAMs) for class names as well as compound concepts, i.e.
combinations of nouns and their attributes. The TAMs of compound concepts
describing classes of interest substantially improve the quality of the
estimated class activation maps which are then used to train a network for
semantic segmentation. We evaluate our method on the COCO dataset where it
achieves state-of-the-art results for weakly supervised image segmentation. | [
"cs.CV"
] |
This work addresses the problem of semantic image segmentation of nighttime
scenes. Although considerable progress has been made in semantic image
segmentation, it is mainly related to daytime scenarios. This paper proposes a
novel method to progressively adapt the semantic models trained on daytime
scenes, along with large-scale annotations therein, to nighttime scenes via the
bridge of twilight time -- the time between dawn and sunrise, or between sunset
and dusk. The goal of the method is to alleviate the cost of human annotation
for nighttime images by transferring knowledge from standard daytime
conditions. In addition to the method, a new dataset of road scenes is
compiled; it consists of 35,000 images ranging from daytime to twilight time
and to nighttime. Also, a subset of the nighttime images is densely annotated
for method evaluation. Our experiments show that our method is effective for
model adaptation from daytime scenes to nighttime scenes, without using extra
human annotation. | [
"cs.CV"
] |
Epipolar constraints are at the core of feature matching and depth estimation
in current multi-person multi-camera 3D human pose estimation methods. Despite
the satisfactory performance of this formulation in sparser crowd scenes, its
effectiveness is frequently challenged under denser crowd circumstances mainly
due to two sources of ambiguity. The first is the mismatch of human joints
resulting from the simple cues provided by the Euclidean distances between
joints and epipolar lines. The second is the lack of robustness from the naive
formulation of the problem as a least squares minimization. In this paper, we
depart from the multi-person 3D pose estimation formulation, and instead
reformulate it as crowd pose estimation. Our method consists of two key
components: a graph model for fast cross-view matching, and a maximum a
posteriori (MAP) estimator for the reconstruction of the 3D human poses. We
demonstrate the effectiveness and superiority of our proposed method on four
benchmark datasets. | [
"cs.CV"
] |
We present an attention-based model for recognizing multiple objects in
images. The proposed model is a deep recurrent neural network trained with
reinforcement learning to attend to the most relevant regions of the input
image. We show that the model learns to both localize and recognize multiple
objects despite being given only class labels during training. We evaluate the
model on the challenging task of transcribing house number sequences from
Google Street View images and show that it is both more accurate than the
state-of-the-art convolutional networks and uses fewer parameters and less
computation. | [
"cs.LG",
"cs.CV",
"cs.NE"
] |
Detailed analysis of seizure semiology, the symptoms and signs which occur
during a seizure, is critical for management of epilepsy patients. Inter-rater
reliability using qualitative visual analysis is often poor for semiological
features. Therefore, automatic and quantitative analysis of video-recorded
seizures is needed for objective assessment.
We present GESTURES, a novel architecture combining convolutional neural
networks (CNNs) and recurrent neural networks (RNNs) to learn deep
representations of arbitrarily long videos of epileptic seizures.
We use a spatiotemporal CNN (STCNN) pre-trained on large human action
recognition (HAR) datasets to extract features from short snippets (approx. 0.5
s) sampled from seizure videos. We then train an RNN to learn seizure-level
representations from the sequence of features.
We curated a dataset of seizure videos from 68 patients and evaluated
GESTURES on its ability to classify seizures into focal onset seizures (FOSs)
(N = 106) vs. focal to bilateral tonic-clonic seizures (TCSs) (N = 77),
obtaining an accuracy of 98.9% using bidirectional long short-term memory
(BLSTM) units.
We demonstrate that an STCNN trained on a HAR dataset can be used in
combination with an RNN to accurately represent arbitrarily long videos of
seizures. GESTURES can provide accurate seizure classification by modeling
sequences of semiologies. | [
"cs.CV"
] |
Segmenting histology images into diagnostically relevant regions is
imperative to support timely and reliable decisions by pathologists. To this
end, computer-aided techniques have been proposed to delineate relevant regions
in scanned histology slides. However, the techniques necessitate task-specific
large datasets of annotated pixels, which is tedious, time-consuming,
expensive, and infeasible to acquire for many histology tasks. Thus,
weakly-supervised semantic segmentation techniques are proposed to utilize weak
supervision that is cheaper and quicker to acquire. In this paper, we propose
SegGini, a weakly supervised segmentation method using graphs, that can utilize
weak multiplex annotations, i.e. inexact and incomplete annotations, to segment
arbitrary and large images, scaling from tissue microarray (TMA) to whole slide
image (WSI). Formally, SegGini constructs a tissue-graph representation for an
input histology image, where the graph nodes depict tissue regions. Then, it
performs weakly-supervised segmentation via node classification by using
inexact image-level labels, incomplete scribbles, or both. We evaluated SegGini
on two public prostate cancer datasets containing TMAs and WSIs. Our method
achieved state-of-the-art segmentation performance on both datasets for various
annotation settings while being comparable to a pathologist baseline. | [
"cs.CV"
] |
Transfer learning is widely used for training machine learning models. Here,
we study the role of transfer learning for training fully convolutional
networks (FCNs) for medical image segmentation. Our experiments show that
although transfer learning reduces the training time on the target task, the
improvement in segmentation accuracy is highly task/data-dependent. Larger
improvements in accuracy are observed when the segmentation task is more
challenging and the target training data is smaller. We observe that
convolutional filters of an FCN change little during training for medical image
segmentation, and still look random at convergence. We further show that quite
accurate FCNs can be built by freezing the encoder section of the network at
random values and only training the decoder section. At least for medical image
segmentation, this finding challenges the common belief that the encoder
section needs to learn data/task-specific representations. We examine the
evolution of FCN representations to gain a better insight into the effects of
transfer learning on the training dynamics. Our analysis shows that although
FCNs trained via transfer learning learn different representations than FCNs
trained with random initialization, the variability among FCNs trained via
transfer learning can be as high as that among FCNs trained with random
initialization. Moreover, feature reuse is not restricted to the early encoder
layers; rather, it can be more significant in deeper layers. These findings
offer new insights and suggest alternative ways of training FCNs for medical
image segmentation. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
We address the challenging problem of learning motion representations using
deep models for video recognition. To this end, we make use of attention
modules that learn to highlight regions in the video and aggregate features for
recognition. Specifically, we propose to leverage output attention maps as a
vehicle to transfer the learned representation from a motion (flow) network to
an RGB network. We systematically study the design of attention modules, and
develop a novel method for attention distillation. Our method is evaluated on
major action benchmarks, and consistently improves the performance of the
baseline RGB network by a significant margin. Moreover, we demonstrate that our
attention maps can leverage motion cues in learning to identify the location of
actions in video frames. We believe our method provides a step towards learning
motion-aware representations in deep models. Our project page is available at
https://aptx4869lm.github.io/AttentionDistillation/ | [
"cs.CV"
] |
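A hedged sketch of an attention-distillation loss: the student's attention map is pulled toward the (frozen) flow teacher's map after spatial normalization; the exact normalization used by the authors is an assumption here.

```python
import torch
import torch.nn.functional as F

def attention_distillation_loss(att_student, att_teacher):
    """att_*: (B, 1, H, W) attention maps from the RGB student and the
    motion (flow) teacher; L2 loss after per-sample L2 normalization."""
    def norm(a):
        a = a.flatten(1)
        return a / (a.norm(p=2, dim=1, keepdim=True) + 1e-8)
    return F.mse_loss(norm(att_student), norm(att_teacher.detach()))
```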
Responsible AI is becoming critical as AI is widely used in our everyday
lives. Many companies that deploy AI publicly state that when training a model,
we not only need to improve its accuracy, but also need to guarantee that the
model does not discriminate against users (fairness), is resilient to noisy or
poisoned data (robustness), is explainable, and more. In addition, these
objectives are not only relevant to model training, but to all steps of
end-to-end machine learning, which include data collection, data cleaning and
validation, model training, model evaluation, and model management and serving.
Finally, responsible AI is conceptually challenging, and supporting all the
objectives must be as easy as possible. We thus propose three key research
directions towards this vision - depth, breadth, and usability - to measure
progress and introduce our ongoing research. First, responsible AI must be
deeply supported, where multiple objectives like fairness and robustness must be
handled together. To this end, we propose FR-Train, a holistic framework for
fair and robust model training in the presence of data bias and poisoning.
Second, responsible AI must be broadly supported, preferably in all steps of
machine learning. Currently we focus on the data pre-processing steps and
propose Slice Tuner, a selective data acquisition framework for training fair
and accurate models, and MLClean, a data cleaning framework that also improves
fairness and robustness. Finally, responsible AI must be usable where the
techniques must be easy to deploy and actionable. We propose FairBatch, a batch
selection approach for fairness that is effective and simple to use, and Slice
Finder, a model evaluation tool that automatically finds problematic slices. We
believe we scratched the surface of responsible AI for end-to-end machine
learning and suggest research challenges moving forward. | [
"cs.LG",
"cs.AI"
] |
Often we wish to transfer representational knowledge from one neural network
to another. Examples include distilling a large network into a smaller one,
transferring knowledge from one sensory modality to a second, or ensembling a
collection of models into a single estimator. Knowledge distillation, the
standard approach to these problems, minimizes the KL divergence between the
probabilistic outputs of a teacher and student network. We demonstrate that
this objective ignores important structural knowledge of the teacher network.
This motivates an alternative objective by which we train a student to capture
significantly more information in the teacher's representation of the data. We
formulate this objective as contrastive learning. Experiments demonstrate that
our resulting new objective outperforms knowledge distillation and other
cutting-edge distillers on a variety of knowledge transfer tasks, including
single model compression, ensemble distillation, and cross-modal transfer. Our
method sets a new state-of-the-art in many transfer tasks, and sometimes even
outperforms the teacher network when combined with knowledge distillation.
Code: http://github.com/HobbitLong/RepDistiller. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
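As a simplified, in-batch stand-in for the paper's contrastive objective (the actual method uses a memory buffer and an NCE estimator), an InfoNCE loss between student and teacher embeddings looks like this:

```python
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(z_s, z_t, temperature=0.1):
    """Each student embedding should match its own (detached) teacher
    embedding and not those of the other samples in the batch."""
    z_s = F.normalize(z_s, dim=1)
    z_t = F.normalize(z_t.detach(), dim=1)
    logits = z_s @ z_t.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(z_s.size(0), device=z_s.device)
    return F.cross_entropy(logits, targets)
```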
The success of deep neural networks (DNNs) is attributable to three factors:
increased compute capacity, more complex models, and more data. These factors,
however, are not always present, especially for edge applications such as
autonomous driving, augmented reality, and internet-of-things. Training DNNs
requires a large amount of data, which is difficult to obtain. Edge devices
such as mobile phones have limited compute capacity, and therefore, require
specialized and efficient DNNs. However, due to the enormous design space and
prohibitive training costs, designing efficient DNNs for different target
devices is challenging. So the question is, with limited data, compute
capacity, and model complexity, can we still successfully apply deep neural
networks?
This dissertation focuses on the above problems and improving the efficiency
of deep neural networks at four levels. Model efficiency: we designed neural
networks for various computer vision tasks and achieved more than 10x faster
speed and lower energy. Data efficiency: we developed an advanced tool that
enables 6.2x faster annotation of a LiDAR point cloud. We also leveraged domain
adaptation to utilize simulated data, bypassing the need for real data.
Hardware efficiency: we co-designed neural networks and hardware accelerators
and achieved 11.6x faster inference. Design efficiency: the process of finding
the optimal neural networks is time-consuming. Our automated neural
architecture search algorithms discovered, using 421x lower computational cost
than previous search methods, models with state-of-the-art accuracy and
efficiency. | [
"cs.CV"
] |
We present our work in progress exploring the possibilities of a shared
embedding space between textual and visual modality. Leveraging the textual
nature of object detection labels and the hypothetical expressiveness of
extracted visual object representations, we propose an approach opposite to the
current trend: grounding the representations in the word embedding space of
the captioning system instead of grounding words or sentences in their
associated images. Based on the previous work, we apply additional grounding
losses to the image captioning training objective aiming to force visual object
representations to create more heterogeneous clusters based on their class
label and copy a semantic structure of the word embedding space. In addition,
we provide an analysis of the learned object vector space projection and its
impact on the IC system performance. With only a slight change in performance,
grounded models reach the stopping criterion during training faster than the
unconstrained model, needing about two to three times fewer training updates.
Additionally, an improvement in structural correlation between the word
embeddings and both original and projected object vectors suggests that the
grounding is actually mutual. | [
"cs.CV",
"cs.CL"
] |
Skeleton-based human action recognition has achieved a great interest in
recent years, as skeleton data has been demonstrated to be robust to
illumination changes, body scales, dynamic camera views, and complex
background. Nevertheless, an effective encoding of the latent information
underlying the 3D skeleton is still an open problem. In this work, we propose a
novel Spatial-Temporal Transformer network (ST-TR) which models dependencies
between joints using the Transformer self-attention operator. In our ST-TR
model, a Spatial Self-Attention module (SSA) is used to understand intra-frame
interactions between different body parts, and a Temporal Self-Attention module
(TSA) to model inter-frame correlations. The two are combined in a two-stream
network which outperforms state-of-the-art models using the same input data on
both NTU-RGB+D 60 and NTU-RGB+D 120. | [
"cs.CV"
] |
Early diagnosis of signet ring cell carcinoma dramatically improves the
survival rate of patients. Due to lack of public dataset and expert-level
annotations, automatic detection on signet ring cell (SRC) has not been
thoroughly investigated. In MICCAI DigestPath2019 challenge, apart from
foreground (SRC region)-background (normal tissue area) class imbalance, SRCs
are partially annotated due to costly medical image annotation, which
introduces extra label noise. To address the issues simultaneously, we propose
Decoupled Gradient Harmonizing Mechanism (DGHM) and embed it into
classification loss, denoted as DGHM-C loss. Specifically, besides positive
(SRCs) and negative (normal tissues) examples, we further decouple noisy
examples from clean examples and harmonize the corresponding gradient
distributions in classification respectively. Without bells and whistles, we
achieved 2nd place in the challenge. Ablation studies and controlled label
missing rate experiments demonstrate that DGHM-C loss can bring substantial
improvement in partially annotated object detection. | [
"cs.CV",
"eess.IV"
] |
Given high-dimensional time series data (e.g., sensor data), how can we
detect anomalous events, such as system faults and attacks? More challengingly,
how can we do this in a way that captures complex inter-sensor relationships,
and detects and explains anomalies which deviate from these relationships?
Recently, deep learning approaches have enabled improvements in anomaly
detection in high-dimensional datasets; however, existing methods do not
explicitly learn the structure of existing relationships between variables, or
use them to predict the expected behavior of time series. Our approach combines
a structure learning approach with graph neural networks, additionally using
attention weights to provide explainability for the detected anomalies.
Experiments on two real-world sensor datasets with ground truth anomalies show
that our method detects anomalies more accurately than baseline approaches,
accurately captures correlations between sensors, and allows users to deduce
the root cause of a detected anomaly. | [
"cs.LG",
"cs.AI"
] |
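A hedged sketch of the forecasting-based anomaly scoring this family of methods uses: per-sensor prediction errors are robustly normalized, and the maximum over sensors yields one score per time step. The median/IQR normalization is an assumption, not necessarily the paper's exact choice.

```python
import numpy as np

def anomaly_score(pred, obs):
    """pred, obs: (T, n_sensors) arrays of forecasts and observations;
    returns one anomaly score per time step."""
    err = np.abs(pred - obs)
    med = np.median(err, axis=0)
    iqr = np.subtract(*np.percentile(err, [75, 25], axis=0)) + 1e-8
    return ((err - med) / iqr).max(axis=1)   # worst sensor dominates the score
```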
Time series forecasting is a crucial task in machine learning, as it has a
wide range of applications including but not limited to forecasting electricity
consumption, traffic, and air quality. Traditional forecasting models relied on
rolling averages, vector auto-regression and auto-regressive integrated moving
averages. On the other hand, deep learning and matrix factorization models have
been recently proposed to tackle the same problem with more competitive
performance. However, one major drawback of such models is that they tend to be
overly complex in comparison to traditional techniques. In this paper, we try
to answer whether these highly complex deep learning models are without
alternative. We aim to enrich the pool of simple but powerful baselines by
revisiting the gradient boosting regression trees for time series forecasting.
Specifically, we reconfigure the way time series data is handled by Gradient
Tree Boosting models in a windowed fashion that is similar to the deep learning
models. For each training window, the target values are concatenated with
external features, and then flattened to form one input instance for a
multi-output gradient boosting regression tree model. We conducted a
comparative study on nine datasets for eight state-of-the-art deep-learning
models that were presented at top-level conferences in the last years. The
results demonstrated that the proposed approach outperforms all of the
state-of-the-art models. | [
"cs.LG",
"stat.ML"
] |
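A minimal sketch of the windowing scheme described above, using scikit-learn's gradient boosting with a multi-output wrapper on toy data; the window sizes and the synthetic series are placeholders, not the paper's experimental setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

def make_windows(series, exog, w_in, w_out):
    """Flatten each input window of targets + external features into one
    instance; the next w_out values form the multi-output target."""
    X, Y = [], []
    for t in range(len(series) - w_in - w_out + 1):
        past = np.concatenate([series[t:t + w_in], exog[t:t + w_in].ravel()])
        X.append(past)
        Y.append(series[t + w_in:t + w_in + w_out])
    return np.array(X), np.array(Y)

series = np.sin(np.linspace(0, 40, 500))   # toy target series
exog = np.random.rand(500, 2)              # toy external features
X, Y = make_windows(series, exog, w_in=24, w_out=6)
model = MultiOutputRegressor(GradientBoostingRegressor()).fit(X, Y)
```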
The information bottleneck (IB) principle has been adopted to explain deep
learning in terms of information compression and prediction, which are balanced
by a trade-off hyperparameter. How to optimize the IB principle for better
robustness and figure out the effects of compression through the trade-off
hyperparameter are two challenging problems. Previous methods attempted to
optimize the IB principle by introducing random noise into learning the
representation and achieved state-of-the-art performance in the nuisance
information compression and semantic information extraction. However, their
performance on resisting adversarial perturbations is far less impressive. To
this end, we propose an adversarial information bottleneck (AIB) method without
any explicit assumptions about the underlying distribution of the
representations, which can be optimized effectively by solving a Min-Max
optimization problem. Numerical experiments on synthetic and real-world
datasets demonstrate its effectiveness on learning more invariant
representations and mitigating adversarial perturbations compared to several
competing IB methods. In addition, we analyse the adversarial robustness of
diverse IB methods by contrasting their IB curves, and reveal that IB models
with the hyperparameter $\beta$ corresponding to the knee point of the IB curve
achieve the best trade-off between compression and prediction and have the best
robustness against various attacks. | [
"cs.LG",
"stat.ML"
] |
Robust road detection is a key challenge in safe autonomous driving.
Recently, with the rapid development of 3D sensors, more and more researchers
are trying to fuse information across different sensors to improve the
performance of road detection. Although much successful work has been done in
this field, data fusion under the deep learning framework remains an open
problem. In this paper, we propose a Siamese deep neural
network based on FCN-8s to detect road region. Our method uses data collected
from a monocular color camera and a Velodyne-64 LiDAR sensor. We project the
LiDAR point clouds onto the image plane to generate LiDAR images and feed them
into one of the branches of the network. The RGB images are fed into another
branch of our proposed network. The feature maps that these two branches
extract at multiple scales are fused before each pooling layer by adding
additional fusion layers. Extensive experimental results on the public dataset
KITTI ROAD demonstrate the effectiveness of our proposed approach. | [
"cs.CV",
"eess.IV"
] |
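A minimal sketch of one scale of such a two-branch network, with an additional fusion layer before pooling; the layer shapes are illustrative, not the paper's FCN-8s-based architecture.

```python
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    """One scale: RGB and LiDAR-image features are extracted separately and
    fused before pooling (a simplified sketch of the two-branch idea)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.rgb = nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())
        self.lidar = nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(2 * c_out, c_out, 1)   # additional fusion layer
        self.pool = nn.MaxPool2d(2)

    def forward(self, f_rgb, f_lidar):
        r, l = self.rgb(f_rgb), self.lidar(f_lidar)
        fused = self.fuse(torch.cat([r, l], dim=1))
        # the branch features feed the next block; the fused map feeds the head
        return self.pool(r), self.pool(l), self.pool(fused)
```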
While deep learning has recently achieved great success on multi-view stereo
(MVS), limited training data makes the trained model hard to be generalized to
unseen scenarios. Compared with other computer vision tasks, it is rather
difficult to collect a large-scale MVS dataset as it requires expensive active
scanners and labor-intensive process to obtain ground truth 3D structures. In
this paper, we introduce BlendedMVS, a novel large-scale dataset, to provide
sufficient training ground truth for learning-based MVS. To create the dataset,
we apply a 3D reconstruction pipeline to recover high-quality textured meshes
from images of well-selected scenes. Then, we render these mesh models to color
images and depth maps. To introduce the ambient lighting information during
training, the rendered color images are further blended with the input images
to generate the training input. Our dataset contains over 17k high-resolution
images covering a variety of scenes, including cities, architectures,
sculptures and small objects. Extensive experiments demonstrate that BlendedMVS
endows the trained model with significantly better generalization ability
compared with other MVS datasets. The dataset and pretrained models are
available at \url{https://github.com/YoYo000/BlendedMVS}. | [
"cs.CV"
] |
Discriminating between distributions is an important problem in a number of
scientific fields. This motivated the introduction of Linear Optimal
Transportation (LOT), which embeds the space of distributions into an
$L^2$-space. The transform is defined by computing the optimal transport of
each distribution to a fixed reference distribution, and has a number of
benefits when it comes to speed of computation and to determining
classification boundaries. In this paper, we characterize a number of settings
in which LOT embeds families of distributions into a space in which they are
linearly separable. This is true in arbitrary dimension, and for families of
distributions generated through perturbations of shifts and scalings of a fixed
distribution. We also prove conditions under which the $L^2$ distance of the LOT
embedding between two distributions in arbitrary dimension is nearly isometric
to Wasserstein-2 distance between those distributions. This is of significant
computational benefit, as one must only compute $N$ optimal transport maps to
define the $N^2$ pairwise distances between $N$ distributions. We demonstrate
the benefits of LOT on a number of distribution classification problems. | [
"stat.ML",
"cs.LG",
"math.OC",
"60D05, 68T10, 68T05"
] |
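A hedged sketch of the LOT embedding using the POT library (`pip install pot`): each empirical distribution is mapped to the barycentric projection of its optimal transport plan onto a fixed reference. The discrete, sample-based setting is an assumption for illustration.

```python
import numpy as np
import ot

def lot_embedding(X, X_ref):
    """X: (n, d) samples of one distribution; X_ref: (m, d) reference samples.
    Returns an (m, d) array: the optimal transport map evaluated on X_ref."""
    a = np.full(len(X), 1.0 / len(X))
    b = np.full(len(X_ref), 1.0 / len(X_ref))
    M = ot.dist(X_ref, X)                 # squared Euclidean cost by default
    plan = ot.emd(b, a, M)                # optimal coupling reference -> X
    return (plan @ X) / b[:, None]        # barycentric projection map

# Pairwise distances need only N maps, not N^2 transport problems: the
# (weighted) L2 distance between lot_embedding(X_i, X_ref) and
# lot_embedding(X_j, X_ref) approximates the Wasserstein-2 distance.
```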
Hypergraphs are used to model higher-order interactions amongst agents and
there exist many practically relevant instances of hypergraph datasets. To
enable efficient processing of hypergraph-structured data, several hypergraph
neural network platforms have been proposed for learning hypergraph properties
and structure, with a special focus on node classification. However, almost all
existing methods use heuristic propagation rules and offer suboptimal
performance on many datasets. We propose AllSet, a new hypergraph neural
network paradigm that represents a highly general framework for (hyper)graph
neural networks and for the first time implements hypergraph neural network
layers as compositions of two multiset functions that can be efficiently
learned for each task and each dataset. Furthermore, AllSet draws on new
connections between hypergraph neural networks and recent advances in deep
learning of multiset functions. In particular, the proposed architecture
utilizes Deep Sets and Set Transformer architectures that allow for significant
modeling flexibility and offer high expressive power. To evaluate the
performance of AllSet, we conduct the most extensive experiments to date
involving ten known benchmarking datasets and three newly curated datasets that
represent significant challenges for hypergraph node classification. The
results demonstrate that AllSet has the unique ability to consistently either
match or outperform all other hypergraph neural networks across the tested
datasets. Our implementation and dataset will be released upon acceptance. | [
"cs.LG",
"cs.AI"
] |
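A minimal sketch of the two-multiset-function composition, with mean-aggregating MLPs standing in for the learned Deep Sets / Set Transformer modules and a dense incidence matrix for clarity; both are simplifying assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class AllSetLayer(nn.Module):
    """One layer: a node-to-hyperedge and a hyperedge-to-node update, each a
    learned multiset (mean-then-MLP) function."""
    def __init__(self, dim):
        super().__init__()
        self.f_v2e = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.f_e2v = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x, incidence):
        # incidence: (n_edges, n_nodes) 0/1 membership matrix (dense for clarity)
        deg_e = incidence.sum(1, keepdim=True).clamp(min=1)
        e = self.f_v2e(incidence @ x / deg_e)          # multiset over member nodes
        deg_v = incidence.sum(0, keepdim=True).t().clamp(min=1)
        return self.f_e2v(incidence.t() @ e / deg_v)   # multiset over incident edges
```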
Infant motility assessment using intelligent wearables is a promising new
approach for assessment of infant neurophysiological development, and where
efficient signal analysis plays a central role. This study investigates the use
of different end-to-end neural network architectures for processing infant
motility data from wearable sensors. We focus on the performance and
computational burden of alternative sensor encoder and time-series modelling
modules and their combinations. In addition, we explore the benefits of data
augmentation methods in ideal and non-ideal recording conditions. The
experiments are conducted using a data-set of multi-sensor movement recordings
from 7-month-old infants, as captured by a recently proposed smart jumpsuit for
infant motility assessment. Our results indicate that the choice of the encoder
module has a major impact on classifier performance. For sensor encoders, the
best performance was obtained with parallel 2-dimensional convolutions for
intra-sensor channel fusion with shared weights for all sensors. The results
also indicate that a relatively compact feature representation is obtainable
for within-sensor feature extraction without a drastic loss to classifier
performance. Comparison of time-series models revealed that feed-forward
dilated convolutions with residual and skip connections outperformed all
RNN-based models in performance, training time, and training stability. The
experiments also indicate that data augmentation improves model robustness in
simulated packet loss or sensor dropout scenarios. In particular, signal- and
sensor-dropout-based augmentation strategies provided considerable boosts to
performance without negatively affecting the baseline performance. Overall the
results provide tangible suggestions on how to optimize end-to-end neural
network training for multi-channel movement sensor data. | [
"cs.CV",
"cs.HC"
] |
Scene flow depicts the dynamics of a 3D scene, which is critical for various
applications such as autonomous driving, robot navigation, AR/VR, etc.
Conventionally, scene flow is estimated from dense/regular RGB video frames.
With the development of depth-sensing technologies, precise 3D measurements are
available via point clouds which have sparked new research in 3D scene flow.
Nevertheless, it remains challenging to extract scene flow from point clouds
due to the sparsity and irregularity in typical point cloud sampling patterns.
One major issue related to irregular sampling is identified as the randomness
during point set abstraction/feature extraction -- an elementary process in
many flow estimation scenarios. A novel Spatial Abstraction with Attention
(SA^2) layer is accordingly proposed to alleviate the unstable abstraction
problem. Moreover, a Temporal Abstraction with Attention (TA^2) layer is
proposed to rectify attention in temporal domain, leading to benefits with
motions scaled in a larger range. Extensive analysis and experiments verified
the motivation and significant performance gains of our method, dubbed as Flow
Estimation via Spatial-Temporal Attention (FESTA), when compared to several
state-of-the-art benchmarks of scene flow estimation. | [
"cs.CV"
] |
Deep neural networks are known to be vulnerable to adversarial examples,
where a perturbation in the input space leads to an amplified shift in the
latent network representation. In this paper, we combine canonical supervised
learning with self-supervised representation learning, and present
Self-supervised Online Adversarial Purification (SOAP), a novel defense
strategy that uses a self-supervised loss to purify adversarial examples at
test-time. Our approach leverages the label-independent nature of
self-supervised signals and counters the adversarial perturbation with respect
to the self-supervised tasks. SOAP yields competitive robust accuracy against
state-of-the-art adversarial training and purification methods, with
considerably less training complexity. In addition, our approach is robust even
when adversaries are given knowledge of the purification defense strategy. To
the best of our knowledge, our paper is the first that generalizes the idea of
using self-supervised signals to perform online test-time purification. | [
"cs.LG",
"cs.CR"
] |
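A minimal sketch of test-time purification: gradient descent on the input to reduce a self-supervised auxiliary loss. `aux_loss` is a placeholder for whatever self-supervised task the defense uses (e.g., a rotation-prediction loss); the optimizer and step count are assumptions.

```python
import torch

def purify(x_adv, model, aux_loss, steps=10, lr=0.1):
    """Return a purified copy of x_adv by minimizing the self-supervised loss
    with respect to the input itself, leaving the model weights untouched."""
    x = x_adv.clone().detach().requires_grad_(True)
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        aux_loss(model, x).backward()   # assumed to return a scalar loss
        opt.step()
    return x.detach()
```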
Explanations in Machine Learning come in many forms, but a consensus
regarding their desired properties is yet to emerge. In this paper we introduce
a taxonomy and a set of descriptors that can be used to characterise and
systematically assess explainable systems along five key dimensions:
functional, operational, usability, safety and validation. In order to design a
comprehensive and representative taxonomy and associated descriptors we
surveyed the eXplainable Artificial Intelligence literature, extracting the
criteria and desiderata that other authors have proposed or implicitly used in
their research. The survey includes papers introducing new explainability
algorithms to see what criteria are used to guide their development and how
these algorithms are evaluated, as well as papers proposing such criteria from
both computer science and social science perspectives. This novel framework
allows us to systematically compare and contrast explainability approaches, not
just to better understand their capabilities but also to identify discrepancies
between their theoretical qualities and properties of their implementations. We
developed an operationalisation of the framework in the form of Explainability
Fact Sheets, which enable researchers and practitioners alike to quickly grasp
capabilities and limitations of a particular explainable method. When used as a
Work Sheet, our taxonomy can guide the development of new explainability
approaches by aiding in their critical evaluation along the five proposed
dimensions. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Methods of transfer learning try to combine knowledge from several related
tasks (or domains) to improve performance on a test task. Inspired by causal
methodology, we relax the usual covariate shift assumption and assume that it
holds true for a subset of predictor variables: the conditional distribution of
the target variable given this subset of predictors is invariant over all
tasks. We show how this assumption can be motivated from ideas in the field of
causality. We focus on the problem of Domain Generalization, in which no
examples from the test task are observed. We prove that in an adversarial
setting using this subset for prediction is optimal in Domain Generalization;
we further provide examples in which the tasks are sufficiently diverse and
the estimator therefore outperforms pooling the data, even on average. If
examples from the test task are available, we also provide a method to transfer
knowledge from the training tasks and exploit all available features for
prediction. However, we provide no guarantees for this method. We introduce a
practical method which allows for automatic inference of the above subset and
provide corresponding code. We present results on synthetic data sets and a
gene deletion data set. | [
"stat.ML"
] |
We propose Stochastic Neural Architecture Search (SNAS), an economical
end-to-end solution to Neural Architecture Search (NAS) that trains neural
operation parameters and architecture distribution parameters in the same round of
back-propagation, while maintaining the completeness and differentiability of
the NAS pipeline. In this work, NAS is reformulated as an optimization problem
on parameters of a joint distribution for the search space in a cell. To
leverage the gradient information in generic differentiable loss for
architecture search, a novel search gradient is proposed. We prove that this
search gradient optimizes the same objective as reinforcement-learning-based
NAS, but assigns credits to structural decisions more efficiently. This credit
assignment is further augmented with locally decomposable reward to enforce a
resource-efficient constraint. In experiments on CIFAR-10, SNAS takes fewer
epochs to find a cell architecture with state-of-the-art accuracy than
non-differentiable evolution-based and reinforcement-learning-based NAS, which
is also transferable to ImageNet. It is also shown that child networks of SNAS
can maintain the validation accuracy in searching, with which attention-based
NAS requires parameter retraining to compete, exhibiting potentials to stride
towards efficient NAS on big datasets. We have released our implementation at
https://github.com/SNAS-Series/SNAS-Series. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
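A minimal sketch of the core idea: one edge of the cell is sampled from a Gumbel-Softmax-relaxed architecture distribution, so the operation weights and architecture parameters receive gradients in the same backward pass. The class and the fixed temperature are illustrative, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticMixedOp(nn.Module):
    """One edge of the cell: a differentiable sample from the architecture
    distribution weights the candidate operations."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.alpha = nn.Parameter(torch.zeros(len(ops)))  # distribution params

    def forward(self, x, tau=1.0):
        # Gumbel-Softmax keeps both alpha and the op parameters trainable
        # in the same round of back-propagation.
        z = F.gumbel_softmax(self.alpha, tau=tau, hard=False)
        return sum(w * op(x) for w, op in zip(z, self.ops))
```

In a full search, one `alpha` per edge parameterizes the joint architecture distribution, and annealing `tau` sharpens the samples toward discrete choices.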
Deep learning-based medical image segmentation technology aims at
automatically recognizing and annotating objects in medical images. Non-local
attention and multi-scale feature learning are widely used to model networks,
which drives progress in medical image segmentation. However, these attention
mechanisms only weakly strengthen the non-local receptive-field connections
for small objects in medical images. As a result, the features of important
small objects in abstract or coarse feature maps may be discarded, which leads
to unsatisfactory performance. Moreover, existing multi-scale methods simply
focus on different sizes of view, and the sparse multi-scale features they
collect are not abundant enough for small-object segmentation. In this work,
a multi-dimensional attention segmentation model with cascade multi-scale
convolution is proposed to predict accurate segmentation for small objects in
medical images. As the weight function, multi-dimensional attention modules
provide coefficient modification for significant/informative small objects
features. Furthermore, the cascade multi-scale convolution modules in each
skip-connection path are exploited to capture multi-scale features in different
semantic depth. The proposed method is evaluated on three datasets: KiTS19,
Pancreas CT of Decathlon-10, and MICCAI 2018 LiTS Challenge, demonstrating
better segmentation performances than the state-of-the-art baselines. | [
"cs.CV"
] |
Graph data widely exist in many high-impact applications. Inspired by the
success of deep learning in grid-structured data, graph neural network models
have been proposed to learn powerful node-level or graph-level representation.
However, most of the existing graph neural networks suffer from the following
limitations: (1) there is limited analysis regarding the graph convolution
properties, such as seed-oriented, degree-aware and order-free; (2) the node's
degree-specific graph structure is not explicitly expressed in graph
convolution for distinguishing structure-aware node neighborhoods; (3) the
theoretical explanation regarding the graph-level pooling schemes is unclear.
To address these problems, we propose a generic degree-specific graph neural
network named DEMO-Net motivated by Weisfeiler-Lehman graph isomorphism test
that recursively identifies 1-hop neighborhood structures. In order to
explicitly capture the graph topology integrated with node attributes, we argue
that graph convolution should have three properties: seed-oriented,
degree-aware, order-free. To this end, we propose multi-task graph convolution
where each task represents node representation learning for nodes with a
specific degree value, thus leading to preserving the degree-specific graph
structure. In particular, we design two multi-task learning methods:
degree-specific weight and hashing functions for graph convolution. In
addition, we propose a novel graph-level pooling/readout scheme for learning
graph representation provably lying in a degree-specific Hilbert kernel space.
The experimental results on several node and graph classification benchmark
data sets demonstrate the effectiveness and efficiency of our proposed DEMO-Net
over state-of-the-art graph neural network models. | [
"cs.LG",
"stat.ML"
] |
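The core of the multi-task, degree-specific convolution can be sketched as follows: nodes that share a degree value share an aggregation weight matrix. This numpy toy is a hedged illustration of the idea, not DEMO-Net's actual implementation (which also includes hashing-based weight sharing and a degree-specific readout).

```python
import numpy as np

def degree_specific_conv(A, X, weights_by_degree, W_self):
    """Illustrative degree-specific graph convolution: nodes with the same
    degree share an aggregation weight, preserving degree-specific structure."""
    n = A.shape[0]
    deg = A.sum(axis=1).astype(int)
    H = np.zeros((n, W_self.shape[1]))
    for v in range(n):
        neigh = np.nonzero(A[v])[0]
        Wd = weights_by_degree[deg[v]]             # weight chosen by degree
        agg = X[neigh].mean(axis=0) @ Wd if len(neigh) else 0.0
        H[v] = np.maximum(X[v] @ W_self + agg, 0)  # ReLU
    return H

rng = np.random.default_rng(0)
A = np.array([[0,1,1,0],[1,0,0,0],[1,0,0,1],[0,0,1,0]])  # toy graph
X = rng.normal(size=(4, 8))
weights = {d: rng.normal(size=(8, 16)) * 0.1 for d in (1, 2)}
print(degree_specific_conv(A, X, weights, rng.normal(size=(8, 16)) * 0.1).shape)
```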
Machine learning has attracted considerable attention in recent years as a
tool to process the big data generated by ubiquitous sensors in our daily life.
High speed, low energy computing machines are in demand to enable real-time
artificial intelligence at the edge, i.e., without the support of a remote
frame server in the cloud. Such requirements challenge the complementary
metal-oxide-semiconductor (CMOS) technology, which is limited by Moore's law
approaching its end and by the communication bottleneck of conventional
computing architecture. Novel computing concepts, architectures and devices are
thus strongly needed to accelerate data-intensive applications. Here we show
that a crosspoint resistive memory circuit with a feedback configuration can
execute linear regression and logistic regression in just one step by computing the
pseudoinverse matrix of the data within the memory. The most elementary
learning operation, that is, the regression of a sequence of data and the
classification of a set of data, can thus be executed in a single
computational step by the novel technology. One-step learning is further
supported by simulations of the prediction of the cost of a house in Boston and
the training of a 2-layer neural network for MNIST digit recognition. The
results are all obtained in one computational step, thanks to the physical,
parallel, and analog computing within the crosspoint array. | [
"cs.LG",
"cs.ET",
"stat.ML"
] |
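In software, the same one-step closed form that the crosspoint circuit is reported to compute physically is a single pseudoinverse call; the toy data below are assumptions for illustration.

```python
import numpy as np

# One-step linear regression via the Moore-Penrose pseudoinverse: the circuit
# computes this physically in the memory array; numpy does it in one call.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # features (e.g., housing data)
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=100)

w = np.linalg.pinv(X) @ y                     # the single "computational step"
print(np.round(w, 3))                         # close to [2.0, -1.0, 0.5]
```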
This paper describes an exponential transient excision algorithm (ETEA). In
biomedical time series analysis, e.g., in vivo neural recording and
electrocorticography (ECoG), some measurement artifacts take the form of
piecewise exponential transients. The proposed method is formulated as an
unconstrained convex optimization problem, regularized by a smoothed l1-norm
penalty function, which can be solved by the majorization-minimization (MM)
method. With a slight modification of the regularizer, ETEA can also suppress
more irregular piecewise-smooth artifacts, especially ocular artifacts (OA) in
electroencephalography (EEG) data. Examples on synthetic signals, EEG data,
and ECoG data are presented to illustrate the proposed algorithms. | [
"cs.LG"
] |
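To see how an MM iteration for a smoothed l1-regularized problem looks, here is a minimal sketch for the simplest (denoising) case, where the quadratic majorizer of the penalty yields closed-form reweighted updates. This illustrates the solver style only; ETEA's actual formulation models piecewise exponential transients and is not reproduced here.

```python
import numpy as np

def mm_smoothed_l1_denoise(y, lam=1.0, eps=1e-8, iters=50):
    """Minimal MM sketch for min_x 0.5*||y - x||^2 + lam * sum |x_i| with a
    smoothed penalty: majorizing |x| by a quadratic at the current iterate
    gives closed-form reweighted updates x = y / (1 + lam / (|x_k| + eps))."""
    x = y.copy()
    for _ in range(iters):
        w = 1.0 / (np.abs(x) + eps)   # weights from the quadratic majorizer
        x = y / (1.0 + lam * w)       # minimizer of the surrogate
    return x

y = np.array([3.0, 0.2, -2.5, 0.05, 1.4])
print(np.round(mm_smoothed_l1_denoise(y, lam=0.5), 3))  # small entries shrink toward 0
```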
Swarms of drones are increasingly used in many practical scenarios, such as
surveillance, environmental monitoring, and search and rescue in
hard-to-access areas. While a single drone can be guided by a human
operator, the deployment of a swarm of multiple drones requires proper
algorithms for automatic task-oriented control. In this paper, we focus on
visual coverage optimization with drone-mounted camera sensors. In particular,
we consider the specific case in which the coverage requirements are uneven,
meaning that different parts of the environment have different coverage
priorities. We model these coverage requirements with relevance maps and
propose a deep reinforcement learning algorithm to guide the swarm. The paper
first defines a proper learning model for a single drone, and then extends it
to the case of multiple drones both with greedy and cooperative strategies.
Experimental results show the performance of the proposed method, also compared
with a standard patrolling algorithm. | [
"cs.CV"
] |
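A hedged sketch of how uneven coverage requirements can enter the reward signal: newly covered cells are weighted by the relevance map. The grid size, map values, and reward form are assumptions for illustration, not the paper's exact learning model.

```python
import numpy as np

def coverage_reward(relevance, covered_before, covered_now):
    """Illustrative reward for uneven coverage: newly covered cells contribute
    in proportion to their priority in the relevance map."""
    newly = covered_now & ~covered_before
    return float((relevance * newly).sum())

rng = np.random.default_rng(0)
relevance = rng.uniform(size=(8, 8))          # per-cell coverage priority
before = np.zeros((8, 8), dtype=bool)
now = before.copy()
now[2:5, 2:5] = True                          # cells a drone camera now sees
print(round(coverage_reward(relevance, before, now), 3))
```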
Estimating a representative and discriminative brain network atlas (BNA) is a
nascent research field in mapping a population of brain networks in health and
disease. Although only a few BNA estimation methods exist, they suffer from
several limitations. First, they primarily rely on a similarity network diffusion and
fusion technique, which only considers node degree as a topological measure in
the cross-network diffusion process, thereby overlooking rich topological
measures of the brain network (e.g., centrality). Second, both diffusion and
fusion techniques are implemented in a fully unsupervised manner, which might
decrease the discriminative power of the estimated BNAs. To fill these gaps, we
propose a supervised multi-topology network cross-diffusion (SM-netFusion)
framework for estimating a BNA satisfying: (i) well-representativeness
(captures shared traits across subjects), (ii) well-centeredness (optimally
close to all subjects), and (iii) high discriminativeness (can easily and
efficiently identify discriminative brain connections that distinguish between
two populations). For a specific class, given the cluster labels of the
training data, we learn a weighted combination of the topological diffusion
kernels derived from degree, closeness and eigenvector centrality measures in a
supervised manner. Specifically, we learn the cross-diffusion process by
normalizing the training brain networks using the learned diffusion kernels.
Our SM-netFusion produces the most centered and representative template in
comparison with its variants and state-of-the-art methods, and further boosts
the classification of autistic subjects by 5-15%. SM-netFusion presents the
first work for supervised network cross-diffusion based on graph topological
measures, which can be further leveraged to design an efficient graph feature
selection method for training predictive learners in network neuroscience. | [
"cs.LG",
"stat.ML"
] |
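The supervised weighting of topological kernels might be sketched as below: three centrality measures are combined with (here, fixed) weights to re-normalize a network before diffusion. The combination rule and the weights are illustrative assumptions; in SM-netFusion the weights are learned from labeled training data.

```python
import networkx as nx
import numpy as np

def weighted_centrality_kernel(G, w_deg, w_close, w_eig):
    """Illustrative weighted combination of degree, closeness and eigenvector
    centralities used to re-weight a network before cross-diffusion."""
    nodes = list(G.nodes)
    dc = nx.degree_centrality(G)
    cc = nx.closeness_centrality(G)
    ec = nx.eigenvector_centrality(G, max_iter=500)
    c = np.array([w_deg * dc[v] + w_close * cc[v] + w_eig * ec[v]
                  for v in nodes])
    A = nx.to_numpy_array(G, nodelist=nodes)
    return A * np.outer(c, c)      # topology-aware re-weighted network

G = nx.karate_club_graph()         # stand-in for a brain connectivity network
K = weighted_centrality_kernel(G, 0.5, 0.3, 0.2)
print(K.shape)
```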
Low-shot learning methods for image classification support learning from
sparse data. We extend these techniques to support dense semantic image
segmentation. Specifically, we train a network that, given a small set of
annotated images, produces parameters for a Fully Convolutional Network (FCN).
We use this FCN to perform dense pixel-level prediction on a test image for the
new semantic class. Our architecture shows a 25% relative meanIoU improvement
compared to the best baseline methods for one-shot segmentation on unseen
classes in the PASCAL VOC 2012 dataset and is at least 3 times faster. | [
"cs.CV"
] |
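The parameter-producing idea can be sketched in a few lines of PyTorch: a conditioning branch maps a support-set embedding to 1x1 convolution weights that are then applied to the query's feature map for dense prediction. The dimensions and modules below are assumptions, not the paper's trained architecture.

```python
import torch
import torch.nn.functional as F

# Illustrative "network that produces network parameters": a linear head turns
# a support-set embedding into 1x1-conv weights for the new semantic class.
feat_dim, n_classes = 64, 2
support_embedding = torch.randn(1, 256)            # from the annotated support set
weight_head = torch.nn.Linear(256, n_classes * feat_dim)

w = weight_head(support_embedding).view(n_classes, feat_dim, 1, 1)
query_features = torch.randn(1, feat_dim, 32, 32)  # FCN backbone features
logits = F.conv2d(query_features, w)               # dense per-pixel scores
print(logits.shape)                                # torch.Size([1, 2, 32, 32])
```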
Tunnel CCTVs are installed at low heights and long distance intervals.
Because of the limited installation height, a severe perspective effect arises
at a distance, making it almost impossible for the existing tunnel CCTV-based
accident detection system (Pflugfelder 2005) to detect vehicles far from the
camera. To overcome this limitation, vehicles are detected by an object
detection algorithm operating on an inverse perspective transform of the
image, with the region of interest (ROI) reset accordingly, so that vehicles
far away from the CCTV can be detected. To verify this approach, this paper
creates two datasets of images and bounding boxes, built simultaneously from
the original and the warped CCTV images, and then compares the performance of
deep learning object detection models trained on each dataset. As a result,
the model trained on the warped images detected vehicle objects at positions
far from the CCTV more accurately than the model trained on the original
images. | [
"stat.ML",
"cs.LG"
] |
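The inverse perspective transform at the heart of the approach is a standard OpenCV warp; a minimal sketch follows. The four source points marking the road region are assumptions; in practice they come from calibrating the camera's view of the lane boundaries.

```python
import cv2
import numpy as np

# Illustrative inverse perspective ("bird's-eye") warp of a tunnel CCTV frame.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in CCTV frame
src = np.float32([[480, 300], [800, 300], [1280, 720], [0, 720]])  # road ROI
dst = np.float32([[0, 0], [640, 0], [640, 720], [0, 720]])

M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(frame, M, (640, 720))
print(warped.shape)   # the detector is then trained/run on this warped view
```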
We consider the problem of determining which classes of functions can be
tested more efficiently than they can be learned, in the distribution-free
sample-based model that corresponds to the standard PAC learning setting. Our
main result shows that while VC dimension by itself does not always provide
tight bounds on the number of samples required to test a class of functions in
this model, it can be combined with a closely-related variant that we call
"lower VC" (or LVC) dimension to obtain strong lower bounds on this sample
complexity.
We use this result to obtain strong and in many cases nearly optimal lower
bounds on the sample complexity for testing unions of intervals, halfspaces,
intersections of halfspaces, polynomial threshold functions, and decision
trees. Conversely, we show that two natural classes of functions, juntas and
monotone functions, can be tested with a number of samples that is polynomially
smaller than the number of samples required for PAC learning.
Finally, we also use the connection between VC dimension and property testing
to establish new lower bounds for testing radius clusterability and testing
feasibility of linear constraint systems. | [
"cs.LG",
"cs.CC",
"cs.DS"
] |
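For context, the classical distribution-free PAC sample complexity that testing is compared against can be stated as below; this is the standard realizable-case bound, not the paper's new LVC-based lower bounds.

```latex
m_{\mathrm{PAC}}(\mathcal{H}, \varepsilon, \delta)
  \;=\; \Theta\!\left( \frac{d_{\mathrm{VC}}(\mathcal{H}) + \log(1/\delta)}{\varepsilon} \right)
```

A class is then "easier to test than to learn" when its testing sample complexity is polynomially smaller than this quantity, as shown in the abstract for juntas and monotone functions.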
Image completion has made tremendous progress with convolutional neural
networks (CNNs), because of their powerful texture modeling capacity. However,
due to some inherent properties (e.g., local inductive prior, spatial-invariant
kernels), CNNs do not perform well in understanding global structures or
naturally support pluralistic completion. Recently, transformers demonstrate
their power in modeling long-range relationships and generating diverse
results, but their computational complexity is quadratic in the input length, thus
hampering the application in processing high-resolution images. This paper
brings the best of both worlds to pluralistic image completion: appearance
prior reconstruction with transformer and texture replenishment with CNN. The
former transformer recovers pluralistic coherent structures together with some
coarse textures, while the latter CNN enhances the local texture details of
coarse priors guided by the high-resolution masked images. The proposed method
vastly outperforms state-of-the-art methods in terms of three aspects: 1) large
performance boost on image fidelity even compared to deterministic completion
methods; 2) better diversity and higher fidelity for pluralistic completion; 3)
exceptional generalization ability on large masks and generic datasets like
ImageNet. | [
"cs.CV",
"cs.GR"
] |
The Graph Convolutional Networks (GCNs) proposed by Kipf and Welling are
effective models for semi-supervised learning, but they face the obstacle of
over-smoothing, which weakens their representation ability. Recently, some
works have tackled this limitation by randomly perturbing the graph topology
or feature matrix to generate data augmentations as input for training.
However, these operations break the integrity of the information structure and
inevitably sacrifice information from the original graph at random. In this
paper, we introduce a novel definition of graph entropy as a quantitative
index of feature information diffusion over a graph. With a view to preserving
graph entropy, we propose an effective strategy that generates perturbed
training data through a stochastic mechanism while guaranteeing graph topology
integrity, at the cost of only a small decay in graph entropy. Extensive
experiments have been conducted on real-world datasets, and the results verify
the effectiveness of our proposed method in improving semi-supervised node
classification accuracy compared with a wide range of baselines. Beyond that,
our proposed approach
significantly enhances the robustness and generalization ability of GCNs during
the training process. | [
"cs.LG",
"cs.AI"
] |
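An entropy-aware perturbation loop of the kind described might look like this sketch: features are stochastically masked while the topology stays intact, and a perturbation is accepted only if a graph-entropy proxy decays within budget. The entropy definition below is a simple stand-in, not the paper's.

```python
import numpy as np

def diffusion_entropy(A, X):
    """Proxy for feature-information diffusion entropy: Shannon entropy of the
    normalized per-node energy after one propagation step."""
    H = A @ X
    e = (H ** 2).sum(axis=1)
    p = e / e.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def entropy_aware_feature_perturb(A, X, drop=0.1, max_decay=0.02,
                                  tries=50, rng=np.random.default_rng(0)):
    """Stochastically mask features while keeping the topology intact; accept
    a perturbation only if the entropy decay stays within budget."""
    base = diffusion_entropy(A, X)
    for _ in range(tries):
        mask = rng.uniform(size=X.shape) > drop
        Xp = X * mask
        if base - diffusion_entropy(A, Xp) <= max_decay:
            return Xp
    return X

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
X = rng.normal(size=(3, 4))
print(entropy_aware_feature_perturb(A, X).shape)
```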
Labeling semantic segmentation datasets is a costly and laborious process if
compared with tasks like image classification and object detection. This is
especially true for remote sensing applications that not only work with
extremely high spatial resolution data but also commonly require the knowledge
of domain experts to perform the manual labeling. Data augmentation
techniques help to improve deep learning models when labeled samples are few
and imbalanced. In this work, we propose a novel data
augmentation method focused on exploring the spatial context of remote sensing
semantic segmentation. This method, ChessMix, creates new synthetic images from
the existing training set by mixing transformed mini-patches across the dataset
in a chessboard-like grid. ChessMix prioritizes patches with more examples of
the rarest classes to alleviate the imbalance problems. The results in three
diverse well-known remote sensing datasets show that this is a promising
approach that helps to improve the networks' performance, working especially
well on datasets with little available data. The results also show that
ChessMix improves the segmentation of objects with few labeled pixels when
compared to the most widely used data augmentation methods. | [
"cs.CV",
"cs.LG"
] |
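The chessboard mixing itself can be sketched in a few lines of numpy: alternating grid cells of a base image are replaced by same-position patches from other training images (labels would be mixed with the same grid). The real ChessMix additionally transforms the mini-patches and prioritizes patches containing rare classes; those parts are omitted here.

```python
import numpy as np

def chessmix(images, grid=4, rng=np.random.default_rng(0)):
    """Illustrative chessboard-style mixing: 'black' grid cells of a base image
    are replaced with same-position patches from other training images."""
    base = images[0].copy()
    h, w = base.shape[:2]
    ph, pw = h // grid, w // grid
    for r in range(grid):
        for c in range(grid):
            if (r + c) % 2 == 1:                 # "black" squares only
                donor = images[rng.integers(1, len(images))]
                base[r*ph:(r+1)*ph, c*pw:(c+1)*pw] = \
                    donor[r*ph:(r+1)*ph, c*pw:(c+1)*pw]
    return base

imgs = [np.full((64, 64, 3), v, dtype=np.uint8) for v in (0, 85, 170, 255)]
print(chessmix(imgs).shape)  # segmentation labels get the same grid mixing
```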
Attribute editing has achieved remarkable progress in recent years owing to
the encoder-decoder structure and the generative adversarial network (GAN).
However, generating high-quality images with accurate attribute transformation
remains challenging. To attack these problems, this work proposes a novel
selective attribute editing model based on a classification adversarial
network (referred to as ClsGAN) that strikes a good balance between attribute
transfer accuracy and photo-realism. Considering that edited images are prone
to being affected by the original attributes due to the skip-connections in
the encoder-decoder structure, an upper convolution residual network (referred
to as Tr-resnet) is presented to selectively extract information from the
source image and the target label. In addition, to further improve the
transfer accuracy of the generated images, an attribute adversarial classifier
(referred to as Atta-cls) is introduced to guide the generator from the
attribute perspective by learning the defects of attribute-transferred images.
Experimental results on CelebA demonstrate that our ClsGAN performs favorably
against state-of-the-art approaches in image quality and transfer accuracy.
Moreover, ablation studies are designed to verify the effectiveness of
Tr-resnet and Atta-cls. | [
"cs.CV",
"eess.IV"
] |
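The attribute-classifier guidance can be illustrated with a minimal PyTorch sketch: a classifier scores the edited image against the target attribute vector, and the mismatch is back-propagated toward the generator's output. The tiny classifier and tensors below are stand-ins, not ClsGAN's actual Atta-cls.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of guiding a generator with an attribute classifier.
n_attrs = 5
fake_image = torch.randn(2, 3, 64, 64, requires_grad=True)  # stand-in for G(x, target)
target_attrs = torch.randint(0, 2, (2, n_attrs)).float()

attr_classifier = torch.nn.Sequential(                      # toy classifier stand-in
    torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, n_attrs))

logits = attr_classifier(fake_image)
g_attr_loss = F.binary_cross_entropy_with_logits(logits, target_attrs)
g_attr_loss.backward()       # gradients push G toward the target attributes
print(float(g_attr_loss))
```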