text | label |
---|---|
Convolutional neural networks have made remarkable achievements in the
classification of idealized point clouds; however, non-idealized point cloud
classification is still a challenging task. In this paper, DNDFN, namely,
Dual-Neighborhood Deep Fusion Network, is proposed to deal with this problem.
DNDFN has two key points. One is the combination of the local neighborhood and
the global neighborhood. k-nearest neighbor (kNN) or ball query can capture the
local neighborhood but ignores long-distance dependencies. A trainable
neighborhood learning method called TN-Learning is proposed, which can capture
the global neighborhood. TN-Learning is combined with them to obtain richer
neighborhood information. The other is information transfer convolution
(IT-Conv), which can learn the structural information between two points and
transfer features through it. Extensive experiments on idealized and
non-idealized benchmarks across four tasks verify that DNDFN achieves
state-of-the-art performance. | [
"cs.CV"
]
|
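The kNN grouping that the abstract contrasts with TN-Learning is a standard operation; here is a minimal, hedged sketch in NumPy (the function name and array layout are illustrative assumptions, not the paper's code):

```python
# Generic k-nearest-neighbor grouping for a point cloud; it captures only the
# local neighborhood and ignores long-distance dependencies, which is the
# limitation TN-Learning is designed to address.
import numpy as np

def knn_indices(points: np.ndarray, k: int) -> np.ndarray:
    # points: (N, 3) array of xyz coordinates.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, 1:k + 1]  # drop column 0 (the point itself)
```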
CNNs perform remarkably well when the training and test distributions are
i.i.d., but unseen image corruptions can cause a surprisingly large drop in
performance. In various real scenarios, unexpected distortions, such as random
noise, compression artefacts, or weather distortions, are common phenomena.
Improving performance on corrupted images must not result in degraded i.i.d.
performance - a challenge faced by many state-of-the-art robust approaches.
Image corruption types have different characteristics in the frequency spectrum
and would benefit from a targeted type of data augmentation, which, however, is
often unknown during training. In this paper, we introduce a mixture of two
expert models specializing in high and low-frequency robustness, respectively.
Moreover, we propose a new regularization scheme that minimizes the total
variation (TV) of convolution feature-maps to increase high-frequency
robustness. The approach improves on corrupted images without degrading
in-distribution performance. We demonstrate this on ImageNet-C and also for
real-world corruptions on an automotive dataset, both for object classification
and object detection. | [
"cs.CV"
]
|
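The feature-map total-variation regularizer described above can be sketched compactly; this is an illustrative reconstruction assuming PyTorch, with `feat` and `lambda_tv` as hypothetical names rather than the paper's code:

```python
# Total variation of a (N, C, H, W) convolutional feature map: the mean
# absolute difference between spatially adjacent activations. Penalizing it
# encourages smoother feature maps, targeting high-frequency robustness.
import torch

def tv_regularizer(feat: torch.Tensor) -> torch.Tensor:
    dh = (feat[:, :, 1:, :] - feat[:, :, :-1, :]).abs().mean()  # vertical diffs
    dw = (feat[:, :, :, 1:] - feat[:, :, :, :-1]).abs().mean()  # horizontal diffs
    return dh + dw

# loss = task_loss + lambda_tv * tv_regularizer(feature_map)
```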
The Squeeze-and-Excitation (SE) block presents a channel attention mechanism for
modeling global context via explicitly capturing dependencies across channels.
However, we are still far from understanding how the SE block works. In this
work, we first revisit the SE block, and then present a detailed empirical
study of the relationship between global context and attention distribution,
based on which we propose a simple yet effective module, called Linear Context
Transform (LCT) block. We divide all channels into different groups and
normalize the globally aggregated context features within each channel group,
reducing the disturbance from irrelevant channels. Through a linear transform of
the normalized context features, we model global context for each channel
independently. The LCT block is extremely lightweight and easy to plug into
different backbone models, with a negligible increase in parameters and
computational burden. Extensive experiments show that the LCT block
outperforms the SE block on image classification on ImageNet and
object detection/segmentation on the COCO dataset with different backbone
models. Moreover, LCT yields consistent performance gains over existing
state-of-the-art detection architectures, e.g., 1.5$\sim$1.7% AP$^{bbox}$ and
1.0$\sim$1.2% AP$^{mask}$ improvements on the COCO benchmark, irrespective of
different baseline models of varied capacities. We hope our simple yet
effective approach will shed some light on future research of attention-based
models. | [
"cs.LG",
"cs.AI",
"cs.CV"
]
|
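A hedged sketch of an LCT-style block, assuming PyTorch; the group count, initialization, and sigmoid gating here are plausible choices for illustration, not the paper's exact settings:

```python
# Linear Context Transform-style block: aggregate global context per channel,
# normalize it within channel groups, then apply a per-channel linear
# transform and a sigmoid gate to reweight the input.
import torch
import torch.nn as nn

class LCTBlock(nn.Module):
    def __init__(self, channels: int, groups: int = 16, eps: float = 1e-5):
        super().__init__()
        self.groups, self.eps = groups, eps
        self.weight = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.bias = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        n, c, _, _ = x.shape
        ctx = x.mean(dim=(2, 3), keepdim=True)          # global average context
        g = ctx.view(n, self.groups, c // self.groups)  # split into channel groups
        g = (g - g.mean(-1, keepdim=True)) / (
            g.var(-1, keepdim=True, unbiased=False) + self.eps).sqrt()
        gate = torch.sigmoid(self.weight * g.view(n, c, 1, 1) + self.bias)
        return x * gate                                 # channel-wise reweighting
```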
Representation learning (RL) plays an important role in extracting proper
representations from complex medical data for various analyzing tasks, such as
patient grouping, clinical endpoint prediction and medication recommendation.
Medical data can be divided into two typical categories, outpatient and
inpatient, which have different data characteristics. However, few existing
RL methods are specifically designed for inpatient data, which has strong
temporal relations and consistent diagnoses. In addition, for unordered medical
activity sets, existing medical RL methods utilize a simple pooling strategy,
which results in indistinguishable contributions among the activities for
learning. In this work, we propose Inpatient2Vec, a novel model for learning
three kinds of representations for inpatients: medical activity, hospital
day, and diagnosis. A multi-layer self-attention mechanism with two training
tasks is designed to capture the inpatient data characteristics and process the
unordered set. Using a real-world dataset, we demonstrate that the proposed
approach outperforms the competitive baselines on semantic similarity
measurement and clinical events prediction tasks. | [
"cs.LG",
"stat.ML"
]
|
Learning in sparse reward settings remains a challenge in Reinforcement
Learning, which is often addressed by using intrinsic rewards. One promising
strategy is inspired by human curiosity, requiring the agent to learn to
predict the future. In this paper, a curiosity-driven agent is extended to use
these predictions directly for training. To achieve this, the agent predicts
the value function of the next state at any point in time. Subsequently, the
consistency of this prediction with the current value function is measured,
which is then used as a regularization term in the loss function of the
algorithm. Experiments were conducted on grid-world environments as well as on a 3D
navigation task, both with sparse rewards. In the first case the extended agent
is able to learn significantly faster than the baselines. | [
"cs.LG",
"cs.AI",
"stat.ML"
]
|
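A minimal sketch of the value-consistency regularizer described above, assuming PyTorch; the tensor names and the stop-gradient on the target are illustrative assumptions:

```python
# Penalize disagreement between the agent's forward prediction of V(s_{t+1})
# and the value function actually evaluated at s_{t+1}; detach() keeps the
# regularizer from dragging the value target toward the prediction.
import torch.nn.functional as F

def consistency_loss(predicted_next_value, value_next_state):
    return F.mse_loss(predicted_next_value, value_next_state.detach())

# total_loss = rl_loss + beta * consistency_loss(v_hat_next, v_next)
```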
Current fiducial marker detection algorithms rely on marker IDs for false
positive rejection. Time is wasted on potential detections that will eventually
be rejected as false positives. We introduce ChromaTag, a fiducial marker and
detection algorithm designed to use opponent colors to limit and quickly reject
initial false detections and grayscale for precise localization. Through
experiments, we show that ChromaTag is significantly faster than current
fiducial markers while achieving similar or better detection accuracy. We also
show how tag size and viewing direction affect detection accuracy. Our
contribution is significant because fiducial markers are often used in
real-time applications (e.g. marker assisted robot navigation) where heavy
computation is required by other parts of the system. | [
"cs.CV"
]
|
Current image segmentation techniques usually require that the user tune
several parameters in order to obtain maximum segmentation accuracy, a
computationally inefficient approach, especially when a large number of images
must be processed sequentially in daily practice. The use of evolving fuzzy
systems for designing a method that automatically adjusts parameters to segment
medical images according to the quality expectation of expert users has been
proposed recently (evolving fuzzy image segmentation, EFIS). However, EFIS
suffers from a few limitations when used in practice mainly due to some fixed
parameters. For instance, EFIS depends on auto-detection of the object of
interest for feature calculation, a task that is highly application-dependent.
This shortcoming limits the applicability of EFIS, which was proposed with the
ultimate goal of offering a generic but adjustable segmentation scheme. In this
paper, a new version of EFIS is proposed to overcome these limitations. The new
EFIS, called self-configuring EFIS (SC-EFIS), uses available training data to
self-estimate the parameters that are fixed in EFIS. In addition, the proposed
SC-EFIS relies on a feature selection process that does not require
auto-detection of an ROI. The proposed SC-EFIS was evaluated using the same
segmentation algorithms and the same dataset as for EFIS. The results show that
SC-EFIS can provide the same results as EFIS but with a higher level of
automation. | [
"cs.CV"
]
|
As machine learning is increasingly deployed in the real world, it is ever
more vital that we understand the decision-criteria of the models we train.
Recently, researchers have shown that influence functions, a statistical
measure of sample impact, may be extended to approximate the effects of
training samples on classification accuracy for deep neural networks. However,
prior work only applies to supervised learning setups where training and
testing share an objective function. Despite the rise in unsupervised learning,
self-supervised learning, and model pre-training, there are currently no
suitable technologies for estimating influence of deep networks that do not
train and test on the same objective. To overcome this limitation, we provide
the first theoretical and empirical demonstration that influence functions can
be extended to handle mismatched training and testing settings. Our result
enables us to compute the influence of unsupervised and self-supervised
training examples with respect to a supervised test objective. We demonstrate
this technique on a synthetic dataset as well as two Skip-gram language model
examples to examine cluster membership and sources of unwanted bias. | [
"cs.LG"
]
|
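For orientation, the classical influence-function approximation (Koh and Liang, 2017) that the abstract generalizes can be written as below; the paper's extension is to let the train and test losses be different objectives:

```latex
% Influence of a training point z on the test loss at z_test; H is the Hessian
% of the training objective at the fitted parameters.
\mathcal{I}(z, z_{\text{test}})
  = -\nabla_\theta L_{\text{test}}(z_{\text{test}}, \hat\theta)^{\top}
     H_{\hat\theta}^{-1}\,
     \nabla_\theta L_{\text{train}}(z, \hat\theta),
\qquad
H_{\hat\theta} = \frac{1}{n}\sum_{i=1}^{n}
     \nabla_\theta^2 L_{\text{train}}(z_i, \hat\theta)
```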
We introduce a novel representation learning method to disentangle
pose-dependent as well as view-dependent factors from 2D human poses. The
method trains a network using cross-view mutual information maximization
(CV-MIM) which maximizes mutual information of the same pose performed from
different viewpoints in a contrastive learning manner. We further propose two
regularization terms to ensure disentanglement and smoothness of the learned
representations. The resulting pose representations can be used for cross-view
action recognition. To evaluate the power of the learned representations, in
addition to the conventional fully-supervised action recognition settings, we
introduce a novel task called single-shot cross-view action recognition. This
task trains models with actions from only one single viewpoint while models are
evaluated on poses captured from all possible viewpoints. We evaluate the
learned representations on standard benchmarks for action recognition, and show
that (i) CV-MIM performs competitively compared with the state-of-the-art
models in the fully-supervised scenarios; (ii) CV-MIM outperforms other
competing methods by a large margin in the single-shot cross-view setting;
(iii) and the learned representations can significantly boost the performance
when reducing the amount of supervised training data. Our code is made publicly
available at
https://github.com/google-research/google-research/tree/master/poem | [
"cs.CV"
]
|
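A generic InfoNCE-style contrastive objective in the spirit of the cross-view mutual-information maximization above, assuming PyTorch; this is a standard formulation for illustration, not the paper's exact CV-MIM loss or its regularizers:

```python
# z_a, z_b: (N, D) embeddings of the same N poses seen from two viewpoints.
# Matching rows are positives; all other pairs in the batch are negatives.
import torch
import torch.nn.functional as F

def infonce(z_a, z_b, temperature: float = 0.1):
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature                   # (N, N) similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)
```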
Causal structure discovery from observational data is fundamental to the
causal understanding of autonomous systems such as medical decision support
systems, advertising campaigns and self-driving cars. This is essential to
solve well-known causal decision making and prediction problems associated with
those real-world applications. Recently, recursive causal discovery algorithms
have gained particular attention among the research community due to their
ability to provide good results by using Conditional Independence (CI) tests in
smaller sub-problems. However, each such algorithm needs a refinement
function to remove undesired causal relations of the discovered graphs.
Notably, with the increase of the problem size, the computation cost (i.e., the
number of CI-tests) of the refinement function makes an algorithm expensive to
deploy in practice. This paper proposes a generic causal structure refinement
strategy that can locate the undesired relations with a small number of
CI-tests, thus speeding up the algorithm for large and complex problems. We
theoretically prove the correctness of our algorithm. We then empirically
evaluate its performance against the state-of-the-art algorithms in terms of
solution quality and completion time in synthetic and real datasets. | [
"cs.LG",
"cs.AI"
]
|
Registration is a fundamental task in medical robotics and is often a crucial
step for many downstream tasks such as motion analysis, intra-operative
tracking and image segmentation. Popular registration methods such as ANTs and
NiftyReg optimize objective functions for each pair of images from scratch,
which are time-consuming for 3D and sequential images with complex
deformations. Recently, deep learning-based registration approaches such as
VoxelMorph have been emerging and achieve competitive performance. In this
work, we construct a test-time training scheme for deep deformable image
registration to improve the generalization ability of conventional
learning-based registration models. We design multi-scale deep networks to
consecutively model the residual deformations, which is effective for highly variational
deformations. Extensive experiments validate the effectiveness of multi-scale
deep registration with test-time training based on Dice coefficient for image
segmentation and mean square error (MSE), normalized local cross-correlation
(NLCC) for tissue dense tracking tasks. Two videos are in
https://www.youtube.com/watch?v=NvLrCaqCiAE and
https://www.youtube.com/watch?v=pEA6ZmtTNuQ | [
"cs.CV",
"cs.LG",
"cs.NE",
"cs.RO",
"eess.IV"
]
|
Multi-agent reinforcement learning (MARL) requires coordination to
efficiently solve certain tasks. Fully centralized control is often infeasible
in such domains due to the size of joint action spaces. Coordination-graph-based
formalizations allow reasoning about the joint action based on the
structure of interactions. However, they often require domain expertise in
their design. This paper introduces the deep implicit coordination graph (DICG)
architecture for such scenarios. DICG consists of a module for inferring the
dynamic coordination graph structure which is then used by a graph neural
network based module to learn to implicitly reason about the joint actions or
values. DICG allows learning the tradeoff between full centralization and
decentralization via standard actor-critic methods to significantly improve
coordination for domains with a large number of agents. We apply DICG to both
centralized-training-centralized-execution and
centralized-training-decentralized-execution regimes. We demonstrate that DICG
solves the relative overgeneralization pathology in predator-prey tasks as
well as outperforms various MARL baselines on the challenging StarCraft II
Multi-agent Challenge (SMAC) and traffic junction environments. | [
"cs.LG",
"cs.AI",
"cs.MA"
]
|
To read the final version, please go to IEEE TGRS on IEEE Xplore.
Convolutional neural networks (CNNs) have been attracting increasing attention
in hyperspectral (HS) image classification, owing to their ability to capture
spatial-spectral feature representations. Nevertheless, their ability in
modeling relations between samples remains limited. Beyond the limitations of
grid sampling, graph convolutional networks (GCNs) have been recently proposed
and successfully applied in irregular (or non-grid) data representation and
analysis. In this paper, we thoroughly investigate CNNs and GCNs (qualitatively
and quantitatively) in terms of HS image classification. Due to the
construction of the adjacency matrix on all the data, traditional GCNs usually
suffer from a huge computational cost, particularly in large-scale remote
sensing (RS) problems. To this end, we develop a new mini-batch GCN (called
miniGCN hereinafter) which allows training large-scale GCNs in a mini-batch
fashion. More significantly, our miniGCN is capable of inferring out-of-sample
data without re-training networks and improving classification performance.
Furthermore, as CNNs and GCNs can extract different types of HS features, an
intuitive solution to break the performance bottleneck of a single model is to
fuse them. Since miniGCNs can perform batch-wise network training (enabling the
combination of CNNs and GCNs), we explore three fusion strategies: additive
fusion, element-wise multiplicative fusion, and concatenation fusion to measure
the obtained performance gain. Extensive experiments, conducted on three HS
datasets, demonstrate the advantages of miniGCNs over GCNs and the superiority
of the tested fusion strategies with regard to the single CNN or GCN models.
The codes of this work will be available at
https://github.com/danfenghong/IEEE_TGRS_GCN for the sake of reproducibility. | [
"cs.CV"
]
|
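The three fusion strategies named above are simple to state in code; a hedged sketch assuming PyTorch tensors with matching batch dimensions (feature shapes and any projection layers are omitted for brevity):

```python
# Additive, element-wise multiplicative, and concatenation fusion of CNN- and
# GCN-extracted features, as compared in the abstract.
import torch

def fuse(f_cnn: torch.Tensor, f_gcn: torch.Tensor, mode: str = "concat"):
    if mode == "add":
        return f_cnn + f_gcn
    if mode == "mul":
        return f_cnn * f_gcn
    if mode == "concat":
        return torch.cat([f_cnn, f_gcn], dim=1)
    raise ValueError(f"unknown fusion mode: {mode}")
```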
Providing Reinforcement Learning agents with expert advice can dramatically
improve various aspects of learning. Prior work has developed teaching
protocols that enable agents to learn efficiently in complex environments; many
of these methods tailor the teacher's guidance to agents with a particular
representation or underlying learning scheme, offering effective but
specialized teaching procedures. In this work, we explore protocol programs, an
agent-agnostic schema for Human-in-the-Loop Reinforcement Learning. Our goal is
to incorporate the beneficial properties of a human teacher into Reinforcement
Learning without making strong assumptions about the inner workings of the
agent. We show how to represent existing approaches such as action pruning,
reward shaping, and training in simulation as special cases of our schema and
conduct preliminary experiments on simple domains. | [
"cs.LG",
"cs.AI"
]
|
In this paper, we present an omnidirectional localization and dense mapping
system for a wide-baseline multiview stereo setup with ultra-wide field-of-view
(FOV) fisheye cameras, which has 360-degree coverage of stereo observations
of the environment. For more practical and accurate reconstruction, we first
introduce improved and lightweight deep neural networks for the
omnidirectional depth estimation, which are faster and more accurate than the
existing networks. Second, we integrate our omnidirectional depth estimates
into the visual odometry (VO) and add a loop closing module for global
consistency. Using the estimated depth map, we reproject keypoints onto each
other view, which leads to a better and more efficient feature matching
process. Finally, we fuse the omnidirectional depth maps and the estimated rig
poses into the truncated signed distance function (TSDF) volume to acquire a 3D
map. We evaluate our method on synthetic datasets with ground-truth and
real-world sequences of challenging environments, and the extensive experiments
show that the proposed system generates excellent reconstruction results in
both synthetic and real-world environments. | [
"cs.CV",
"cs.RO"
]
|
We describe a method for 3D human pose estimation from transient images
(i.e., a 3D spatio-temporal histogram of photons) acquired by an optical
non-line-of-sight (NLOS) imaging system. Our method can perceive 3D human pose
by `looking around corners' through the use of light indirectly reflected by
the environment. We bring together a diverse set of technologies from NLOS
imaging, human pose estimation and deep reinforcement learning to construct an
end-to-end data processing pipeline that converts a raw stream of photon
measurements into a full 3D human pose sequence estimate. Our contributions are
the design of a data representation process, which includes (1) a learnable
inverse point spread function (PSF) to convert raw transient images into a deep
feature vector; (2) a neural humanoid control policy conditioned on the
transient image feature and learned from interactions with a physics simulator;
and (3) a data synthesis and augmentation strategy based on depth data that can
be transferred to a real-world NLOS imaging system. Our preliminary experiments
suggest that our method is able to generalize to real-world NLOS measurement to
estimate physically-valid 3D human poses. | [
"cs.CV",
"cs.LG",
"cs.RO",
"eess.IV"
]
|
We propose SelfDoc, a task-agnostic pre-training framework for document image
understanding. Because documents are multimodal and are intended for sequential
reading, our framework exploits the positional, textual, and visual information
of every semantically meaningful component in a document, and it models the
contextualization between each block of content. Unlike existing document
pre-training models, our model is coarse-grained instead of treating individual
words as input, therefore avoiding overly fine-grained representations with
excessive contextualization. Beyond that, we introduce cross-modal learning in the model
pre-training phase to fully leverage multimodal information from unlabeled
documents. For downstream usage, we propose a novel modality-adaptive attention
mechanism for multimodal feature fusion by adaptively emphasizing language and
vision signals. Our framework benefits from self-supervised pre-training on
documents without requiring annotations by a feature masking training strategy.
It achieves superior performance on multiple downstream tasks with
significantly fewer document images used in the pre-training stage compared to
previous works. | [
"cs.CV",
"cs.CL"
]
|
In this paper, a multi-modal 360$^{\circ}$ framework for 3D object detection
and tracking for autonomous vehicles is presented. The process is divided into
four main stages. First, images are fed into a CNN network to obtain instance
segmentation of the surrounding road participants. Second, LiDAR-to-image
association is performed for the estimated mask proposals. Then, the isolated
points of every object are processed by a PointNet ensemble to compute their
corresponding 3D bounding boxes and poses. Lastly, a tracking stage based on
an Unscented Kalman Filter is used to track the agents over time. The solution,
based on a novel sensor fusion configuration, provides accurate and reliable
road environment detection. A wide variety of tests of the system, deployed in
an autonomous vehicle, have successfully assessed the suitability of the
proposed perception stack in a real autonomous driving application. | [
"cs.CV",
"cs.RO"
]
|
The single-stage deep learning approach to 2D object detection was made popular
by the Single Shot MultiBox Detector (SSD) and has been heavily adopted in
several embedded applications. PointPillars is a state-of-the-art 3D object
detection algorithm that uses a Single Shot Detector adapted for 3D object
detection. The main downside of PointPillars is that it has a two-stage
approach, with a learned input representation based on fully connected layers
followed by the Single Shot Detector for 3D detection. In this paper we present
Single Shot 3D Object Detection (SS3D) - a single-stage 3D object detection
algorithm which combines a straightforward, statistically computed input
representation and a Single Shot Detector (based on PointPillars). Computing
the input representation is straightforward, does not involve learning, and
does not incur much computational cost. We also extend our method to stereo
input and show that, aided by additional semantic segmentation input, our
method produces accuracy similar to state-of-the-art stereo-based detectors.
Achieving the accuracy of two-stage detectors using a single-stage approach is
important, as single-stage approaches are simpler to implement in embedded,
real-time applications. With LiDAR as well as stereo input, our method
outperforms PointPillars. When using
LiDAR input, our input representation is able to improve the AP3D of Cars
objects in the moderate category from 74.99 to 76.84. When using stereo input,
our input representation is able to improve the AP3D of Cars objects in the
moderate category from 38.13 to 45.13. Our results are also better than other
popular 3D object detectors such as AVOD and F-PointNet. | [
"cs.CV",
"cs.LG",
"eess.IV"
]
|
This paper evaluates the approach of imaging time-series data such as EEG in
the diagnosis of epilepsy through Deep Neural Networks (DNNs). The EEG signal is
transformed into an RGB image using the Gramian Angular Summation Field (GASF).
Many such EEG epochs are transformed into GASF images for the normal and focal
EEG signals. Then, some of the widely used Deep Neural Networks for image
classification problems are used here to detect the focal GASF images. Three
pre-trained DNNs, namely AlexNet, VGG16, and VGG19, are validated for
epilepsy detection based on the transfer learning approach. Furthermore, the
textural features are extracted from GASF images, and prominent features are
selected for a multilayer Artificial Neural Network (ANN) classifier. Lastly, a
Custom Convolutional Neural Network (CNN) with three CNN layers, Batch
Normalization, Max-pooling layer, and Dense layers, is proposed for epilepsy
diagnosis from GASF images. The results of this paper show that the Custom CNN
model was able to discriminate between the focal and normal GASF images with an
average peak Precision of 0.885, Recall of 0.92, and F1-score of 0.90.
Moreover, the Area Under the Curve (AUC) value of the Receiver Operating
Characteristic (ROC) curve is 0.92 for the Custom CNN model. This paper
suggests that Deep Learning methods widely used in image classification
problems can be an alternative approach for epilepsy detection from EEG signals
through GASF images. | [
"cs.CV",
"eess.SP"
]
|
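The GASF transform used above follows a standard recipe; a hedged NumPy sketch (the rescaling convention is one common choice and may differ from the paper's exact preprocessing):

```python
# Gramian Angular Summation Field: rescale the 1D signal to [-1, 1], map each
# sample to a polar angle, and form GASF[i, j] = cos(phi_i + phi_j).
import numpy as np

def gasf(signal: np.ndarray) -> np.ndarray:
    x = 2 * (signal - signal.min()) / (signal.max() - signal.min() + 1e-12) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])
```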
We consider a power transmission system monitored with Phasor Measurement
Units (PMUs) placed at significant, but not all, nodes of the system. Assuming
that a sufficient number of distinct single-line faults, specifically pre-fault
state and (not cleared) post-fault state, are recorded by the PMUs and are
available for training, we, first, design a comprehensive sequence of Neural
Networks (NNs) locating the faulty line. The performance of different NNs in the
sequence, including Linear Regression, Feed-Forward NN, AlexNet, Graphical
Convolutional NN, Neural Linear ODE and Neural Graph-based ODE, ordered
according to the type and amount of the power flow physics involved, are
compared for different levels of observability. Second, we build a sequence of
advanced Power-System-Dynamics-Informed and Neural-ODE based Machine Learning
schemes trained, given pre-fault state, to predict the post-fault state and
also, in parallel, to estimate system parameters. Third and finally,
continuing with the first (fault localization) setting, we design an
(NN-based) algorithm which discovers optimal PMU placement. | [
"stat.ML",
"cs.LG",
"physics.data-an",
"physics.soc-ph"
]
|
Over the past few years machine learning has seen a renewed explosion of
interest, following a number of studies showing the effectiveness of neural
networks in a range of tasks which had previously been considered incredibly
hard. Neural networks' effectiveness in the fields of image recognition and
natural language processing stems primarily from the vast amounts of data
available to companies and researchers, coupled with the huge amounts of
compute power available in modern accelerators such as GPUs, FPGAs and ASICs.
There are a number of approaches available to developers for utilizing GPGPU
technologies such as SYCL, OpenCL and CUDA; however, many applications require
the same low-level mathematical routines. Libraries dedicated to accelerating
these common routines allow developers to easily make full use of the available
hardware without requiring low-level knowledge of the hardware themselves;
however, such libraries are often provided by hardware manufacturers for
specific hardware such as cuDNN for Nvidia hardware or MIOpen for AMD hardware.
SYCL-DNN is a new open-source library dedicated to providing accelerated
routines for neural network operations which are hardware and vendor agnostic.
Built on top of the SYCL open standard and written entirely in standard C++,
SYCL-DNN allows a user to easily accelerate neural network code for a wide
range of hardware using a modern C++ interface. The library is tested on AMD's
OpenCL for GPU, Intel's OpenCL for CPU and GPU, ARM's OpenCL for Mali GPUs as
well as ComputeAorta's OpenCL for R-Car CV engine and host CPU. In this talk we
will present performance figures for SYCL-DNN on this range of hardware, and
discuss how high performance was achieved on such a varied set of accelerators
with such different hardware features. | [
"cs.LG",
"cs.DC",
"cs.PF"
]
|
We propose a unified data-driven framework based on inverse optimal transport
that can learn an adaptive, nonlinear interaction cost function from a noisy and
incomplete empirical matching matrix and predict new matchings in various
matching contexts. We emphasize that discrete optimal transport plays the
role of a variational principle which gives rise to an optimization-based
framework for modeling the observed empirical matching data. Our formulation
leads to a non-convex optimization problem which can be solved efficiently by
an alternating optimization method. A key novel aspect of our formulation is
the incorporation of marginal relaxation via regularized Wasserstein distance,
significantly improving the robustness of the method in the face of noisy or
missing empirical matching data. Our model falls into the category of
prescriptive models, which not only predict potential future matchings but
also explain what leads to the empirical matching and quantify the impact
of changes in matching factors. The proposed approach has wide applicability
including predicting matching in online dating, labor markets, college
applications, and crowdsourcing. We back up our claims with numerical experiments
on both synthetic data and real world data sets. | [
"stat.ML",
"cs.LG"
]
|
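The entropically regularized transport problems referenced above are typically solved with Sinkhorn iterations; a minimal NumPy sketch of that standard subroutine (not the paper's full inverse-OT procedure with marginal relaxation):

```python
# Sinkhorn iterations for entropic optimal transport: C is the cost matrix,
# a and b are the marginals, eps the entropic regularization strength.
import numpy as np

def sinkhorn(C, a, b, eps=0.1, iters=200):
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan with marginals (a, b)
```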
Image captioning is a research hotspot where encoder-decoder models combining
convolutional neural network (CNN) and long short-term memory (LSTM) achieve
promising results. Despite significant progress, these models generate
sentences differently from human cognitive styles. Existing models often
generate a complete sentence from the first word to the end, without
considering the influence of the following words on the whole sentence
generation. In this paper, we explore the utilization of a human-like cognitive
style, i.e., building overall cognition for the image to be described and the
sentence to be constructed, for enhancing computer image understanding. This
paper first proposes a Mutual-aid network structure with Bidirectional LSTMs
(MaBi-LSTMs) for acquiring overall contextual information. In the training
process, the forward and backward LSTMs encode the succeeding and preceding
words into their respective hidden states by simultaneously constructing the
whole sentence in a complementary manner. In the captioning process, the LSTM
implicitly utilizes the subsequent semantic information contained in its hidden
states. In fact, MaBi-LSTMs can generate two sentences in forward and backward
directions. To bridge the gap between cross-domain models and generate a
sentence with higher quality, we further develop a cross-modal attention
mechanism to retouch the two sentences by fusing their salient parts as well as
the salient areas of the image. Experimental results on the Microsoft COCO
dataset show that the proposed model improves the performance of
encoder-decoder models and achieves state-of-the-art results. | [
"cs.CV"
]
|
Recent advances in neural forecasting have produced major improvements in
accuracy for probabilistic demand prediction. In this work, we propose novel
improvements to the current state of the art by incorporating changes inspired
by recent advances in Transformer architectures for Natural Language
Processing. We develop a novel decoder-encoder attention for context-alignment,
improving forecasting accuracy by allowing the network to study its own history
based on the context for which it is producing a forecast. We also present a
novel positional encoding that allows the neural network to learn
context-dependent seasonality functions as well as arbitrary holiday distances.
Finally, we show that the current state-of-the-art MQ-Forecaster (Wen et al.,
2017) models display excess variability by failing to leverage previous errors
in the forecast to improve accuracy. We propose a novel decoder-self attention
scheme for forecasting that produces significant improvements in the excess
variation of the forecast. | [
"cs.LG",
"stat.ML"
]
|
Automatic differentiation (autodiff) has revolutionized machine learning. It
allows expressing complex computations by composing elementary ones in creative
ways and removes the burden of computing their derivatives by hand. More
recently, differentiation of optimization problem solutions has attracted
widespread attention with applications such as optimization as a layer, and in
bi-level problems such as hyper-parameter optimization and meta-learning.
However, the formulas for these derivatives often involve case-by-case tedious
mathematical derivations. In this paper, we propose a unified, efficient and
modular approach for implicit differentiation of optimization problems. In our
approach, the user defines (in Python in the case of our implementation) a
function $F$ capturing the optimality conditions of the problem to be
differentiated. Once this is done, we leverage autodiff of $F$ and implicit
differentiation to automatically differentiate the optimization problem. Our
approach thus combines the benefits of implicit differentiation and autodiff.
It is efficient as it can be added on top of any state-of-the-art solver and
modular as the optimality condition specification is decoupled from the
implicit differentiation mechanism. We show that seemingly simple principles
allow us to recover many recently proposed implicit differentiation methods and
create new ones easily. We demonstrate the ease of formulating and solving
bi-level optimization problems using our framework. We also showcase an
application to the sensitivity analysis of molecular dynamics. | [
"cs.LG",
"cs.NA",
"math.NA",
"stat.ML"
]
|
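The core mechanism described above reduces to the implicit function theorem applied to a user-supplied optimality map F; a toy sketch assuming JAX, with a closed-form inner solver chosen purely for illustration:

```python
# F(theta, x) = 0 encodes optimality of the inner problem; then
# dx*/dtheta = -(dF/dx)^{-1} dF/dtheta. Toy inner problem: minimize
# (x - theta)^2, whose optimality condition is F = 2 * (x - theta).
import jax

def F(theta, x):
    return 2.0 * (x - theta)

def solve(theta):
    return theta              # closed-form inner solution for this toy case

def dx_dtheta(theta):
    x_star = solve(theta)
    dF_dx = jax.grad(F, argnums=1)(theta, x_star)
    dF_dtheta = jax.grad(F, argnums=0)(theta, x_star)
    return -dF_dtheta / dF_dx

print(dx_dtheta(3.0))         # -> 1.0, as expected since x*(theta) = theta
```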
This paper introduces SuperGlue, a neural network that matches two sets of
local features by jointly finding correspondences and rejecting non-matchable
points. Assignments are estimated by solving a differentiable optimal transport
problem, whose costs are predicted by a graph neural network. We introduce a
flexible context aggregation mechanism based on attention, enabling SuperGlue
to reason about the underlying 3D scene and feature assignments jointly.
Compared to traditional, hand-designed heuristics, our technique learns priors
over geometric transformations and regularities of the 3D world through
end-to-end training from image pairs. SuperGlue outperforms other learned
approaches and achieves state-of-the-art results on the task of pose estimation
in challenging real-world indoor and outdoor environments. The proposed method
performs matching in real-time on a modern GPU and can be readily integrated
into modern SfM or SLAM systems. The code and trained weights are publicly
available at https://github.com/magicleap/SuperGluePretrainedNetwork. | [
"cs.CV"
]
|
This paper proposes a deep learning based method for colored transparent
object matting from a single image. Existing approaches for transparent object
matting often require multiple images and long processing times, which greatly
hinder their applications on real-world transparent objects. The recently
proposed TOM-Net can produce a matte for a colorless transparent object from a
single image in a single fast feed-forward pass. In this paper, we extend
TOM-Net to handle colored transparent objects by modeling the intrinsic color of
a transparent object with a color filter. We formulate the problem of colored
transparent object matting as simultaneously estimating an object mask, a color
filter, and a refractive flow field from a single image, and present a deep
learning framework for learning this task. We create a large-scale synthetic
dataset for training our network. We also capture a real dataset for
evaluation. Experiments on both synthetic and real datasets show promising
results, which demonstrate the effectiveness of our method. | [
"cs.CV"
]
|
Fairness-aware classification is receiving increasing attention in the
machine learning field. Recent research proposes to formulate
fairness-aware classification as a constrained optimization problem. However,
several limitations exist in previous works due to the lack of a theoretical
framework for guiding the formulation. In this paper, we propose a general
framework for learning fair classifiers which addresses previous limitations.
The framework formulates various commonly-used fairness metrics as convex
constraints that can be directly incorporated into classic classification
models. Within the framework, we propose a constraint-free criterion on the
training data which ensures that any classifier learned from the data is fair.
We also derive the constraints which ensure that the real fairness metric is
satisfied when surrogate functions are used to achieve convexity. Our framework
can be used for formulating fairness-aware classification with fairness
guarantees and computational efficiency. The experiments using real-world
datasets demonstrate our theoretical results and show the effectiveness of
the proposed framework and methods. | [
"cs.LG",
"cs.AI",
"cs.CY",
"stat.ML"
]
|
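One concrete instance of a convex fairness constraint in the spirit described above is bounding the covariance between the sensitive attribute and the decision scores; a hedged sketch assuming cvxpy (this particular constraint is a known device from the fairness literature, not necessarily the paper's formulation):

```python
# Logistic regression with a convex fairness constraint: (s - mean(s)) @ (X @ w)
# is affine in w, so its absolute value is convex and the problem is DCP-valid.
import cvxpy as cp
import numpy as np

def fair_logreg(X, y, s, cov_bound=0.01):
    # X: (n, d) features; y in {-1, +1}; s: sensitive attribute per sample.
    n, d = X.shape
    w = cp.Variable(d)
    loss = cp.sum(cp.logistic(cp.multiply(-y, X @ w))) / n
    cov = cp.abs((s - s.mean()) @ (X @ w)) / n
    prob = cp.Problem(cp.Minimize(loss), [cov <= cov_bound])
    prob.solve()
    return w.value
```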
Generative Adversarial Network (GAN) and its variants have recently attracted
intensive research interests due to their elegant theoretical foundation and
excellent empirical performance as generative models. These tools provide a
promising direction in the studies where data availability is limited. One
common issue in GANs is that the density of the learned generative distribution
could concentrate on the training data points, meaning that they can easily
remember training samples due to the high model complexity of deep networks.
This becomes a major concern when GANs are applied to private or sensitive data
such as patient medical records, and the concentration of distribution may
divulge critical patient information. To address this issue, in this paper we
propose a differentially private GAN (DPGAN) model, in which we achieve
differential privacy in GANs by adding carefully designed noise to gradients
during the learning procedure. We provide rigorous proof for the privacy
guarantee, as well as comprehensive empirical evidence to support our analysis,
where we demonstrate that our method can generate high quality data points at a
reasonable privacy level. | [
"cs.LG",
"cs.CR",
"stat.ML"
]
|
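The "carefully designed noise to gradients" step above follows the usual differential-privacy recipe of per-sample clipping plus Gaussian noise; a hedged PyTorch sketch with illustrative constants (not the paper's exact mechanism or privacy accounting):

```python
# Clip each per-sample gradient to a fixed L2 norm, sum, add Gaussian noise
# scaled to the clipping norm, and average. Clipping bounds each sample's
# contribution, which is what makes the noise level meaningful for privacy.
import torch

def privatize_gradients(per_sample_grads, clip_norm=1.0, noise_std=1.0):
    clipped = [g * (clip_norm / torch.clamp(g.norm(), min=clip_norm))
               for g in per_sample_grads]
    g_sum = torch.stack(clipped).sum(dim=0)
    noise = noise_std * clip_norm * torch.randn_like(g_sum)
    return (g_sum + noise) / len(per_sample_grads)
```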
In this work, we address the task of natural image generation guided by a
conditioning input. We introduce a new architecture called conditional
invertible neural network (cINN). The cINN combines the purely generative INN
model with an unconstrained feed-forward network, which efficiently
preprocesses the conditioning input into useful features. All parameters of the
cINN are jointly optimized with a stable, maximum likelihood-based training
procedure. By construction, the cINN does not experience mode collapse and
generates diverse samples, in contrast to e.g. cGANs. At the same time our
model produces sharp images since no reconstruction loss is required, in
contrast to e.g. VAEs. We demonstrate these properties for the tasks of MNIST
digit generation and image colorization. Furthermore, we take advantage of our
bi-directional cINN architecture to explore and manipulate emergent properties
of the latent space, such as changing the image style in an intuitive way. | [
"cs.CV",
"cs.LG",
"68T01"
]
|
This work is devoted to unresolved problems of Artificial General
Intelligence - the inefficiency of transfer learning. One of the mechanisms
that are used to solve this problem in the area of reinforcement learning is a
model-based approach. In this paper we expand the schema networks method,
which allows extracting the logical relationships between objects and actions
from the environment data. We present algorithms for training a Delta Schema
Network (DSN), predicting future states of the environment and planning actions
that will lead to positive reward. DSN shows strong performance of transfer
learning on the classic Atari game environment. | [
"cs.LG",
"cs.AI"
]
|
3D Multi-object tracking (MOT) is crucial to autonomous systems. Recent work
uses a standard tracking-by-detection pipeline, where feature extraction is
first performed independently for each object in order to compute an affinity
matrix. Then the affinity matrix is passed to the Hungarian algorithm for data
association. A key process of this standard pipeline is to learn discriminative
features for different objects in order to reduce confusion during data
association. In this work, we propose two techniques to improve the
discriminative feature learning for MOT: (1) instead of obtaining features for
each object independently, we propose a novel feature interaction mechanism by
introducing the Graph Neural Network. As a result, the feature of one object is
informed of the features of other objects so that the object feature can lean
towards objects with similar features (i.e., objects probably with the same ID)
and deviate from objects with dissimilar features (i.e., objects probably with
different IDs), leading to a more discriminative feature for each object; (2)
instead of obtaining the feature from either 2D or 3D space in prior work, we
propose a novel joint feature extractor to learn appearance and motion features
from 2D and 3D space simultaneously. As features from different modalities
often have complementary information, the joint feature can be more
discriminative than features from each individual modality. To ensure that the
joint feature extractor does not heavily rely on one modality, we also propose
an ensemble training paradigm. Through extensive evaluation, our proposed
method achieves state-of-the-art performance on KITTI and nuScenes 3D MOT
benchmarks. Our code will be made available at
https://github.com/xinshuoweng/GNN3DMOT | [
"cs.CV",
"cs.LG",
"eess.IV"
]
|
A number of machine learning tasks entail a high degree of invariance: the
data distribution does not change if we act on the data with a certain group of
transformations. For instance, labels of images are invariant under
translations of the images. Certain neural network architectures -- for
instance, convolutional networks -- are believed to owe their success to the
fact that they exploit such invariance properties. With the objective of
quantifying the gain achieved by invariant architectures, we introduce two
classes of models: invariant random features and invariant kernel methods. The
latter includes, as a special case, the neural tangent kernel for convolutional
networks with global average pooling. We consider uniform covariates
distributions on the sphere and hypercube and a general invariant target
function. We characterize the test error of invariant methods in a
high-dimensional regime in which the sample size and number of hidden units
scale as polynomials in the dimension, for a class of groups that we call
`degeneracy $\alpha$', with $\alpha \leq 1$. We show that exploiting invariance
in the architecture saves a $d^\alpha$ factor ($d$ stands for the dimension) in
sample size and number of hidden units to achieve the same test error as for
unstructured architectures.
Finally, we show that output symmetrization of an unstructured kernel
estimator does not give a significant statistical improvement; on the other
hand, data augmentation with an unstructured kernel estimator is equivalent to
an invariant kernel estimator and enjoys the same improvement in statistical
efficiency. | [
"stat.ML",
"cs.LG",
"math.ST",
"stat.TH",
"62J99 (Primary)"
]
|
Although there are a great number of adversarial attacks on deep learning
based classifiers, how to attack object detection systems has been rarely
studied. In this paper, we propose a Half-Neighbor Masked Projected Gradient
Descent (HNM-PGD) based attack, which can generate strong perturbation to fool
different kinds of detectors under strict constraints. We also applied the
proposed HNM-PGD attack in the CIKM 2020 AnalytiCup Competition, which was
ranked within the top 1% on the leaderboard. We release the code at
https://github.com/YanghaoZYH/HNM-PGD. | [
"cs.CV",
"cs.LG"
]
|
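For context, the projected-gradient-descent core that HNM-PGD builds on is standard; a generic PyTorch sketch (the half-neighbor masking that gives the attack its name is the paper's contribution and is omitted here):

```python
# L_inf PGD: repeatedly step along the sign of the input gradient, then
# project back into the eps-ball around the clean input and the valid range.
import torch

def pgd_attack(model, x, y, loss_fn, eps=8/255, alpha=2/255, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project to the eps-ball
            x_adv = x_adv.clamp(0, 1)                  # stay a valid image
    return x_adv.detach()
```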
Adversarial example generation becomes a viable method for evaluating the
robustness of a machine learning model. In this paper, we consider hard-label
black-box attacks (a.k.a. decision-based attacks), which is a challenging
setting that generates adversarial examples based on only a series of black-box
hard-label queries. This type of attacks can be used to attack discrete and
complex models, such as Gradient Boosting Decision Tree (GBDT) and
detection-based defense models. Existing decision-based attacks based on
iterative local updates often get stuck in a local minimum and fail to generate
the optimal adversarial example with the smallest distortion. To remedy this
issue, we propose an efficient meta algorithm called BOSH-attack, which
tremendously improves existing algorithms through Bayesian Optimization (BO)
and Successive Halving (SH). In particular, instead of traversing a single
solution path when searching for an adversarial example, we maintain a pool of
solution paths to explore important regions. We show empirically that the
proposed algorithm converges to a better solution than existing approaches,
while the query count is smaller than applying multiple random initializations
by a factor of 10. | [
"cs.LG",
"stat.ML"
]
|
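The successive-halving component named above can be sketched in a few lines; an illustrative, generic version (the Bayesian-optimization half and the actual attack objective are omitted, and `score` is an assumed interface):

```python
# Keep the better half of the candidate solution paths each round, spending
# the query budget on survivors; `score` is assumed to return the distortion
# achieved by a candidate under a given budget (lower is better).
def successive_halving(candidates, score, budget_per_round=10):
    while len(candidates) > 1:
        ranked = sorted(candidates, key=lambda c: score(c, budget_per_round))
        candidates = ranked[:max(1, len(ranked) // 2)]
    return candidates[0]
```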
Kernel methods are popular in clustering due to their generality and
discriminating power. However, we show that many kernel clustering criteria
have density biases theoretically explaining some practically significant
artifacts empirically observed in the past. For example, we provide conditions
and formally prove the density mode isolation bias in kernel K-means for a
common class of kernels. We call it Breiman's bias due to its similarity to the
histogram mode isolation previously discovered by Breiman in decision tree
learning with Gini impurity. We also extend our analysis to other popular
kernel clustering methods, e.g. average/normalized cut or dominant sets, where
density biases can take different forms. For example, splitting isolated points
by cut-based criteria is essentially the sparsest subset bias, which is the
opposite of the density mode bias. Our findings suggest that a principled
solution for density biases in kernel clustering should directly address data
inhomogeneity. We show that density equalization can be implicitly achieved
using either locally adaptive weights or locally adaptive kernels. Moreover,
density equalization makes many popular kernel clustering objectives
equivalent. Our synthetic and real data experiments illustrate density biases
and proposed solutions. We anticipate that theoretical understanding of kernel
clustering limitations and their principled solutions will be important for a
broad spectrum of data analysis applications across the disciplines. | [
"stat.ML"
]
|
Drawing and annotating comic illustrations is a complex and difficult
process. No existing machine learning algorithms have been developed to create
comic illustrations based on descriptions of illustrations, or the dialogue in
comics. Moreover, it is not known if a generative adversarial network (GAN) can
generate original comics that correspond to the dialogue and/or descriptions.
GANs are successful in producing photo-realistic images, but this technology
does not necessarily translate to generation of flawless comics. What is more,
comic evaluation is a prominent challenge as common metrics such as Inception
Score will not perform comparably, as they are designed to work on photos. In
this paper: 1. We implement ComicGAN, a novel text-to-comic pipeline based on a
text-to-image GAN that synthesizes comics according to text descriptions. 2. We
describe an in-depth empirical study of the technical difficulties of comic
generation using GANs. ComicGAN has two novel features: (i) text description
creation from labels via permutation and augmentation, and (ii) custom image
encoding with Convolutional Neural Networks. We extensively evaluate the
proposed ComicGAN in two scenarios, namely image generation from descriptions,
and image generation from dialogue. Our results on 1000 Dilbert comic panels
and 6000 descriptions show that synthetic comic panels generated from text
inputs resemble original Dilbert panels. Novel methods for text description creation and custom
image encoding brought improvements to Frechet Inception Distance, detail, and
overall image quality over baseline algorithms. Generating illustrations from
descriptions provided clear comics including characters and colours that were
specified in the descriptions. | [
"cs.CV",
"cs.LG"
]
|
We describe a method for fast approximation of sparse coding. The input space
is subdivided by a binary decision tree, and we simultaneously learn a
dictionary and assignment of allowed dictionary elements for each leaf of the
tree. We store a lookup table with the assignments and the pseudoinverses for
each node, allowing for very fast inference. We give an algorithm for learning
the tree, the dictionary, and the dictionary element assignment. In the
process of describing this algorithm, we discuss the more general problem of
learning the groups in group structured sparse modelling. We show that our
method creates good sparse representations by using it in the object
recognition framework of \cite{lazebnik06,yang-cvpr-09}. Implementing our own
fast version of the SIFT descriptor, the whole system runs at 20 frames per
second on $321 \times 481$ sized images on a laptop with a quad-core cpu, while
sacrificing very little accuracy on the Caltech 101 and 15 scenes benchmarks. | [
"cs.CV"
]
|
Lake ice is a strong climate indicator and has been recognised as part of the
Essential Climate Variables (ECV) by the Global Climate Observing System
(GCOS). The dynamics of freezing and thawing, and possible shifts of freezing
patterns over time, can help in understanding the local and global climate
systems. One way to acquire the spatio-temporal information about lake ice
formation, independent of clouds, is to analyse webcam images. This paper
intends to move towards a universal model for monitoring lake ice with freely
available webcam data. We demonstrate good performance, including the ability
to generalise across different winters and different lakes, with a
state-of-the-art Convolutional Neural Network (CNN) model for semantic image
segmentation, Deeplab v3+. Moreover, we design a variant of that model, termed
Deep-U-Lab, which predicts sharper, more correct segmentation boundaries. We
have tested the model's ability to generalise with data from multiple camera
views and two different winters. On average, it achieves
intersection-over-union (IoU) values of ~71% across different cameras and ~69%
across different winters, greatly outperforming prior work. Going further,
we show that the model achieves 60% IoU on arbitrary images scraped from
photo-sharing web sites. As part of the work, we introduce a new benchmark
dataset of webcam images, Photi-LakeIce, from multiple cameras and two
different winters, along with pixel-wise ground truth annotations. | [
"cs.CV",
"eess.IV"
]
|
In this work, we introduce a novel local pairwise descriptor and then develop
a simple, effective iterative method to solve the resulting quadratic
assignment through sparsity control for shape correspondence between two
approximate isometric surfaces. Our pairwise descriptor is based on the
stiffness and mass matrix of finite element approximation of the
Laplace-Beltrami differential operator, which is local in space, sparse to
represent, and extremely easy to compute while containing global information.
It allows us to deal with open surfaces, partial matching, and topological
perturbations robustly. To solve the resulting quadratic assignment problem
efficiently, the two key ideas of our iterative algorithm are: 1) select pairs
with good (approximate) correspondence as anchor points, 2) solve a regularized
quadratic assignment problem only in the neighborhood of selected anchor points
through sparsity control. These two ingredients can improve and increase the
number of anchor points quickly while reducing the computation cost in each
quadratic assignment iteration significantly. With enough high-quality anchor
points, one may use various pointwise global features with reference to these
anchor points to further improve the dense shape correspondence. We use various
experiments to show the efficiency, quality, and versatility of our method on
large data sets, patches, and point clouds (without global meshes). | [
"cs.CV"
]
|
Driven by the urgent demand for managing remote sensing big data, large-scale
remote sensing image retrieval (RSIR) attracts increasing attention in the
remote sensing field. In general, existing retrieval methods can be regarded as
visual-based retrieval approaches which search and return a set of similar
images from a database to a given query image. Although retrieval methods have
achieved great success, there is still a question that needs to be answered:
can we obtain the accurate semantic labels of the returned similar images
to further help analyze and process imagery? Inspired by the above
question, in this paper, we redefine the image retrieval problem as visual and
semantic retrieval of images. Specifically, we propose a novel deep hashing
convolutional neural network (DHCNN) to simultaneously retrieve the similar
images and classify their semantic labels in a unified framework. In more
detail, a convolutional neural network (CNN) is used to extract
high-dimensional deep features. Then, a hash layer is perfectly inserted into
the network to transfer the deep features into compact hash codes. In addition,
a fully connected layer with a softmax function is applied to the hash layer to
generate the class distribution. Finally, a loss function is elaborately designed
to simultaneously consider the label loss of each image and similarity loss of
pairs of images. Experimental results on two remote sensing datasets
demonstrate that the proposed method achieves state-of-the-art retrieval and
classification performance. | [
"cs.CV",
"eess.IV"
]
|
Classical global convergence results for first-order methods rely on uniform
smoothness and the \L{}ojasiewicz inequality. Motivated by properties of
objective functions that arise in machine learning, we propose a non-uniform
refinement of these notions, leading to \emph{Non-uniform Smoothness} (NS) and
\emph{Non-uniform \L{}ojasiewicz inequality} (N\L{}). The new definitions
inspire new geometry-aware first-order methods that are able to converge to
global optimality faster than the classical $\Omega(1/t^2)$ lower bounds. To
illustrate the power of these geometry-aware methods and their corresponding
non-uniform analysis, we consider two important problems in machine learning:
policy gradient optimization in reinforcement learning (PG), and generalized
linear model training in supervised learning (GLM). For PG, we find that
normalizing the gradient ascent method can accelerate convergence to
$O(e^{-t})$ while incurring less overhead than existing algorithms. For GLM, we
show that geometry-aware normalized gradient descent can also achieve a linear
convergence rate, which significantly improves the best known results. We
additionally show that the proposed geometry-aware descent methods escape
landscape plateaus faster than standard gradient descent. Experimental results
are used to illustrate and complement the theoretical findings. | [
"cs.LG"
]
|
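The normalized-gradient idea highlighted above is easy to state; a hedged NumPy sketch with an illustrative constant step size (the paper's analysis and step-size schedules are more refined):

```python
# Geometry-aware normalized gradient descent: step along grad / ||grad||,
# so progress does not stall on plateaus where raw gradients are tiny.
import numpy as np

def normalized_gd(grad_fn, x0, eta=0.1, steps=100, tol=1e-12):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad_fn(x)
        n = np.linalg.norm(g)
        if n < tol:
            break
        x = x - eta * g / n
    return x
```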
In this paper, a new interval type-2 fuzzy neural network able to construct
non-separable fuzzy rules with adaptive shapes is introduced. To reflect the
uncertainty, the shapes of the fuzzy sets are considered to be uncertain. Therefore, a
new form of interval type-2 fuzzy sets based on a general Gaussian model able
to construct different shapes (including triangular, bell-shaped, trapezoidal)
is proposed. To consider the interactions among input variables, input vectors
are transformed to new feature spaces with uncorrelated variables proper for
defining each fuzzy rule. Next, the new features are fed to a fuzzification
layer using proposed interval type-2 fuzzy sets with adaptive shape.
Consequently, interval type-2 non-separable fuzzy rules with proper shapes,
considering the local interactions of variables and the uncertainty are formed.
For type reduction, the contributions of the upper and lower firing strengths of
each fuzzy rule are adaptively selected separately. To train the different
parameters of the network, the Levenberg-Marquardt optimization method is
utilized. The performance of the proposed method is investigated on clean and
noisy datasets to show the ability to consider the uncertainty. Moreover, the
proposed paradigm is successfully applied to real-world time-series
predictions, regression problems, and nonlinear system identification.
According to the experimental results, our proposed model outperforms other
methods while having a more parsimonious structure. | [
"cs.LG",
"cs.AI"
]
|
Time series with long-term structure arise in a variety of contexts and
capturing this temporal structure is a critical challenge in time series
analysis for both inference and forecasting settings. Traditionally, state
space models have been successful in providing uncertainty estimates of
trajectories in the latent space. More recently, deep, attention-based
approaches have achieved state-of-the-art performance for sequence modeling,
though they often require large amounts of data and parameters to do so. We propose
Stanza, a nonlinear, non-stationary state space model as an intermediate
approach to fill the gap between traditional models and modern deep learning
approaches for complex time series. Stanza strikes a balance between
competitive forecasting accuracy and probabilistic, interpretable inference for
highly structured time series. In particular, Stanza achieves forecasting
accuracy competitive with deep LSTMs on real-world datasets, especially for
multi-step ahead forecasting. | [
"stat.ML",
"cs.LG"
]
|
Reinforcement learning has steadily improved since the resurgence of deep
neural networks and now outperforms humans in many traditional games. However,
this success is not easily transferred to autonomous driving: real-world state
spaces are extremely complex, and action spaces are continuous and require fine
control. Moreover, autonomous vehicles must also maintain functional safety in
complex environments. To deal with these challenges, we first adopt the deep
deterministic policy gradient (DDPG) algorithm, which has the capacity to
handle complex state and action spaces in the continuous domain. We then choose
The Open Racing Car Simulator (TORCS) as our environment to avoid physical
damage. Meanwhile, we select an appropriate set of sensor inputs from TORCS and
design our own reward function. In order to fit the DDPG algorithm to TORCS, we
design our network architecture for both the actor and critic inside the DDPG
paradigm. To demonstrate the effectiveness of our model, we
evaluate on different modes in TORCS and show both quantitative and qualitative
results. | [
"cs.CV",
"cs.LG",
"cs.RO"
]
|
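A compact sketch of the actor and critic networks such a DDPG setup might use (layer widths and the 29-d TORCS-style sensor vector are illustrative assumptions):

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 29, 3  # e.g., TORCS sensors; steer/accel/brake (illustrative)

class Actor(nn.Module):
    """Deterministic policy: state -> continuous action in [-1, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 300), nn.ReLU(),
            nn.Linear(300, 300), nn.ReLU(),
            nn.Linear(300, ACTION_DIM), nn.Tanh(),
        )
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Q(s, a): the action is concatenated with the state features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 300), nn.ReLU(),
            nn.Linear(300, 300), nn.ReLU(),
            nn.Linear(300, 1),
        )
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=1))

print(Actor()(torch.randn(1, STATE_DIM)).shape)  # torch.Size([1, 3])
```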
The lack of transparency of neural networks remains a major barrier to their
use. The Layer-wise Relevance Propagation (LRP) technique builds heat-maps
representing the relevance of each input to the model's decision. The relevance
spreads backward from the last to the first layer of the deep neural network.
Layer-wise Relevance Propagation does not handle normalization layers; in this
work we suggest a method to include them. Specifically, we
build an equivalent network that fuses normalization layers with convolutional or
fully connected layers. Heatmaps obtained with our method on the MNIST and CIFAR-10
datasets are more accurate for convolutional layers. Our study also cautions
against using Layer-wise Relevance Propagation with networks that combine
fully connected layers and normalization layers. | [
"cs.LG",
"cs.AI",
"stat.ML"
]
|
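The fusion step is well defined for batch normalization; a minimal sketch, assuming a standard Conv2d + BatchNorm2d pair with groups=1:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold a BatchNorm layer into the preceding convolution so the fused
    network is mathematically equivalent at inference time:
        y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta
          = conv'(x)  with  W' = gamma / sqrt(var + eps) * W
                            b' = gamma * (b - mean) / sqrt(var + eps) + beta
    Assumes groups=1 and default dilation."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride, conv.padding, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
    bias = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
    fused.bias.copy_((bias - bn.running_mean) * scale + bn.bias)
    return fused
```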
Dataset augmentation, the practice of applying a wide array of
domain-specific transformations to synthetically expand a training set, is a
standard tool in supervised learning. While effective in tasks such as visual
recognition, the set of transformations must be carefully designed,
implemented, and tested for every new domain, limiting its re-use and
generality. In this paper, we adopt a simpler, domain-agnostic approach to
dataset augmentation. We start with existing data points and apply simple
transformations such as adding noise, interpolating, or extrapolating between
them. Our main insight is to perform the transformation not in input space, but
in a learned feature space. A re-kindling of interest in unsupervised
representation learning makes this technique timely and more effective. It is a
simple proposal, but to date one that has not been tested empirically. Working
in the space of context vectors generated by sequence-to-sequence models, we
demonstrate a technique that is effective for both static and sequential data. | [
"stat.ML",
"cs.LG"
]
|
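A minimal sketch of feature-space augmentation by interpolation/extrapolation (function and parameter names are ours, not the paper's):

```python
import numpy as np

def augment_in_feature_space(z, neighbors, mode="extrapolate", lam=0.5, noise=0.0):
    """Domain-agnostic augmentation applied to learned feature vectors z
    using same-class neighbors (a sketch of the idea, not the paper's code).
    interpolate:  z + lam * (n - z)   moves toward a neighbor
    extrapolate:  z + lam * (z - n)   pushes beyond the sample
    """
    z, neighbors = np.asarray(z), np.asarray(neighbors)
    if mode == "interpolate":
        out = z + lam * (neighbors - z)
    else:
        out = z + lam * (z - neighbors)
    if noise > 0:
        out = out + np.random.normal(scale=noise, size=out.shape)
    return out

z = np.array([[1.0, 2.0]])
n = np.array([[2.0, 1.0]])
print(augment_in_feature_space(z, n, mode="extrapolate"))  # [[0.5, 2.5]]
```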
Several state-of-the-art video deblurring methods are based on a strong
assumption that the captured scenes are static. These methods fail to deblur
blurry videos in dynamic scenes. We propose a video deblurring method to deal
with general blurs inherent in dynamic scenes, contrary to other methods. To
handle locally varying and general blurs caused by various sources, such as
camera shake, moving objects, and depth variation in a scene, we approximate
the pixel-wise blur kernel with bidirectional optical flows. Therefore, we propose a
single energy model that simultaneously estimates optical flows and latent
frames to solve our deblurring problem. We also provide a framework and
efficient solvers to optimize the energy model. By minimizing the proposed
energy function, we achieve significant improvements in removing blurs and
estimating accurate optical flows in blurry frames. Extensive experimental
results demonstrate the superiority of the proposed method on real and
challenging videos in which state-of-the-art methods fail at either deblurring or
optical flow estimation. | [
"cs.CV"
]
|
We prove a Chernoff-type bound for sums of matrix-valued random variables
sampled via a regular (aperiodic and irreducible) finite Markov chain.
Specifically, consider a random walk on a regular Markov chain and a Hermitian
matrix-valued function on its state space. Our result gives exponentially
decreasing bounds on the tail distributions of the extreme eigenvalues of the
sample mean matrix. Our proof is based on the matrix expander (regular
undirected graph) Chernoff bound [Garg et al. STOC '18] and scalar
Chernoff-Hoeffding bounds for Markov chains [Chung et al. STACS '12].
Our matrix Chernoff bound for Markov chains can be applied to analyze the
behavior of co-occurrence statistics for sequential data, which have been
common and important data signals in machine learning. We show that given a
regular Markov chain with $n$ states and mixing time $\tau$, we need a
trajectory of length $O(\tau (\log{(n)}+\log{(\tau)})/\epsilon^2)$ to achieve
an estimator of the co-occurrence matrix with error bound $\epsilon$. We
conduct several experiments and the experimental results are consistent with
the exponentially fast convergence rate from theoretical analysis. Our result
gives the first bound on the convergence rate of the co-occurrence matrix and
the first sample complexity analysis in graph representation learning. | [
"stat.ML",
"cs.LG",
"math.PR"
]
|
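For intuition, the co-occurrence statistic in question can be estimated from a trajectory like this (a sketch; the exact window weighting may differ from the paper's convention):

```python
import numpy as np

def cooccurrence_from_walk(walk, n_states, window=2):
    """Estimate the co-occurrence matrix of a state sequence: count pairs
    (walk[i], walk[i+r]) for offsets r = 1..window, then normalize. This is
    the quantity whose convergence the matrix Chernoff bound controls."""
    C = np.zeros((n_states, n_states))
    for r in range(1, window + 1):
        for a, b in zip(walk[:-r], walk[r:]):
            C[a, b] += 1.0
            C[b, a] += 1.0   # symmetric counting
    return C / C.sum()

walk = np.random.randint(0, 4, size=10000)  # stand-in for a Markov chain trajectory
print(cooccurrence_from_walk(walk, n_states=4).round(3))
```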
We propose ScheduleNet, a RL-based real-time scheduler, that can solve
various types of multi-agent scheduling problems. We formulate these problems
as a semi-MDP with episodic reward (makespan) and learn ScheduleNet, a
decentralized decision-making policy that can effectively coordinate multiple
agents to complete tasks. The decision making procedure of ScheduleNet
includes: (1) representing the state of a scheduling problem with the
agent-task graph, (2) extracting node embeddings for agent and task nodes, which
capture the important relational information among agents and tasks, by employing
type-aware graph attention (TGA), and (3) computing the assignment probability
with the computed node embeddings. We validate the effectiveness of ScheduleNet
as a general learning-based scheduler for solving various types of multi-agent
scheduling tasks, including the multiple traveling salesman problem (mTSP) and the job
shop scheduling problem (JSP). | [
"cs.LG",
"cs.AI",
"cs.MA",
"cs.SY",
"eess.SY"
]
|
Value Iteration Networks (VINs) have emerged as a popular method to
incorporate planning algorithms within deep reinforcement learning, enabling
performance improvements on tasks requiring long-range reasoning and
understanding of environment dynamics. This came with several limitations,
however: the model is not incentivised in any way to perform meaningful
planning computations, the underlying state space is assumed to be discrete,
and the Markov decision process (MDP) is assumed fixed and known. We propose
eXecuted Latent Value Iteration Networks (XLVINs), which combine recent
developments across contrastive self-supervised learning, graph representation
learning and neural algorithmic reasoning to alleviate all of the above
limitations, successfully deploying VIN-style models on generic environments.
XLVINs match the performance of VIN-like models when the underlying MDP is
discrete, fixed and known, and provide significant improvements over model-free
baselines across three general MDP setups. | [
"cs.LG",
"cs.AI",
"stat.ML"
]
|
Recent years have witnessed the emergence and development of graph neural
networks (GNNs), which have been shown as a powerful approach for graph
representation learning in many tasks, such as node classification and graph
classification. Research on the robustness of these models has also started
to attract attention in the machine learning field. However, most of the
existing work in this area focuses on GNNs for node-level tasks, while little
work has been done to study the robustness of GNNs for the graph
classification task. In this paper, we aim to explore the vulnerability of
Hierarchical Graph Pooling (HGP) neural networks, which are advanced GNNs that
perform very well on graph classification in terms of prediction accuracy.
We propose an adversarial attack framework for this task. Specifically, we
design a surrogate model that consists of convolutional and pooling operators
to generate adversarial samples to fool the hierarchical GNN-based graph
classification models. We set the preserved nodes by the pooling operator as
our attack targets, and then we perturb the attack targets slightly to fool the
pooling operator in hierarchical GNNs so that they will select the wrong nodes
to preserve. We show the adversarial samples generated from multiple datasets
by our surrogate model have enough transferability to attack current
state-of-the-art graph classification models. Furthermore, we conduct robust
training on the target models and demonstrate that the retrained graph
classification models are able to better defend against attacks from the
adversarial samples. To the best of our knowledge, this is the first work on
the adversarial attack against hierarchical GNN-based graph classification
models. | [
"cs.LG",
"cs.CR",
"stat.ML"
]
|
The Jaccard index, also known as Intersection-over-Union (IoU score), is one
of the most critical evaluation metrics in medical image segmentation. However,
directly optimizing the mean IoU (mIoU) score over multiple objective classes
is an open problem. Although some algorithms have been proposed to optimize its
surrogates, there is no guarantee provided for their generalization ability. In
this paper, we present a novel data-distribution-aware margin calibration
method for a better generalization of the mIoU over the whole
data-distribution, underpinned by a rigorous lower bound. This scheme ensures a
better segmentation performance in terms of IoU scores in practice. We evaluate
the effectiveness of the proposed margin calibration method on two medical
image segmentation datasets, showing substantial improvements of IoU scores
over other learning schemes using deep segmentation models. | [
"cs.CV"
]
|
Zero-shot object detection (ZSD), the task that extends conventional
detection models to detecting objects from unseen categories, has emerged as a
new challenge in computer vision. Most existing approaches tackle the ZSD task
with a strict mapping-transfer strategy, which may lead to suboptimal ZSD
results: 1) the learning process of those models ignores the available unseen
class information, and thus can be easily biased towards the seen categories;
2) the original visual feature space is not well-structured and lacks
discriminative information. To address these issues, we develop a novel
Semantics-Guided Contrastive Network for ZSD, named ContrastZSD, a detection
framework that first brings the contrastive learning mechanism into the realm of
zero-shot detection. In particular, ContrastZSD incorporates two
semantics-guided contrastive learning subnets that contrast between
region-category and region-region pairs respectively. The pairwise contrastive
tasks take advantage of additional supervision signals derived from both ground
truth labels and a pre-defined class similarity distribution. Under the guidance
of this explicit semantic supervision, the model can learn more knowledge
about unseen categories to avoid being biased towards seen concepts, while
optimizing the data structure of visual features to be more discriminative for
better visual-semantic alignment. Extensive experiments are conducted on two
popular benchmarks for ZSD, i.e., PASCAL VOC and MS COCO. Results show that our
method outperforms the previous state-of-the-art on both ZSD and generalized
ZSD tasks. | [
"cs.CV"
]
|
Solving complex computer vision tasks by deep learning techniques relies on
large amounts of (supervised) image data, typically unavailable in industrial
environments. The lack of training data starts to impede the successful
transfer of state-of-the-art methods in computer vision to industrial
applications. We introduce BlendTorch, an adaptive Domain Randomization (DR)
library, to help create infinite streams of synthetic training data.
BlendTorch generates data by massively randomizing low-fidelity simulations and
takes care of distributing artificial training data for model learning in
real-time. We show that models trained with BlendTorch repeatedly perform
better in an industrial object detection task than those trained on real or
photo-realistic datasets. | [
"cs.CV",
"cs.LG"
]
|
Place recognition plays an essential role in the field of autonomous driving
and robot navigation. Although a number of point cloud based methods have been
proposed and achieved promising results, few of them take the size difference
of objects into consideration. For small objects like pedestrians and vehicles,
large receptive fields will capture unrelated information, while small
receptive fields would fail to encode complete geometric information for large
objects such as buildings. We argue that fixed receptive fields are not well
suited for place recognition, and propose a novel Adaptive Receptive Field
Module (ARFM), which can adaptively adjust the size of the receptive field
based on the input point cloud. We also present a novel network architecture,
named TransLoc3D, to obtain discriminative global descriptors of point clouds
for the place recognition task. TransLoc3D consists of a 3D sparse
convolutional module, an ARFM module, an external transformer network that
aims to capture long-range dependencies, and a NetVLAD layer. Experiments show
that our method outperforms prior state-of-the-art results, with an improvement
of 1.1\% on average recall@1 on the Oxford RobotCar dataset, and 0.8\% on the
B.D. dataset. | [
"cs.CV"
]
|
It is widely recognized that deeper networks, or networks with more feature
maps, tend to perform better. Existing studies mainly focus on extending
network depth and increasing the number of feature maps. At the same time,
horizontal expansion of networks (e.g., the Inception model) as an alternative
way to improve network performance has not been fully investigated.
Accordingly, we propose NeuroTreeNet (NTN), a new horizontal extension network
formed through the combination of random forests and the Inception model. Based on
the tree structure, in which each branch represents a network and the root node
features are shared to child nodes, network parameters are effectively reduced.
By combining all features of the leaf nodes, even fewer feature maps achieve better
performance. In addition, the relationship between the tree structure and the
performance of NTN is investigated in depth. Compared to other networks (e.g.,
VDSR\_5) with parameters of equal magnitude, our model shows preferable
performance on the super-resolution reconstruction task. | [
"cs.CV"
]
|
Understanding intrinsic patterns and predicting spatiotemporal
characteristics of cities require a comprehensive representation of urban
neighborhoods. Existing works relied on either inter- or intra-region
connectivities to generate neighborhood representations but failed to fully
utilize the informative yet heterogeneous data within neighborhoods. In this
work, we propose Urban2Vec, an unsupervised multi-modal framework which
incorporates both street view imagery and point-of-interest (POI) data to learn
neighborhood embeddings. Specifically, we use a convolutional neural network to
extract visual features from street view images while preserving geospatial
similarity. Furthermore, we model each POI as a bag-of-words containing its
category, rating, and review information. Analogous to document embedding in
natural language processing, we establish the semantic similarity between
neighborhood ("document") and the words from its surrounding POIs in the vector
space. By jointly encoding visual, textual, and geospatial information into the
neighborhood representation, Urban2Vec can achieve performances better than
baseline models and comparable to fully-supervised methods in downstream
prediction tasks. Extensive experiments on three U.S. metropolitan areas also
demonstrate the model interpretability, generalization capability, and its
value in neighborhood similarity analysis. | [
"cs.LG",
"cs.CV",
"stat.ML"
]
|
The recent success of Generative Adversarial Networks (GAN) is a result of
their ability to generate high quality images from a latent vector space. An
important application is the generation of images from a text description,
where the text description is encoded and further used in the conditioning of
the generated image. Thus the generative network has to additionally learn a
mapping from the text latent vector space to a highly complex and multi-modal
image data distribution, which makes the training of such models challenging.
To handle the complexities of fashion image and meta data, we propose Ontology
Generative Adversarial Networks (O-GANs) for fashion image synthesis that is
conditioned on a hierarchical fashion ontology in order to improve the image
generation fidelity. We show that the incorporation of the ontology leads to
better image quality as measured by Fr\'{e}chet Inception Distance and
Inception Score. Additionally, we show that the O-GAN achieves better
conditioning results evaluated by implicit similarity between the text and the
generated image. | [
"cs.LG",
"stat.ML"
]
|
We describe a method for visual question answering which is capable of
reasoning about contents of an image on the basis of information extracted from
a large-scale knowledge base. The method not only answers natural language
questions using concepts not contained in the image, but can provide an
explanation of the reasoning by which it developed its answer. The method is
capable of answering far more complex questions than the predominant long
short-term memory-based approach, and outperforms it significantly in testing.
We also provide a dataset and a protocol by which to evaluate such methods,
thus addressing one of the key issues in general visual question answering. | [
"cs.CV",
"cs.CL"
]
|
Time-spatial data play a crucial role in fields such as traffic management.
These data can be collected via devices such as surveillance sensors or
tracking systems. However, how to efficiently analyze and visualize these data
to capture essential embedded pattern information is becoming a big challenge
today. Classic visualization approaches focus on revealing 2D and 3D spatial
information and modeling statistical tests. Those methods easily fail when data
become massive. Recent attempts concern how to simply cluster data and perform
prediction with time-oriented information. However, those approaches could
still be further enhanced, as they also have limitations in handling massive
clusters and labels. In this paper, we propose a visualization methodology for
mobility data using artificial neural network techniques. This method
integrates three main parts: a back-end data model; neural network algorithms,
including the clustering method Self-Organizing Map (SOM) and the prediction
approach Recurrent Neural Network (RNN), for extracting the features; and
lastly a solid front-end that displays the results to users within an
interactive system. SOM is able to cluster the visiting patterns and detect
abnormal patterns. RNN can perform the prediction for time series analysis
using its dynamic architecture. Furthermore, an interactive system enables
users to interpret the results with graphics, animation and 3D models for
closed-loop feedback. This method can be particularly applied to two tasks:
commercial promotion and abnormal traffic pattern detection. | [
"cs.CV"
]
|
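A minimal SOM training loop of the kind such a pipeline could use for clustering visiting patterns (a generic textbook SOM, not the paper's implementation):

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
    """Minimal Self-Organizing Map: for each sample, find the best-matching
    unit and pull it and its grid neighbors toward the sample, with learning
    rate and neighborhood radius decaying over training."""
    h, w = grid
    weights = np.random.rand(h, w, data.shape[1])
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in data:
            frac = step / n_steps
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
            bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
            d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
            g = np.exp(-d2 / (2 * sigma ** 2))[..., None]  # Gaussian neighborhood
            weights += lr * g * (x - weights)
            step += 1
    return weights

patterns = np.random.rand(200, 5)  # stand-in for mobility feature vectors
som = train_som(patterns)
```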
Deep multi-task learning attracts much attention in recent years as it
achieves good performance in many applications. Feature learning is important
to deep multi-task learning for sharing common information among tasks. In this
paper, we propose a Hierarchical Graph Neural Network (HGNN) to learn augmented
features for deep multi-task learning. The HGNN consists of two-level graph
neural networks. At the low level, an intra-task graph neural network is
responsible for learning a powerful representation for each data point in a task
by aggregating its neighbors. Based on the learned representation, a task
embedding can be generated for each task in a similar way to max pooling. In
the second level, an inter-task graph neural network updates task embeddings of
all the tasks based on the attention mechanism to model task relations. Then
the task embedding of one task is used to augment the feature representation of
data points in this task. Moreover, for classification tasks, an inter-class
graph neural network is introduced to conduct similar operations at a finer
granularity, i.e., the class level, generating class embeddings for each class
in all the tasks, which are then used to augment the feature representation.
The proposed feature augmentation strategy can be used in many deep multi-task
learning models. We analyze the HGNN in terms of training and generalization
losses. Experiments on real-world datasets show significant performance
improvement when using this strategy. | [
"cs.LG",
"stat.ML"
]
|
In many applications of computer graphics, art and design, it is desirable
for a user to provide intuitive non-image input, such as text, sketch, stroke,
graph or layout, and have a computer system automatically generate
photo-realistic images that adhere to the input content. While classic works
that allow such automatic image content generation have followed a framework of
image retrieval and composition, recent advances in deep generative models such
as generative adversarial networks (GANs), variational autoencoders (VAEs), and
flow-based methods have enabled more powerful and versatile image generation
tasks. This paper reviews recent works for image synthesis given intuitive user
input, covering advances in input versatility, image generation methodology,
benchmark datasets, and evaluation metrics. This motivates new perspectives on
input representation and interactivity, cross-pollination between major image
generation paradigms, and evaluation and comparison of generation methods. | [
"cs.CV",
"cs.GR",
"cs.LG"
]
|
Training robust supervised deep learning models for many geospatial
applications of computer vision is difficult due to a dearth of class-balanced
and diverse training data. Moreover, obtaining enough training data for many
applications is financially prohibitive or may be infeasible, especially when
the application involves modeling rare or extreme events. Synthetically
generating data (and labels) using a generative model that can sample from a
target distribution and exploit the multi-scale nature of images can be an
inexpensive solution to address scarcity of labeled data. Towards this goal, we
present a deep conditional generative model, called VAE-Info-cGAN, that
combines a Variational Autoencoder (VAE) with a conditional Information
Maximizing Generative Adversarial Network (InfoGAN), for synthesizing
semantically rich images simultaneously conditioned on a pixel-level condition
(PLC) and a macroscopic feature-level condition (FLC). Dimensionally, the PLC
can only vary in the channel dimension from the synthesized image and is meant
to be a task-specific input. The FLC is modeled as an attribute vector in the
latent space of the generated image which controls the contributions of various
characteristic attributes germane to the target distribution. An interpretation
of the attribute vector to systematically generate synthetic images by varying
a chosen binary macroscopic feature is explored. Experiments on a GPS
trajectories dataset show that the proposed model can accurately generate
various forms of spatio-temporal aggregates across different geographic
locations while conditioned only on a raster representation of the road
network. The primary intended application of the VAE-Info-cGAN is synthetic
data (and label) generation for targeted data augmentation for computer
vision-based modeling of problems relevant to geospatial analysis and remote
sensing. | [
"cs.CV",
"cs.AI",
"cs.LG"
]
|
A central problem in neuroscience is reconstructing neuronal circuits on the
synapse level. Due to a wide range of scales in brain architecture such
reconstruction requires imaging that is both high-resolution and
high-throughput. Existing electron microscopy (EM) techniques possess required
resolution in the lateral plane and either high-throughput or high depth
resolution but not both. Here, we exploit recent advances in unsupervised
learning and signal processing to obtain high depth-resolution EM images
computationally without sacrificing throughput. First, we show that the brain
tissue can be represented as a sparse linear combination of localized basis
functions that are learned using high-resolution datasets. We then develop
compressive sensing-inspired techniques that can reconstruct the brain tissue
from very few (typically 5) tomographic views of each section. This enables
tracing of neuronal processes and, hence, high throughput reconstruction of
neural circuits on the level of individual synapses. | [
"cs.CV",
"q-bio.NC",
"stat.ML"
]
|
Perceiving and manipulating 3D articulated objects (e.g., cabinets, doors) in
human environments is an important yet challenging task for future
home-assistant robots. The space of 3D articulated objects is exceptionally
rich in their myriad semantic categories, diverse shape geometry, and
complicated part functionality. Previous works mostly abstract kinematic
structure with estimated joint parameters and part poses as the visual
representations for manipulating 3D articulated objects. In this paper, we
propose object-centric actionable visual priors as a novel
perception-interaction handshaking point that the perception system outputs
more actionable guidance than kinematic structure estimation, by predicting
dense geometry-aware, interaction-aware, and task-aware visual action
affordance and trajectory proposals. We design an interaction-for-perception
framework VAT-Mart to learn such actionable visual representations by
simultaneously training a curiosity-driven reinforcement learning policy
exploring diverse interaction trajectories and a perception module summarizing
and generalizing the explored knowledge for pointwise predictions among diverse
shapes. Experiments prove the effectiveness of the proposed approach using the
large-scale PartNet-Mobility dataset in SAPIEN environment and show promising
generalization capabilities to novel test shapes, unseen object categories, and
real-world data. Project page: https://hyperplane-lab.github.io/vat-mart | [
"cs.CV",
"cs.RO"
]
|
We propose an end-to-end learning framework for segmenting generic objects in
both images and videos. Given a novel image or video, our approach produces a
pixel-level mask for all "object-like" regions---even for object categories
never seen during training. We formulate the task as a structured prediction
problem of assigning an object/background label to each pixel, implemented
using a deep fully convolutional network. When applied to a video, our model
further incorporates a motion stream, and the network learns to combine both
appearance and motion and attempts to extract all prominent objects whether
they are moving or not. Beyond the core model, a second contribution of our
approach is how it leverages varying strengths of training annotations.
Pixel-level annotations are quite difficult to obtain, yet crucial for training
a deep network approach for segmentation. Thus we propose ways to exploit
weakly labeled data for learning dense foreground segmentation. For images, we
show the value in mixing object category examples with image-level labels
together with relatively few images with boundary-level annotations. For video,
we show how to bootstrap weakly annotated videos together with the network
trained for image segmentation. Through experiments on multiple challenging
image and video segmentation benchmarks, our method offers consistently strong
results and improves the state-of-the-art for fully automatic segmentation of
generic (unseen) objects. In addition, we demonstrate how our approach benefits
image retrieval and image retargeting, both of which flourish when given our
high-quality foreground maps. Code, models, and videos are at:
http://vision.cs.utexas.edu/projects/pixelobjectness/ | [
"cs.CV"
]
|
Fashion products typically feature compositions of a variety of styles at
different clothing parts. In order to distinguish images of different fashion
products, we need to extract both appearance (i.e., "how to describe") and
localization (i.e.,"where to look") information, and their interactions. To
this end, we propose a biologically inspired framework for image-based fashion
product retrieval, which mimics the hypothesized two-stream visual processing
system of the human brain. The proposed attentional heterogeneous bilinear network
(AHBN) consists of two branches: a deep CNN branch to extract fine-grained
appearance attributes and a fully convolutional branch to extract landmark
localization information. A joint channel-wise attention mechanism is further
applied to the extracted heterogeneous features to focus on important channels,
followed by a compact bilinear pooling layer to model the interaction of the
two streams. Our proposed framework achieves satisfactory performance on three
image-based fashion product retrieval benchmarks. | [
"cs.CV"
]
|
Attention mechanisms have lately become quite popular in the computer vision
community. A lot of work has been done to improve network performance,
although it almost always results in increased computational complexity. In
this paper, we propose a new attention module that not only achieves the best
performance but also has fewer parameters than most
existing models. Our attention module can easily be integrated with other
convolutional neural networks because of its lightweight nature. The proposed
network named Dual Multi Scale Attention Network (DMSANet) is comprised of two
parts: the first part is used to extract features at various scales and
aggregate them, the second part uses spatial and channel attention modules in
parallel to adaptively integrate local features with their global dependencies.
We benchmark our network performance for Image Classification on ImageNet
dataset, Object Detection and Instance Segmentation both on MS COCO dataset. | [
"cs.CV",
"cs.LG"
]
|
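A rough sketch of the parallel spatial/channel attention idea in the second part (an illustrative module, not the authors' exact DMSANet block):

```python
import torch
import torch.nn as nn

class ParallelAttention(nn.Module):
    """Channel and spatial attention applied in parallel and summed: the
    channel branch squeezes spatial dims and re-weights channels, the
    spatial branch produces a single-channel attention map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )
    def forward(self, x):
        return x * self.channel(x) + x * self.spatial(x)

y = ParallelAttention(64)(torch.randn(2, 64, 32, 32))
print(y.shape)  # torch.Size([2, 64, 32, 32])
```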
Detecting the changes of buildings in urban environments is essential.
Existing methods that use only nadir images suffer from severe problems of
ambiguous features and occlusions between buildings and other regions.
Furthermore, buildings in urban environments vary significantly in scale, which
leads to performance issues when using single-scale features. To solve these
issues, this paper proposes a fused feature pyramid network, which utilizes
both color and depth data for the 3D verification of existing buildings' 2D
footprints from oblique images. First, the color data of oblique images are
enriched with the depth information rendered from 3D mesh models. Second,
multiscale features are fused in the feature pyramid network to convolve both
the color and depth data. Finally, multi-view information from both the nadir
and oblique images is used in a robust voting procedure to label changes in
existing buildings. Experimental evaluations using both the ISPRS benchmark
datasets and Shenzhen datasets reveal that the proposed method outperforms the
ResNet and EfficientNet networks by 5\% and 2\%, respectively, in terms of
recall rate and precision. We demonstrate that the proposed method can
successfully detect all changed buildings; therefore, only those marked as
changed need to be manually checked during the pipeline updating procedure;
this significantly reduces the manual quality control requirements. Moreover,
ablation studies indicate that using depth data, feature pyramid modules, and
multi-view voting strategies can lead to clear and progressive improvements. | [
"cs.CV"
]
|
For college students, failure to land a job can have serious social
consequences such as drunkenness and suicide. In addition to academic
performance, unconscious biases can become a key obstacle to landing a job
for graduating students. Thus, it is necessary to understand these unconscious
biases so that we can help these students at an early stage with more
personalized intervention. In this paper, we develop a framework, i.e., MAYA
(Multi-mAjor emploYment stAtus) to predict students' employment status while
considering biases. The framework consists of four major components. Firstly,
we solve the heterogeneity of student courses by embedding academic performance
into a unified space. Then, we apply a generative adversarial network (GAN) to
overcome the class imbalance problem. Thirdly, we adopt Long Short-Term Memory
(LSTM) with a novel dropout mechanism to comprehensively capture sequential
information among semesters. Finally, we design a bias-based regularization to
capture the job market biases. We conduct extensive experiments on a
large-scale educational dataset and the results demonstrate the effectiveness
of our prediction framework. | [
"cs.LG",
"cs.CY",
"stat.ML"
]
|
In this work we introduce an Autoencoder for molecular conformations. Our
proposed model converts the discrete spatial arrangements of atoms in a given
molecular graph (conformation) into and from a continuous fixed-sized latent
representation. We demonstrate that in this latent representation, similar
conformations cluster together while distinct conformations split apart.
Moreover, by training a probabilistic model on a large dataset of molecular
conformations, we demonstrate how our model can be used to generate diverse
sets of energetically favorable conformations for a given molecule. Finally, we
show that the continuous representation allows us to utilize optimization
methods to find molecules that have conformations with favourable spatial
properties. | [
"cs.LG",
"physics.chem-ph",
"q-bio.QM"
]
|
Traditional computer vision approaches, based on neural networks (NN), are
typically trained on a large amount of image data. By minimizing the
cross-entropy loss between a prediction and a given class label, the NN and its
visual embedding space are learned to fulfill a given task. However, due to the
sole dependence on the image data distribution of the training domain, these
models tend to fail when applied to a target domain that differs from their
source domain. To learn a more robust NN to domain shifts, we propose the
knowledge graph neural network (KG-NN), a neuro-symbolic approach that
supervises the training using image-data-invariant auxiliary knowledge. The
auxiliary knowledge is first encoded in a knowledge graph with respective
concepts and their relationships, which is then transformed into a dense vector
representation via an embedding method. Using a contrastive loss function,
KG-NN learns to adapt its visual embedding space and thus its weights according
to the image-data invariant knowledge graph embedding space. We evaluate KG-NN
on visual transfer learning tasks for classification using the mini-ImageNet
dataset and its derivatives, as well as road sign recognition datasets from
Germany and China. The results show that a visual model trained with a
knowledge graph as a trainer outperforms a model trained with cross-entropy in
all experiments, in particular when the domain gap increases. Besides better
performance and stronger robustness to domain shifts, KG-NN adapts to
multiple datasets and classes without suffering heavily from catastrophic
forgetting. | [
"cs.CV",
"cs.AI",
"cs.CL",
"cs.LG"
]
|
A key problem in salient object detection is how to effectively model the
semantic properties of salient objects in a data-driven manner. In this paper,
we propose a multi-task deep saliency model based on a fully convolutional
neural network (FCNN) with global input (whole raw images) and global output
(whole saliency maps). In principle, the proposed saliency model takes a
data-driven strategy for encoding the underlying saliency prior information,
and then sets up a multi-task learning scheme for exploring the intrinsic
correlations between saliency detection and semantic image segmentation.
Through collaborative feature learning from such two correlated tasks, the
shared fully convolutional layers produce effective features for object
perception. Moreover, it is capable of capturing the semantic information on
salient objects across different levels using the fully convolutional layers,
investigating the feature-sharing properties of salient object detection
while greatly reducing feature redundancy. Finally, we present a graph Laplacian
regularized nonlinear regression model for saliency refinement. Experimental
results demonstrate the effectiveness of our approach in comparison with the
state-of-the-art approaches. | [
"cs.CV"
]
|
Single domain generalization is a challenging case of model generalization,
where the models are trained on a single domain and tested on other unseen
domains. A promising solution is to learn cross-domain invariant
representations by expanding the coverage of the training domain. These methods
have limited generalization performance gains in practical applications due to
the lack of appropriate safety and effectiveness constraints. In this paper, we
propose a novel learning framework called progressive domain expansion network
(PDEN) for single domain generalization. The domain expansion subnetwork and
representation learning subnetwork in PDEN mutually benefit from each other by
joint learning. For the domain expansion subnetwork, multiple domains are
progressively generated in order to simulate various photometric and geometric
transforms in unseen domains. A series of strategies are introduced to
guarantee the safety and effectiveness of the expanded domains. For the domain
invariant representation learning subnetwork, contrastive learning is
introduced to learn the domain invariant representation in which each class is
well clustered so that a better decision boundary can be learned to improve
its generalization. Extensive experiments on classification and segmentation
have shown that PDEN can achieve up to 15.28% improvement compared with the
state-of-the-art single-domain generalization methods. | [
"cs.CV"
]
|
Differential privacy formalises privacy-preserving mechanisms that provide
access to a database. We pose the question of whether Bayesian inference itself
can be used directly to provide private access to data, with no modification.
The answer is affirmative: under certain conditions on the prior, sampling from
the posterior distribution can be used to achieve a desired level of privacy
and utility. To do so, we generalise differential privacy to arbitrary dataset
metrics, outcome spaces and distribution families. This allows us to also deal
with non-i.i.d or non-tabular datasets. We prove bounds on the sensitivity of
the posterior to the data, which gives a measure of robustness. We also show
how to use posterior sampling to provide differentially private responses to
queries, within a decision-theoretic framework. Finally, we provide bounds on
the utility and on the distinguishability of datasets. The latter are
complemented by a novel use of Le Cam's method to obtain lower bounds. All our
general results hold for arbitrary database metrics, including those for the
common definition of differential privacy. For specific choices of the metric,
we give a number of examples satisfying our assumptions. | [
"stat.ML",
"cs.LG"
]
|
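The generalization to arbitrary dataset metrics can be stated roughly as follows (our paraphrase; the paper's exact formulation and constants may differ):

```latex
% A mechanism (here, posterior sampling) is (\epsilon, d)-private for a
% dataset metric d if, for all measurable outcome sets A and datasets D, D':
\Pr[\theta \in A \mid D] \;\le\; e^{\epsilon\, d(D, D')} \,\Pr[\theta \in A \mid D']
% Taking d to be the Hamming metric on tabular datasets recovers the common
% definition of differential privacy.
```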
We study the problem of object detection from a novel perspective in which
annotation budget constraints are taken into consideration, appropriately
coined Budget Aware Object Detection (BAOD). When provided with a fixed budget,
we propose a strategy for building a diverse and informative dataset that can
be used to optimally train a robust detector. We investigate both optimization
and learning-based methods to sample which images to annotate and what type of
annotation (strongly or weakly supervised) to annotate them with. We adopt a
hybrid supervised learning framework to train the object detector from both
these types of annotation. We conduct a comprehensive empirical study showing
that a handcrafted optimization method outperforms other selection techniques
including random sampling, uncertainty sampling and active learning. By
combining an optimal image/annotation selection scheme with hybrid supervised
learning to solve the BAOD problem, we show that one can achieve the
performance of a strongly supervised detector on PASCAL-VOC 2007 while saving
12.8% of its original annotation budget. Furthermore, when $100\%$ of the
budget is used, it surpasses this performance by 2.0 mAP percentage points. | [
"cs.CV"
]
|
Tensor network decomposition, which originated in quantum physics to model
entangled many-particle quantum systems, turns out to be a promising
mathematical technique for efficiently representing and processing big data in
a parsimonious manner. In this study, we show that tensor networks can
systematically partition structured data, e.g. color images, for distributed
storage and communication in a privacy-preserving manner. Leveraging the sea of
big data and metadata privacy, empirical results show that neighbouring
subtensors with implicit information stored in tensor network formats cannot be
identified for data reconstruction. This technique complements existing
encryption and randomization techniques, which store explicit data
representations in one place and are highly susceptible to adversarial attacks
such as side-channel attacks and de-anonymization. Furthermore, we propose a theory
for adversarial examples that mislead convolutional neural networks to
misclassification using subspace analysis based on singular value decomposition
(SVD). The theory is extended to analyze higher-order tensors using
tensor-train SVD (TT-SVD); it helps to explain the level of susceptibility of
different datasets to adversarial attacks, the structural similarity of
different adversarial attacks including global and localized attacks, and the
efficacy of different adversarial defenses based on input transformation. An
efficient and adaptive algorithm based on robust TT-SVD is then developed to
detect strong and static adversarial attacks. | [
"cs.LG",
"cs.CV",
"stat.ML"
]
|
We present an efficient foveal framework to perform object detection. A scale
normalized image pyramid (SNIP) is generated that, like human vision, only
attends to objects within a fixed size range at different scales. Such a
restriction of objects' size during training affords better learning of
object-sensitive filters, and therefore, results in better accuracy. However,
the use of an image pyramid increases the computational cost. Hence, we propose
an efficient spatial sub-sampling scheme which only operates on fixed-size
sub-regions likely to contain objects (as object locations are known during
training). The resulting approach, referred to as Scale Normalized Image
Pyramid with Efficient Resampling or SNIPER, yields up to 3 times speed-up
during training. Unfortunately, as object locations are unknown during
inference, the entire image pyramid still needs processing. To this end, we
adopt a coarse-to-fine approach, and predict the locations and extent of
object-like regions which will be processed in successive scales of the image
pyramid. Intuitively, it is akin to active human vision, which first skims
over the field-of-view to spot interesting regions for further processing and
only recognizes objects at the right resolution. The resulting algorithm is
referred to as AutoFocus and results in a 2.5-5 times speed-up during inference
when used with SNIP. | [
"cs.CV",
"cs.AI"
]
|
In this work we present an adversarial training algorithm that exploits
correlations in video to learn --without supervision-- an image generator model
with a disentangled latent space. The proposed methodology requires only a few
modifications to the standard algorithm of Generative Adversarial Networks
(GAN) and involves training with sets of frames taken from short videos. We
train our model over two datasets of face-centered videos which present
different people speaking or moving the head: VidTIMIT and YouTube Faces
datasets. We found that our proposal allows us to split the generator latent
space into two subspaces. One of them controls content attributes, those that
do not change along short video sequences. For the considered datasets, this is
the identity of the generated face. The other subspace controls motion
attributes, those attributes that are observed to change along short videos. We
observed that these motion attributes are face expressions, head orientation,
lips and eyes movement. The presented experiments provide quantitative and
qualitative evidence supporting that the proposed methodology induces a
disentangling of these two kinds of attributes in the latent space. | [
"cs.CV",
"cs.LG",
"stat.ML"
]
|
IQ tests are an accepted method for assessing human intelligence. The tests
consist of several parts that must be solved under a time constraint. Of all
the tested abilities, pattern recognition has been found to have the highest
correlation with general intelligence. This is primarily because pattern
recognition is the ability to find order in a noisy environment, a necessary
skill for intelligent agents. In this paper, we propose a convolutional neural
network (CNN) model for solving geometric pattern recognition problems. The CNN
receives as input multiple ordered input images and outputs the next image
according to the pattern. Our CNN is able to solve problems involving rotation,
reflection, color, size and shape patterns and score within the top 5% of human
performance. | [
"cs.LG",
"cs.AI",
"cs.CV"
]
|
From frame/clip-level feature learning to video-level representation
building, deep learning methods in action recognition have developed rapidly in
recent years. However, current methods suffer from confusion caused by
partial observation training, a lack of end-to-end learning, or a restriction
to single temporal scale modeling. In this paper, we build upon
two-stream ConvNets and propose Deep networks with Temporal Pyramid Pooling
(DTPP), an end-to-end video-level representation learning approach, to address
these problems. Specifically, at first, RGB images and optical flow stacks are
sparsely sampled across the whole video. Then a temporal pyramid pooling layer
is used to aggregate the frame-level features which consist of spatial and
temporal cues. Lastly, the trained model has compact video-level representation
with multiple temporal scales, which is both global and sequence-aware.
Experimental results show that DTPP achieves the state-of-the-art performance
on two challenging video action datasets: UCF101 and HMDB51, either by ImageNet
pre-training or Kinetics pre-training. | [
"cs.CV"
]
|
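A minimal sketch of temporal pyramid pooling over frame-level features (bin counts and dimensions are illustrative):

```python
import torch

def temporal_pyramid_pool(frame_feats, levels=(1, 2, 4)):
    """Aggregate T frame-level features into one fixed-size video-level
    vector: at pyramid level k, split the T frames into k temporal bins,
    average-pool each bin, and concatenate all bins."""
    T, D = frame_feats.shape
    pooled = []
    for k in levels:
        bounds = torch.linspace(0, T, k + 1).long()
        for i in range(k):
            lo = bounds[i].item()
            hi = max(bounds[i + 1].item(), lo + 1)  # guard against empty bins
            pooled.append(frame_feats[lo:hi].mean(dim=0))
    return torch.cat(pooled)  # shape: (sum(levels) * D,)

video = torch.randn(25, 512)  # 25 sampled frames, 512-d features each
print(temporal_pyramid_pool(video).shape)  # torch.Size([3584])
```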
Transformers have sprung up in the field of computer vision. In this work, we
explore whether the core self-attention module in Transformer is the key to
achieving excellent performance in image recognition. To this end, we build an
attention-free network called sMLPNet based on the existing MLP-based vision
models. Specifically, we replace the MLP module in the token-mixing step with a
novel sparse MLP (sMLP) module. For 2D image tokens, sMLP applies 1D MLP along
the axial directions and the parameters are shared among rows or columns. By
sparse connection and weight sharing, sMLP module significantly reduces the
number of model parameters and computational complexity, avoiding the common
over-fitting problem that plagues the performance of MLP-like models. When only
trained on the ImageNet-1K dataset, the proposed sMLPNet achieves 81.9% top-1
accuracy with only 24M parameters, which is much better than most CNNs and
vision Transformers under the same model size constraint. When scaling up to
66M parameters, sMLPNet achieves 83.4% top-1 accuracy, which is on par with the
state-of-the-art Swin Transformer. The success of sMLPNet suggests that the
self-attention mechanism is not necessarily a silver bullet in computer vision.
Code will be made publicly available. | [
"cs.CV"
]
|
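A rough sketch of the axial token-mixing idea behind the sMLP module (the three-branch concatenate-and-fuse design shown here is an assumption, not the authors' exact code):

```python
import torch
import torch.nn as nn

class SparseMLP(nn.Module):
    """Axial token mixing: one 1D MLP shared across all rows mixes along the
    width axis, another shared across all columns mixes along the height
    axis, and an identity path is kept; the three are fused pointwise."""
    def __init__(self, h, w, channels):
        super().__init__()
        self.mix_w = nn.Linear(w, w)   # parameters shared among rows
        self.mix_h = nn.Linear(h, h)   # parameters shared among columns
        self.fuse = nn.Linear(3 * channels, channels)

    def forward(self, x):              # x: (B, H, W, C)
        xw = self.mix_w(x.transpose(2, 3)).transpose(2, 3)  # mix along W
        xh = self.mix_h(x.transpose(1, 3)).transpose(1, 3)  # mix along H
        return self.fuse(torch.cat([x, xw, xh], dim=-1))

y = SparseMLP(14, 14, 96)(torch.randn(2, 14, 14, 96))
print(y.shape)  # torch.Size([2, 14, 14, 96])
```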
We generalize a graph-based multiclass semi-supervised classification
technique based on diffuse interface methods to multilayer graphs. Besides the
treatment of various applications with an inherent multilayer structure, we
present a very flexible approach that interprets high-dimensional data in a
low-dimensional multilayer graph representation. Highly efficient numerical
methods involving the spectral decomposition of the corresponding differential
graph operators as well as fast matrix-vector products based on the
nonequispaced fast Fourier transform (NFFT) enable the rapid treatment of large
and high-dimensional data sets. We perform various numerical tests putting a
special focus on image segmentation. In particular, we test the performance of
our method on data sets with up to 10 million nodes per layer as well as up to
104 dimensions resulting in graphs with up to 52 layers. While all presented
numerical experiments can be run on an average laptop computer, the linear
dependence per iteration step of the runtime on the network size in all stages
of our algorithm makes it scalable to even larger and higher-dimensional
problems. | [
"cs.LG",
"cs.NA",
"math.NA",
"stat.ML",
"68R10, 05C50, 65F15, 65T50, 68T05, 62H30"
]
|
As both light transport simulation and reinforcement learning are ruled by
the same Fredholm integral equation of the second kind, reinforcement learning
techniques may be used for photorealistic image synthesis: Efficiency may be
dramatically improved by guiding light transport paths by an approximate
solution of the integral equation that is learned during rendering. In the
light of the recent advances in reinforcement learning for playing games, we
investigate the representation of an approximate solution of an integral
equation by artificial neural networks and derive a loss function for that
purpose. The resulting Monte Carlo and quasi-Monte Carlo methods train neural
networks with standard information instead of linear information and naturally
are able to generate an arbitrary number of training samples. The methods are
demonstrated for applications in light transport simulation. | [
"cs.LG",
"cs.GR"
]
|
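For reference, the shared structure can be written as a Fredholm integral equation of the second kind (our paraphrase; notation differs from the paper):

```latex
% Fredholm integral equation of the second kind, shared by both settings:
f(x) = g(x) + \int_{\mathcal{D}} k(x, y)\, f(y)\, \mathrm{d}y
% In rendering, f is the radiance, g the emitted radiance, and the kernel
% integrates BRDF-weighted incident radiance over directions; in RL, the
% analogous fixed point is the value function under the Bellman operator.
```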
Few-shot object detection has made substantial progress by representing novel
class objects using the feature representation learned upon a set of base class
objects. However, an implicit contradiction between novel class classification
and representation is unfortunately ignored. On the one hand, to achieve
accurate novel class classification, the distributions of any two base
classes must be far away from each other (max-margin). On the other hand, to
precisely represent novel classes, the distributions of base classes should be
close to each other to reduce the intra-class distance of novel classes
(min-margin). In this paper, we propose a class margin equilibrium (CME)
approach, with the aim to optimize both feature space partition and novel class
reconstruction in a systematic way. CME first converts the few-shot detection
problem to the few-shot classification problem by using a fully connected layer
to decouple localization features. CME then reserves adequate margin space for
novel classes by introducing simple-yet-effective class margin loss during
feature learning. Finally, CME pursues margin equilibrium by disturbing the
features of novel class instances in an adversarial min-max fashion.
Experiments on Pascal VOC and MS-COCO datasets show that CME significantly
improves upon two baseline detectors (by up to $3\sim 5\%$ on average), achieving
state-of-the-art performance. Code is available at
https://github.com/Bohao-Lee/CME . | [
"cs.CV"
]
|
High-dimensional data pose challenges in statistical learning and modeling.
Sometimes the predictors can be naturally grouped where pursuing the
between-group sparsity is desired. Collinearity may occur in real-world
high-dimensional applications where the popular $l_1$ technique suffers from
both selection inconsistency and prediction inaccuracy. Moreover, the problems
of interest often go beyond Gaussian models. To meet these challenges,
nonconvex penalized generalized linear models with grouped predictors are
investigated and a simple-to-implement algorithm is proposed for computation. A
rigorous theoretical result guarantees its convergence and provides tight
preliminary scaling. This framework allows for grouped predictors and nonconvex
penalties, including the discrete $l_0$ and the `$l_0+l_2$' type penalties.
Penalty design and parameter tuning for nonconvex penalties are examined.
Applications of super-resolution spectrum estimation in signal processing and
cancer classification with joint gene selection in bioinformatics show the
performance improvement by nonconvex penalized estimation. | [
"stat.ML",
"stat.ME"
]
|
The staggering amount of streaming time series coming from the real world
calls for more efficient and effective online modeling solutions. For time
series modeling, most existing works make unrealistic assumptions, such as
requiring input data of fixed length or well-aligned data, which demands extra
effort for segmentation or normalization of the raw streaming data. Although
some works claim their approaches to be invariant to data length and
misalignment, they are too time-consuming to model a streaming time series in
an online manner. We propose a novel and more practical online modeling and
classification scheme, DDE-MGM, which does not make any assumptions on the time
series while maintaining high efficiency and state-of-the-art performance. The
derivative delay embedding (DDE) is developed to incrementally transform time
series to the embedding space, where the intrinsic characteristics of the data
are preserved as recursive patterns regardless of the stream length and
misalignment. Then, a non-parametric Markov geographic model (MGM) is proposed
to both model and classify the pattern in an online manner. Experimental
results demonstrate the effectiveness and superior classification accuracy of
the proposed DDE-MGM in an online setting as compared to the state-of-the-art. | [
"cs.LG"
]
|
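A minimal sketch of a derivative delay embedding (parameter names and the exact embedding convention are our assumptions):

```python
import numpy as np

def derivative_delay_embedding(x, delay=2, dims=3):
    """Map a 1D stream into a delay-embedded space of its first differences:
    point t becomes (dx[t], dx[t-delay], ..., dx[t-(dims-1)*delay]). Using
    derivatives makes the embedding insensitive to absolute offset and
    stream length, since points are produced incrementally."""
    dx = np.diff(np.asarray(x, dtype=float))
    start = (dims - 1) * delay
    return np.stack([dx[start - i * delay : len(dx) - i * delay]
                     for i in range(dims)], axis=1)

stream = np.sin(np.linspace(0, 6 * np.pi, 200))
emb = derivative_delay_embedding(stream)
print(emb.shape)  # (195, 3) -- one embedded point per incoming sample
```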
Prior work demonstrated the ability of machine learning to automatically
recognize surgical workflow steps from videos. However, these studies focused
on only a single type of procedure. In this work, we analyze, for the first
time, surgical step recognition on four different laparoscopic surgeries:
Cholecystectomy, Right Hemicolectomy, Sleeve Gastrectomy, and Appendectomy.
Inspired by the traditional apprenticeship model, in which surgical training is
based on the Halstedian method, we paraphrase the "see one, do one, teach one"
approach for the surgical intelligence domain as "train one, classify one,
teach one". In machine learning, this approach is often referred to as transfer
learning. To analyze the impact of transfer learning across different
laparoscopic procedures, we explore various time-series architectures and
examine their performance on each target domain. We introduce a new
architecture, the Time-Series Adaptation Network (TSAN), an architecture
optimized for transfer learning of surgical step recognition, and we show how
TSAN can be pre-trained using self-supervised learning on a Sequence Sorting
task. Such pre-training enables TSAN to learn workflow steps of a new
laparoscopic procedure type from only a small number of labeled samples from
the target procedure. Our proposed architecture leads to better performance
compared to other possible architectures, reaching over 90% accuracy when
transferring from laparoscopic Cholecystectomy to the other three procedure
types. | [
"cs.CV",
"cs.AI"
]
|
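To illustrate the Sequence Sorting pre-training task mentioned above, here is a hedged sketch of a sorting head: video segments are shuffled and the network predicts which permutation was applied, so no manual workflow-step labels are needed. The architecture of TSAN itself is not reproduced; the head below is an assumption for illustration.

```python
import torch
import torch.nn as nn

class SequenceSortingHead(nn.Module):
    # Hedged sketch of a sequence-sorting pretext task: classify which of
    # `n_perms` candidate permutations was used to shuffle the segments.
    def __init__(self, feat_dim, n_segments, n_perms):
        super().__init__()
        self.classifier = nn.Linear(feat_dim * n_segments, n_perms)

    def forward(self, segment_feats):      # (batch, n_segments, feat_dim)
        flat = segment_feats.flatten(1)    # concatenate per-segment features
        return self.classifier(flat)       # logits over candidate permutations

# training target: the index of the permutation applied to the segment order
```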
Synthesizing high-quality realistic images from text descriptions is a
challenging task. Almost all existing text-to-image Generative Adversarial
Networks employ stacked architecture as the backbone. They utilize cross-modal
attention mechanisms to fuse text and image features, and introduce extra
networks to ensure text-image semantic consistency. In this work, we propose a
much simpler but more effective text-to-image model than previous works.
To address the three limitations above, we propose: 1) a novel one-stage
text-to-image backbone which is able to synthesize high-quality images directly
by one pair of generator and discriminator, 2) a novel fusion module called
deep text-image fusion block which deepens the text-image fusion process in
generator, 3) a novel target-aware discriminator composed of matching-aware
gradient penalty and one-way output which promotes the generator to synthesize
more realistic and text-image semantic consistent images without introducing
extra networks. Compared with existing text-to-image models, our proposed
method (i.e., DF-GAN) is simpler yet more efficient at synthesizing realistic,
text-matching images, and it achieves better performance. Extensive experiments on
both Caltech-UCSD Birds 200 and COCO datasets demonstrate the superiority of
the proposed model in comparison to state-of-the-art models. | [
"cs.CV"
]
|
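A minimal sketch of text-image fusion via conditional affine modulation, in the spirit of the deep fusion block described above: MLPs predict per-channel scale and shift from the sentence embedding and apply them to the image feature map. DF-GAN's actual block stacks several such affine layers; this shows a single one, and the layer names are assumptions.

```python
import torch
import torch.nn as nn

class TextImageAffineFusion(nn.Module):
    # Hedged sketch: condition image features on text via predicted
    # per-channel scale (gamma) and shift (beta).
    def __init__(self, text_dim, channels):
        super().__init__()
        self.gamma = nn.Linear(text_dim, channels)
        self.beta = nn.Linear(text_dim, channels)

    def forward(self, feat, text_emb):     # feat: (B, C, H, W)
        g = self.gamma(text_emb).unsqueeze(-1).unsqueeze(-1)
        b = self.beta(text_emb).unsqueeze(-1).unsqueeze(-1)
        return feat * (1 + g) + b          # affine modulation by the text code
```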
Training complex machine learning models for prediction often requires a
large amount of data that is not always readily available. Leveraging these
external datasets from related but different sources is therefore an important
task if good predictive models are to be built for deployment in settings where
data can be rare. In this paper we propose a novel approach to the problem in
which we use multiple GAN architectures to learn to translate from one dataset
to another, thereby allowing us to effectively enlarge the target dataset, and
therefore learn better predictive models than if we simply used the target
dataset. We show the utility of such an approach, demonstrating that our method
improves the prediction performance on the target domain over using just the
target dataset and also show that our framework outperforms several other
benchmarks on a collection of real-world medical datasets. | [
"cs.LG",
"stat.ML"
]
|
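The augmentation step described above reduces to pooling translated source samples with the real target data; a hedged sketch follows, where `translator` stands in for one of the learned GAN generators and the function signature is an assumption.

```python
def enlarge_target_dataset(target_samples, source_samples, translator):
    # Hedged sketch: a GAN generator trained to map source-domain samples into
    # the target domain produces synthetic target-like samples, which are
    # pooled with the real target data before fitting the predictive model.
    synthetic = [translator(x) for x in source_samples]
    return list(target_samples) + synthetic  # enlarged training set
```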
Adversarial learning has been proven to be effective for capturing long-range
and high-level label consistencies in semantic segmentation. Unique to medical
imaging, capturing 3D semantics in an effective yet computationally efficient
way remains an open problem. In this study, we address this computational
burden by proposing a novel projective adversarial network, called PAN, which
incorporates high-level 3D information through 2D projections. Furthermore, we
introduce an attention module into our framework that enables a selective
integration of global information directly from our segmentor to our
adversarial network. For the clinical application we chose pancreas
segmentation from CT scans. Our proposed framework achieved state-of-the-art
performance without adding to the complexity of the segmentor. | [
"cs.CV",
"cs.LG",
"eess.IV"
]
|
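To make the projection idea above concrete, here is a minimal sketch: a 3D volume is reduced along each spatial axis to produce 2D maps that a 2D adversarial network can consume, avoiding a costly 3D discriminator. A mean projection is used here as an assumption; PAN's exact projection operator may differ.

```python
import torch

def axis_projections(volume):
    # Hedged sketch: reduce a (B, C, D, H, W) volume along each spatial axis,
    # yielding three orthogonal 2D views that carry high-level 3D context.
    return [volume.mean(dim=d) for d in (2, 3, 4)]

proj = axis_projections(torch.rand(1, 1, 32, 64, 64))
print([tuple(p.shape) for p in proj])  # [(1,1,64,64), (1,1,32,64), (1,1,32,64)]
```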
Due to the phenomenon of "posterior collapse," current latent variable
generative models pose a challenging design choice that either weakens the
capacity of the decoder or requires augmenting the objective so it does not
only maximize the likelihood of the data. In this paper, we propose an
alternative that utilizes the most powerful generative models as decoders
while optimizing the variational lower bound and ensuring that the
latent variables preserve and encode useful information. Our proposed
$\delta$-VAEs achieve this by constraining the variational family for the
posterior to have a minimum distance to the prior. For sequential latent
variable models, our approach resembles the classic representation learning
approach of slow feature analysis. We demonstrate the efficacy of our approach
at modeling text on LM1B and modeling images: learning representations,
improving sample quality, and achieving state of the art log-likelihood on
CIFAR-10 and ImageNet $32\times 32$. | [
"cs.LG",
"stat.ML"
]
|
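The constraint above can be illustrated with a simple instance: if the Gaussian posterior's standard deviation is fixed at a value other than 1 under a standard normal prior, the KL divergence has a strictly positive lower bound for every mean, so the latent code cannot collapse onto the prior. (The paper also uses richer constructions, e.g. autoregressive priors; the instance below is only the simplest case.)

```python
import numpy as np

def kl_gauss_to_std_normal(mu, sigma):
    # KL( N(mu, sigma^2) || N(0, 1) ) per dimension
    return 0.5 * (sigma**2 + mu**2 - 1.0 - np.log(sigma**2))

sigma = 0.5  # fixed posterior std != 1 enforces a committed rate
delta = kl_gauss_to_std_normal(mu=0.0, sigma=sigma)  # minimum over mu is at mu=0
print(f"committed rate delta = {delta:.3f} nats per dimension")  # ~0.318
```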
The vast number of existing IP cameras in current road networks is an
opportunity to take advantage of the captured data by analyzing the video and
detecting significant events. For this purpose, it is necessary to detect
moving vehicles, a task that was carried out using classical artificial vision
techniques until a few years ago. Nowadays, significant improvements have been
obtained by deep learning networks. Still, object detection is considered one
of the leading open issues within computer vision.
The field is constantly evolving, with new models and techniques appearing
regularly. In particular, problems and drawbacks persist in detecting small
objects, which in road scenes correspond mainly to vehicles. This makes new
solutions that improve the low detection rate of small elements essential.
Among the different emerging research lines, this work focuses on the detection
of small objects. In particular, our proposal aims at vehicle detection from
images captured by video surveillance cameras.
In this work, we propose a new procedure for detecting small-scale objects by
applying super-resolution processes based on detections performed by
convolutional neural networks \emph{(CNNs)}. The neural network is integrated
with processes that increase the resolution of the images to improve object
detection performance. This solution has been evaluated on a set of traffic
images containing elements of different scales, demonstrating that our
proposal achieves good results in a wide range of situations. | [
"cs.CV"
]
|
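A hedged sketch of the pipeline described above: enlarge the input so that small vehicles cover more pixels, run the detector, then map boxes back to original coordinates. A bicubic upscale stands in for a learned super-resolution network, and `detector` is assumed to return boxes as `(x1, y1, x2, y2, score)`; both are illustrative assumptions.

```python
import cv2

def detect_with_super_resolution(image, detector, scale=2):
    # Hedged sketch: upscale, detect, then rescale detections back down.
    h, w = image.shape[:2]
    up = cv2.resize(image, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)
    boxes = detector(up)
    # map detections back to the original image resolution
    return [(x1 / scale, y1 / scale, x2 / scale, y2 / scale, s)
            for (x1, y1, x2, y2, s) in boxes]
```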
Due to the vulnerability of deep neural networks to adversarial examples,
numerous works on adversarial attacks and defenses have been burgeoning over
the past several years. However, there seem to be some conventional views
regarding adversarial attacks and object detection approaches that most
researchers take for granted. In this work, we bring a fresh perspective on
those procedures by evaluating the impact of universal perturbations on object
detection at a class-level. We apply it to a carefully curated data set related
to autonomous driving. We use Faster-RCNN object detector on images of five
different categories: person, car, truck, stop sign and traffic light from the
COCO data set, while carefully perturbing the images using Universal Dense
Object Suppression algorithm. Our results indicate that person, car, traffic
light, truck and stop sign are resilient in that order (most to least) to
universal perturbations. To the best of our knowledge, this is the first time
such a ranking has been established which is significant for the security of
the data sets pertaining to autonomous vehicles and object detection in
general. | [
"cs.CV"
]
|
In this paper, we propose an algorithm, named hashing-based non-maximum
suppression (HNMS) to efficiently suppress the non-maximum boxes for object
detection. Non-maximum suppression (NMS) is an essential component to suppress
the boxes at closely located locations with similar shapes. The time cost tends
to be huge when the number of boxes becomes large, especially for crowded
scenes. The basic idea of HNMS is to firstly map each box to a discrete code
(hash cell) and then remove the boxes with lower confidences if they are in the
same cell. Considering the intersection-over-union (IoU) as the metric, we
propose a simple yet effective hashing algorithm, named IoUHash, which
guarantees that the boxes within the same cell are close enough by a lower IoU
bound. For two-stage detectors, we replace NMS in region proposal network with
HNMS, and observe significant speed-up with comparable accuracy. For one-stage
detectors, HNMS is used as a pre-filter to speed up the suppression with a
large margin. Extensive experiments are conducted on CARPK, SKU-110K,
CrowdHuman datasets to demonstrate the efficiency and effectiveness of HNMS.
Code is released at \url{https://github.com/microsoft/hnms.git}. | [
"cs.CV"
]
|
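The core mechanism above can be sketched in a few lines: each box is quantized into a cell code, and only the highest-scoring box per cell survives, replacing the pairwise comparisons of standard NMS with a single pass. The quantization below (log-scale sizes, center relative to box size) is illustrative; the paper's IoUHash derives a formal IoU lower bound whose exact quantization may differ.

```python
import math

def iou_hash(box, q=0.2):
    # Hedged sketch of an IoUHash-style cell code: boxes sharing a cell are
    # geometrically close (similar log-size and nearby relative center).
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    return (round(math.log(w) / math.log(1 + q)),
            round(math.log(h) / math.log(1 + q)),
            round(cx / (q * w)),
            round(cy / (q * h)))

def hnms(boxes, scores):
    # Keep only the highest-scoring box in each hash cell: roughly O(n)
    # instead of the pairwise comparisons of standard NMS.
    best = {}
    for b, s in zip(boxes, scores):
        cell = iou_hash(b)
        if cell not in best or s > best[cell][1]:
            best[cell] = (b, s)
    return list(best.values())
```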
Over the last decade, electron microscopy has improved to the point that
generating high-quality, gigavoxel-sized datasets requires only a few hours.
Automated image analysis, particularly image segmentation, however, has not
evolved at the same pace. Even though state-of-the-art methods such as U-Net
and DeepLab have improved segmentation performance substantially, the required
amount of labeled data remains too expensive to obtain. Active learning is the subfield in
machine learning that aims to mitigate this burden by selecting the samples
that require labeling in a smart way. Many techniques have been proposed,
particularly for image classification, to increase the steepness of learning
curves. In this work, we extend these techniques to deep CNN based image
segmentation. Our experiments on three different electron microscopy datasets
show that active learning can improve segmentation quality by 10 to 15% in
terms of Jaccard score compared to standard randomized sampling. | [
"cs.CV"
]
|
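A minimal sketch of uncertainty-based sample selection for segmentation, the simplest of the criteria such work evaluates: score each unlabeled image by the mean pixel-wise entropy of the network's softmax output and send the top-k for annotation. The input layout (`prob_maps` as a list of per-image class-probability arrays) is an assumption.

```python
import numpy as np

def select_most_uncertain(prob_maps, k):
    # prob_maps: list of (n_classes, H, W) softmax outputs on the unlabeled pool.
    # Score = mean per-pixel entropy; higher means the model is less certain.
    eps = 1e-12
    entropies = [-(p * np.log(p + eps)).sum(axis=0).mean() for p in prob_maps]
    return np.argsort(entropies)[::-1][:k]  # indices of the k most uncertain images
```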
Deep model-based Reinforcement Learning (RL) has the potential to
substantially improve the sample-efficiency of deep RL. While various
challenges have long held it back, a number of papers have recently come out
reporting success with deep model-based methods. This is a great development,
but the lack of a consistent metric to evaluate such methods makes it difficult
to compare various approaches. For example, the common single-task
sample-efficiency metric conflates improvements due to model-based learning
with various other aspects, such as representation learning, making it
difficult to assess true progress on model-based RL. To address this, we
introduce an experimental setup to evaluate model-based behavior of RL methods,
inspired by work from neuroscience on detecting model-based behavior in humans
and animals. Our metric based on this setup, the Local Change Adaptation (LoCA)
regret, measures how quickly an RL method adapts to a local change in the
environment. Our metric can identify model-based behavior even if the method
uses a poor representation, and it provides insight into how close a method's
behavior is to optimal model-based behavior. We use our setup to evaluate the
model-based behavior of MuZero on a variation of the classic Mountain Car task. | [
"cs.LG",
"cs.AI",
"stat.ML"
]
|
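The measurement protocol above can be sketched as follows: train in one environment, apply a local change partway through, and accumulate the shortfall of the agent's return relative to the new optimum. The interface names (`env.apply_local_change`, `agent.run_episode`) are illustrative, not a real API, and the exact LoCA protocol in the paper has more structure than this sketch.

```python
def loca_style_regret(env, agent, change_step, total_steps, optimal_return):
    # Hedged sketch of a LoCA-style regret: a model-based agent should adapt
    # quickly after a local change, keeping the accumulated shortfall small.
    regret = 0.0
    for step in range(total_steps):
        if step == change_step:
            env.apply_local_change()        # only a small part of the MDP changes
        ret = agent.run_episode(env)
        if step >= change_step:
            regret += optimal_return - ret  # slow adaptation => large regret
    return regret
```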
Human vision is often adversely affected by complex environmental factors,
especially in night vision scenarios. Thus, infrared cameras are often
leveraged to help enhance the visual effects by detecting infrared radiation
in the surrounding environment, but the infrared videos are undesirable due to
the lack of detailed semantic information. In such a case, an effective
video-to-video translation method from the infrared domain to the visible light
counterpart is strongly needed by overcoming the intrinsic huge gap between
infrared and visible fields. To address this challenging problem, we propose an
infrared-to-visible (I2V) video translation method I2V-GAN to generate
fine-grained and spatiotemporally consistent visible light videos from given
unpaired infrared videos. Technically, our model capitalizes on three types of
constraints: 1) an adversarial constraint to generate synthetic frames that are
similar to the real ones, 2) cyclic consistency with the introduced perceptual
loss for effective content conversion as well as style preservation, and
3) similarity constraints across and within domains to enhance the content and
motion consistency in both spatial and temporal spaces at a fine-grained level.
Furthermore, the currently publicly available infrared and visible light
datasets are mainly used for object detection or tracking, and some are
composed of discontinuous images that are not suitable for video tasks. Thus,
we provide a new dataset for I2V video translation named IRVI. Specifically,
it has 12 consecutive video clips of vehicle and monitoring scenes, and both
the infrared and visible light videos can be split into 24,352 frames.
Comprehensive experiments validate that I2V-GAN is superior to the compared
SOTA methods in the translation of I2V videos with higher fluency and finer
semantic details. The code and IRVI dataset are available at
https://github.com/BIT-DA/I2V-GAN. | [
"cs.CV"
]
|
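A hedged sketch of the cyclic constraint with a perceptual term described above: translate infrared to visible and back, penalizing both the pixel-wise reconstruction error and the distance in a pretrained feature space. The choice of `feat_extractor` (e.g., VGG features), the L1 distances, and the weighting `w_perc` are assumptions; I2V-GAN's full objective also includes the adversarial and similarity terms.

```python
import torch
import torch.nn.functional as F

def cycle_and_perceptual_loss(real_ir, g_i2v, g_v2i, feat_extractor, w_perc=1.0):
    # Hedged sketch: infrared -> visible -> infrared round trip.
    fake_vis = g_i2v(real_ir)
    rec_ir = g_v2i(fake_vis)
    cycle = F.l1_loss(rec_ir, real_ir)               # content preservation
    perceptual = F.l1_loss(feat_extractor(rec_ir),
                           feat_extractor(real_ir))  # semantic/style match
    return cycle + w_perc * perceptual
```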