text (string, lengths 29 to 3.31k) | label (sequence, lengths 1 to 11)
---|---|
The performance of optical flow algorithms greatly depends on the specifics
of the content and the application for which it is used. Existing and
well-established optical flow datasets are limited to rather particular
content, none of which comes close to crowd behavior analysis, even though such
applications rely heavily on optical flow. We introduce a new optical flow
dataset exploiting the possibilities of a recent video engine to generate
sequences with ground-truth optical flow for large crowds in different
scenarios. We break with the trend of the last decade of introducing
ever-increasing displacements to pose new difficulties. Instead, we focus on
real-world surveillance scenarios where numerous small, partly independent,
non-rigidly moving objects observed over a long temporal range pose the
challenge. By evaluating different optical flow algorithms, we find that
results from established datasets cannot be transferred to these new
challenges. In exhaustive experiments, we are able to provide new insight into
optical flow for crowd analysis. Finally, the results have been validated on
the real-world UCF crowd tracking benchmark, achieving competitive results
compared to more sophisticated state-of-the-art crowd tracking approaches. | [
"cs.CV"
] |
The field of face recognition (FR) has witnessed great progress with the
surge of deep learning. Existing methods mainly focus on extracting
discriminative features, and directly compute the cosine or L2 distance by the
point-to-point way without considering the context information. In this study,
we make a key observation that the local context, represented by the
similarities between an instance and its inter-class neighbors, plays an
important role for FR. Specifically, we attempt to incorporate the local
information in the feature space into the metric, and propose a unified
framework called Inter-class Discrepancy Alignment (IDA), with two dedicated
modules, Discrepancy Alignment Operator (IDA-DAO) and Support Set
Estimation (IDA-SSE). IDA-DAO is used to align the similarity scores considering
the discrepancy between an image and its neighbors, which is defined by
adaptive support sets on the hypersphere. In practice, it is
difficult to acquire the support set during online inference. IDA-SSE can provide
convincing inter-class neighbors by introducing virtual candidate images
generated with a GAN. Furthermore, we propose the learnable IDA-SSE, which can
implicitly give the estimation without the need for any other images in the
evaluation process. The proposed IDA can be incorporated into existing FR
systems seamlessly and efficiently. Extensive experiments demonstrate that this
framework can 1) significantly improve the accuracy, and 2) make the model
robust to face images of various distributions. Without bells and whistles,
our method achieves state-of-the-art performance on multiple standard FR
benchmarks. | [
"cs.CV"
] |
Recent object detectors find instances while categorizing candidate regions.
As each region is evaluated independently, the number of candidate regions from
a detector is usually larger than the number of objects. Since the final goal
of detection is to assign a single detection to each object, a heuristic
algorithm, such as non-maximum suppression (NMS), is used to select a single
bounding box for an object. While simple heuristic algorithms are effective for
stand-alone objects, they can fail to detect overlapped objects. In this paper,
we address this issue by training a network to distinguish different objects
using the relationship between candidate boxes. We propose an instance-aware
detection network (IDNet), which can learn to extract features from candidate
regions and measure their similarities. Based on pairwise similarities and
detection qualities, the IDNet selects a subset of candidate bounding boxes
using instance-aware determinantal point process inference (IDPP). Extensive
experiments demonstrate that the proposed algorithm achieves significant
improvements for detecting overlapped objects compared to existing
state-of-the-art detection methods on the PASCAL VOC and MS COCO datasets. | [
"cs.CV"
] |
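The determinantal point process (DPP) selection described in the preceding abstract can be illustrated with a generic greedy log-determinant heuristic. This is only a hedged sketch of the general DPP idea, not the authors' IDPP inference: the quality scores `q`, similarity matrix `S`, and the kernel construction `L = diag(q) S diag(q)` below are illustrative assumptions.

```python
import numpy as np

def greedy_dpp_select(quality, similarity, k):
    """Greedy MAP-style selection for a DPP with kernel L = diag(q) S diag(q).

    quality   : (n,) positive per-box quality scores (hypothetical inputs)
    similarity: (n, n) symmetric similarity matrix in [0, 1] with unit diagonal
    k         : maximum number of boxes to keep
    """
    n = len(quality)
    L = np.outer(quality, quality) * similarity          # DPP kernel
    selected, remaining = [], list(range(n))
    for _ in range(min(k, n)):
        best, best_gain = None, -np.inf
        for i in remaining:
            idx = selected + [i]
            sub = L[np.ix_(idx, idx)] + 1e-9 * np.eye(len(idx))  # numerical jitter
            gain = np.linalg.slogdet(sub)[1]             # log-det of the sub-kernel
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
        remaining.remove(best)
    return selected

# toy usage: three boxes, two of them nearly duplicates
q = np.array([0.9, 0.85, 0.3])
S = np.array([[1.0, 0.95, 0.1],
              [0.95, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
print(greedy_dpp_select(q, S, k=2))   # picks one of the duplicates plus the distinct box
```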
Video activity localisation has recently attracted increasing attention due to
its practical value in automatically localising the most salient visual
segments corresponding to their language descriptions (sentences) from
untrimmed and unstructured videos. For supervised model training, a temporal
annotation of both the start and end time index of each video segment for a
sentence (a video moment) must be given. This is not only very expensive but
also sensitive to ambiguity and subjective annotation bias, a much harder task
than image labelling. In this work, we develop a more accurate
weakly-supervised solution by introducing Cross-Sentence Relations Mining (CRM)
in video moment proposal generation and matching when only a paragraph
description of activities without per-sentence temporal annotation is
available. Specifically, we explore two cross-sentence relational constraints:
(1) Temporal ordering and (2) semantic consistency among sentences in a
paragraph description of video activities. Existing weakly-supervised
techniques only consider within-sentence video segment correlations in training
without considering cross-sentence paragraph context. This can be misleading,
as individual sentences are often ambiguously expressed and, in isolation,
match visually indistinguishable video moment proposals. Experiments on two publicly available
activity localisation datasets show the advantages of our approach over the
state-of-the-art weakly supervised methods, especially so when the video
activity descriptions become more complex. | [
"cs.CV"
] |
Historically, recurrent neural networks (RNNs) and their variants such as LSTM
and GRU, and more recently Transformers, have been the standard go-to components
when processing sequential data with neural networks. One notable issue is the
relative difficulty of dealing with long sequences (i.e., more than 20,000 steps).
We introduce IGLOO, a new neural network architecture which aims at being
efficient for short sequences while also being able to deal with long
sequences. IGLOO's core idea is to use the relationships between non-local
patches sliced out of the feature maps of successively applied convolutions to
build a representation for the sequence. We show that the model can deal with
dependencies of more than 20,000 steps in a reasonable time frame. We stress
test IGLOO on the copy-memory and addition tasks, as well as permuted MNIST
(98.4%). For a larger task, we apply this new structure to the Wikitext-2
dataset (Merity et al., 2017b) and achieve a perplexity in line with baseline
Transformers but lower than baseline AWD-LSTM. We also present how IGLOO is
already used today in production for bioinformatics tasks. | [
"cs.LG",
"stat.ML"
] |
Many approaches have been proposed for early classification of time series in
light of its significance in a wide range of applications including healthcare,
transportation and finance. However, a preprint recently posted on arXiv claims
that all research done over almost 20 years on the early classification of
time series is useless, or, at the very least, ill-oriented because it severely
lacks a strong foundation. In this paper, we answer in detail the main issues and
misunderstandings raised by the authors of the preprint, and propose directions
to further expand the fields of application of early classification of time
series. | [
"cs.LG"
] |
Modern CNN-based object detectors focus on feature configuration during
training but often ignore feature optimization during inference. In this paper,
we propose a new feature optimization approach to enhance features and suppress
background noise in both the training and inference stages. We introduce a
generic Inference-aware Feature Filtering (IFF) module that can easily be
combined with modern detectors, resulting in our iffDetector. Unlike
conventional open-loop feature calculation approaches without feedback, the IFF
module performs closed-loop optimization by leveraging high-level semantics to
enhance the convolutional features. By applying Fourier transform analysis, we
demonstrate that the IFF module acts as negative feedback that theoretically
guarantees the stability of feature learning. IFF can be fused with CNN-based
object detectors in a plug-and-play manner with negligible computational cost
overhead. Experiments on the PASCAL VOC and MS COCO datasets demonstrate that
our iffDetector consistently outperforms state-of-the-art methods by
significant margins\footnote{The test code and model are anonymously available
at https://github.com/anonymous2020new/iffDetector}. | [
"cs.CV"
] |
Existing color-guided depth super-resolution (DSR) approaches require paired
RGB-D data as training samples where the RGB image is used as structural
guidance to recover the degraded depth map due to their geometrical similarity.
However, the paired data may be limited or expensive to collect in the actual
testing environment. Therefore, we explore, for the first time, learning the
cross-modality knowledge at the training stage, where both RGB and depth modalities
are available, but test on the target dataset, where only single depth modality
exists. Our key idea is to distill the knowledge of scene structural guidance
from RGB modality to the single DSR task without changing its network
architecture. Specifically, we construct an auxiliary depth estimation (DE)
task that takes an RGB image as input to estimate a depth map, and train both
DSR task and DE task collaboratively to boost the performance of DSR. On top of
this, a cross-task interaction module is proposed to realize bilateral
cross-task knowledge transfer. First, we design a cross-task distillation scheme that
encourages the DSR and DE networks to learn from each other in a teacher-student
role-exchanging fashion. Then, we introduce a structure prediction (SP) task that
provides extra structure regularization to help both DSR and DE networks learn
more informative structure representations for depth recovery. Extensive
experiments demonstrate that our scheme achieves superior performance in
comparison with other DSR methods. | [
"cs.CV"
] |
In this paper, we explore the unsupervised learning of a semantic embedding
space for co-occurring sensory inputs. Specifically, we focus on the task of
learning a semantic vector space for both spoken and handwritten digits using
the TIDIGITs and MNIST datasets. Current techniques encode image and
audio/textual inputs directly to semantic embeddings. In contrast, our
technique maps an input to the mean and log variance vectors of a diagonal
Gaussian from which sample semantic embeddings are drawn. In addition to
encouraging semantic similarity between co-occurring inputs, our loss function
includes a regularization term borrowed from variational autoencoders (VAEs)
which drives the posterior distributions over embeddings to be unit Gaussian.
We can use this regularization term to filter out modality information while
preserving semantic information. We speculate this technique may be more
broadly applicable to other areas of cross-modality/domain information
retrieval and transfer learning. | [
"cs.LG",
"cs.CL",
"cs.CV"
] |
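As a hedged illustration of the loss structure sketched in the preceding abstract, the snippet below combines a similarity term between co-occurring embeddings with the standard VAE KL regularizer towards a unit Gaussian. The MSE similarity term, the weighting `beta`, and the tensor shapes are assumptions for illustration; the paper's exact objective and encoders may differ.

```python
import torch
import torch.nn.functional as F

def embed(mu, logvar):
    """Reparameterized sample from the diagonal Gaussian N(mu, exp(logvar))."""
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

def cross_modal_loss(img_mu, img_logvar, aud_mu, aud_logvar, beta=1.0):
    """Semantic-similarity term for co-occurring pairs plus a VAE-style KL
    regularizer pushing each posterior towards the unit Gaussian.
    All inputs are (batch, dim); the encoders producing them are assumed."""
    z_img = embed(img_mu, img_logvar)
    z_aud = embed(aud_mu, aud_logvar)
    # encourage co-occurring (same-row) image and audio embeddings to be close
    sim_loss = F.mse_loss(z_img, z_aud)
    # KL( N(mu, sigma^2) || N(0, I) ), summed over dims, averaged over the batch
    kl = lambda mu, logvar: (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)).mean()
    return sim_loss + beta * (kl(img_mu, img_logvar) + kl(aud_mu, aud_logvar))
```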
There has been a significant increase from 2010 to 2016 in the number of
people suffering from spine problems. The automatic image segmentation of the
spine obtained from a computed tomography (CT) image is important for
diagnosing spine conditions and for performing surgery with computer-assisted
surgery systems. The spine has a complex anatomy that consists of 33 vertebrae,
23 intervertebral disks, the spinal cord, and connecting ribs. As a result, the
spinal surgeon is faced with the challenge of needing a robust algorithm to
segment and create a model of the spine. In this study, we developed an
automatic segmentation method to segment the spine, and we compared our
segmentation results with reference segmentations obtained by experts. We
developed a fully automatic approach for spine segmentation from CT based on a
hybrid method. This method combines the convolutional neural network (CNN) and
fully convolutional network (FCN), and utilizes class redundancy as a soft
constraint to greatly improve the segmentation results. The proposed method was
found to significantly enhance the accuracy of the segmentation results and the
system processing time. Our comparison was based on 12 measurements: the Dice
coefficient (94%), Jaccard index (93%), volumetric similarity (96%),
sensitivity (97%), specificity (99%), precision (over-segmentation 8.3 and
under-segmentation 2.6), accuracy (99%), Matthews correlation coefficient
(0.93), mean surface distance (0.16 mm), Hausdorff distance (7.4 mm), and
global consistency error (0.02). We experimented with CT images from 32
patients, and the experimental results demonstrated the efficiency of the
proposed method. | [
"cs.CV"
] |
With the rapid growth of video data and the increasing demands of various
applications such as intelligent video search and assistance toward
visually-impaired people, video captioning task has received a lot of attention
recently in computer vision and natural language processing fields. The
state-of-the-art video captioning methods focus more on encoding the temporal
information, while lacking effective ways to remove irrelevant temporal
information and also neglecting the spatial details. However, the current RNN
encoding module operating in a single time order can be influenced by
irrelevant temporal information, especially when that irrelevant information
occurs at the beginning of the encoding. In addition, neglecting spatial information
leads to confusion about the relationships between words and a loss of detail. Therefore,
in this paper, we propose a novel recurrent video encoding method and a novel
visual spatial feature for the video captioning task. The recurrent encoding
module encodes the video twice with the predicted key frame to avoid the
irrelevant temporal information often occurring at the beginning and the end of
a video. The novel spatial features represent the spatial information in
different regions of a video and enrich the details of a caption. Experiments
on two benchmark datasets show superior performance of the proposed method. | [
"cs.CV"
] |
Conventionally, model-based reinforcement learning (MBRL) aims to learn a
global model for the dynamics of the environment. A good model can potentially
enable planning algorithms to generate a large variety of behaviors and solve
diverse tasks. However, learning an accurate model for complex dynamical
systems is difficult, and even then, the model might not generalize well
outside the distribution of states on which it was trained. In this work, we
combine model-based learning with model-free learning of primitives that make
model-based planning easy. To that end, we aim to answer the question: how can
we discover skills whose outcomes are easy to predict? We propose an
unsupervised learning algorithm, Dynamics-Aware Discovery of Skills (DADS),
which simultaneously discovers predictable behaviors and learns their dynamics.
Our method can leverage continuous skill spaces, theoretically, allowing us to
learn infinitely many behaviors even for high-dimensional state-spaces. We
demonstrate that zero-shot planning in the learned latent space significantly
outperforms standard MBRL and model-free goal-conditioned RL, can handle
sparse-reward tasks, and substantially improves over prior hierarchical RL
methods for unsupervised skill discovery. | [
"cs.LG",
"cs.RO",
"stat.ML"
] |
Although deep convolutional networks have been widely studied for head and
neck (HN) organs at risk (OAR) segmentation, their use for routine clinical
treatment planning is limited by a lack of robustness to imaging artifacts, low
soft tissue contrast on CT, and the presence of abnormal anatomy. In order to
address these challenges, we developed a computationally efficient nested block
self-attention (NBSA) method that can be combined with any convolutional
network. Our method achieves computational efficiency by performing non-local
calculations within memory blocks of fixed spatial extent. Contextual
dependencies are captured by passing information in a raster scan order between
blocks, as well as through a second attention layer that causes bi-directional
attention flow. We implemented our approach on three different networks to
demonstrate feasibility. Following training using 200 cases, we performed
comprehensive evaluations using conventional and clinical metrics on a separate
set of 172 test scans sourced from external and internal institution datasets
without any exclusion criteria. NBSA required a similar number of computations
(15.7 GFLOPs) as the most efficient criss-cross attention (CCA) method and
generated significantly more accurate segmentations for brain stem (Dice of
0.89 vs. 0.86) and parotid glands (0.86 vs. 0.84) than CCA. NBSA's
segmentations were less variable than multiple 3D methods, including for small
organs with low soft-tissue contrast such as the submandibular glands (surface
Dice of 0.90). | [
"cs.CV"
] |
The prevalence of relation networks in computer vision is in stark contrast
to underexplored point-based methods. In this paper, we explore the
possibilities of local relation operators and survey their feasibility. We
propose a scalable and efficient module, called group relation aggregator. The
module computes a feature of a group based on the aggregation of the features
of the inner-group points weighted by geometric relations and semantic
relations. We adopt this module to design our RPNet. We further verify the
expandability of RPNet, in terms of both depth and width, on the tasks of
classification and segmentation. Surprisingly, empirical results show that a
wider RPNet suits classification, while a deeper RPNet works better on
segmentation. RPNet achieves state-of-the-art results for classification and
segmentation on challenging benchmarks. We also compare our local aggregator
with PointNet++, achieving around 30% savings in parameters and 50% in computation.
Finally, we conduct experiments to reveal the robustness of RPNet with regard
to rigid transformations and noise. | [
"cs.CV",
"cs.AI",
"cs.GR",
"cs.LG",
"cs.RO"
] |
In this paper, we aim at improving the computational efficiency of graph
convolutional networks (GCNs) for learning on point clouds. The basic graph
convolution that is typically composed of a $K$-nearest neighbor (KNN) search
and a multilayer perceptron (MLP) is examined. By mathematically analyzing
these operations, we obtain two findings for improving the efficiency of GCNs.
(1) The local geometric structure information of 3D representations propagates
smoothly across the GCN that relies on KNN search to gather neighborhood
features. This motivates the simplification of multiple KNN searches in GCNs.
(2) Shuffling the order of graph feature gathering and an MLP leads to
equivalent or similar composite operations. Based on those findings, we
optimize the computational procedure in GCNs. A series of experiments show that
the optimized networks have reduced computational complexity, decreased memory
consumption, and accelerated inference speed while maintaining comparable
accuracy for learning on point clouds. Code will be available at
\url{https://github.com/ofsoundof/EfficientGCN.git}. | [
"cs.CV"
] |
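Finding (2) in the preceding abstract, that graph feature gathering and a shared point-wise MLP can be reordered, can be checked numerically in a simplified setting. The sketch below assumes mean aggregation over KNN neighbors and a single shared linear layer; with a nonlinearity or max pooling, the two orderings are only approximately similar rather than exactly equivalent.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, c_in, c_out = 100, 8, 16, 32
x = rng.normal(size=(n, c_in))                      # per-point features
knn = rng.integers(0, n, size=(n, k))               # hypothetical KNN indices
W = rng.normal(size=(c_in, c_out))                  # shared point-wise "MLP" (one linear layer)

# ordering A: gather neighbor features, apply the layer to all n*k neighbor
# features, then mean-pool over each neighborhood
a = (x[knn] @ W).mean(axis=1)

# ordering B: apply the layer once per point (n applications), then gather
# the transformed features and mean-pool
b = (x @ W)[knn].mean(axis=1)

print(np.allclose(a, b))                            # True: the composite operations match
```

Ordering B applies the layer once per point instead of once per neighbor, which is where the computational saving comes from.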
Poultry farms are a major contributor to the human food chain. However,
around the world, there have been growing concerns about the quality of life
of the livestock in poultry farms, and increasingly vocal demands for improved
standards of animal welfare. Recent advances in sensing technologies and
machine learning allow the possibility of monitoring birds, and employing the
lessons learned to improve the welfare for all birds. This task superficially
appears to be easy, yet, studying behavioral patterns involves collecting
enormous amounts of data, justifying the term Big Data. Before the big data can
be used for analytical purposes to tease out meaningful, well-conserved
behavioral patterns, the collected data needs to be pre-processed.
Pre-processing refers to cleansing and preparing the data so that it
is in a format ready to be analyzed by downstream algorithms, such as
classification and clustering algorithms. However, as we shall demonstrate,
efficient pre-processing of chicken big data is both non-trivial and crucial
towards success of further analytics. | [
"cs.LG"
] |
Image-only and pseudo-LiDAR representations are commonly used for monocular
3D object detection. However, methods based on them either do not capture well
the spatial relationships between neighboring image pixels or
struggle to handle the noisy nature of the monocular pseudo-LiDAR point
cloud. To overcome these issues, in this paper we propose a novel
object-centric voxel representation tailored for monocular 3D object detection.
Specifically, voxels are built on each object proposal, and their sizes are
adaptively determined by the 3D spatial distribution of the points, allowing
the noisy point cloud to be organized effectively within a voxel grid. This
representation is shown to be able to locate the object in 3D space
accurately. Furthermore, prior works estimate the orientation via
deep features extracted from an entire image or a noisy point cloud. By
contrast, we argue that the local RoI information from the object image patch
alone, with a proper resizing scheme, is a better input, as it provides complete
semantic cues while excluding irrelevant interference. In addition, we
decompose the confidence mechanism in monocular 3D object detection by
considering the relationship between 3D objects and the associated 2D boxes.
Evaluated on KITTI, our method outperforms state-of-the-art methods by a large
margin. The code will be made publicly available soon. | [
"cs.CV"
] |
Retinex theory is developed mainly to decompose an image into the
illumination and reflectance components by analyzing local image derivatives.
In this theory, larger derivatives are attributed to changes in
reflectance, while smaller derivatives emerge from the smooth illumination.
In this paper, we utilize exponentiated local derivatives (with an exponent
{\gamma}) of an observed image to generate its structure map and texture map.
The structure map is produced by amplifying the derivatives with {\gamma} > 1, while the
texture map is generated by shrinking them with {\gamma} < 1. To this end, we
design exponential filters for the local derivatives, and present their
capability of extracting accurate structure and texture maps, as influenced by the
choice of the exponent {\gamma}. The extracted structure and texture maps are
employed to regularize the illumination and reflectance components in Retinex
decomposition. A novel Structure and Texture Aware Retinex (STAR) model is
further proposed for illumination and reflectance decomposition of a single
image. We solve the STAR model by an alternating optimization algorithm. Each
sub-problem is transformed into a vectorized least squares regression, with
closed-form solutions. Comprehensive experiments on commonly tested datasets
demonstrate that the proposed STAR model produces better quantitative and
qualitative performance than previous competing methods on illumination and
reflectance decomposition, low-light image enhancement, and color correction.
The code is publicly available at https://github.com/csjunxu/STAR. | [
"cs.CV"
] |
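A rough sketch of the exponentiated-derivative idea from the preceding abstract is given below, using plain finite differences and example exponents. The paper's actual exponential filters and normalization are not reproduced here; the toy image and the chosen gamma values are illustrative assumptions.

```python
import numpy as np

def exponentiated_derivative_map(img, gamma, eps=1e-4):
    """Magnitude of local derivatives raised to an exponent gamma.
    gamma > 1 amplifies large derivatives (structure map),
    gamma < 1 shrinks them so fine texture stands out (texture map)."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.sqrt(gx ** 2 + gy ** 2) + eps
    return mag ** gamma

# toy image: smooth illumination ramp plus a sharp edge
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
img[:, 32:] += 0.5
structure = exponentiated_derivative_map(img, gamma=1.5)   # emphasizes the sharp edge
texture   = exponentiated_derivative_map(img, gamma=0.5)   # relatively boosts small derivatives
```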
Wood-composite materials are widely used today as they homogenize
humidity-related directional deformations. Quantification of these deformations as
coefficients is important for construction and engineering and a topic of current
research, but it is still a manual process.
This work introduces a novel computer vision approach that automatically
extracts these properties directly from scans of the wooden specimens, taken at
different humidity levels during the long-lasting humidity conditioning
process. These scans are used to compute a humidity-dependent deformation field
for each pixel, from which the desired coefficients can easily be calculated.
The overall method includes automated registration of the wooden blocks,
numerical optimization to compute a variational optical flow field, which is
further used to calculate dense strain fields and, finally, the engineering
coefficients and their variance throughout the wooden blocks. The method's
regularization is fully parameterizable, which allows modeling and suppressing
artifacts due to surface appearance changes of the specimens (from mold, cracks,
etc.) that typically arise in the conditioning process. | [
"cs.CV"
] |
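The step from a dense deformation (optical flow) field to strain values can be illustrated with the standard small-strain tensor, epsilon = 0.5 * (grad u + grad u^T). This is a generic sketch under that assumption, not the paper's exact pipeline; the toy 1% stretch field is made up for illustration.

```python
import numpy as np

def strain_fields(u, v):
    """Small-strain components from a dense displacement field (u, v),
    where u is the x-displacement and v the y-displacement per pixel."""
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)
    exx = du_dx                      # normal strain along x
    eyy = dv_dy                      # normal strain along y
    exy = 0.5 * (du_dy + dv_dx)      # shear strain
    return exx, eyy, exy

# toy field: uniform 1% stretch along x, no deformation along y
h, w = 128, 128
xs = np.arange(w, dtype=np.float64)
u = np.tile(0.01 * xs, (h, 1))
v = np.zeros((h, w))
exx, eyy, exy = strain_fields(u, v)
print(exx.mean())   # ~0.01, the imposed engineering strain
```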
Graph representation learning for hypergraphs can be used to extract patterns
among higher-order interactions that are critically important in many real
world problems. Current approaches designed for hypergraphs, however, are
unable to handle different types of hypergraphs and are typically not generic
for various learning tasks. Indeed, models that can predict variable-sized
heterogeneous hyperedges have not been available. Here we develop a new
self-attention based graph neural network called Hyper-SAGNN applicable to
homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. We
perform extensive evaluations on multiple datasets, including four benchmark
network datasets and two single-cell Hi-C datasets in genomics. We demonstrate
that Hyper-SAGNN significantly outperforms the state-of-the-art methods on
traditional tasks while also achieving great performance on a new task called
outsider identification. Hyper-SAGNN will be useful for graph representation
learning to uncover complex higher-order interactions in different
applications. | [
"cs.LG",
"stat.ML"
] |
A major challenge in brain tumor treatment planning and quantitative
evaluation is determination of the tumor extent. The noninvasive magnetic
resonance imaging (MRI) technique has emerged as a front-line diagnostic tool
for brain tumors without ionizing radiation. Manual segmentation of brain tumor
extent from 3D MRI volumes is a very time-consuming task, and the performance
relies heavily on the operator's experience. In this context, a reliable, fully
automatic method for brain tumor segmentation is necessary for
an efficient measurement of the tumor extent. In this study, we propose a fully
automatic method for brain tumor segmentation, which is developed using U-Net
based deep convolutional networks. Our method was evaluated on Multimodal Brain
Tumor Image Segmentation (BRATS 2015) datasets, which contain 220 high-grade
brain tumor and 54 low-grade tumor cases. Cross-validation has shown that our
method can obtain promising segmentation results efficiently. | [
"cs.CV"
] |
Scale variance is one of the crucial challenges in multi-scale object
detection. Early approaches address this problem by exploiting the image and
feature pyramid, which yields suboptimal results with a heavy computational burden and
constraints from the inherent network structures. Pioneering works also propose
multi-scale (i.e., multi-level and multi-branch) feature fusions to remedy the
issue and have achieved encouraging progress. However, existing fusions still
have certain limitations such as feature scale inconsistency, ignorance of
level-wise semantic transformation, and coarse granularity. In this work, we
present a novel module, the Fluff block, to alleviate drawbacks of current
multi-scale fusion methods and facilitate multi-scale object detection.
Specifically, Fluff leverages both multi-level and multi-branch schemes with
dilated convolutions to have rapid, effective and finer-grained feature
fusions. Furthermore, we integrate Fluff into SSD as FluffNet, a powerful
real-time single-stage detector for multi-scale object detection. Empirical
results on MS COCO and PASCAL VOC have demonstrated that FluffNet obtains
remarkable efficiency with state-of-the-art accuracy. Additionally, we demonstrate
the great generality of the Fluff block by showing how to embed it into other
widely-used detectors as well. | [
"cs.CV",
"cs.AI"
] |
In this paper, we propose a multiple-domain model for producing a custom-size
furniture layout in the interior scene. This model aims to help
professional interior designers produce interior decoration solutions with
custom-size furniture more quickly. The proposed model combines a deep layout
module, a domain attention module, a dimensional domain transfer module, and a
custom-size module in end-to-end training. Compared with the prior work on
scene synthesis, our proposed model enhances the ability of auto-layout of
custom-size furniture in the interior room. We conduct our experiments on a
real-world interior layout dataset that contains $710,700$ designs from
professional designers. Our numerical results demonstrate that the proposed
model yields higher-quality layouts of custom-size furniture in comparison with
the state-of-the-art model. | [
"cs.CV"
] |
This paper surveys state-of-the-art methods and models dedicated to time
series analysis and modeling, with the final aim of prediction. This review
aims to offer a structured and comprehensive view of the full process flow, and
encompasses time series decomposition, stationary tests, modeling and
forecasting. In addition, for didactic purposes, a unified presentation has
been adopted throughout this survey, to present decomposition frameworks on the
one hand and linear and nonlinear time series models on the other hand. First,
we decrypt the relationships between stationarity and linearity, and further
examine the main classes of methods used to test for weak stationarity. Next,
the main frameworks for time series decomposition are presented in a unified
way: depending on the time series, a more or less complex decomposition scheme
seeks to obtain nonstationary effects (the deterministic components) and a
remaining stochastic component. An appropriate modeling of the latter is a
critical step to guarantee prediction accuracy. We then present three popular
linear models, together with two more flexible variants of the latter. A step
further in model complexity, and still in a unified way, we present five major
nonlinear models used for time series. Amongst nonlinear models, artificial
neural networks hold a place apart as deep learning has recently gained
considerable attention. A whole section is therefore dedicated to time series
forecasting relying on deep learning approaches. A final section provides a
list of R and Python implementations for the methods, models and tests
presented throughout this review. In this document, our intention is to bring
sufficient in-depth knowledge, while covering a broad range of models and
forecasting methods: this compilation spans from well-established conventional
approaches to more recent adaptations of deep learning to time series
forecasting. | [
"cs.LG",
"cs.AI",
"68Txx",
"I.2.6"
] |
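As a concrete instance of the decomposition step described above, the sketch below performs a crude additive decomposition into trend, seasonal, and stochastic remainder components. It assumes an additive model with a known period and uses a plain moving average for the trend; the survey itself covers considerably more refined schemes.

```python
import numpy as np

def classical_additive_decomposition(y, period):
    """Simple additive decomposition y = trend + seasonal + remainder,
    using a crude centered moving average for the trend and per-phase means
    of the detrended series for the seasonal component (period assumed known)."""
    y = np.asarray(y, dtype=np.float64)
    kernel = np.ones(period) / period
    trend = np.convolve(y, kernel, mode="same")            # crude moving-average trend
    detrended = y - trend
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(seasonal, len(y) // period + 1)[: len(y)]
    remainder = y - trend - seasonal                        # stochastic component to be modeled
    return trend, seasonal, remainder

# toy series: linear trend + yearly cycle + noise (monthly data, period 12)
t = np.arange(240)
y = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12) + np.random.default_rng(0).normal(0, 0.3, 240)
trend, seasonal, remainder = classical_additive_decomposition(y, period=12)
```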
Instance segmentation in point clouds is one of the most fine-grained ways to
understand the 3D scene. Due to its close relationship to semantic
segmentation, many works approach these two tasks simultaneously and leverage
the benefits of multi-task learning. However, most of them only considered
simple strategies such as element-wise feature fusion, which may not lead to
mutual promotion. In this work, we build a Bi-Directional Attention module on
backbone neural networks for 3D point cloud perception, which uses similarity
matrix measured from features for one task to help aggregate non-local
information for the other task, avoiding the potential feature exclusion and
task conflict. From comprehensive experiments and ablation studies on the S3DIS
dataset and the PartNet dataset, the superiority of our method is verified.
Moreover, the mechanism of how bi-directional attention module helps joint
instance and semantic segmentation is also analyzed. | [
"cs.CV"
] |
We present a novel Neural Embedding Spatio-Temporal (NEST) point process
model for spatio-temporal discrete event data and develop an efficient
imitation learning (a type of reinforcement learning) based approach for model
fitting. Despite the rapid development of one-dimensional temporal point
processes for discrete event data, the study of spatial-temporal aspects of
such data is relatively scarce. Our model captures complex spatio-temporal
dependence between discrete events through a carefully designed mixture of
heterogeneous Gaussian diffusion kernels, whose parameters are modeled by
neural networks. This new kernel is the key to our model's ability to capture
intricate spatial dependence patterns and yet still lead to interpretable
results, as we examine maps of the Gaussian diffusion kernel parameters. The
imitation learning model fitting for NEST is more robust than the maximum
likelihood estimate: it directly measures the divergence between the empirical
distributions of the training data and the model-generated data. Moreover,
our imitation learning-based approach enjoys computational efficiency due to
the explicit characterization of the reward function related to the likelihood
function; furthermore, the likelihood function under our model enjoys tractable
expression due to Gaussian kernel parameterization. Experiments based on real
data show our method's good performance relative to the state-of-the-art and
the good interpretability of NEST's results. | [
"cs.LG",
"stat.AP",
"stat.ML"
] |
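To make the ingredients of such a model concrete, the sketch below evaluates a spatio-temporal intensity built from a single isotropic Gaussian diffusion kernel with hand-picked parameters (`alpha`, `beta`, `sigma` are illustrative assumptions). In NEST the kernels form a heterogeneous mixture whose parameters come from neural networks, so this is only a simplified, hedged illustration of the kernel family, not the paper's parameterization.

```python
import numpy as np

def diffusion_kernel(t, s, t_i, s_i, alpha=0.5, beta=1.0, sigma=0.3):
    """Influence of a past event (t_i, s_i) on location s at time t, modeled as
    an exponentially decaying, spatially diffusing Gaussian (a single component;
    in NEST the parameters would come from neural networks)."""
    dt = t - t_i
    if dt <= 0:
        return 0.0
    var = (sigma ** 2) * dt                                  # spatial variance grows with elapsed time
    sq_dist = np.sum((np.asarray(s) - np.asarray(s_i)) ** 2)
    return alpha * beta * np.exp(-beta * dt) * np.exp(-sq_dist / (2 * var)) / (2 * np.pi * var)

def intensity(t, s, history, mu=0.1):
    """Conditional intensity = background rate + sum of kernel contributions
    from all past events in `history` (a list of (t_i, (x_i, y_i)) tuples)."""
    return mu + sum(diffusion_kernel(t, s, t_i, s_i) for t_i, s_i in history)

history = [(0.0, (0.0, 0.0)), (1.0, (0.5, 0.5))]
print(intensity(2.0, (0.4, 0.4), history))
```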
Nowadays, full face synthesis and partial face manipulation by virtue of
generative adversarial networks (GANs) have raised wide public concern. In the
multimedia forensics area, detecting and ultimately locating image forgery
has become imperative. We investigated the architecture of existing GAN-based
face manipulation methods and observed that the imperfection of the upsampling
methods therein could serve as an important asset for GAN-synthesized
fake image detection and forgery localization. Based on this basic
observation, we have proposed a novel approach to obtain high localization
accuracy, at full resolution, on manipulated facial images. To the best of our
knowledge, this is the very first attempt to solve the GAN-based fake
localization problem with a gray-scale fakeness prediction map that preserves
more information of fake regions. To improve the universality of FakeLocator
across multifarious facial attributes, we introduce an attention mechanism to
guide the training of the model. Experimental results on the CelebA and FFHQ
databases with seven different state-of-the-art GAN-based face generation
methods show the effectiveness of our method. Compared with the baseline, our
method performs two times better on various metrics. Moreover, the proposed
method is robust against various real-world facial image degradations such as
JPEG compression, low-resolution, noise, and blur. | [
"cs.CV",
"cs.LG"
] |
The outbreak of the novel coronavirus (COVID-19) is unfolding as a major
international crisis whose influence extends to every aspect of our daily
lives. Effective testing allows infected individuals to be quarantined, thus
reducing the spread of COVID-19, saving countless lives, and helping to restart
the economy safely and securely. Developing a good testing strategy can be
greatly aided by contact tracing that provides health care providers
information about the whereabouts of infected patients in order to determine
whom to test. Countries that have been more successful in corralling the virus
typically use a ``test, treat, trace, test'' strategy that begins with testing
individuals with symptoms, traces contacts of positively tested individuals via
a combination of patient memory, apps, WiFi, GPS, etc., followed by testing
their contacts, and repeating this procedure. The problem is that such
strategies are myopic and do not efficiently use the testing resources. This is
especially the case with COVID-19, where symptoms may show up several days
after the infection (or not at all; there is evidence to suggest that many
COVID-19 carriers are asymptomatic but may still spread the virus). Such greedy
strategies miss population areas where the virus may be dormant and flare
up in the future.
In this paper, we show that the testing problem can be cast as a sequential
learning-based resource allocation problem with constraints, where the input to
the problem is provided by a time-varying social contact graph obtained through
various contact tracing tools. We then develop efficient learning strategies
that minimize the number of infected individuals. These strategies are based on
policy iteration and look-ahead rules. We investigate fundamental performance
bounds, and ensure that our solution is robust to errors in the input graph as
well as in the tests themselves. | [
"cs.LG",
"math.OC",
"stat.ML"
] |
Learning inter-domain mappings from unpaired data can improve performance in
structured prediction tasks, such as image segmentation, by reducing the need
for paired data. CycleGAN was recently proposed for this problem, but
critically assumes the underlying inter-domain mapping is approximately
deterministic and one-to-one. This assumption renders the model ineffective for
tasks requiring flexible, many-to-many mappings. We propose a new model, called
Augmented CycleGAN, which learns many-to-many mappings between domains. We
examine Augmented CycleGAN qualitatively and quantitatively on several image
datasets. | [
"cs.LG"
] |
We present a model for the joint estimation of disparity and motion. The
model is based on learning about the interrelations between images from
multiple cameras, multiple frames in a video, or the combination of both. We
show that learning depth and motion cues, as well as their combinations, from
data is possible within a single type of architecture and a single type of
learning algorithm, by using biologically inspired "complex cell"-like units,
which encode correlations between the pixels across image pairs. Our
experimental results show that the learning of depth and motion makes it
possible to achieve state-of-the-art performance in 3-D activity analysis, and
to outperform existing hand-engineered 3-D motion features by a very large
margin. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
Person re-identification in a multi-camera environment is an important part
of modern surveillance systems. Person re-identification from color images has
been the focus of much active research, due to the numerous challenges posed
by such analysis tasks, such as variations in illumination, pose and
viewpoints. In this paper, we suggest that hyperspectral imagery has the
potential to provide unique information that is expected to be beneficial for
the re-identification task. Specifically, we assert that by accurately
characterizing the unique spectral signature for each person's skin,
hyperspectral imagery can provide very useful descriptors (e.g. spectral
signatures from skin pixels) for re-identification. Towards this end, we
acquired proof-of-concept hyperspectral re-identification data under
challenging (practical) conditions from 15 people. Our results indicate that
hyperspectral data result in a substantially enhanced re-identification
performance compared to color (RGB) images, when using spectral signatures over
skin as the feature descriptor. | [
"cs.CV"
] |
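One simple way to compare skin spectral signatures, in the spirit of the preceding abstract, is the spectral angle between mean skin spectra. The sketch below is a generic, hedged illustration; the paper's actual descriptor and matching rule are not specified here, and the helper names are hypothetical.

```python
import numpy as np

def spectral_angle(a, b, eps=1e-12):
    """Angle (radians) between two spectral signatures; small angle = similar spectra."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def skin_signature(hsi_cube, skin_mask):
    """Mean spectrum over skin pixels of a hyperspectral cube of shape (H, W, bands)."""
    return hsi_cube[skin_mask].mean(axis=0)

def match_identity(query_sig, gallery_sigs):
    """Return the index of the gallery signature spectrally closest to the query."""
    angles = [spectral_angle(query_sig, g) for g in gallery_sigs]
    return int(np.argmin(angles))
```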
Event cameras, which are asynchronous bio-inspired vision sensors, have shown
great potential in computer vision and artificial intelligence. However, the
application of event cameras to object-level motion estimation or tracking is
still in its infancy. The main idea behind this work is to propose a novel deep
neural network to learn and regress a parametric object-level motion/transform
model for event-based object tracking. To achieve this goal, we propose a
synchronous Time-Surface with Linear Time Decay (TSLTD) representation, which
effectively encodes the spatio-temporal information of asynchronous retinal
events into TSLTD frames with clear motion patterns. We feed the sequence of
TSLTD frames to a novel Retinal Motion Regression Network (RMRNet) to perform
an end-to-end 5-DoF object motion regression. Our method is compared with
state-of-the-art object tracking methods that are based on conventional
cameras or event cameras. The experimental results show the superiority of our
method in handling various challenging environments such as fast motion and low
illumination conditions. | [
"cs.CV"
] |
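A minimal, hedged sketch of a time surface with linear time decay is given below, assuming events arrive as `(x, y, t, polarity)` tuples and using a fixed temporal window. Details such as polarity handling, normalization, and frame rate in the paper's TSLTD representation may differ.

```python
import numpy as np

def tsltd_frame(events, height, width, t_ref, window):
    """Build a 2-channel time surface with linear time decay.
    Each event (x, y, t, p) writes max(0, 1 - (t_ref - t) / window) at (y, x)
    in the channel for its polarity, so more recent events appear brighter."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    for x, y, t, p in events:
        if t > t_ref or t_ref - t > window:
            continue                                  # outside the temporal window
        decay = 1.0 - (t_ref - t) / window            # linear decay with event age
        ch = 1 if p > 0 else 0
        frame[ch, y, x] = max(frame[ch, y, x], decay) # keep the most recent (brightest) value
    return frame

# toy usage: three events in a 4x4 sensor, 10 ms window ending at t_ref = 10 ms
events = [(1, 1, 2e-3, 1), (2, 3, 9e-3, -1), (0, 0, 9.5e-3, 1)]
frame = tsltd_frame(events, height=4, width=4, t_ref=10e-3, window=10e-3)
```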
Modern vehicles can be thought of as complex distributed embedded systems
that run a variety of automotive applications with real-time constraints.
Recent advances in the automotive industry towards greater autonomy are driving
vehicles to be increasingly connected with various external systems (e.g.,
roadside beacons, other vehicles), which makes emerging vehicles highly
vulnerable to cyber-attacks. Additionally, the increased complexity of
automotive applications and the in-vehicle networks results in poor attack
visibility, which makes detecting such attacks particularly challenging in
automotive systems. In this work, we present a novel anomaly detection
framework called LATTE to detect cyber-attacks in Controller Area Network (CAN)
based networks within automotive platforms. Our proposed LATTE framework uses a
stacked Long Short Term Memory (LSTM) predictor network with novel attention
mechanisms to learn the normal operating behavior at design time. Subsequently,
a novel detection scheme (also trained at design time) is used to detect
various cyber-attacks (as anomalies) at runtime. We evaluate our proposed LATTE
framework under different automotive attack scenarios and present a detailed
comparison with the best-known prior works in this area, to demonstrate the
potential of our approach. | [
"cs.LG",
"cs.DC",
"cs.SY",
"eess.SY"
] |
High-definition map (HD map) construction is a crucial problem for autonomous
driving. This problem typically involves collecting high-quality point clouds,
fusing multiple point clouds of the same scene, annotating map elements, and
updating maps constantly. This pipeline, however, requires a vast amount of
human effort and resources, which limits its scalability. Additionally,
traditional HD maps are coupled with centimeter-level accurate localization
which is unreliable in many scenarios. In this paper, we argue that online map
learning, which dynamically constructs the HD maps based on local sensor
observations, is a more scalable way to provide semantic and geometry priors to
self-driving vehicles than traditional pre-annotated HD maps. Meanwhile, we
introduce an online map learning method, titled HDMapNet. It encodes image
features from surrounding cameras and/or point clouds from LiDAR, and predicts
vectorized map elements in the bird's-eye view. We benchmark HDMapNet on the
nuScenes dataset and show that in all settings, it performs better than
baseline methods. Of note, our fusion-based HDMapNet outperforms existing
methods by more than 50% in all metrics. To accelerate future research, we
develop customized metrics to evaluate map learning performance, including both
semantic-level and instance-level ones. By introducing this method and metrics,
we invite the community to study this novel map learning problem. We will
release our code and evaluation kit to facilitate future development. | [
"cs.CV",
"cs.AI"
] |
We present a detailed description and reference implementation of
preprocessing steps necessary to prepare the public Retrospective Image
Registration Evaluation (RIRE) dataset for the task of magnetic resonance
imaging (MRI) to X-ray computed tomography (CT) translation. Furthermore we
describe and implement three state-of-the-art convolutional neural network
(CNN) and generative adversarial network (GAN) models, and we report
statistics and visual results for two of them. | [
"cs.CV"
] |
Facial expressions vary from the visible to the subtle. In recent years, the
analysis of micro-expressions, a natural occurrence resulting from the
suppression of one's true emotions, has drawn the attention of researchers with
a broad range of potential applications. However, spotting micro-expressions in
long videos becomes increasingly challenging when they are intertwined with normal or
macro-expressions. In this paper, we propose a shallow optical flow
three-stream CNN (SOFTNet) model to predict a score that captures the
likelihood of a frame being in an expression interval. By fashioning the
spotting task as a regression problem, we introduce pseudo-labeling to
facilitate the learning process. We demonstrate the efficacy and efficiency of
the proposed approach on the recent MEGC 2020 benchmark, where state-of-the-art
performance is achieved on CAS(ME)$^{2}$ with equally promising results on SAMM
Long Videos. | [
"cs.CV",
"cs.MM",
"I.4; I.5.1"
] |
In this paper, we present a new network named Attention Aware Network (AASeg)
for real-time semantic image segmentation. Our network incorporates spatial and
channel information using Spatial Attention (SA) and Channel Attention (CA)
modules respectively. It also uses dense local multi-scale context information
using Multi Scale Context (MSC) module. The feature maps are concatenated
individually to produce the final segmentation map. We demonstrate the
effectiveness of our method using a comprehensive analysis, quantitative
experimental results and ablation study using Cityscapes, ADE20K and Camvid
datasets. Our network performs better than most previous architectures, with a
74.4\% mean IoU on the Cityscapes test dataset while running at 202.7 FPS. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Inferring objects and their relationships from an image in the form of a
scene graph is useful in many applications at the intersection of vision and
language. In this work, we consider a challenging problem of compositional
generalization that emerges in this task due to a long tail data distribution.
Current scene graph generation models are trained on a tiny fraction of the
distribution corresponding to the most frequent compositions, e.g. <cup, on,
table>. However, test images might contain zero- and few-shot compositions of
objects and relationships, e.g. <cup, on, surfboard>. Despite each of the
object categories and the predicate (e.g. 'on') being frequent in the training
data, the models often fail to properly understand such unseen or rare
compositions. To improve generalization, it is natural to attempt increasing
the diversity of the training distribution. However, in the graph domain this
is non-trivial. To that end, we propose a method to synthesize rare yet
plausible scene graphs by perturbing real ones. We then propose and empirically
study a model based on conditional generative adversarial networks (GANs) that
allows us to generate visual features of perturbed scene graphs and learn from
them in a joint fashion. When evaluated on the Visual Genome dataset, our
approach yields marginal, but consistent improvements in zero- and few-shot
metrics. We analyze the limitations of our approach indicating promising
directions for future research. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
Many scientific datasets are compositional in nature. Important examples
include species abundances in ecology, rock compositions in geology, topic
compositions in large-scale text corpora, and sequencing count data in
molecular biology. Here, we provide a causal view on compositional data in an
instrumental variable setting where the composition acts as the cause.
Throughout, we pay particular attention to the interpretation of compositional
causes from the viewpoint of interventions and crisply articulate potential
pitfalls for practitioners. Focusing on modern high-dimensional microbiome
sequencing data as a timely illustrative use case, our analysis first reveals
that popular one-dimensional information-theoretic summary statistics, such as
diversity and richness, may be insufficient for drawing causal conclusions from
ecological data. Instead, we advocate for multivariate alternatives using
statistical data transformations and regression techniques that take the
special structure of the compositional sample space into account. In a
comparative analysis on synthetic and semi-synthetic data we show the
advantages and limitations of our proposal. We posit that our framework may
provide a useful starting point for cause-effect estimation in the context of
compositional data. | [
"cs.LG",
"q-bio.QM",
"stat.AP",
"stat.ML"
] |
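One standard transformation that respects the simplex structure of compositional data is the centered log-ratio (CLR). Whether the paper uses CLR, an isometric log-ratio, or another transform is not stated here, so the sketch below is only a generic, hedged illustration of the kind of preprocessing meant by "statistical data transformations".

```python
import numpy as np

def clr(counts, pseudocount=0.5):
    """Centered log-ratio transform of compositional count data.
    Rows are samples, columns are components (e.g., microbial taxa);
    a pseudocount handles zeros before closure to relative abundances."""
    x = np.asarray(counts, dtype=np.float64) + pseudocount
    comp = x / x.sum(axis=1, keepdims=True)                   # closure: relative abundances
    log_comp = np.log(comp)
    return log_comp - log_comp.mean(axis=1, keepdims=True)    # subtract per-sample mean log

# toy usage: 3 samples, 4 taxa
counts = np.array([[10, 0, 5, 85], [20, 20, 20, 40], [1, 1, 1, 97]])
z = clr(counts)   # rows sum to ~0 and live in a Euclidean space suitable for regression
```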
Forecasting the future behaviors of dynamic actors is an important task in
many robotics applications such as self-driving. It is extremely challenging as
actors have latent intentions and their trajectories are governed by complex
interactions between the other actors, themselves, and the maps. In this paper,
we propose LaneRCNN, a graph-centric motion forecasting model. Importantly,
relying on a specially designed graph encoder, we learn a local lane graph
representation per actor (LaneRoI) to encode its past motions and the local map
topology. We further develop an interaction module which permits efficient
message passing among local graph representations within a shared global lane
graph. Moreover, we parameterize the output trajectories based on lane graphs,
a more amenable prediction parameterization. Our LaneRCNN captures the
actor-to-actor and the actor-to-map relations in a distributed and map-aware
manner. We demonstrate the effectiveness of our approach on the large-scale
Argoverse Motion Forecasting Benchmark. We achieve the 1st place on the
leaderboard and significantly outperform previous best results. | [
"cs.CV",
"cs.RO"
] |
Color separations (most often cyan, magenta, yellow, and black) are commonly
used in printing to reproduce multi-color images. For mechanical reasons, these
color separations are generally not perfectly aligned with respect to each
other when they are rendered by their respective imaging stations. This
phenomenon, called color plane misregistration, causes gap and halo artifacts
in the printed image. Color trapping is an image processing technique that aims
to reduce these artifacts by modifying the susceptible edge boundaries to
create small, unnoticeable overlaps between the color planes. We propose three
low-complexity algorithms for automatic color trapping that hide the effects
of small color plane misregistrations. Our algorithms are designed for
software or embedded firmware implementation. The trapping method they follow
is based on a hardware-friendly technique proposed by J. Trask (JTHBCT03) which
is too computationally expensive for software or firmware implementation. The
first two algorithms are based on the use of look-up tables (LUTs). The first
LUT-based algorithm corrects all registration errors of one pixel in extent and
reduces several cases of misregistration errors of two pixels in extent using
only 727 Kbytes of storage space. This algorithm is particularly attractive for
implementation in the embedded firmware of low-cost formatter-based printers.
The second LUT-based algorithm corrects all types of misregistration errors of
up to two pixels in extent using 3.7 Mbytes of storage space. The third
algorithm is a hybrid one that combines LUTs and feature extraction to minimize
the storage requirements (724 Kbytes) while still correcting all
misregistration errors of up to two pixels in extent. This algorithm is
suitable for both embedded firmware implementation on low-cost formatter-based
printers and software implementation on host-based printers. | [
"cs.CV"
] |
We present ShapeFlow, a flow-based model for learning a deformation space for
entire classes of 3D shapes with large intra-class variations. ShapeFlow allows
learning a multi-template deformation space that is agnostic to shape topology,
yet preserves fine geometric details. Different from a generative space where a
latent vector is directly decoded into a shape, a deformation space decodes a
vector into a continuous flow that can advect a source shape towards a target.
Such a space naturally allows the disentanglement of geometric style (coming
from the source) and structural pose (conforming to the target). We parametrize
the deformation between geometries as a learned continuous flow field via a
neural network and show that such deformations can be guaranteed to have
desirable properties, such as bijectivity, freedom from self-intersections,
or volume preservation. We illustrate the effectiveness of this learned
deformation space for various downstream applications, including shape
generation via deformation, geometric style transfer, unsupervised learning of
a consistent parameterization for entire classes of shapes, and shape
interpolation. | [
"cs.CV",
"cs.GR"
] |
Special cameras that provide useful features for face anti-spoofing are
desirable, but not always an option. In this work we propose a method to
utilize the difference in dynamic appearance between bona fide and spoof
samples by creating artificial modalities from RGB videos. We introduce two
types of artificial transforms: rank pooling and optical flow, combined in an
end-to-end pipeline for spoof detection. We demonstrate that using intermediate
representations that contain fewer identity and fine-grained features increases
model robustness to unseen attacks as well as to unseen ethnicities. The
proposed method achieves state-of-the-art performance on the largest cross-ethnicity face
anti-spoofing dataset CASIA-SURF CeFA (RGB). | [
"cs.CV"
] |
Multivariate time series is a very active topic in the research community and
many machine learning tasks are being used in order to extract information from
this type of data. However, in real-world problems, data has missing values,
which may hinder the application of machine learning techniques to extract
information. In this paper, we focus on the task of imputation of time series.
Many imputation methods for time series are based on regression methods.
Unfortunately, these methods perform poorly when the variables are categorical.
To address this case, we propose a new imputation method based on Expectation
Maximization over dynamic Bayesian networks. The approach is assessed with
synthetic and real data, and it outperforms several state-of-the-art methods. | [
"cs.LG",
"stat.ML"
] |
In the computer vision domain, moving object tracking is considered one of the
toughest problems. Many associated factors, such as illumination, noise,
occlusion, sudden starts and stops of the moving object, and shading, make
tracking an even harder problem, not only for dynamic backgrounds but also for
static backgrounds. In this paper, we present a new object tracking algorithm
based on dominant points of the tracked object using Quantum particle swarm
optimization (QPSO), a different version of PSO based on quantum theory. The
novelty of our approach is that it is successfully applicable to variable
backgrounds as well as static backgrounds, and the application of quantum PSO
makes the algorithm run much faster where other basic PSO algorithms fail to do
so due to heavy computation. In our approach, the dominant points of the
tracked object are first detected; then a group of particles forming a swarm is
initialized randomly over the image search space and starts searching the
curvature connecting two consecutive dominant points until the fitness criteria
are satisfied. It is naturally a multi-swarm approach, as there are multiple
dominant points: as they move, the curvature moves, and the curvature movement
is tracked by the swarm throughout the video; eventually, when the swarm
reaches the optimal solution, a bounding box is drawn based on the particles'
final positions. Experimental results demonstrate that the proposed QPSO-based
method works efficiently and effectively for visual object tracking in both
dynamic and static environments, and run-time measurements show that it runs
nearly 90% faster than basic PSO. In our approach, we also apply parallelism
using the MATLAB parfor command to show how a very small number of iterations
and a small swarm size enable us to successfully track objects. | [
"cs.CV",
"cs.AI"
] |
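The core quantum-behaved PSO position update, in the commonly used formulation, is sketched below. The fitness function for matching curvature between dominant points is problem-specific and therefore omitted; the contraction-expansion coefficient `beta` and the toy particle initialization are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def qpso_step(x, pbest, gbest, beta=0.75, rng=None):
    """One quantum-behaved PSO update (common formulation):
    each particle is drawn around a stochastic attractor p with a spread
    proportional to its distance from the mean-best position."""
    rng = np.random.default_rng() if rng is None else rng
    mbest = pbest.mean(axis=0)                       # mean of personal best positions
    phi = rng.random(x.shape)
    p = phi * pbest + (1.0 - phi) * gbest            # per-dimension attractor
    u = rng.random(x.shape)
    sign = np.where(rng.random(x.shape) < 0.5, 1.0, -1.0)
    return p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)

# toy usage: 30 particles searching a 2-D image position (fitness evaluation omitted)
rng = np.random.default_rng(0)
x = rng.uniform(0, 100, size=(30, 2))
pbest = x.copy()
gbest = x[0]
x = qpso_step(x, pbest, gbest, rng=rng)
```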
The task of video-based commonsense captioning aims to generate event-wise
captions and meanwhile provide multiple commonsense descriptions (e.g.,
attribute, effect and intention) about the underlying event in the video. Prior
works explore the commonsense captions by using separate networks for different
commonsense types, which is time-consuming and lacks mining the interaction of
different commonsense. In this paper, we propose a Hybrid Reasoning Network
(HybridNet) to endow the neural networks with the capability of semantic-level
reasoning and word-level reasoning. Firstly, we develop multi-commonsense
learning for semantic-level reasoning by jointly training different commonsense
types in a unified network, which encourages the interaction between the clues
of multiple commonsense descriptions, event-wise captions and videos. Then,
there are two steps to achieve the word-level reasoning: (1) a memory module
records the history predicted sequence from the previous generation processes;
(2) a memory-routed multi-head attention (MMHA) module updates the word-level
attention maps by incorporating the history information from the memory module
into the transformer decoder for word-level reasoning. Moreover, the multimodal
features are used to make full use of diverse knowledge for commonsense
reasoning. Experiments and abundant analysis on the large-scale
Video-to-Commonsense benchmark show that our HybridNet achieves
state-of-the-art performance compared with other methods. | [
"cs.CV",
"cs.CL",
"68T07"
] |
Bayesian networks represent relations between variables using a directed
acyclic graph (DAG). Learning the DAG is an NP-hard problem and exact learning
algorithms are feasible only for small sets of variables. We propose two
scalable heuristics for learning DAGs in the linear structural equation case.
Our methods learn the DAG by alternating between an unconstrained
gradient-descent step that optimizes an objective function and the solution of a
maximum acyclic subgraph problem that enforces acyclicity. Thanks to this decoupling, our
methods scale up beyond thousands of variables. | [
"cs.LG"
] |
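A minimal sketch of the alternating scheme described above for the linear structural equation case: unconstrained gradient steps on a least-squares loss, followed by a heuristic projection onto an acyclic graph. The greedy ordering used to approximate the maximum acyclic subgraph step is an illustrative assumption, not necessarily the solver used in the paper.

```python
# Minimal sketch: alternate gradient steps with a heuristic acyclicity projection.
import numpy as np

def mas_project(W):
    """Keep only edges consistent with a greedy node ordering (result is acyclic)."""
    d = W.shape[0]
    score = np.abs(W).sum(axis=1) - np.abs(W).sum(axis=0)    # out-weight minus in-weight
    order = np.argsort(-score)                               # likely "sources" first
    rank = np.empty(d, dtype=int)
    rank[order] = np.arange(d)
    mask = rank[:, None] < rank[None, :]                     # allow only forward edges
    return W * mask

def learn_dag(X, n_outer=50, n_inner=20, lr=0.02, lam=0.1):
    n, d = X.shape
    W = np.zeros((d, d))
    for _ in range(n_outer):
        for _ in range(n_inner):                             # unconstrained gradient steps
            R = X - X @ W                                    # residual of X ~ X W
            grad = -X.T @ R / n + lam * np.sign(W)           # least squares + L1
            W -= lr * grad
            np.fill_diagonal(W, 0.0)
        W = mas_project(W)                                   # enforce acyclicity
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 500
    x0 = rng.normal(size=n)
    x1 = 2.0 * x0 + rng.normal(size=n)
    x2 = -1.5 * x1 + rng.normal(size=n)
    X = np.column_stack([x0, x1, x2])
    print(np.round(learn_dag(X), 2))
```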
Generalization Performance of Deep Learning models trained using Empirical
Risk Minimization can be improved significantly by using Data Augmentation
strategies such as simple transformations, or using Mixed Samples. We attempt
to empirically analyze the impact of such strategies on the transfer of
generalization between teacher and student models in a distillation setup. We
observe that if a teacher is trained using any of the mixed sample augmentation
strategies, such as MixUp or CutMix, the student model distilled from it is
impaired in its generalization capabilities. We hypothesize that such
strategies limit a model's capability to learn example-specific features,
leading to a loss in quality of the supervision signal during distillation. We
present a novel Class-Discrimination metric to quantitatively measure this
dichotomy in performance and link it to the discriminative capacity induced by
the different strategies on a network's latent space. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
The computer vision community has paid much attention to the development of
visible image super-resolution (SR) using deep neural networks (DNNs) and has
achieved impressive results. The advancement of non-visible light sensors, such
as acoustic imaging sensors, has attracted much attention, as they allow people
to visualize the intensity of sound waves beyond the visible spectrum. However,
because of the limitations imposed on acquiring acoustic data, new methods for
improving the resolution of the acoustic images are necessary. At this time,
there is no acoustic imaging dataset designed for the SR problem. This work
proposed a novel backprojection model architecture for the acoustic image
super-resolution problem, together with Acoustic Map Imaging VUB-ULB Dataset
(AMIVU). The dataset provides large simulated and real captured images at
different resolutions. The proposed XCycles BackProjection model (XCBP), in
contrast to the feedforward model approach, fully uses the iterative correction
procedure in each cycle to reconstruct the residual error correction for the
encoded features in both low- and high-resolution space. The proposed approach
was evaluated on the dataset and substantially outperformed both classical
interpolation operators and recent feedforward
state-of-the-art models. It also contributed to a drastically reduced
sub-sampling error produced during the data acquisition. | [
"cs.CV",
"cs.LG",
"cs.SD",
"eess.AS"
] |
In recent years, many spatial-temporal graph convolutional network (STGCN)
models are proposed to deal with the spatial-temporal network data forecasting
problem. These STGCN models have their own advantages: each of them puts
forward many effective operations and achieves good prediction results in real
applications. If users could effectively utilize and combine these operations,
integrating the advantages of existing models, they might obtain more effective
STGCN models and thus create greater value from existing work. However, they
often fail to do so due to a lack of domain knowledge, and there is no automated
system to help users achieve this goal. In this paper, we
fill this gap and propose Auto-STGCN algorithm, which makes use of existing
models to automatically explore high-performance STGCN model for specific
scenarios. Specifically, we design Unified-STGCN framework, which summarizes
the operations of existing architectures, and use parameters to control the
usage and characteristic attributes of each operation, so as to realize the
parameterized representation of the STGCN architecture and the reorganization
and fusion of advantages. Then, we present Auto-STGCN, an optimization method
based on reinforcement learning, to quickly search the parameter search space
provided by Unified-STGCN, and generate optimal STGCN models automatically.
Extensive experiments on real-world benchmark datasets show that our Auto-STGCN
can find STGCN models superior to existing STGCN models with heuristic
parameters, which demonstrates the effectiveness of our proposed method. | [
"cs.LG",
"cs.AI"
] |
Scene understanding is crucial for autonomous systems which intend to operate
in the real world. Single task vision networks extract information only based
on some aspects of the scene. In multi-task learning (MTL), on the other hand,
these single tasks are jointly learned, thereby providing an opportunity for
tasks to share information and obtain a more comprehensive understanding. To
this end, we develop UniNet, a unified scene understanding network that
accurately and efficiently infers vital vision tasks including object
detection, semantic segmentation, instance segmentation, monocular depth
estimation, and monocular instance depth prediction. As these tasks look at
different semantic and geometric information, they can either complement or
conflict with each other. Therefore, understanding inter-task relationships can
provide useful cues to enable complementary information sharing. We evaluate
the task relationships in UniNet through the lens of adversarial attacks based
on the notion that they can exploit learned biases and task interactions in the
neural network. Extensive experiments on the Cityscapes dataset, using
untargeted and targeted attacks reveal that semantic tasks strongly interact
amongst themselves, and the same holds for geometric tasks. Additionally, we
show that the relationship between semantic and geometric tasks is asymmetric
and their interaction becomes weaker as we move towards higher-level
representations. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Transfer learning is an important field of machine learning in general, and
particularly in the context of fully autonomous driving, which needs to be
solved simultaneously for many different domains, such as changing weather
conditions and country-specific driving behaviors. Traditional transfer
learning methods often focus on image data and are black-box models. In this
work we propose a transfer learning framework, the core of which is learning an
explicit mapping between domains. Due to its interpretability, this is
beneficial for safety-critical applications, like autonomous driving. We show
its general applicability by considering image classification problems and then
move on to time-series data, particularly predicting lane changes. In our
evaluation we adapt a pre-trained model to a dataset exhibiting different
driving and sensory characteristics. | [
"cs.LG",
"cs.CV",
"eess.IV"
] |
Motivated by a $2$-dimensional (unsupervised) image segmentation task whereby
local regions of pixels are clustered via edge detection methods, a more
general probabilistic mathematical framework is devised. Critical thresholds
are calculated that indicate strong correlation between randomly-generated,
high dimensional data points that have been projected into structures in a
partition of a bounded, $2$-dimensional area, of which, an image is a special
case. A neighbor concept for structures in the partition is defined and a
critical radius is uncovered. Measured from a central structure in localized
regions of the partition, the radius indicates strong, long and short range
correlation in the count of occupied structures. The size of a short interval
of radii is estimated upon which the transition from short-to-long range
correlation is virtually assured, which defines a demarcation of when an image
ceases to be "interesting". | [
"cs.LG",
"60D05, 62C99"
] |
Can a machine learn Machine Learning? This work trains a machine learning
model to solve machine learning problems from a University undergraduate level
course. We generate a new training set of questions and answers consisting of
course exercises, homework, and quiz questions from MIT's 6.036 Introduction to
Machine Learning course and train a machine learning model to answer these
questions. Our system demonstrates an overall accuracy of 96% for open-response
questions and 97% for multiple-choice questions, compared with MIT students'
average of 93%, achieving grade A performance in the course, all in real-time.
Questions cover all 12 topics taught in the course, excluding coding questions
or questions with images. Topics include: (i) basic machine learning
principles; (ii) perceptrons; (iii) feature extraction and selection; (iv)
logistic regression; (v) regression; (vi) neural networks; (vii) advanced
neural networks; (viii) convolutional neural networks; (ix) recurrent neural
networks; (x) state machines and MDPs; (xi) reinforcement learning; and (xii)
decision trees. Our system uses Transformer models within an encoder-decoder
architecture with graph and tree representations. An important aspect of our
approach is a data-augmentation scheme for generating new example problems. We
also train a machine learning model to generate problem hints. Thus, our system
automatically generates new questions across topics, answers both open-response
questions and multiple-choice questions, classifies problems, and generates
problem hints, pushing the envelope of AI for STEM education. | [
"cs.LG"
] |
We present MMDetection, an object detection toolbox that contains a rich set
of object detection and instance segmentation methods as well as related
components and modules. The toolbox started from a codebase of MMDet team who
won the detection track of COCO Challenge 2018. It gradually evolves into a
unified platform that covers many popular detection methods and contemporary
modules. It not only includes training and inference codes, but also provides
weights for more than 200 network models. We believe this toolbox is by far the
most complete detection toolbox. In this paper, we introduce the various
features of this toolbox. In addition, we also conduct a benchmarking study on
different methods, components, and their hyper-parameters. We wish that the
toolbox and benchmark could serve the growing research community by providing a
flexible toolkit to reimplement existing methods and develop their own new
detectors. Code and models are available at
https://github.com/open-mmlab/mmdetection. The project is under active
development and we will keep this document updated. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Neural sequence generation is typically performed token-by-token and
left-to-right. Whenever a token is generated only previously produced tokens
are taken into consideration. In contrast, for problems such as sequence
classification, bidirectional attention, which takes both past and future
tokens into consideration, has been shown to perform much better. We propose to
make the sequence generation process bidirectional by employing special
placeholder tokens. Treated as a node in a fully connected graph, a placeholder
token can take past and future tokens into consideration when generating the
actual output token. We verify the effectiveness of our approach experimentally
on two conversational tasks where the proposed bidirectional model outperforms
competitive baselines by a large margin. | [
"stat.ML",
"cs.CL",
"cs.LG"
] |
Blind video deblurring restores sharp frames from a blurry sequence without
any prior. It is a challenging task because the blur due to camera shake,
object movement and defocusing is heterogeneous in both temporal and spatial
dimensions. Traditional methods train on datasets synthesized with a single
level of blur, and thus do not generalize well across levels of blurriness. To
address this challenge, we propose a dual attention mechanism to dynamically
aggregate temporal cues for deblurring with an end-to-end trainable network
structure. Specifically, an internal attention module adaptively selects the
optimal temporal scales for restoring the sharp center frame. An external
attention module adaptively aggregates and refines multiple sharp frame
estimates, from several internal attention modules designed for different blur
levels. To train and evaluate on more diverse blur severity levels, we propose
a Challenging DVD dataset generated from the raw DVD video set by pooling
frames with different temporal windows. Our framework achieves consistently
better performance on this more challenging dataset while obtaining strongly
competitive results on the original DVD benchmark. Extensive ablative studies
and qualitative visualizations further demonstrate the advantage of our method
in handling real video blur. | [
"cs.CV"
] |
We propose a method for creating a matte -- the per-pixel foreground color
and alpha -- of a person by taking photos or videos in an everyday setting with
a handheld camera. Most existing matting methods require a green screen
background or a manually created trimap to produce a good matte. Automatic,
trimap-free methods are appearing, but are not of comparable quality. In our
trimap-free approach, we ask the user to take an additional photo of the
background without the subject at the time of capture. This step requires a
small amount of foresight but is far less time-consuming than creating a
trimap. We train a deep network with an adversarial loss to predict the matte.
We first train a matting network with supervised loss on ground truth data with
synthetic composites. To bridge the domain gap to real imagery with no
labeling, we train another matting network guided by the first network and by a
discriminator that judges the quality of composites. We demonstrate results on
a wide variety of photos and videos and show significant improvement over the
state of the art. | [
"cs.CV"
] |
The generative adversarial network (GAN) is a well-known model for learning
high-dimensional distributions, but the mechanism for its generalization
ability is not understood. In particular, GAN is vulnerable to the memorization
phenomenon, the eventual convergence to the empirical distribution. We consider
a simplified GAN model with the generator replaced by a density, and analyze
how the discriminator contributes to generalization. We show that with early
stopping, the generalization error measured by Wasserstein metric escapes from
the curse of dimensionality, despite that in the long term, memorization is
inevitable. In addition, we present a hardness of learning result for WGAN. | [
"cs.LG",
"stat.ML",
"68T07, 62G07, 60-08"
] |
Region Proposal Network (RPN) provides strong support for handling the scale
variation of objects in two-stage object detection. For one-stage detectors
which do not have RPN, it is more demanding to have powerful sub-networks
capable of directly capturing objects of unknown sizes. To enhance such
capability, we propose an extremely efficient neural architecture search
method, named Fast And Diverse (FAD), to better explore the optimal
configuration of receptive fields and convolution types in the sub-networks for
one-stage detectors. FAD consists of a designed search space and an efficient
architecture search algorithm. The search space contains a rich set of diverse
transformations designed specifically for object detection. To cope with the
designed search space, a novel search algorithm termed Representation Sharing
(RepShare) is proposed to effectively identify the best combinations of the
defined transformations. In our experiments, FAD obtains prominent improvements
on two types of one-stage detectors with various backbones. In particular, our
FAD detector achieves 46.4 AP on MS-COCO (under single-scale testing),
outperforming the state-of-the-art detectors, including the most recent
NAS-based detectors, Auto-FPN (searched for 16 GPU-days) and NAS-FCOS (28
GPU-days), while significantly reducing the search cost to 0.6 GPU-days. Beyond
object detection, we further demonstrate the generality of FAD on the more
challenging instance segmentation, and expect it to benefit more tasks. | [
"cs.CV"
] |
This study is concerned with the top-down visual processing benefit in the
task of occluded object recognition. To this end, a psychophysical experiment
is designed and carried out to investigate the effect of
consistency of contextual information on the recognition of objects which are
partially occluded. The results demonstrate the facilitative impact of
consistent contextual clues on the task of object recognition in presence of
occlusion. | [
"cs.CV"
] |
Policy distillation in deep reinforcement learning provides an effective way
to transfer control policies from a larger network to a smaller untrained
network without a significant degradation in performance. However, policy
distillation is underexplored in deep reinforcement learning, and existing
approaches are computationally inefficient, resulting in a long distillation
time. In addition, the effectiveness of the distillation process is still
limited to the model capacity. We propose a new distillation mechanism, called
real-time policy distillation, in which training the teacher model and
distilling the policy to the student model occur simultaneously. Accordingly,
the teacher's latest policy is transferred to the student model in real time.
This reduces the distillation time to half the original time or even less and
also makes it possible for extremely small student models to learn skills at
the expert level. We evaluated the proposed algorithm in the Atari 2600 domain.
The results show that our approach can achieve full distillation in most games,
even with compression ratios up to 1.7%. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
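A minimal sketch of the distillation step implied above: after every teacher update, the student immediately matches the teacher's current (temperature-softened) policy on the same batch of states. The networks, temperature and loss form are illustrative assumptions; the teacher's own RL update is left as a placeholder comment.

```python
# Minimal sketch of a real-time policy distillation step (teacher -> small student).
import torch
import torch.nn.functional as F

obs_dim, n_actions, tau = 8, 4, 0.5    # tau: softening temperature (illustrative)
teacher = torch.nn.Sequential(torch.nn.Linear(obs_dim, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, n_actions))
student = torch.nn.Sequential(torch.nn.Linear(obs_dim, 16), torch.nn.ReLU(),
                              torch.nn.Linear(16, n_actions))   # much smaller net
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)

def distill_step(states):
    """One distillation step: KL(teacher || student) on softened policies."""
    with torch.no_grad():
        t_logits = teacher(states) / tau
        t_prob = F.softmax(t_logits, dim=1)
        t_logp = F.log_softmax(t_logits, dim=1)
    s_logp = F.log_softmax(student(states) / tau, dim=1)
    loss = (t_prob * (t_logp - s_logp)).sum(dim=1).mean()
    opt_s.zero_grad()
    loss.backward()
    opt_s.step()
    return loss.item()

# In the real-time scheme this is interleaved with the teacher's own RL update:
#   for batch in env_batches: teacher_rl_update(batch); distill_step(batch.states)
print(distill_step(torch.randn(32, obs_dim)))
```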
In modern transportation systems, an enormous amount of traffic data is
generated every day. This has led to rapid progress in short-term traffic
prediction (STTP), in which deep learning methods have recently been applied.
In traffic networks with complex spatiotemporal relationships, deep neural
networks (DNNs) often perform well because they are capable of automatically
extracting the most important features and patterns. In this study, we survey
recent STTP studies applying deep networks from four perspectives. 1) We
summarize input data representation methods according to the number and type of
spatial and temporal dependencies involved. 2) We briefly explain a wide range
of DNN techniques from the earliest networks, including Restricted Boltzmann
Machines, to the most recent, including graph-based and meta-learning networks.
3) We summarize previous STTP studies in terms of the type of DNN techniques,
application area, dataset and code availability, and the type of the
represented spatiotemporal dependencies. 4) We compile public traffic datasets
that are popular and can be used as the standard benchmarks. Finally, we
suggest challenging issues and possible future research directions in STTP. | [
"cs.LG",
"cs.AI",
"eess.SP"
] |
Mixture-of-Experts (MoE) is a widely popular model for ensemble learning and
is a basic building block of highly successful modern neural networks as well
as a component in Gated Recurrent Units (GRU) and Attention networks. However,
present algorithms for learning MoE including the EM algorithm, and gradient
descent are known to get stuck in local optima. From a theoretical viewpoint,
finding an efficient and provably consistent algorithm to learn the parameters
remains a long standing open problem for more than two decades. In this paper,
we introduce the first algorithm that learns the true parameters of a MoE model
for a wide class of non-linearities with global consistency guarantees. While
existing algorithms jointly or iteratively estimate the expert parameters and
the gating parameters in the MoE, we propose a novel algorithm that breaks the
deadlock and can directly estimate the expert parameters by sensing its echo in
a carefully designed cross-moment tensor between the inputs and the output.
Once the experts are known, the recovery of gating parameters still requires an
EM algorithm; however, we show that the EM algorithm for this simplified
problem, unlike the joint EM algorithm, converges to the true parameters. We
empirically validate our algorithm on both the synthetic and real data sets in
a variety of settings, and show superior performance to standard baselines. | [
"cs.LG"
] |
Class imbalance poses a challenge for developing unbiased, accurate
predictive models. In particular, in image segmentation neural networks may
overfit to the foreground samples from small structures, which are often
heavily under-represented in the training set, leading to poor generalization.
In this study, we provide new insights on the problem of overfitting under
class imbalance by inspecting the network behavior. We find empirically that
when training with limited data and strong class imbalance, at test time the
distribution of logit activations may shift across the decision boundary, while
samples of the well-represented class seem unaffected. This bias leads to a
systematic under-segmentation of small structures. This phenomenon is
consistently observed for different databases, tasks and network architectures.
To tackle this problem, we introduce new asymmetric variants of popular loss
functions and regularization techniques including a large margin loss, focal
loss, adversarial training, mixup and data augmentation, which are explicitly
designed to counter logit shift of the under-represented classes. Extensive
experiments are conducted on several challenging segmentation tasks. Our
results demonstrate that the proposed modifications to the objective function
can lead to significantly improved segmentation accuracy compared to baselines
and alternative approaches. | [
"cs.CV"
] |
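As one concrete illustration of an asymmetric loss variant in the spirit of the abstract, the sketch below keeps the focal down-weighting factor for the well-represented background class but drops it for the rare foreground class, so gradients from the under-represented class are not suppressed. The exact formulation used in the paper may differ; this is an assumption for illustration only.

```python
# Illustrative asymmetric focal loss for imbalanced binary segmentation.
import torch

def asymmetric_focal_loss(logits, targets, gamma=2.0, eps=1e-8):
    """logits, targets: tensors of the same shape; targets are 0/1 ground truth."""
    p = torch.sigmoid(logits)
    fg = targets * torch.log(p + eps)                           # foreground: plain CE term
    bg = (1 - targets) * p.pow(gamma) * torch.log(1 - p + eps)  # background: focal CE term
    return -(fg + bg).mean()

logits = torch.randn(2, 1, 32, 32, requires_grad=True)          # small logit map
targets = (torch.rand(2, 1, 32, 32) > 0.95).float()             # heavily imbalanced foreground
loss = asymmetric_focal_loss(logits, targets)
loss.backward()
print(loss.item())
```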
In this thesis we address two related aspects of visual object recognition:
the use of motion information, and the use of internal supervision, to help
unsupervised learning. These two aspects are inter-related in the current
study, since image motion is used for internal supervision, via the detection
of spatiotemporal events of active-motion and the use of tracking. Most current
work in object recognition deals with static images during both learning and
recognition. In contrast, we are interested in a dynamic scene where visual
processes, such as detecting motion events and tracking, contribute
spatiotemporal information, which is useful for object attention, motion
segmentation, 3-D understanding and object interactions. We explore the use of
these sources of information in both learning and recognition processes. In the
first part of the work, we demonstrate how motion can be used for adaptive
detection of object-parts in dynamic environments, while automatically learning
new object appearances and poses. In the second and main part of the study we
develop methods for using specific types of visual motion to solve two
difficult problems in unsupervised visual learning: learning to recognize hands
by their appearance and by context, and learning to extract direction of gaze.
We use our conclusions in this part to propose a model for several aspects of
learning by human infants from their visual environment. | [
"cs.CV"
] |
Every year millions of people die of cancer. Due to its invasive nature, the
disease is very difficult to cure even in its early stages. Hence, the only way
to fully survive this disease is to forecast it by analyzing early mutations in
the cells of a patient biopsy. Cell segmentation can be used to find cells that
have left their nuclei, which enables faster treatment and a higher survival
rate. Cell counting is a hard yet tedious task that would greatly benefit from
automation, and for automation the segmentation of cells needs to be accurate.
In this paper, we improve how our network learns from the training data so that
it can annotate precise masks on test data, and we examine the effect of
activation functions on the medical image segmentation task by improving
learning rates with our proposed Carving Technique. Identifying cell nuclei is
the starting point for most analyses: identifying nuclei allows researchers to
identify each individual cell in a sample, and by measuring how cells react to
various treatments, researchers can understand the underlying biological
processes at work. Experimental results show the effectiveness of the proposed
work. | [
"cs.CV",
"cs.LG"
] |
Recently, referring image segmentation has aroused widespread interest.
Previous methods perform the multi-modal fusion between language and vision at
the decoding side of the network, and the linguistic features interact with the
visual features of each scale separately, which ignores the continuous guidance
of language over multi-scale visual features. In this work, we propose an encoder
fusion network (EFN), which transforms the visual encoder into a multi-modal
feature learning network, and uses language to refine the multi-modal features
progressively. Moreover, a co-attention mechanism is embedded in the EFN to
realize the parallel update of multi-modal features, which can promote the
consistency of the cross-modal information representation in the semantic space.
Finally, we propose a boundary enhancement module (BEM) to make the network pay
more attention to the fine structure. The experiment results on four benchmark
datasets demonstrate that the proposed approach achieves the state-of-the-art
performance under different evaluation metrics without any post-processing. | [
"cs.CV"
] |
Pests and diseases pose a key challenge to passion fruit farmers across
Uganda and East Africa in general. They lead to loss of investment as yields
shrink and losses increase. Since the majority of farmers in the country,
including passion fruit farmers, are smallholders from low-income households,
they do not have sufficient information and means to combat these challenges.
While passion fruits have the potential to improve the well-being of these
farmers, as they have a short maturity period and high market value, without
the required knowledge about the health of their crops farmers cannot intervene
promptly to turn the situation around.
For this work, we have partnered with the Uganda National Crop Research
Institute (NaCRRI) to develop a dataset of expertly labelled passion fruit
plant leaves and fruits, both diseased and healthy, and we have made use of
their extension service to collect images from 5 districts in Uganda.
With the dataset in place, we are employing state-of-the-art machine learning,
and specifically deep learning, techniques at scale for object detection and
classification to correctly determine the health status of passion fruit plants
and provide an accurate diagnosis for positive detections. This work focuses on
two major diseases: woodiness (viral) and brown spot (fungal). | [
"cs.CV"
] |
Although Generative Adversarial Networks (GANs) have made significant
progress in face synthesis, there lacks enough understanding of what GANs have
learned in the latent representation to map a random code to a photo-realistic
image. In this work, we propose a framework called InterFaceGAN to interpret
the disentangled face representation learned by the state-of-the-art GAN models
and study the properties of the facial semantics encoded in the latent space.
We first find that GANs learn various semantics in some linear subspaces of the
latent space. After identifying these subspaces, we can realistically
manipulate the corresponding facial attributes without retraining the model. We
then conduct a detailed study on the correlation between different semantics
and manage to better disentangle them via subspace projection, resulting in
more precise control of the attribute manipulation. Besides manipulating the
gender, age, expression, and presence of eyeglasses, we can even alter the face
pose and fix the artifacts accidentally made by GANs. Furthermore, we perform
an in-depth face identity analysis and a layer-wise analysis to evaluate the
editing results quantitatively. Finally, we apply our approach to real face
editing by employing GAN inversion approaches and explicitly training
feed-forward models based on the synthetic data established by InterFaceGAN.
Extensive experimental results suggest that learning to synthesize faces
spontaneously brings a disentangled and controllable face representation. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
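The editing recipe described above can be summarized in a few lines: fit a linear boundary for a binary attribute in the GAN latent space, move codes along its unit normal, and project out a second direction for more disentangled ("conditional") control. The sketch below uses synthetic latent codes and labels as stand-ins for real GAN latents and attribute annotations.

```python
# Minimal sketch of linear latent-space attribute editing with subspace projection.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
latent_dim = 512
Z = rng.normal(size=(2000, latent_dim))                   # stand-in GAN latent codes
true_dir = rng.normal(size=latent_dim)
true_dir /= np.linalg.norm(true_dir)
y = (Z @ true_dir > 0).astype(int)                        # stand-in attribute labels

clf = LinearSVC(C=1.0, max_iter=5000).fit(Z, y)           # linear attribute boundary
n1 = clf.coef_.ravel().copy()
n1 /= np.linalg.norm(n1)                                  # unit normal = semantic direction

def edit(z, direction, alpha):
    """Move a latent code along an attribute direction by alpha."""
    return z + alpha * direction

def condition(primary, secondary):
    """Project the secondary direction out of the primary one (conditional editing)."""
    secondary = secondary / np.linalg.norm(secondary)
    out = primary - (primary @ secondary) * secondary
    return out / np.linalg.norm(out)

z0 = rng.normal(size=latent_dim)
z_edit = edit(z0, n1, alpha=3.0)
print(float((z_edit - z0) @ n1))                          # moved 3.0 along the boundary normal
```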
This article introduces an approach to facilitate cooperative exploration and
mapping of large-scale, near-ground, underground, or indoor spaces via a novel
integration framework for locally-dense agent map data. The effort targets
limited Size, Weight, and Power (SWaP) agents with an emphasis on limiting
required communications and redundant processing. The approach uses a unique
organization of batch optimization engines to enable a highly efficient
two-tier optimization structure. Tier I consists of agents that create and
potentially share local maplets (local maps, limited in size) which are
generated using Simultaneous Localization and Mapping (SLAM) map-building
software and then marginalized to a more compact parameterization. Maplets are
generated in an overlapping manner and used to estimate the transform and
uncertainty between those overlapping maplets, providing accurate and compact
odometry or delta-pose representation between maplets' local frames. The delta
poses can be shared between agents, and in cases where maplets have salient
features (for loop closures), the compact representation of the maplet can also
be shared.
The second optimization tier consists of a global optimizer that seeks to
optimize those maplet-to-maplet transformations, including any loop closures
identified. This can provide an accurate global "skeleton" of the traversed
space without operating on the high-density point cloud. This compact version
of the map data allows for scalable, cooperative exploration with limited
communication requirements where most of the individual maplets, or low
fidelity renderings, are only shared if desired. | [
"cs.CV"
] |
Nowadays underwater vision systems are being widely applied in ocean
research. However, the largest portion of the ocean - the deep sea - still
remains mostly unexplored. Only relatively few image sets have been taken from
the deep sea due to the physical limitations caused by technical challenges and
enormous costs. Deep sea images are very different from the images taken in
shallow waters and this area did not get much attention from the community. The
shortage of deep sea images and the corresponding ground truth data for
evaluation and training is becoming a bottleneck for the development of
underwater computer vision methods. Thus, this paper presents a physical
model-based image simulation solution, which uses an in-air texture and depth
information as inputs, to generate underwater image sequences taken by robots
in deep ocean scenarios. Different from shallow water conditions, artificial
illumination plays a vital role in deep sea image formation as it strongly
affects the scene appearance. Our radiometric image formation model considers
both attenuation and scattering effects with co-moving spotlights in the dark.
By detailed analysis and evaluation of the underwater image formation model, we
propose a 3D lookup table structure in combination with a novel rendering
strategy to improve simulation performance. This enables us to integrate an
interactive deep sea robotic vision simulation in the Unmanned Underwater
Vehicles simulator. To inspire further deep sea vision research by the
community, we will release the source code of our deep sea image converter to
the public. | [
"cs.CV",
"eess.IV"
] |
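A highly simplified per-pixel sketch of the kind of radiometric model the abstract refers to, with Beer-Lambert attenuation along the light-object-camera path, inverse-square spotlight falloff, and an additive backscatter veil. All coefficients are illustrative assumptions; the paper's renderer (co-moving spotlights, full scattering, 3D lookup tables) is far more elaborate.

```python
# Toy per-pixel deep-sea image formation: attenuation + spotlight falloff + backscatter.
import numpy as np

def render_deep_sea(texture, depth, light_dist,
                    beta=np.array([0.45, 0.15, 0.10]),       # per-channel attenuation (1/m)
                    backscatter=np.array([0.05, 0.03, 0.02]),
                    light_power=2.0):
    """texture: HxWx3 in-air colors in [0,1]; depth, light_dist: HxW ranges in meters."""
    d_total = depth + light_dist                              # light->object->camera path
    att = np.exp(-beta[None, None, :] * d_total[..., None])   # Beer-Lambert attenuation
    falloff = light_power / np.maximum(light_dist, 0.5) ** 2  # inverse-square spotlight falloff
    direct = texture * att * falloff[..., None]
    veil = backscatter[None, None, :] * (1.0 - np.exp(-beta[None, None, :] * depth[..., None]))
    return np.clip(direct + veil, 0.0, 1.0)

if __name__ == "__main__":
    h, w = 4, 4
    tex = np.random.default_rng(0).random((h, w, 3))
    depth = np.full((h, w), 3.0)
    light = np.full((h, w), 2.5)
    print(render_deep_sea(tex, depth, light)[0, 0])
```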
Owing to the lack of defect samples in industrial product quality inspection,
trained segmentation models tend to overfit when applied online. To address
this problem, we propose a defect sample simulation algorithm based on neural
style transfer. The simulation algorithm requires only a small number of defect
samples for training, and can efficiently generate simulation samples for
next-step segmentation task. In our work, we introduce a masked histogram
matching module to maintain color consistency of the generated area and the
true defect. To preserve the texture consistency with the surrounding pixels,
we take the fast style transfer algorithm to blend the generated area into the
background. At the same time, we also use the histogram loss to further improve
the quality of the generated image. Besides, we propose a novel structure of
segment net to make it more suitable for defect segmentation task. We train the
segment net with the real defect samples and the generated simulation samples
separately on the button datasets. The results show that the F1 score of the
model trained with only the generated simulation samples reaches 0.80, which is
better than the real sample result. | [
"cs.CV",
"eess.IV"
] |
In this paper we investigate a link between state-space models and Gaussian
Processes (GP) for time series modeling and forecasting. In particular, several
widely used state-space models are transformed into continuous time form and the
corresponding Gaussian Process kernels are derived. Experimental results
demonstrate that the derived GP kernels are correct and appropriate for
Gaussian Process regression. An experiment with a real-world dataset shows that
the modeling is identical with the state-space models and with the proposed GP
kernels. The considered connection allows researchers to look at their
models from a different angle and facilitates sharing ideas between these two
different modeling approaches. | [
"stat.ML"
] |
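One classic instance of the correspondence discussed above: the continuous-time random-walk (local level) state-space model is a Wiener process, whose covariance function is k(t, t') = q·min(t, t'). The sketch below plugs that kernel directly into vanilla GP regression; the paper derives such kernels for several widely used state-space models, which are not reproduced here.

```python
# GP regression with the Wiener-process kernel implied by a random-walk state-space model.
import numpy as np

def wiener_kernel(t1, t2, q=1.0):
    return q * np.minimum(t1[:, None], t2[None, :])

def gp_predict(t_train, y_train, t_test, noise_var=0.1, q=1.0):
    K = wiener_kernel(t_train, t_train, q) + noise_var * np.eye(len(t_train))
    Ks = wiener_kernel(t_test, t_train, q)
    Kss = wiener_kernel(t_test, t_test, q)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t_train = np.sort(rng.uniform(0.1, 10.0, size=30))       # t > 0 keeps the kernel valid
    y_train = np.cumsum(rng.normal(scale=0.5, size=30))      # a random-walk-like signal
    t_test = np.linspace(0.1, 10.0, 5)
    mean, var = gp_predict(t_train, y_train, t_test)
    print(np.round(mean, 2), np.round(var, 2))
```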
Variational Bayes (VB) is a scalable alternative to Markov chain Monte Carlo
(MCMC) for Bayesian posterior inference. Though popular, VB comes with few
theoretical guarantees, most of which focus on well-specified models. However,
models are rarely well-specified in practice. In this work, we study VB under
model misspecification. We prove the VB posterior is asymptotically normal and
centers at the value that minimizes the Kullback-Leibler (KL) divergence to the
true data-generating distribution. Moreover, the VB posterior mean centers at
the same value and is also asymptotically normal. These results generalize the
variational Bernstein--von Mises theorem [29] to misspecified models. As a
consequence of these results, we find that the model misspecification error
dominates the variational approximation error in VB posterior predictive
distributions. It explains the widely observed phenomenon that VB achieves
comparable predictive accuracy with MCMC even though VB uses an approximating
family. As illustrations, we study VB under three forms of model
misspecification, ranging from model over-/under-dispersion to latent
dimensionality misspecification. We conduct two simulation studies that
demonstrate the theoretical results. | [
"stat.ML",
"cs.LG",
"math.ST",
"stat.TH"
] |
Heterogeneous face recognition is a challenging task due to the large
modality discrepancy and insufficient cross-modal samples. Most existing works
focus on discriminative feature transformation, metric learning and cross-modal
face synthesis. However, the fact that cross-modal faces are always coupled by
domain (modality) and identity information has received little attention.
Therefore, how to learn and utilize the domain-private feature and
domain-agnostic feature for modality adaptive face recognition is the focus of
this work. Specifically, this paper proposes a Feature Aggregation Network
(FAN), which includes disentangled representation module (DRM), feature fusion
module (FFM) and adaptive penalty metric (APM) learning session. First, in DRM,
two subnetworks, i.e. domain-private network and domain-agnostic network are
specially designed for learning modality features and identity features,
respectively. Second, in FFM, the identity features are fused with domain
features to achieve cross-modal bi-directional identity feature transformation,
which, to a large extent, further disentangles the modality information and
identity information. Third, considering that the distribution imbalance
between easy and hard pairs exists in cross-modal datasets, which increases the
risk of model bias, the identity preserving guided metric learning with
adaptive hard pairs penalization is proposed in our FAN. The proposed APM also
guarantees the cross-modality intra-class compactness and inter-class
separation. Extensive experiments on benchmark cross-modal face datasets show
that our FAN outperforms SOTA methods. | [
"cs.CV"
] |
Given the importance of remote sensing, surprisingly little attention has
been paid to it by the representation learning community. To address it and to
establish baselines and a common evaluation protocol in this domain, we provide
simplified access to 5 diverse remote sensing datasets in a standardized form.
Specifically, we investigate in-domain representation learning to develop
generic remote sensing representations and explore which characteristics are
important for a dataset to be a good source for remote sensing representation
learning. The established baselines achieve state-of-the-art performance on
these datasets. | [
"cs.CV"
] |
Deep learning based computer vision fails when labeled images are scarce.
Recently, meta-learning algorithms have been confirmed as a promising way to
improve the ability to learn from few images in computer vision. However,
previous meta-learning approaches have two problems:
1) they ignore the importance of the attention mechanism for the meta learner;
2) they do not give the meta learner the ability to make good use of past
knowledge, which can help express images as high-level representations, so the
meta learner has to solve the few-shot learning task directly from the original
high-dimensional RGB images.
In this paper, we argue that the attention mechanism and past knowledge are
crucial for the meta learner, and that the meta learner should be trained on
high-level representations of the RGB images instead of directly on the
original ones. Based on these arguments, we propose two methods: Attention
augmented Meta Learning (AML) and Representation based and Attention augmented
Meta Learning (RAML). AML aims to improve the meta learner's attention ability
by explicitly embedding an attention model into its network. RAML aims to give
the meta learner the ability to leverage previously learned knowledge to reduce
the dimension of the original input data by expressing it as high-level
representations, helping the meta learner to perform well. Extensive
experiments demonstrate the effectiveness of the proposed models, with
state-of-the-art few-shot learning performance on several few-shot learning
benchmarks. The source code of our proposed methods will be released soon to
facilitate further studies of the aforementioned problems. | [
"cs.LG",
"cs.AI"
] |
Even though convolutional neural networks (CNNs) are driving progress in
medical image segmentation, standard models still have some drawbacks. First,
the use of multi-scale approaches, i.e., encoder-decoder architectures, leads
to a redundant use of information, where similar low-level features are
extracted multiple times at multiple scales. Second, long-range feature
dependencies are not efficiently modeled, resulting in non-optimal
discriminative feature representations associated with each semantic class. In
this paper we attempt to overcome these limitations with the proposed
architecture, by capturing richer contextual dependencies based on the use of
guided self-attention mechanisms. This approach is able to integrate local
features with their corresponding global dependencies, as well as highlight
interdependent channel maps in an adaptive manner. Further, the additional loss
between different modules guides the attention mechanisms to neglect irrelevant
information and focus on more discriminant regions of the image by emphasizing
relevant feature associations. We evaluate the proposed model in the context of
semantic segmentation on three different datasets: abdominal organs,
cardiovascular structures and brain tumors. A series of ablation experiments
support the importance of these attention modules in the proposed architecture.
In addition, compared to other state-of-the-art segmentation networks our model
yields better segmentation performance, increasing the accuracy of the
predictions while reducing the standard deviation. This demonstrates the
efficiency of our approach to generate precise and reliable automatic
segmentations of medical images. Our code is made publicly available at
https://github.com/sinAshish/Multi-Scale-Attention | [
"cs.CV"
] |
Group invariant and equivariant Multilayer Perceptrons (MLP), also known as
Equivariant Networks, have achieved remarkable success in learning on a variety
of data structures, such as sequences, images, sets, and graphs. Using tools
from group theory, this paper proves the universality of a broad class of
equivariant MLPs with a single hidden layer. In particular, it is shown that
having a hidden layer on which the group acts regularly is sufficient for
universal equivariance (invariance). A corollary is unconditional universality
of equivariant MLPs for Abelian groups, such as CNNs with a single hidden
layer. A second corollary is the universality of equivariant MLPs with a
high-order hidden layer, where we give both group-agnostic bounds and means for
calculating group-specific bounds on the order of hidden layer that guarantees
universal equivariance (invariance). | [
"cs.LG",
"cs.NE",
"math.GR",
"stat.ML"
] |
The Reward-Biased Maximum Likelihood Estimate (RBMLE) for adaptive control of
Markov chains was proposed to overcome the central obstacle of what is
variously called the fundamental "closed-loop identifiability problem" of adaptive
control, the "dual control problem", or, contemporaneously, the "exploration
vs. exploitation problem". It exploited the key observation that since the
maximum likelihood parameter estimator can asymptotically identify the
closed-loop transition probabilities under a certainty equivalent approach, the
limiting parameter estimates must necessarily have an optimal reward that is
less than the optimal reward attainable for the true but unknown system. Hence
it proposed a counteracting reverse bias in favor of parameters with larger
optimal rewards, providing a solution to the fundamental problem alluded to
above. It thereby proposed an optimistic approach of favoring parameters with
larger optimal rewards, now known as "optimism in the face of uncertainty". The
RBMLE approach has been proved to be long-term average reward optimal in a
variety of contexts. However, modern attention is focused on the much finer
notion of "regret", or finite-time performance. Recent analysis of RBMLE for
multi-armed stochastic bandits and linear contextual bandits has shown that it
not only has state-of-the-art regret, but it also exhibits empirical
performance comparable to or better than the best current contenders, and leads
to strikingly simple index policies. Motivated by this, we examine the
finite-time performance of RBMLE for reinforcement learning tasks that involve
the general problem of optimal control of unknown Markov Decision Processes. We
show that it has a regret of $\mathcal{O}( \log T)$ over a time horizon of $T$
steps, similar to state-of-the-art algorithms. Simulation studies show that
RBMLE outperforms other algorithms such as UCRL2 and Thompson Sampling. | [
"cs.LG",
"cs.SY",
"eess.SY"
] |
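To make the reward-biasing idea concrete, the sketch below applies it in its simplest setting, a Bernoulli multi-armed bandit: each arm's index is the maximum over theta of log-likelihood(theta) + alpha(t)·theta, computed here by brute force over a grid. The bias schedule alpha(t) and the grid are illustrative assumptions; the abstract itself concerns the much more general unknown-MDP setting.

```python
# Minimal RBMLE index policy for a Bernoulli bandit (illustration of the principle only).
import numpy as np

def rbmle_bandit(true_means, horizon=5000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(true_means)
    pulls = np.zeros(k); successes = np.zeros(k)
    grid = np.linspace(1e-3, 1 - 1e-3, 200)                  # candidate theta values
    total_reward = 0.0
    for t in range(1, horizon + 1):
        alpha = np.sqrt(np.log(t + 1))                       # reward-bias weight, grows with t
        idx = np.empty(k)
        for i in range(k):
            loglik = successes[i] * np.log(grid) + (pulls[i] - successes[i]) * np.log(1 - grid)
            idx[i] = np.max(loglik + alpha * grid)           # reward-biased MLE index
        arm = int(np.argmax(idx))
        r = float(rng.random() < true_means[arm])
        pulls[arm] += 1; successes[arm] += r; total_reward += r
    regret = horizon * max(true_means) - total_reward
    return regret

if __name__ == "__main__":
    print(rbmle_bandit([0.3, 0.5, 0.7]))
```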
Complex black-box predictive models may have high performance, but lack of
interpretability causes problems like lack of trust, lack of stability,
and sensitivity to concept drift. On the other hand, achieving satisfactory
accuracy with interpretable models requires more time-consuming work related to
feature engineering. Can we train interpretable and accurate models without
time-consuming feature engineering? We propose a method that uses elastic
black boxes as surrogate models to create simpler, less opaque, yet still
accurate and interpretable glass-box models. The new models are created on
newly engineered features extracted with the help of a surrogate model. We
support the analysis with a large-scale benchmark on several tabular data sets
from the OpenML database. There are two results: 1) extracting information from
complex models may improve the performance of linear models, and 2) the results
question a common myth that complex machine learning models outperform linear models. | [
"cs.LG",
"stat.ML"
] |
Stochastic variance-reduced gradient (SVRG) is an optimization method
originally designed for tackling machine learning problems with a finite sum
structure. SVRG was later shown to work for policy evaluation, a problem in
reinforcement learning in which one aims to estimate the value function of a
given policy. SVRG makes use of gradient estimates at two scales. At the slower
scale, SVRG computes a full gradient over the whole dataset, which could lead
to prohibitive computation costs. In this work, we show that two variants of
SVRG for policy evaluation could significantly diminish the number of gradient
calculations while preserving a linear convergence speed. More importantly, our
theoretical result implies that one does not need to use the entire dataset in
every epoch of SVRG when it is applied to policy evaluation with linear
function approximation. Our experiments demonstrate large computational savings
provided by the proposed methods. | [
"cs.LG",
"stat.ML"
] |
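For reference, the sketch below shows the generic two-scale SVRG structure the abstract refers to, on a plain least-squares problem: a full gradient is computed once per epoch (slow scale) and used as a control variate for per-sample gradients (fast scale). The paper's contribution, variants that avoid the full-dataset pass for policy evaluation, is not reproduced here.

```python
# Generic SVRG skeleton on least squares: full gradient per epoch + corrected SGD steps.
import numpy as np

def svrg_least_squares(X, y, n_epochs=30, inner=None, lr=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    inner = inner or n
    w = np.zeros(d)
    grad_i = lambda w_, i: (X[i] @ w_ - y[i]) * X[i]          # per-sample gradient
    for _ in range(n_epochs):
        w_snap = w.copy()
        full_grad = X.T @ (X @ w_snap - y) / n                # slow scale: full gradient
        for _ in range(inner):                                # fast scale: variance-reduced SGD
            i = rng.integers(n)
            w -= lr * (grad_i(w, i) - grad_i(w_snap, i) + full_grad)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, d = 500, 10
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.01 * rng.normal(size=n)
    w_hat = svrg_least_squares(X, y)
    print(np.linalg.norm(w_hat - w_true))
```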
Image Retrieval is a fundamental task of obtaining images similar to the
query one from a database. A common image retrieval practice is to firstly
retrieve candidate images via similarity search using global image features and
then re-rank the candidates by leveraging their local features. Previous
learning-based studies mainly focus on either global or local image
representation learning to tackle the retrieval task. In this paper, we abandon
the two-stage paradigm and seek to design an effective single-stage solution by
integrating local and global information inside images into compact image
representations. Specifically, we propose a Deep Orthogonal Local and Global
(DOLG) information fusion framework for end-to-end image retrieval. It
attentively extracts representative local information with multi-atrous
convolutions and self-attention at first. Components orthogonal to the global
image representation are then extracted from the local information. At last,
the orthogonal components are concatenated with the global representation as a
complementary, and then aggregation is performed to generate the final
representation. The whole framework is end-to-end differentiable and can be
trained with image-level labels. Extensive experimental results validate the
effectiveness of our solution and show that our model achieves state-of-the-art
image retrieval performances on Revisited Oxford and Paris datasets. | [
"cs.CV"
] |
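A minimal sketch of the orthogonal fusion step described above: for each local descriptor, the component parallel to the global descriptor is removed, the orthogonal remainder is aggregated, and the result is concatenated with the global descriptor. The backbone, multi-atrous convolutions and attention modules are omitted; random tensors stand in for the extracted features.

```python
# Orthogonal local/global fusion sketch with stand-in feature tensors.
import torch

def orthogonal_fusion(local_feat, global_feat):
    """local_feat: (N, C, H, W) local features; global_feat: (N, C) global descriptor."""
    n, c, h, w = local_feat.shape
    flat = local_feat.view(n, c, h * w)                       # (N, C, HW)
    g = global_feat / (global_feat.norm(dim=1, keepdim=True) + 1e-8)
    proj = torch.einsum("nc,nck->nk", g, flat)                # scalar projection per location
    parallel = g.unsqueeze(-1) * proj.unsqueeze(1)            # component along the global vector
    orthogonal = flat - parallel
    pooled = orthogonal.mean(dim=-1)                          # aggregate the orthogonal component
    return torch.cat([pooled, global_feat], dim=1)            # (N, 2C) final descriptor

local_feat = torch.randn(2, 128, 7, 7)
global_feat = torch.randn(2, 128)
fused = orthogonal_fusion(local_feat, global_feat)
# Sanity check: the pooled orthogonal part has (near-)zero projection on the global vector.
print(fused.shape, torch.einsum("nc,nc->n", fused[:, :128], global_feat).abs().max().item())
```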
Generation of stroke-based non-photorealistic imagery, is an important
problem in the computer vision community. As an endeavor in this direction,
substantial recent research efforts have been focused on teaching machines "how
to paint", in a manner similar to a human painter. However, the applicability
of previous methods has been limited to datasets with little variation in
position, scale and saliency of the foreground object. As a consequence, we
find that these methods struggle to cover the granularity and diversity
possessed by real world images. To this end, we propose a Semantic Guidance
pipeline with 1) a bi-level painting procedure for learning the distinction
between foreground and background brush strokes at training time. 2) We also
introduce invariance to the position and scale of the foreground object through
a neural alignment model, which combines object localization and spatial
transformer networks in an end-to-end manner, to zoom into a particular
semantic instance. 3) The distinguishing features of the in-focus object are
then amplified by maximizing a novel guided backpropagation based focus reward.
The proposed agent does not require any supervision on human stroke-data and
successfully handles variations in foreground object attributes, thus,
producing much higher quality canvases for the CUB-200 Birds and Stanford
Cars-196 datasets. Finally, we demonstrate the further efficacy of our method
on complex datasets with multiple foreground object instances by evaluating an
extension of our method on the challenging Virtual-KITTI dataset. Source code
and models are available at https://github.com/1jsingh/semantic-guidance. | [
"cs.CV",
"cs.CG",
"cs.LG"
] |
The Probabilistic Object Detection Challenge evaluates object detection
methods using a new evaluation measure, Probability-based Detection Quality
(PDQ), on a new synthetic image dataset. We present our submission to the
challenge, a fine-tuned version of Mask-RCNN with some additional
post-processing. Our method, submitted under username pammirato, is currently
second on the leaderboard with a score of 21.432, while also achieving the
highest spatial quality and average overall quality of detections. We hope this
method can provide some insight into how detectors designed for mean average
precision (mAP) evaluation behave under PDQ, as well as a strong baseline for
future work. | [
"cs.CV"
] |
Autonomous driving is getting a lot of attention in the last decade and will
be the hot topic at least until the first successful certification of a car
with Level 5 autonomy. There are many public datasets in the academic
community. However, they are far away from what a robust industrial production
system needs. There is a large gap between the academic and the industrial
setting, and getting from a research prototype built on public datasets to a
deployable solution is a challenging task. In this paper, we focus on bad
practices that often occur in autonomous driving from an industrial
deployment perspective. Data design deserves at least the same amount of
attention as the model design. There is very little attention paid to these
issues in the scientific community, and we hope this paper encourages better
formalization of dataset design. More specifically, we focus on the datasets
design and validation scheme for autonomous driving, where we would like to
highlight the common problems, wrong assumptions, and steps towards avoiding
them, as well as some open problems. | [
"cs.CV",
"cs.LG",
"cs.RO",
"stat.ML"
] |
Real-time semantic segmentation has received considerable attention due to
growing demands in many practical applications, such as autonomous vehicles,
robotics, etc. Existing real-time segmentation approaches often utilize feature
fusion to improve segmentation accuracy. However, they fail to fully consider
the feature information at different resolutions and the receptive fields of
the networks are relatively limited, thereby compromising the performance. To
tackle this problem, we propose a light Cascaded Selective Resolution Network
(CSRNet) to improve the performance of real-time segmentation through multiple
context information embedding and enhanced feature aggregation. The proposed
network builds a three-stage segmentation system, which integrates feature
information from low resolution to high resolution and achieves feature
refinement progressively. CSRNet contains two critical modules: the Shorted
Pyramid Fusion Module (SPFM) and the Selective Resolution Module (SRM). The
SPFM is a computationally efficient module to incorporate the global context
information and significantly enlarge the receptive field at each stage. The
SRM is designed to fuse multi-resolution feature maps with various receptive
fields, which assigns soft channel attentions across the feature maps and helps
to remedy the problem caused by multi-scale objects. Comprehensive experiments
on two well-known datasets demonstrate that the proposed CSRNet effectively
improves the performance for real-time segmentation. | [
"cs.CV"
] |
Justifying draconian measures during the Covid-19 pandemic was difficult not
only because of the restriction of individual rights, but also because of its
economic impact. The objective of this work is to present a machine learning
approach to identifying regions that should implement similar health policies.
To that end, we successfully developed a system that gives a notion of the
economic impact given the prediction of new incident cases, obtained through unsupervised
learning and time series forecasting. This system was built taking into account
computational restrictions and low maintenance requirements in order to improve
the system's resilience. Finally this system was deployed as part of a web
application for simulation and data analysis of COVID-19, in Colombia,
available at (https://covid19.dis.eafit.edu.co). | [
"cs.LG",
"cs.CY"
] |
Machine learning has been used in many fields. In this article, we show how
machine learning can be applied to a time series problem, using airline ticket
price prediction as our specific problem. Airline companies use many different
variables to determine flight ticket prices: an indicator of whether the travel
is during the holidays, the number of free seats in the plane, etc. Some of the
variables are observed, but some of them are hidden. Based on data over a
103-day period, we trained our models; the best model is AdaBoost-Decision Tree
classification. This algorithm has the best performance over the 8 observed
routes, with 61.35$\%$ better performance than the random purchase strategy and
relatively small variance over these routes. We also considered the situation
where we cannot obtain much historical data for some routes (for example,
because a route is new and has no historical data) or where we do not want to
train on historical data in order to predict buy-or-wait decisions quickly; for
this problem we used HMM sequence classification based AdaBoost-Decision Tree
classification to perform our prediction on 12 new routes. Finally, we obtained
31.71$\%$ better performance than the random purchase strategy. | [
"cs.LG"
] |
Analog computing hardwares, such as Processing-in-memory (PIM) accelerators,
have gradually received more attention for accelerating the neural network
computations. However, PIM accelerators often suffer from intrinsic noise in
the physical components, making it challenging for neural network models to
achieve the same performance as on the digital hardware. Previous works in
mitigating intrinsic noise assumed the knowledge of the noise model, and
retraining the neural networks accordingly was required. In this paper, we
propose a noise-agnostic method to achieve robust neural network performance
against any noise setting. Our key observation is that the degradation of
performance is due to the distribution shifts in network activations, which are
caused by the noise. To properly track the shifts and calibrate the biased
distributions, we propose a "noise-aware" batch normalization layer, which is
able to align the distributions of the activations under variational noise
inherent in the analog environments. Our method is simple, easy to implement,
general to various noise settings, and does not need to retrain the models. We
conduct experiments on several tasks in computer vision, including
classification, object detection and semantic segmentation. The results
demonstrate the effectiveness of our method, achieving robust performance under
a wide range of noise settings, more reliable than existing methods. We believe
that our simple yet general method can facilitate the adoption of analog
computing devices for neural networks. | [
"cs.CV",
"cs.LG"
] |
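The following PyTorch sketch illustrates the idea as I read the abstract above: refresh batch-normalization statistics on activations perturbed by simulated analog noise, so normalization tracks the shifted distributions without retraining weights. The noise model, layer sizes, and recalibration loop are my assumptions, not the authors' implementation.

```python
# Illustrative sketch (PyTorch): re-estimate BatchNorm statistics under simulated
# analog noise so that normalization tracks the shifted activation distribution.
import torch
import torch.nn as nn

def add_analog_noise(x, std=0.1):
    """Toy stand-in for intrinsic noise of an analog accelerator."""
    return x + std * torch.randn_like(x)

class NoisyConvBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3, padding=1)
        self.bn = nn.BatchNorm2d(c_out)

    def forward(self, x):
        x = add_analog_noise(self.conv(x))   # noisy analog compute
        return torch.relu(self.bn(x))        # BN calibrates the shifted stats

@torch.no_grad()
def recalibrate_bn(model, loader, momentum=0.1):
    """Refresh running mean/var on noisy activations without retraining weights."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.reset_running_stats()
            m.momentum = momentum
    model.train()            # BN updates running stats in train mode
    for x in loader:
        model(x)
    model.eval()

block = NoisyConvBlock(3, 8)
loader = [torch.randn(4, 3, 32, 32) for _ in range(10)]
recalibrate_bn(block, loader)
print(block.bn.running_mean[:4])
```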
3D image scans are an assessment tool for neurological damage in Parkinson's
disease (PD) patients. This diagnostic process can be automated to help
medical staff through Decision Support Systems (DSSs), and Convolutional Neural
Networks (CNNs) are good candidates, because they are effective when applied to
spatial data. This paper proposes a 3D CNN ordinal model for assessing the
level of neurological damage in PD patients. Given that CNNs need large
datasets to achieve acceptable performance, a data augmentation method is
adapted to work with spatial data. We consider the Ordinal Graph-based
Oversampling via Shortest Paths (OGO-SP) method, which applies a gamma
probability distribution for inter-class data generation. A modification of
OGO-SP is proposed, the OGO-SP-$\beta$ algorithm, which applies the beta
distribution for generating synthetic samples in the inter-class region, a
better suited distribution when compared to gamma. The evaluation of the
different methods is based on a novel 3D image dataset provided by the Hospital
Universitario 'Reina Sof\'ia' (C\'ordoba, Spain). We show how the ordinal
methodology improves the performance with respect to the nominal one, and how
OGO-SP-$\beta$ yields better performance than OGO-SP. | [
"cs.CV",
"cs.LG"
] |
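A hedged sketch of the beta-based generation step that the abstract above attributes to OGO-SP-$\beta$: synthetic samples are placed between pairs drawn from adjacent ordinal classes, with the mixing coefficient drawn from a Beta distribution. The shortest-path graph machinery of the full method is omitted, and all parameters and data are illustrative.

```python
# Sketch of beta-distributed inter-class sample generation: synthetic points are
# placed between neighbouring ordinal classes with a mixing coefficient drawn
# from a Beta distribution. Parameters and data are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def beta_interclass_samples(x_low, x_high, n_new, a=2.0, b=5.0):
    """Generate n_new synthetic samples between two neighbouring classes.

    x_low, x_high: arrays of shape (n_i, d) with samples from adjacent classes.
    The Beta(a, b) draw controls where each synthetic point lies on the segment
    joining a random pair of samples from the two classes.
    """
    i = rng.integers(0, len(x_low), n_new)
    j = rng.integers(0, len(x_high), n_new)
    lam = rng.beta(a, b, size=(n_new, 1))   # skewed towards the lower class
    return (1 - lam) * x_low[i] + lam * x_high[j]

# Toy 2-D ordinal classes.
class_1 = rng.normal([0, 0], 0.3, size=(30, 2))
class_2 = rng.normal([2, 2], 0.3, size=(10, 2))   # minority class
synthetic = beta_interclass_samples(class_1, class_2, n_new=20)
print(synthetic.shape)  # (20, 2)
```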
Stochastic approximation, a data-driven approach for finding the fixed point
of an unknown operator, provides a unified framework for treating many problems
in stochastic optimization and reinforcement learning. Motivated by a growing
interest in multi-agent and multi-task learning, we consider in this paper a
decentralized variant of stochastic approximation. A network of agents, each
with their own unknown operator and data observations, cooperatively find the
fixed point of the aggregate operator. The agents work by running a local
stochastic approximation algorithm using noisy samples from their operators
while averaging their iterates with their neighbors' on a decentralized
communication graph. Our main contribution provides a finite-time analysis of
this decentralized stochastic approximation algorithm and characterizes the
impacts of the underlying communication topology between agents. We model the
data observed at each agent as samples from a Markov process;
this lack of independence makes the iterates biased and (potentially)
unbounded. Under mild assumptions on the Markov processes, we show that the
convergence rate of the proposed methods is essentially the same as if the
samples were independent, differing only by a log factor that represents the
mixing time of the Markov process. We also present applications of the proposed
method on a number of interesting learning problems in multi-agent systems,
including a decentralized variant of Q-learning for solving multi-task
reinforcement learning. | [
"cs.LG",
"math.OC"
] |
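The sketch below illustrates the update scheme described in the abstract above: each agent takes a local stochastic-approximation step using a noisy sample of its operator, then averages its iterate with its neighbours' through a mixing matrix. The linear operators, noise level, step sizes, and ring topology are toy assumptions.

```python
# Minimal sketch of decentralized stochastic approximation: local noisy operator
# steps followed by neighbour averaging over a doubly stochastic mixing matrix.
import numpy as np

rng = np.random.default_rng(0)
n_agents, d = 4, 3

# Each agent's (unknown to it) operator: here F_i(x) = A_i x + b_i, observed with noise.
A = [np.eye(d) * 0.5 + 0.1 * rng.standard_normal((d, d)) for _ in range(n_agents)]
b = [rng.standard_normal(d) for _ in range(n_agents)]

# Ring topology with doubly stochastic mixing weights.
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

x = np.zeros((n_agents, d))
for t in range(1, 2001):
    step = 1.0 / (t + 10)
    # Local stochastic approximation step with a noisy sample of the operator.
    noisy = np.stack([A[i] @ x[i] + b[i] + 0.1 * rng.standard_normal(d)
                      for i in range(n_agents)])
    x = x + step * (noisy - x)
    # Consensus step: average iterates with neighbours.
    x = W @ x

print("agents' iterates after mixing:\n", x)
```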
We address the task of jointly determining what a person is doing and where
they are looking based on the analysis of video captured by a headworn camera.
To facilitate our research, we first introduce the EGTEA Gaze+ dataset. Our
dataset comes with videos, gaze tracking data, hand masks and action
annotations, thereby providing the most comprehensive benchmark for First
Person Vision (FPV). Moving beyond the dataset, we propose a novel deep model
for joint gaze estimation and action recognition in FPV. Our method describes
the participant's gaze as a probabilistic variable and models its distribution
using stochastic units in a deep network. We further sample from these
stochastic units, generating an attention map to guide the aggregation of
visual features for action recognition. Our method is evaluated on our EGTEA
Gaze+ dataset and achieves a performance level that exceeds the
state-of-the-art by a significant margin. More importantly, we demonstrate that
our model can be applied to the larger-scale FPV dataset EPIC-Kitchens, even
without using gaze, offering new state-of-the-art results on FPV action
recognition. | [
"cs.CV"
] |
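A minimal PyTorch sketch of the modelling idea in the abstract above: treat gaze as a probabilistic variable over spatial locations, sample an attention map from stochastic units, and use it to pool features for action recognition. The layer sizes and the Gumbel-softmax sampling choice are assumptions, not the paper's exact architecture.

```python
# Illustrative sketch: gaze as a stochastic attention map (sampled over spatial
# locations) used to pool visual features for action recognition.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticGazeAttention(nn.Module):
    def __init__(self, channels, n_actions):
        super().__init__()
        self.gaze_head = nn.Conv2d(channels, 1, kernel_size=1)   # gaze logits per location
        self.classifier = nn.Linear(channels, n_actions)

    def forward(self, feat, tau=1.0):
        b, c, h, w = feat.shape
        logits = self.gaze_head(feat).view(b, h * w)
        if self.training:
            # Sample a soft attention map from the gaze distribution.
            attn = F.gumbel_softmax(logits, tau=tau, hard=False)
        else:
            attn = F.softmax(logits, dim=-1)
        attn = attn.view(b, 1, h, w)
        pooled = (feat * attn).sum(dim=(2, 3))      # attention-weighted pooling
        return self.classifier(pooled), attn

model = StochasticGazeAttention(channels=64, n_actions=10)
features = torch.randn(2, 64, 7, 7)   # e.g. backbone output
logits, gaze_map = model(features)
print(logits.shape, gaze_map.shape)   # torch.Size([2, 10]) torch.Size([2, 1, 7, 7])
```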
This paper presents a novel approach to table structure recognition by
leveraging guided anchors. The concept differs from current
state-of-the-art approaches for table structure recognition that naively apply
object detection methods. In contrast to prior techniques, first, we estimate
the viable anchors for table structure recognition. Subsequently, these anchors
are exploited to locate the rows and columns in tabular images. Furthermore,
the paper introduces a simple and effective method that improves the results by
using tabular layouts in realistic scenarios. The proposed method is
exhaustively evaluated on two publicly available datasets for table
structure recognition, i.e., ICDAR-2013 and TabStructDB. We accomplished
state-of-the-art results on the ICDAR-2013 dataset with an average F-Measure of
95.05$\%$ (94.6$\%$ for rows and 96.32$\%$ for columns) and surpassed the
baseline results on the TabStructDB dataset with an average F-Measure of
94.17$\%$ (94.08$\%$ for rows and 95.06$\%$ for columns). | [
"cs.CV"
] |
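One simple way to "estimate viable anchors" for rows and columns, sketched below, is to cluster the sizes of ground-truth row and column boxes into a few representative anchor sizes. This is only my illustration of the general idea under synthetic data; it is not the paper's procedure.

```python
# Hedged sketch: derive guided anchor sizes for rows/columns by clustering the
# heights of ground-truth row boxes and widths of column boxes. Data are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic ground-truth row heights and column widths (in pixels).
row_heights = np.concatenate([rng.normal(22, 2, 200), rng.normal(40, 3, 50)])
col_widths = np.concatenate([rng.normal(90, 8, 150), rng.normal(180, 12, 60)])

def guided_anchors(sizes, n_anchors=3):
    """Cluster 1-D box sizes into n_anchors representative anchor sizes."""
    km = KMeans(n_clusters=n_anchors, n_init=10, random_state=0)
    km.fit(sizes.reshape(-1, 1))
    return np.sort(km.cluster_centers_.ravel())

print("row anchor heights:", guided_anchors(row_heights))
print("column anchor widths:", guided_anchors(col_widths))
```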
How can we explain the predictions of a black-box model? In this paper, we
use influence functions -- a classic technique from robust statistics -- to
trace a model's prediction through the learning algorithm and back to its
training data, thereby identifying training points most responsible for a given
prediction. To scale up influence functions to modern machine learning
settings, we develop a simple, efficient implementation that requires only
oracle access to gradients and Hessian-vector products. We show that even on
non-convex and non-differentiable models where the theory breaks down,
approximations to influence functions can still provide valuable information.
On linear models and convolutional neural networks, we demonstrate that
influence functions are useful for multiple purposes: understanding model
behavior, debugging models, detecting dataset errors, and even creating
visually-indistinguishable training-set attacks. | [
"stat.ML",
"cs.AI",
"cs.LG"
] |
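The Hessian-vector product mentioned in the abstract above is the key primitive for scaling influence functions; the sketch below computes it via double backpropagation in PyTorch on a toy logistic-regression model. The inverse-HVP step (e.g. LiSSA or conjugate gradients) and the model/data are omitted or assumed.

```python
# Sketch of the Hessian-vector product used by influence functions, via double
# backprop in PyTorch. The tiny logistic-regression model and data are toy examples.
import torch

torch.manual_seed(0)
X = torch.randn(100, 5)
y = (X[:, 0] > 0).float()
w = torch.zeros(5, requires_grad=True)

def loss_fn(w):
    logits = X @ w
    return torch.nn.functional.binary_cross_entropy_with_logits(logits, y)

# Gradient of the training loss at w, kept differentiable for double backprop.
loss = loss_fn(w)
grad = torch.autograd.grad(loss, w, create_graph=True)[0]

def hvp(vector):
    """Hessian-vector product H @ vector without forming H explicitly."""
    return torch.autograd.grad(grad @ vector, w, retain_graph=True)[0]

# Gradient of the loss on one test point (the direction we trace back).
x_test, y_test = X[0:1], y[0:1]
test_loss = torch.nn.functional.binary_cross_entropy_with_logits(x_test @ w, y_test)
v = torch.autograd.grad(test_loss, w, retain_graph=True)[0]

print("HVP of test gradient:", hvp(v))
```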
The self-attention mechanism has attracted wide attention for its ability to
model long-range dependencies; its variant for computer vision tasks, the
non-local block, aims to model the global dependencies of the input feature
maps. Gathering global contextual information will inevitably
need a tremendous amount of memory and computing resources, which has been
extensively studied in the past several years. However, there is a further
problem with the self-attention scheme: is all information gathered from the
global scope helpful for the contextual modelling? To our knowledge, few
studies have focused on this problem. To address both questions, this paper
proposes the salient positions-based attention scheme SPANet, which is inspired
by some interesting observations on the attention maps and affinity matrices
generated in the self-attention scheme. We believe these observations are
beneficial for a better understanding of self-attention. SPANet uses a
salient position selection algorithm to attend to only a limited number of
salient points when computing the attention map. This approach not only saves
a large amount of memory and computing resources, but also tries to distill
useful information from the transformation of the input feature maps. In the
implementation, considering that the feature maps have high channel dimensions
and differ substantially from general visual images, we take the squared power
of the feature maps along the channel dimension as the saliency metric of each
position. In contrast to the non-local block, SPANet models the contextual
information using only the selected positions instead of all positions, and
along the channel dimension instead of the spatial dimension. Our
source code is available at https://github.com/likyoo/SPANet. | [
"cs.CV"
] |
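A compact sketch of the salient-position selection described in the abstract above: score each spatial position by its squared activation magnitude along the channel dimension, keep the top-k positions, and compute attention only against those. Shapes, k, and the residual form are illustrative; the authors' implementation is at the linked repository.

```python
# Sketch of salient-position attention: rank positions by squared channel-wise
# magnitude, attend only to the top-k salient positions. Illustrative only.
import torch
import torch.nn.functional as F

def salient_position_attention(feat, k=16):
    """feat: (B, C, H, W) feature map. Returns contextualized features."""
    b, c, h, w = feat.shape
    flat = feat.view(b, c, h * w)                      # (B, C, N)
    saliency = (flat ** 2).sum(dim=1)                  # (B, N) squared power per position
    topk = saliency.topk(k, dim=-1).indices            # (B, k) salient positions
    keys = torch.gather(flat, 2, topk.unsqueeze(1).expand(b, c, k))  # (B, C, k)

    # Attend from every position (queries) to the k salient positions only.
    attn = torch.einsum('bcn,bck->bnk', flat, keys) / (c ** 0.5)
    attn = F.softmax(attn, dim=-1)
    context = torch.einsum('bnk,bck->bcn', attn, keys)  # (B, C, N)
    return feat + context.view(b, c, h, w)              # residual, like non-local blocks

x = torch.randn(2, 64, 16, 16)
print(salient_position_attention(x).shape)   # torch.Size([2, 64, 16, 16])
```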
We introduce and analyze a natural algorithm for multi-venue exploration from
censored data, which is motivated by the Dark Pool Problem of modern
quantitative finance. We prove that our algorithm converges in polynomial time
to a near-optimal allocation policy; prior results for similar problems in
stochastic inventory control guaranteed only asymptotic convergence and
examined variants in which each venue could be treated independently. Our
analysis bears a strong resemblance to that of efficient exploration/exploitation
schemes in the reinforcement learning literature. We describe an
extensive experimental evaluation of our algorithm on the Dark Pool Problem
using real trading data. | [
"cs.LG",
"cs.GT"
] |
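To make the censored-data setting above concrete, the toy loop below estimates each venue's ability to absorb shares from censored fills (a fill equal to the submitted size reveals only a lower bound) and allocates shares greedily to the most promising venue. The real algorithm uses a Kaplan-Meier-style estimator with formal guarantees; this sketch uses crude empirical tail frequencies and synthetic venues purely for illustration.

```python
# Simplified sketch of greedy multi-venue allocation from censored fills.
import numpy as np

rng = np.random.default_rng(0)
true_capacity = [lambda: rng.poisson(3), lambda: rng.poisson(8), lambda: rng.poisson(1)]
K, V, T = 3, 10, 500          # venues, shares per round, rounds
observations = [[] for _ in range(K)]   # (filled, was_censored) pairs per venue

def tail_prob(obs, s):
    """Crude estimate of the probability that the venue can fill at least s shares."""
    if not obs:
        return 1.0   # optimistic prior for unexplored venues
    hits = sum(1 for filled, _ in obs if filled >= s)
    return (hits + 1) / (len(obs) + 1)

for t in range(T):
    # Greedy allocation: give each next share to the most promising venue.
    alloc = [0] * K
    for _ in range(V):
        i = max(range(K), key=lambda k: tail_prob(observations[k], alloc[k] + 1))
        alloc[i] += 1
    # Execute and record censored observations.
    for i in range(K):
        cap = true_capacity[i]()
        filled = min(cap, alloc[i])
        observations[i].append((filled, filled == alloc[i]))

print("final allocation:", alloc)
```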
Optimism about the poorly understood states and actions is the main driving
force of exploration for many provably-efficient reinforcement learning
algorithms. We propose Optimism in the Face of Sensible Value Functions (OFVF),
a novel data-driven Bayesian algorithm for constructing plausibility sets for
MDPs that explores robustly while minimizing the worst-case exploration cost. The method
computes policies with tighter optimistic estimates for exploration by
introducing two new ideas. First, it is based on Bayesian posterior
distributions rather than distribution-free bounds. Second, OFVF does not
construct plausibility sets as simple confidence intervals. Confidence
intervals as plausibility sets are a sufficient but not a necessary condition.
OFVF uses the structure of the value function to optimize the location and
shape of the plausibility set to guarantee upper bounds directly without
necessarily enforcing the requirement for the set to be a confidence interval.
OFVF proceeds in an episodic manner, where the duration of the episode is fixed
and known. Our algorithm is inherently Bayesian and can leverage prior
information. Our theoretical analysis shows the robustness of OFVF, and the
empirical results demonstrate its practical promise. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
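The sketch below illustrates only the Bayesian ingredient contrasted with distribution-free bounds in the abstract above: sample transition models from a Dirichlet posterior, evaluate a fixed policy under each sample, and take an upper quantile as an optimistic value estimate. OFVF additionally shapes the plausibility set around the value function; everything here (MDP size, rewards, counts) is a synthetic assumption.

```python
# Hedged sketch: posterior sampling of transition models to obtain optimistic
# value estimates, as a Bayesian alternative to confidence-interval bounds.
import numpy as np

rng = np.random.default_rng(0)
n_states, gamma = 3, 0.9
rewards = np.array([0.0, 0.5, 1.0])                        # reward per state under the policy
counts = rng.integers(1, 10, size=(n_states, n_states))    # observed transition counts

def sampled_value(counts):
    """Draw one transition model from the Dirichlet posterior and evaluate the policy."""
    P = np.stack([rng.dirichlet(counts[s] + 1) for s in range(n_states)])
    return np.linalg.solve(np.eye(n_states) - gamma * P, rewards)

samples = np.stack([sampled_value(counts) for _ in range(1000)])
optimistic_v = np.quantile(samples, 0.95, axis=0)   # optimistic per-state estimate
print("posterior-optimistic values:", optimistic_v)
```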
This paper introduces a new type of image enhancement problem. Compared to
traditional image enhancement methods, which mostly deal with pixel-wise
modifications of a given photo, our proposed task is to crop an image which is
embedded within a photo and enhance the quality of the cropped image. We split
our proposed approach into two deep networks: deep photo cropper and deep image
enhancer. In the photo cropper network, we employ a spatial transformer to
extract the embedded image. In the photo enhancer, we employ super-resolution
to increase the number of pixels in the embedded image and reduce the effect of
stretching and distortion of pixels. We use a cosine distance loss between image
features and the ground truth for the cropper and a mean squared error loss for the
enhancer. Furthermore, we propose a new dataset to train and test the proposed
method. Finally, we analyze the proposed method with respect to qualitative and
quantitative evaluations. | [
"cs.CV",
"eess.IV"
] |
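The two training objectives named in the abstract above can be written compactly as below: a cosine distance between feature embeddings for the cropper and a pixel-wise MSE for the enhancer. The feature extractor here is a stand-in, and all shapes are illustrative rather than the paper's architecture.

```python
# Sketch of the two losses: cosine distance between image features (cropper) and
# mean squared error (enhancer). Networks and shapes are illustrative stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

feature_extractor = nn.Sequential(        # stand-in for a pretrained backbone
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

def cropper_loss(cropped, target):
    """Cosine distance between feature embeddings of the crop and the ground truth."""
    f1 = feature_extractor(cropped)
    f2 = feature_extractor(target)
    return (1 - F.cosine_similarity(f1, f2, dim=1)).mean()

def enhancer_loss(enhanced, target):
    """Pixel-wise mean squared error for the super-resolved output."""
    return F.mse_loss(enhanced, target)

crop, gt_crop = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
sr, gt_sr = torch.randn(4, 3, 128, 128), torch.randn(4, 3, 128, 128)
print(cropper_loss(crop, gt_crop).item(), enhancer_loss(sr, gt_sr).item())
```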