In this paper we propose to represent a scene as an abstraction of 'things'.
We start from 'things' as generated by modern object proposals, and we
investigate their immediately observable properties: position, size, aspect
ratio and color, and those only. Where the recent successes and excitement of
the field lie in object identification, we represent the scene composition
independent of object identities. We make three contributions in this work.
First, we study simple observable properties of 'things' and call them the
things syntax. Second, we propose translating the things syntax into abstract
linguistic statements and study their descriptive power for scene retrieval.
Third, we propose querying scenes with abstract block illustrations and study
their effectiveness in discriminating among different types of scenes. The benefit of
abstract statements and block illustrations is that we generate them directly
from the images, without any learning beforehand as in the standard attribute
learning. Surprisingly, we show that even though we use the simplest of
features from 'things' layout and no learning at all, we can still retrieve
scenes reasonably well. | [
"cs.CV"
] |
The great success achieved by deep neural networks attracts increasing
attention from the manufacturing and healthcare communities. However, the
limited availability of data and high costs of data collection are the major
challenges for the applications in those fields. We propose in this work AISEL,
an active image synthesis method for efficient labeling to improve the
performance of the small-data learning tasks. Specifically, a complementary
AISEL dataset is generated, with labels actively acquired via a physics-based
method to incorporate the underlying physical knowledge at hand. An important
component of our AISEL method is the bidirectional generative invertible
network (GIN), which can extract interpretable features from the training
images and generate physically meaningful virtual images. Our AISEL method then
efficiently samples virtual images that not only further exploit the uncertain
regions but also explore the entire image space. We then discuss the
interpretability of GIN both theoretically and experimentally, demonstrating
clear visual improvements over the benchmarks. Finally, we demonstrate the
effectiveness of our AISEL framework on an aortic stenosis application, in which
our method lowers the labeling cost by $90\%$ while achieving a $15\%$
improvement in prediction accuracy. | [
"cs.CV"
] |
Despite their renowned predictive power on i.i.d. data, convolutional neural
networks are known to rely more on high-frequency patterns that humans deem
superficial than on low-frequency patterns that agree better with intuitions
about what constitutes category membership. This paper proposes a method for
training robust convolutional networks by penalizing the predictive power of
the local representations learned by earlier layers. Intuitively, our networks
are forced to discard predictive signals such as color and texture that can be
gleaned from local receptive fields and to rely instead on the global
structures of the image. Across a battery of synthetic and benchmark domain
adaptation tasks, our method confers improved out-of-domain generalization.
Also, to evaluate cross-domain transfer, we introduce ImageNet-Sketch, a new
dataset consisting of sketch-like images that matches the ImageNet
classification validation set in categories and scale. | [
"cs.CV"
] |
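The penalty described in the abstract above lends itself to a compact sketch: a patch-wise classifier reads early-layer features through a gradient-reversal layer, so that training it degrades, rather than improves, the locally predictive signal in the backbone. Below is a minimal PyTorch illustration under assumed details (the 1x1-conv head, the reversal weight `alpha`, and where the penalty is attached are our choices, not the authors' released code):

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates and scales gradients backward."""

    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.alpha * grad_output, None


class PatchPenalty(nn.Module):
    """1x1-conv classifier that predicts the image label from every local
    patch of an early feature map. Because its gradient is reversed before
    reaching the backbone, training it removes locally predictive signal
    (color, texture) and pushes the network toward global structure."""

    def __init__(self, in_channels, num_classes, alpha=1.0):
        super().__init__()
        self.head = nn.Conv2d(in_channels, num_classes, kernel_size=1)
        self.alpha = alpha

    def forward(self, feat, labels):
        feat = GradReverse.apply(feat, self.alpha)
        logits = self.head(feat)                       # (B, C, H, W)
        b, c, h, w = logits.shape
        logits = logits.permute(0, 2, 3, 1).reshape(-1, c)
        labels = labels.repeat_interleave(h * w)       # one label per patch
        return nn.functional.cross_entropy(logits, labels)


# hypothetical training step:
#   total_loss = task_loss + patch_penalty(early_layer_features, labels)
```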
For most diseases, building large databases of labeled genetic data is an
expensive and time-demanding task. To address this, we introduce genetic
Generative Adversarial Networks (gGAN), a semi-supervised approach based on an
innovative GAN architecture to create large synthetic genetic data sets
starting with a small amount of labeled data and a large amount of unlabeled
data. Our goal is to determine the propensity of a new individual to develop
the severe form of the illness from their genetic profile alone. The proposed
model achieved satisfactory results using real genetic data from different
datasets and populations, where the test populations may not have the same
genetic profiles. The proposed model is self-aware and capable of determining
whether a new genetic profile has enough compatibility with the data on which
the network was trained and is thus suitable for prediction. The code and
datasets used can be found at https://github.com/caio-davi/gGAN. | [
"cs.LG",
"q-bio.GN",
"stat.ML",
"I.5"
] |
Advances in computing technology have allowed researchers across many fields
of endeavor to collect and maintain vast amounts of observational statistical
data such as clinical data, biological patient data, data regarding access of
web sites, financial data, and the like. Brain Magnetic Resonance Imaging (MRI)
segmentation is a complex problem in the field of medical imaging despite the
various methods that have been presented. An MR image of the human brain can be
divided into several sub-regions, especially soft tissues such as gray matter,
white matter and cerebrospinal fluid. Although edge information is the main clue
in image segmentation, it cannot yield good results when analyzing the content
of images without being combined with other information. The segmentation of
brain tissue in magnetic resonance imaging (MRI) is very important for detecting
the existence and outlines of tumors. In this paper, an algorithm for
segmentation based on the symmetry character of brain MRI images is presented.
Our goal is to detect the position and boundary of tumors automatically.
Experiments were conducted on real images, and the results show that the
algorithm is flexible and convenient. | [
"cs.CV"
] |
Cutting and pasting image segments feels intuitive: the choice of source
templates gives artists flexibility in recombining existing source material.
Formally, this process takes an image set as input and outputs a collage of the
set elements. Such selection from sets of source templates does not fit easily
in classical convolutional neural models requiring inputs of fixed size.
Inspired by advances in attention and set-input machine learning, we present a
novel architecture that generates image collages of source templates in one
forward pass using set-structured representations. This paper has the
following contributions: (i) a novel framework for image generation called
Memory Attentive Generation of Image Collages (MAGIC) which gives artists new
ways to create digital collages; (ii) from the machine-learning perspective, we
show a novel Generative Adversarial Networks (GAN) architecture that uses
Set-Transformer layers and set-pooling to blend sets of random image samples -
a hybrid non-parametric approach. | [
"cs.CV",
"cs.LG",
"eess.IV",
"stat.ML"
] |
Obstacle avoidance is a fundamental and challenging problem for autonomous
navigation of mobile robots. In this paper, we consider the problem of obstacle
avoidance in simple 3D environments where the robot has to solely rely on a
single monocular camera. In particular, we are interested in solving this
problem without relying on localization, mapping, or planning techniques. Most
of the existing work considers obstacle avoidance as two separate problems,
namely obstacle detection and control. Inspired by the recent advances of
deep reinforcement learning in Atari games and in understanding highly complex
situations in Go, we tackle the obstacle avoidance problem with a data-driven,
end-to-end deep learning approach. Our approach takes raw images as input and
generates control commands as output. We show that discrete action spaces
outperform continuous control commands in terms of expected average reward
in maze-like environments. Furthermore, we show how to accelerate the learning
and increase the robustness of the policy by incorporating depth maps predicted
by a generative adversarial network. | [
"cs.LG",
"cs.CV",
"cs.RO"
] |
How to learn long-range dependencies from 3D point clouds is a challenging
problem in 3D point cloud analysis. To address this problem, we propose in this
paper a global attention network for point cloud semantic segmentation, named
GA-Net, consisting of a point-independent global attention module and a
point-dependent global attention module for obtaining contextual information of
3D point clouds. The point-independent global attention module
simply shares a global attention map for all 3D points. In the point-dependent
global attention module, for each point, a novel random cross attention block
using only two randomly sampled subsets is exploited to learn the contextual
information of all the points. Additionally, we design a novel point-adaptive
aggregation block to replace the linear skip connection for aggregating more
discriminative features. Extensive experimental results on three 3D public
datasets demonstrate that our method outperforms state-of-the-art methods in
most cases. | [
"cs.CV"
] |
Despite the intense attention and investment into clinical machine learning
(CML) research, relatively few applications convert to clinical practice. While
research is important in advancing the state-of-the-art, translation is equally
important in bringing these technologies into a position to ultimately impact
patient care and live up to extensive expectations surrounding AI in
healthcare. To better characterize a holistic perspective among researchers and
practitioners, we survey several participants with experience in developing CML
for clinical deployment about their lessons learned. We collate these
insights and identify several main categories of barriers and pitfalls in order
to better design and develop clinical machine learning applications. | [
"cs.LG",
"cs.CY"
] |
Exploration in reinforcement learning is a challenging problem: in the worst
case, the agent must search for high-reward states that could be hidden
anywhere in the state space. Can we define a more tractable class of RL
problems, where the agent is provided with examples of successful outcomes? In
this problem setting, the reward function can be obtained automatically by
training a classifier to categorize states as successful or not. If trained
properly, such a classifier can provide a well-shaped objective landscape that
both promotes progress toward good states and provides a calibrated exploration
bonus. In this work, we show that an uncertainty-aware classifier can solve
challenging reinforcement learning problems by both encouraging exploration and
providing directed guidance towards positive outcomes. We propose a novel
mechanism for obtaining these calibrated, uncertainty-aware classifiers based
on an amortized technique for computing the normalized maximum likelihood (NML)
distribution. To make this tractable, we propose a novel method for computing
the NML distribution by using meta-learning. We show that the resulting
algorithm has a number of intriguing connections to both count-based
exploration methods and prior algorithms for learning reward functions, while
also providing more effective guidance towards the goal. We demonstrate that
our algorithm solves a number of challenging navigation and robotic
manipulation tasks which prove difficult or impossible for prior methods. | [
"cs.LG",
"cs.RO"
] |
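For readers unfamiliar with normalized maximum likelihood, the quantity the abstract above amortizes can be written down naively: refit the classifier once per candidate label of the query point and normalize across labels. The sketch below uses scikit-learn logistic regression as a stand-in model; the meta-learned amortization that makes this tractable inside an RL loop is not shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def naive_nml(X_train, y_train, x_query, labels=(0, 1)):
    """Naive normalized maximum likelihood at a query point.

    For each candidate label y, refit the model on the training set plus
    (x_query, y) and record the refit model's probability of y at x_query;
    NML then normalizes these scores across labels. The paper amortizes
    this expensive per-query refitting with meta-learning.
    """
    scores = []
    for y in labels:
        X = np.vstack([X_train, x_query[None, :]])
        t = np.append(y_train, y)
        clf = LogisticRegression(max_iter=1000).fit(X, t)
        scores.append(clf.predict_proba(x_query[None, :])[0, y])
    scores = np.asarray(scores)
    return scores / scores.sum()


# toy usage: p_nml[1] could act as an uncertainty-aware "success" reward
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 2))
y_train = (X_train[:, 0] > 0).astype(int)
print(naive_nml(X_train, y_train, np.array([5.0, 0.0])))
```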
The advent of the era of machines has limited human interaction, and their
presence has increased over the last decade. The requirement for greater
effectiveness, durability and reliability in robots has also risen quite
drastically. The present paper covers the various embedded system and computer
vision methodologies, techniques and innovations used in the field of spray
painting robots. There have been many advancements in the sphere of painting
robots utilized for high-rise buildings, wall painting, road marking painting,
etc. The review focuses on image processing, computational and computer vision
techniques that can be applied in the product to drastically increase the
efficiency of its performance. Image analysis, filtering, enhancement, object
detection, edge detection methods, path and localization methods and fine
tuning of parameters are discussed in depth for use while developing such
products. Dynamic system design, the use of which results in reduced human
interaction, environmental sustainability and better quality of work, is
deliberated in detail. Embedded systems involving the micro-controllers,
processors, communicating devices, sensors and actuators, and the software to
use them are explained for end-to-end development and enhancement of accuracy
and precision in spray painting robots. | [
"cs.CV",
"I.4.0; I.4.3; I.4.6; I.4.9; I.4.m"
] |
In recent years, Artificial Intelligence (AI) has proven its relevance for
medical decision support. However, the "black-box" nature of successful AI
algorithms still holds back their widespread deployment. In this paper, we
describe an eXplanatory Artificial Intelligence (XAI) that reaches the same
level of performance as black-box AI, for the task of classifying Diabetic
Retinopathy (DR) severity using Color Fundus Photography (CFP). This algorithm,
called ExplAIn, learns to segment and categorize lesions in images; the final
image-level classification directly derives from these multivariate lesion
segmentations. The novelty of this explanatory framework is that it is trained
from end to end, with image supervision only, just like black-box AI
algorithms: the concepts of lesions and lesion categories emerge by themselves.
For improved lesion localization, foreground/background separation is trained
through self-supervision, in such a way that occluding foreground pixels
transforms the input image into a healthy-looking image. The advantage of such
an architecture is that automatic diagnoses can be explained simply by an image
and/or a few sentences. ExplAIn is evaluated at the image level and at the
pixel level on various CFP image datasets. We expect this new framework, which
jointly offers high classification performance and explainability, to
facilitate AI deployment. | [
"cs.CV"
] |
In recent years, spiking neural networks (SNNs) have emerged as an alternative
to deep neural networks (DNNs). SNNs present a higher computational efficiency
using low-power neuromorphic hardware and require less labeled data for
training using local and unsupervised learning rules such as spike
timing-dependent plasticity (STDP). SNNs have proven their effectiveness in
image classification on simple datasets such as MNIST. However, to process
natural images, a pre-processing step is required. Difference-of-Gaussians
(DoG) filtering is typically used together with on-center/off-center coding,
but it results in a loss of information that is detrimental to the
classification performance. In this paper, we propose to use whitening as a
pre-processing step before learning features with STDP. Experiments on CIFAR-10
show that whitening allows STDP to learn visual features that are closer to the
ones learned with standard neural networks, with a significantly increased
classification performance as compared to DoG filtering. We also propose an
approximation of whitening as convolution kernels that is computationally
cheaper to learn and more suited to be implemented on neuromorphic hardware.
Experiments on CIFAR-10 show that it performs similarly to regular whitening.
Cross-dataset experiments on CIFAR-10 and STL-10 also show that it is fairly
stable across datasets, making it possible to learn a single whitening
transformation to process different datasets. | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
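Whitening as a pre-processing step, as proposed above, is commonly implemented as ZCA. A minimal NumPy sketch on flattened patches follows; the paper's cheaper approximation of the transform by small convolution kernels is omitted, and the regularizer `eps` is an assumed constant.

```python
import numpy as np


def zca_whitening(X, eps=1e-2):
    """ZCA-whiten flattened image patches.

    X is (n_samples, n_features); eps regularizes small eigenvalues.
    Returns the whitened data and the matrix W such that X_white = X_c @ W.
    """
    X_c = X - X.mean(axis=0)
    cov = X_c.T @ X_c / X_c.shape[0]
    eigvals, eigvecs = np.linalg.eigh(cov)
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return X_c @ W, W
```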
There has recently been a surge in research in batch Deep Reinforcement
Learning (DRL), which aims for learning a high-performing policy from a given
dataset without additional interactions with the environment. We propose a new
algorithm, Best-Action Imitation Learning (BAIL), which strives for both
simplicity and performance. BAIL learns a V function, uses the V function to
select actions it believes to be high-performing, and then uses those actions
to train a policy network using imitation learning. For the MuJoCo benchmark,
we provide a comprehensive experimental study of BAIL, comparing its
performance to four other batch Q-learning and imitation-learning schemes for a
large variety of batch datasets. Our experiments show that BAIL's performance
is much higher than the other schemes, and is also computationally much faster
than the batch Q-learning schemes. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
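BAIL's recipe (fit a value function, keep only the actions whose returns clear it, then imitate) can be sketched in a few lines. Note this simplification regresses V on Monte-Carlo returns with an off-the-shelf regressor; the actual algorithm fits an upper envelope of returns with a penalized loss, and the selection `ratio` here is an assumed hyperparameter.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor


def bail_select(states, actions, returns, ratio=0.9):
    """Minimal BAIL-style selection: regress V(s) on Monte-Carlo returns,
    then keep the (s, a) pairs whose return clears ratio * V(s). The kept
    pairs are subsequently used for behavior cloning (imitation learning).
    """
    v = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    v.fit(states, returns)
    keep = returns >= ratio * v.predict(states)
    return states[keep], actions[keep]
```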
Chinese is one of the most widely used languages in the world, yet online
handwritten Chinese character recognition (OLHCCR) remains challenging. To
recognize Chinese characters, one popular choice is to adopt the 2D
convolutional neural network (2D-CNN) on the extracted feature images, and
another one is to employ the recurrent neural network (RNN) or 1D-CNN on the
time-series features. Instead of viewing characters as either static images or
temporal trajectories, here we propose to represent characters as geometric
graphs, retaining both spatial structures and temporal orders. Accordingly, we
propose a novel spatial graph convolution network (SGCN) to effectively
classify those character graphs for the first time. Specifically, our SGCN
incorporates the local neighbourhood information via spatial graph convolutions
and further learns the global shape properties with a hierarchical residual
structure. Experiments on IAHCC-UCAS2016, ICDAR-2013, and UNIPEN datasets
demonstrate that the SGCN can achieve comparable recognition performance with
the state-of-the-art methods for character recognition. | [
"cs.CV"
] |
Most recent transformer-based models show impressive performance on vision
tasks, even better than Convolutional Neural Networks (CNNs). In this work, we
present a novel, flexible, and effective transformer-based model for
high-quality instance segmentation. The proposed method, Segmenting Objects
with TRansformers (SOTR), simplifies the segmentation pipeline, building on an
alternative CNN backbone appended with two parallel subtasks: (1) predicting
per-instance category via transformer and (2) dynamically generating
segmentation mask with the multi-level upsampling module. SOTR can effectively
extract lower-level feature representations and capture long-range context
dependencies by Feature Pyramid Network (FPN) and twin transformer,
respectively. Meanwhile, compared with the original transformer, the proposed
twin transformer is time- and resource-efficient since only row and column
attention are involved in encoding pixels. Moreover, SOTR can easily be
incorporated with various CNN backbones and transformer model variants to make
considerable improvements in segmentation accuracy and training
convergence. Extensive experiments show that our SOTR performs well on the MS
COCO dataset and surpasses state-of-the-art instance segmentation approaches.
We hope our simple but strong framework could serve as a preferred baseline
for instance-level recognition. Our code is available at
https://github.com/easton-cau/SOTR. | [
"cs.CV"
] |
In this paper, we propose an efficient method to estimate the Weingarten map
for point cloud data sampled from a manifold embedded in Euclidean space. A
statistical model is established to analyze the asymptotic property of the
estimator. In particular, we show the convergence rate as the sample size tends
to infinity. We verify the convergence rate on simulated data and apply
the estimated Weingarten map to curvature estimation and point cloud
simplification on multiple real data sets. | [
"stat.ML",
"cs.CV",
"cs.LG",
"math.DG"
] |
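For intuition on what estimating the Weingarten map (shape operator) from a point cloud involves, here is a standard local-quadric baseline in NumPy, not the paper's estimator: PCA gives a tangent frame, a fitted quadratic height function gives the Hessian, and the Hessian's eigenvalues are the principal curvatures. The neighborhood size `k` is an assumed parameter.

```python
import numpy as np


def principal_curvatures(points, idx, k=20):
    """Estimate principal curvatures at points[idx] from a point cloud.

    Take the k nearest neighbors, get the tangent plane by PCA, fit the
    height function h(u, v) ~ a*u^2 + b*u*v + c*v^2; the shape operator at
    the base point is then the Hessian [[2a, b], [b, 2c]].
    """
    p = points[idx]
    d = np.linalg.norm(points - p, axis=1)
    nbrs = points[np.argsort(d)[1:k + 1]] - p
    # PCA: the last right-singular vector approximates the surface normal
    _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
    u, v, h = nbrs @ vt[0], nbrs @ vt[1], nbrs @ vt[2]
    A = np.column_stack([u**2, u * v, v**2])
    a, b, c = np.linalg.lstsq(A, h, rcond=None)[0]
    S = np.array([[2 * a, b], [b, 2 * c]])
    return np.linalg.eigvalsh(S)  # principal curvatures k1 <= k2
```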
Finding valuable training data points for deep neural networks has been a
core research challenge with many applications. In recent years, various
techniques for calculating the "value" of individual training datapoints have
been proposed for explaining trained models. However, the value of a training
datapoint also depends on other selected training datapoints - a notion that is
not explicitly captured by existing methods. In this paper, we study the
problem of selecting high-value subsets of training data. The key idea is to
design a learnable framework for online subset selection, which can be learned
using mini-batches of training data, thus making our method scalable. This
results in a parameterized convex subset selection problem that is amenable to
a differentiable convex programming paradigm, thus allowing us to learn the
parameters of the selection model in end-to-end training. Using this framework,
we design an online alternating minimization-based algorithm for jointly
learning the parameters of the selection model and ML model. Extensive
evaluation on a synthetic dataset and three standard datasets shows that our
algorithm finds consistently higher-value subsets of training data than
recent state-of-the-art methods, sometimes ~20% higher in value than existing
methods. The subsets are also useful in finding mislabelled training data. Our
algorithm's running time is comparable to that of existing valuation functions. | [
"cs.LG"
] |
The emergence of novel pathogens and zoonotic diseases like the SARS-CoV-2
have underlined the need for developing novel diagnosis and intervention
pipelines that can learn rapidly from small amounts of labeled data. Combined
with technological advances in next-generation sequencing, metagenome-based
diagnostic tools hold much promise to revolutionize rapid point-of-care
diagnosis. However, there are significant challenges in developing such an
approach, the chief among which is to learn self-supervised representations
that can help detect novel pathogen signatures with very low amounts of labeled
data. This is a particularly difficult task given that closely related
pathogens can share more than 90% of their genome structure. In this work, we
address these challenges by proposing MG-Net, a self-supervised representation
learning framework that leverages multi-modal context using pseudo-imaging data
derived from clinical metagenome sequences. We show that the proposed framework
can learn robust representations from unlabeled data that can be used for
downstream tasks such as metagenome sequence classification with limited access
to labeled data. Extensive experiments show that the learned features
outperform current baseline metagenome representations, given only 1000 samples
per class. | [
"cs.LG",
"q-bio.GN"
] |
Video Analytics Software as a Service (VA SaaS) has been rapidly growing in
recent years. VA SaaS is typically accessed by users using a lightweight
client. Because the transmission bandwidth between the client and cloud is
usually limited and expensive, it brings great benefits to design cloud video
analysis algorithms with a limited data transmission requirement. Although
considerable research has been devoted to video analysis, to the best of our
knowledge, little of it has paid attention to the transmission bandwidth
limitation in SaaS. As the first attempt in this direction, this work
introduces the problem of few-frame action recognition, which aims at maintaining
high recognition accuracy when accessing only a few frames during both
training and testing. Unlike previous work that processed dense frames, we present
Temporal Sequence Distillation (TSD), which distills a long video sequence into
a very short one for transmission. By end-to-end training with 3D CNNs for
video action recognition, TSD learns a compact and discriminative temporal and
spatial representation of video frames. On Kinetics dataset, TSD+I3D typically
requires only 50\% of the number of frames compared to I3D, a state-of-the-art
video action recognition algorithm, to achieve almost the same accuracies. The
proposed TSD has three appealing advantages. Firstly, TSD has a lightweight
architecture and can be deployed on the client, e.g., mobile devices, to produce
compressed representative frames to save transmission bandwidth. Secondly, TSD
significantly reduces the computations to run video action recognition with
compressed frames on the cloud, while maintaining high recognition accuracies.
Thirdly, TSD can be plugged in as a preprocessing module of any existing 3D
CNNs. Extensive experiments show the effectiveness and characteristics of TSD. | [
"cs.CV"
] |
Semantic image segmentation is an important computer vision task that is
difficult because it consists of both recognition and segmentation. The task is
often cast as a structured output problem on an exponentially large
output-space, which is typically modeled by a discrete probabilistic model. The
best segmentation is found by inferring the Maximum a-Posteriori (MAP) solution
over the output distribution defined by the model. Due to limitations in
optimization, the model cannot be arbitrarily complex. This leads to a
trade-off: devise a more accurate model that incorporates rich high-order
interactions between image elements at the cost of inaccurate and possibly
intractable optimization OR leverage a tractable model which produces less
accurate MAP solutions but may contain high quality solutions as other modes of
its output distribution.
This thesis investigates the latter and presents a two stage approach to
semantic segmentation. In the first stage a tractable segmentation model
outputs a set of high probability segmentations from the underlying
distribution that are not just minor perturbations of each other. Critically
the output of this stage is a diverse set of plausible solutions and not just a
single one. In the second stage, a discriminatively trained re-ranking model
selects the best segmentation from this set. The re-ranking stage can use much
more complex features than what could be tractably used in the segmentation
model, allowing a better exploration of the solution space than simply
returning the MAP solution. The formulation is agnostic to the underlying
segmentation model (e.g. CRF, CNN, etc.) and optimization algorithm, which
makes it applicable to a wide range of models and inference methods. Evaluation
of the approach on a number of semantic image segmentation benchmark datasets
highlights its superiority over inferring the MAP solution. | [
"cs.CV"
] |
In this paper, we focus on exploring the fusion of images and point clouds
for 3D object detection in view of the complementary nature of the two
modalities, i.e., images possess more semantic information while point clouds
specialize in distance sensing. To this end, we present a novel two-stage
multi-modal fusion network for 3D object detection, taking both binocular
images and raw point clouds as input. The whole architecture facilitates
two-stage fusion. The first stage aims at producing 3D proposals through sparse
point-wise feature fusion. Within the first stage, we further exploit a joint
anchor mechanism that enables the network to utilize 2D-3D classification and
regression simultaneously for better proposal generation.
The second stage works on the 2D and 3D proposal regions and fuses their
dense features. In addition, we propose to use pseudo LiDAR points from stereo
matching as a data augmentation method to densify the LiDAR points, as we
observe that objects missed by the detection network mostly have too few points
especially for far-away objects. Our experiments on the KITTI dataset show that
the proposed multi-stage fusion helps the network to learn better
representations. | [
"cs.CV"
] |
The performance of image recognition models, e.g., for human pose detection,
trained with simulated images usually degrades due to the divergence between
real and simulated data. To make the distribution of a simulated image close to
that of a real one, several works apply GAN-based image-to-image
translation methods, e.g., SimGAN and CycleGAN. However, these methods are
not sensitive enough to the various changes in pose and shape of subjects,
especially when the training data are imbalanced, e.g., when some particular
poses and shapes are rare in the training data. To overcome this problem, we
propose to introduce the label information of subjects, e.g., pose and type of
objects, into the training of CycleGAN, leading it to obtain label-wise
transformation models. We evaluate our proposed method, called Label-CycleGAN,
through experiments on digit image transformation from SVHN to MNIST and on
surveillance camera image transformation from simulated to real images. | [
"cs.CV"
] |
Explainable artificial intelligence is the attempt to elucidate the workings
of systems too complex to be directly accessible to human cognition through
suitable side-information referred to as "explanations". We present a trainable
explanation module for convolutional image classifiers we call bounded logit
attention (BLA). The BLA module learns to select a subset of the convolutional
feature map for each input instance, which then serves as an explanation for
the classifier's prediction. BLA overcomes several limitations of the
instancewise feature selection method "learning to explain" (L2X) introduced by
Chen et al. (2018): 1) BLA scales to real-world sized image classification
problems, and 2) BLA offers a canonical way to learn explanations of variable
size. Due to its modularity BLA lends itself to transfer learning setups and
can also be employed as a post-hoc add-on to trained classifiers. Beyond
explainability, BLA may serve as a general purpose method for differentiable
approximation of subset selection. In a user study we find that BLA
explanations are preferred over explanations generated by the popular
(Grad-)CAM method. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
The problem at the heart of this tutorial consists in modeling the path
choice behavior of network users. This problem has been extensively studied in
transportation science, where it is known as the route choice problem. In this
literature, individuals' choices of paths are typically predicted using discrete
choice models. This article is a tutorial on a specific category of discrete
choice models called recursive, and it makes three main contributions: First,
for the purpose of assisting future research on route choice, we provide a
comprehensive background on the problem, linking it to different fields
including inverse optimization and inverse reinforcement learning. Second, we
formally introduce the problem and the recursive modeling idea along with an
overview of existing models, their properties and applications. Third, we
extensively analyze illustrative examples from different angles so that a
novice reader can gain intuition on the problem and the advantages provided by
recursive models in comparison to path-based ones. | [
"stat.ML",
"cs.LG"
] |
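The recursive modeling idea the tutorial above covers can be made concrete with a soft (log-sum-exp) value iteration: each node's value aggregates downstream arc utilities, and link-choice probabilities follow a logit on them. The sketch below is a naive fixed-point solver on a toy network; the literature typically solves the equivalent linear system instead, and the scale `mu` is an assumed parameter.

```python
import numpy as np


def recursive_logit(utils, succ, dest, mu=1.0, iters=200):
    """Soft value iteration for a recursive route choice model.

    utils[s] : one-step utilities (negative costs) of arcs out of node s
    succ[s]  : successor nodes of those arcs
    dest     : absorbing destination node with value 0
    """
    n = len(succ)
    V = np.zeros(n)
    for _ in range(iters):
        V_new = np.zeros(n)
        for s in range(n):
            if s == dest:
                continue
            vals = np.array([u + V[t] for u, t in zip(utils[s], succ[s])])
            V_new[s] = mu * np.logaddexp.reduce(vals / mu)
        V = V_new
    # arc-choice probabilities at s: logit over u + V[successor]
    probs = {s: np.exp(np.array([u + V[t] for u, t in zip(utils[s], succ[s])]) / mu
                       - V[s] / mu)
             for s in range(n) if s != dest}
    return V, probs


# tiny example: 0 -> {1, 2} -> 3 (destination), with arc utilities
utils = {0: [-1.0, -1.0], 1: [-1.0], 2: [-2.0], 3: []}
succ = {0: [1, 2], 1: [3], 2: [3], 3: []}
V, probs = recursive_logit(utils, succ, dest=3)
print(probs[0])  # more probability on the cheaper route via node 1
```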
This work showcases a new approach for causal discovery by leveraging user
experiments and recent advances in photo-realistic image editing, demonstrating
a potential of identifying causal factors and understanding complex systems
counterfactually. We introduce the beauty learning problem as an example: beauty
has been discussed metaphysically for centuries, and in our recent paper we proved
that it exists, is quantifiable, and can be learned by deep models. There, we
utilize a natural image generator coupled with user studies to infer causal
effects from facial semantics to beauty outcomes, the results of which also
align with existing empirical studies. We expect the proposed framework to find
broader application in causal inference. | [
"cs.CV"
] |
In this work, we present a Multi-Channel deep convolutional Pyramid Person
Matching Network (MC-PPMN) based on the combination of the semantic-components
and the color-texture distributions to address the problem of person
re-identification. In particular, we learn separate deep representations for
semantic-components and color-texture distributions from two person images and
then employ pyramid person matching network (PPMN) to obtain correspondence
representations. These correspondence representations are fused to perform the
re-identification task. Further, the proposed framework is optimized via a
unified end-to-end deep learning scheme. Extensive experiments on several
benchmark datasets demonstrate the effectiveness of our approach against the
state-of-the-art literature, especially on the rank-1 recognition rate. | [
"cs.CV"
] |
The chromaticity diagram associated with the CIE 1931 color matching
functions is shown to be slightly non-convex. While having no impact on
practical colorimetric computations, the non-convexity does have a significant
impact on the shape of some optimal object color reflectance distributions
associated with the outer surface of the object color solid. Instead of the
usual two-transition Schrodinger form, many optimal colors exhibit higher
transition counts. A linear programming formulation is developed and is used to
locate where these higher-transition optimal object colors reside on the object
color solid surface. The regions of higher transition count appear to have a
point-symmetric complementary structure. The final peer-reviewed version (to
appear) contains additional material concerning convexification of the
color-matching functions and additional analysis of modern
"physiologically-relevant" CMFs transformed from cone fundamentals. | [
"cs.CV",
"eess.IV"
] |
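The linear programming formulation mentioned above can be illustrated directly: optimal object colors arise as vertices of an LP over reflectances in [0, 1] with tristimulus equality constraints, and counting transitions in the optimizer probes the two-transition Schrodinger form. The CMFs below are Gaussian stand-ins, not the CIE 1931 tables, and the target values X0, Z0 are arbitrary feasible choices.

```python
import numpy as np
from scipy.optimize import linprog

# Stand-in colour matching functions (Gaussian bumps over 400-700 nm);
# the real CIE 1931 CMF tables would be substituted here.
lam = np.linspace(400, 700, 61)
xbar = np.exp(-0.5 * ((lam - 600) / 40) ** 2)
ybar = np.exp(-0.5 * ((lam - 550) / 40) ** 2)
zbar = np.exp(-0.5 * ((lam - 450) / 30) ** 2)

# Optimal object colour: maximize Y subject to prescribed X and Z and a
# physically realizable reflectance 0 <= r(lambda) <= 1. Vertices of this
# LP are (mostly) 0/1 reflectances, whose transition counts can be examined.
X0, Z0 = 10.0, 6.0
res = linprog(c=-ybar,                      # linprog minimizes, so negate
              A_eq=np.vstack([xbar, zbar]), b_eq=[X0, Z0],
              bounds=[(0, 1)] * lam.size, method="highs")
if res.status == 0:
    transitions = np.count_nonzero(np.diff(np.round(res.x)))
    print("transitions in the optimal reflectance:", transitions)
```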
When sailing at sea, a smart ship will inevitably sway due to the action of
wind, waves and current, which makes the images collected by its visual sensors
suffer from motion blur. This has an adverse effect on vision-based object
detection algorithms and thereby affects the navigation safety of the smart
ship. In order to remove the motion blur from images captured during navigation,
we propose SharpGAN, a new image deblurring method based on the generative
adversarial network. First of all, the Receptive Field Block Net (RFBNet) is
introduced into the deblurring network to strengthen its ability to extract
features from blurred images. Secondly, we propose a feature loss that combines
different levels of image features to guide the network to perform
higher-quality deblurring and to improve the feature similarity between the
restored and sharp images.
Finally, we propose to use the lightweight RFB-s module to improve the
real-time performance of deblurring network. Compared with the existing
deblurring methods on large-scale real sea image datasets and large-scale
deblurring datasets, the proposed method not only has better deblurring
performance in visual perception and quantitative criteria, but also has higher
deblurring efficiency. | [
"cs.CV",
"cs.LG",
"eess.IV",
"I.2.10"
] |
Realistic environments often provide agents with very limited feedback. When
the environment is initially unknown, the feedback, in the beginning, can be
completely absent, and the agents may first choose to devote all their effort
to exploring efficiently. Exploration remains a challenge: it has been
addressed with many hand-tuned heuristics of different levels of generality
on one side, and a few theoretically-backed exploration strategies on the
other. Many of them are incarnated by intrinsic motivation and, in particular,
exploration bonuses. A common rule of thumb for exploration bonuses is to use
a $1/\sqrt{n}$ bonus that is added to the empirical estimates of the reward,
where $n$ is the number of times this particular state (or a state-action pair)
was visited. We show that, surprisingly, for a pure-exploration objective of
reward-free exploration, bonuses that scale with $1/n$ bring faster learning
rates, improving the known upper bounds with respect to the dependence on the
horizon $H$. Furthermore, we show that with an improved analysis of the
stopping time, we can improve by a factor $H$ the sample complexity in the
best-policy identification setting, which is another pure-exploration
objective, where the environment provides rewards but the agent is not
penalized for its behavior during the exploration phase. | [
"cs.LG",
"stat.ML"
] |
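The two bonus scalings the abstract contrasts are easy to state in code. A minimal sketch, with an illustrative constant in place of the paper's exact horizon and log factors:

```python
import numpy as np


def bonus(counts, horizon, kind="sqrt"):
    """Optimistic exploration bonuses from visit counts.

    kind="sqrt": the classic c / sqrt(n) bonus added to empirical rewards;
    kind="inv" : the 1/n scaling that suffices (and learns faster) for the
                 pure reward-free exploration objective.
    """
    n = np.maximum(counts, 1)
    c = horizon  # placeholder constant; the theory attaches log terms here
    return c / np.sqrt(n) if kind == "sqrt" else c / n


counts = np.array([1, 4, 16, 64, 256])
print(bonus(counts, horizon=10, kind="sqrt"))  # shrinks like n^-1/2
print(bonus(counts, horizon=10, kind="inv"))   # shrinks like n^-1
```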
A pre-trained generator has been frequently adopted in compressed sensing
(CS) due to its ability to effectively estimate signals with the prior of NNs.
In order to further refine the NN-based prior, we propose a framework that
allows the generator to utilize additional information from a given measurement
for prior learning, thereby yielding more accurate prediction for signals. As
our framework has a simple form, it is easily applied to existing CS methods
using pre-trained generators. We demonstrate through extensive experiments that
our framework exhibits uniformly superior performance by a large margin and can
reduce the reconstruction error up to an order of magnitude for some
applications. We also explain the experimental success in theory by showing
that our framework can slightly relax the stringent signal presence condition,
which is required to guarantee the success of signal recovery. | [
"cs.LG",
"stat.ML"
] |
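As a reference point for the framework above, the standard way a pre-trained generator is used in compressed sensing is latent optimization (CSGM-style): search the latent space for the code whose decoded signal matches the measurements. A minimal PyTorch sketch follows, with step count and learning rate as assumed defaults; the abstract's contribution of additionally adapting the prior using the measurement is not reproduced here.

```python
import torch


def recover(G, A, y, z_dim, steps=2000, lr=0.05):
    """Compressed sensing with a pretrained generator (CSGM-style baseline):
    find the latent z whose generated signal best explains the measurements
    y = A x, then return the decoded estimate."""
    z = torch.zeros(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = G(z).view(-1)
        loss = torch.sum((A @ x_hat - y) ** 2)  # measurement residual
        loss.backward()
        opt.step()
    return G(z).detach()
```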
How can we reuse existing knowledge, in the form of available datasets, when
solving a new and apparently unrelated target task from a set of unlabeled
data? In this work we make a first contribution to answer this question in the
context of image classification. We frame this quest as an active learning
problem and use zero-shot classifiers to guide the learning process by linking
the new task to the existing classifiers. By revisiting the dual formulation of
adaptive SVM, we reveal two basic conditions to choose greedily only the most
relevant samples to be annotated. On this basis we propose an effective active
learning algorithm which learns the best possible target classification model
with minimum human labeling effort. Extensive experiments on two challenging
datasets show the value of our approach compared to the state-of-the-art active
learning methodologies, as well as its potential to reuse past datasets with
minimal effort for future tasks. | [
"cs.CV"
] |
Deep neural networks (DNNs) are known to perform well when deployed to test
distributions that share high similarity with the training distribution.
Sequentially feeding DNNs new data that were unseen in the training
distribution poses two major challenges -- fast adaptation to new tasks and
catastrophic forgetting of old tasks. Such difficulties paved the way for the
on-going research on few-shot learning and continual learning. To tackle these
problems, we introduce Attentive Independent Mechanisms (AIM). We incorporate
the idea of learning using fast and slow weights in conjunction with the
decoupling of the feature extraction and higher-order conceptual learning of a
DNN. AIM is designed for higher-order conceptual learning, modeled by a mixture
of experts that compete to learn independent concepts to solve a new task. AIM
is a modular component that can be inserted into existing deep learning
frameworks. We demonstrate its capability for few-shot learning by adding it to
SIB and training on MiniImageNet and CIFAR-FS, showing significant improvement.
AIM is also applied to ANML and OML trained on Omniglot, CIFAR-100 and
MiniImageNet to demonstrate its capability in continual learning. Code made
publicly available at https://github.com/huang50213/AIM-Fewshot-Continual. | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
Recognizing the phases of a laparoscopic surgery (LS) operation from its
video constitutes a fundamental step for efficient content representation,
indexing and retrieval in surgical video databases. In the literature, most
techniques focus on phase segmentation of the entire LS video using
hand-crafted visual features, instrument usage signals, and recently
convolutional neural networks (CNNs). In this paper we address the problem of
phase recognition of short video shots (10s) of the operation, without
utilizing information about the preceding/forthcoming video frames, their phase
labels or the instruments used. We investigate four state-of-the-art CNN
architectures (Alexnet, VGG19, GoogleNet, and ResNet101), for feature
extraction via transfer learning. Visual saliency was employed for selecting
the most informative region of the image as input to the CNN. Video shot
representation was based on two temporal pooling mechanisms. Most importantly,
we investigate the role of 'elapsed time' (from the beginning of the
operation), and we show that inclusion of this feature can increase performance
dramatically (69% vs. 75% mean accuracy). Finally, a long short-term memory
(LSTM) network was trained for video shot classification based on the fusion of
CNN features with 'elapsed time', increasing the accuracy to 86%. Our results
highlight the prominent role of visual saliency, long-range temporal recursion
and 'elapsed time' (a feature so far ignored), for surgical phase recognition. | [
"cs.CV"
] |
Generative Adversarial Networks (GAN) have received wide attention in the
machine learning field for their potential to learn high-dimensional, complex
real data distribution. Specifically, they do not rely on any assumptions about
the distribution and can generate real-like samples from latent space in a
simple manner. This powerful property leads GAN to be applied to various
applications such as image synthesis, image attribute editing, image
translation, domain adaptation and other academic fields. In this paper, we aim
to discuss the details of GAN for those readers who are familiar with, but do
not comprehend GAN deeply or who wish to view GAN from various perspectives. In
addition, we explain how GAN operates and the fundamental meaning of various
objective functions that have been suggested recently. We then focus on how the
GAN can be combined with an autoencoder framework. Finally, we enumerate the
GAN variants that are applied to various tasks and other fields for those who
are interested in exploiting GAN for their research. | [
"cs.LG"
] |
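Since the overview above centers on GAN objective functions, here is the original non-saturating formulation in code form, assuming a discriminator `D` that outputs raw logits and a generator `G`:

```python
import torch
import torch.nn.functional as F


def gan_losses(D, G, real, z):
    """Non-saturating GAN objectives. D pushes real samples toward 1 and
    fakes toward 0, while G maximizes log D(G(z)) instead of minimizing
    log(1 - D(G(z))), which yields stronger gradients early in training."""
    fake = G(z)
    real_logits = D(real)
    fake_logits = D(fake.detach())           # block gradients into G
    d_loss = (F.binary_cross_entropy_with_logits(
                  real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(
                  fake_logits, torch.zeros_like(fake_logits)))
    gen_logits = D(fake)
    g_loss = F.binary_cross_entropy_with_logits(
        gen_logits, torch.ones_like(gen_logits))
    return d_loss, g_loss
```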
We propose a novel skeleton-based representation for 3D action recognition in
videos using Deep Convolutional Neural Networks (D-CNNs). Two key issues have
been addressed: First, how to construct a robust representation that easily
captures the spatial-temporal evolutions of motions from skeleton sequences.
Second, how to design D-CNNs capable of learning discriminative features from
the new representation in an effective manner. To address these tasks, a
skeleton-based representation, namely SPMF (Skeleton Pose-Motion Feature), is
proposed. The SPMFs are built from two of the most important properties of a
human action: postures and their motions. Therefore, they are able to
effectively represent complex actions. For learning and recognition tasks, we
design and optimize new D-CNNs based on the idea of Inception Residual networks
to predict actions from SPMFs. Our method is evaluated on two challenging
datasets including MSR Action3D and NTU-RGB+D. Experimental results indicated
that the proposed method surpasses state-of-the-art methods whilst requiring
less computation. | [
"cs.CV"
] |
Human pose transfer, which aims at transferring the appearance of a given
person to a target pose, is very challenging and important in many
applications. Previous work ignores the guidance of pose features or only uses
local attention mechanism, leading to implausible and blurry results. We
propose a new human pose transfer method using a generative adversarial network
(GAN) with simplified cascaded blocks. In each block, we propose a pose-guided
non-local attention (PoNA) mechanism with a long-range dependency scheme to
select more important regions of image features to transfer. We also design
pre-posed image-guided pose feature update and post-posed pose-guided image
feature update to better utilize the pose and image features. Our network is
simple, stable, and easy to train. Quantitative and qualitative results on
Market-1501 and DeepFashion datasets show the efficacy and efficiency of our
model. Compared with state-of-the-art methods, our model generates sharper and
more realistic images with rich details, while having fewer parameters and
faster speed. Furthermore, our generated images can help to alleviate data
insufficiency for person re-identification. | [
"cs.CV"
] |
Weakly supervised object detection (WSOD) using only image-level annotations
has attracted growing attention over the past few years. Existing approaches
using multiple instance learning easily fall into local optima, because such
mechanism tends to learn from the most discriminative object in an image for
each category. Therefore, these methods suffer from missing object instances
which degrade the performance of WSOD. To address this problem, this paper
introduces an end-to-end object instance mining (OIM) framework for weakly
supervised object detection. OIM attempts to detect all possible object
instances existing in each image by introducing information propagation on the
spatial and appearance graphs, without any additional annotations. During the
iterative learning process, the less discriminative object instances from the
same class can be gradually detected and utilized for training. In addition, we
design an object instance reweighted loss to learn a larger portion of each
object instance to further improve the performance. The experimental results on
two publicly available databases, VOC 2007 and 2012, demonstrate the efficacy
of the proposed approach. | [
"cs.CV"
] |
Despite the significant progress of deep reinforcement learning (RL) in
solving sequential decision making problems, RL agents often overfit to
training environments and struggle to adapt to new, unseen environments. This
prevents robust applications of RL in real world situations, where system
dynamics may deviate wildly from the training settings. In this work, our
primary contribution is to propose an information theoretic regularization
objective and an annealing-based optimization method to achieve better
generalization ability in RL agents. We demonstrate the extreme generalization
benefits of our approach in different domains ranging from maze navigation to
robotic tasks; for the first time, we show that agents can generalize to test
parameters more than 10 standard deviations away from the training parameter
distribution. This work provides a principled way to improve generalization in
RL by gradually removing information that is redundant for task-solving; it
opens doors for the systematic study of generalization from training to
extremely different testing settings, focusing on the established connections
between information theory and machine learning. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Recent geometric methods need reliable estimates of 3D motion parameters to
procure accurate dense depth map of a complex dynamic scene from monocular
images \cite{kumar2017monocular, ranftl2016dense}. Generally, to estimate
\textbf{precise} measurements of relative 3D motion parameters and to validate
its accuracy using image data is a challenging task. In this work, we propose
an alternative approach that circumvents the 3D motion estimation requirement
to obtain a dense depth map of a dynamic scene. Given per-pixel optical flow
correspondences between two consecutive frames and the sparse depth prior for
the reference frame, we show that we can effectively recover the dense depth
map for the successive frames without solving for 3D motion parameters. Our
method assumes a piece-wise planar model of a dynamic scene, which undergoes
rigid transformation locally, and as-rigid-as-possible transformation globally
between two successive frames. Under our assumption, we can avoid the explicit
estimation of 3D rotation and translation to estimate scene depth. In essence,
our formulation provides an unconventional way to think about and recover the
dense depth map of a complex dynamic scene, one which is incremental and
motion-free in nature. Our proposed method does not make object-level or any
other high-level prior assumptions about the dynamic scene; as a result, it is
applicable to a wide range of scenarios. Experimental results on benchmark
datasets show the competence of our approach for multiple frames. | [
"cs.CV"
] |
Model-based reinforcement learning (RL) algorithms can attain excellent
sample efficiency, but often lag behind the best model-free algorithms in terms
of asymptotic performance. This is especially true with high-capacity
parametric function approximators, such as deep networks. In this paper, we
study how to bridge this gap, by employing uncertainty-aware dynamics models.
We propose a new algorithm called probabilistic ensembles with trajectory
sampling (PETS) that combines uncertainty-aware deep network dynamics models
with sampling-based uncertainty propagation. Our comparison to state-of-the-art
model-based and model-free deep RL algorithms shows that our approach matches
the asymptotic performance of model-free algorithms on several challenging
benchmark tasks, while requiring significantly fewer samples (e.g., 8 and 125
times fewer samples than Soft Actor Critic and Proximal Policy Optimization
respectively on the half-cheetah task). | [
"cs.LG",
"cs.AI",
"cs.RO",
"stat.ML"
] |
$Q$-learning with function approximation is one of the most popular methods
in reinforcement learning. Though the idea of using function approximation was
proposed at least 60 years ago, even in the simplest setup, i.e., approximating
$Q$-functions with linear functions, it is still an open problem how to
design a provably efficient algorithm that learns a near-optimal policy. The
key challenges are how to efficiently explore the state space and how to decide
when to stop exploring in conjunction with the function approximation scheme.
The current paper presents a provably efficient algorithm for $Q$-learning
with linear function approximation. Under certain regularity assumptions, our
algorithm, Difference Maximization $Q$-learning (DMQ), combined with linear
function approximation, returns a near-optimal policy using a polynomial number
of trajectories. Our algorithm introduces a new notion, the Distribution Shift
Error Checking (DSEC) oracle. This oracle tests whether there exists a function
in the function class that predicts well on a distribution $\mathcal{D}_1$, but
predicts poorly on another distribution $\mathcal{D}_2$, where $\mathcal{D}_1$
and $\mathcal{D}_2$ are distributions over states induced by two different
exploration policies. For the linear function class, this oracle is equivalent
to solving a top eigenvalue problem. We believe our algorithmic insights,
especially the DSEC oracle, are also useful in designing and analyzing
reinforcement learning algorithms with general function approximation. | [
"cs.LG",
"cs.AI",
"math.OC",
"stat.ML"
] |
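For the linear function class, the abstract notes that the DSEC oracle reduces to a top eigenvalue problem; the sketch below makes that concrete as a generalized Rayleigh quotient over feature second-moment matrices. The regularizer `lam` and detection `threshold` are illustrative placeholders, not the paper's constants.

```python
import numpy as np
from scipy.linalg import eigh


def dsec_oracle(feats_d1, feats_d2, lam=1e-3, threshold=3.0):
    """Distribution Shift Error Checking for a linear class (sketch).

    Looks for a direction w with small second moment under D1 (so a
    function fit on D1 pins it down) but large second moment under D2
    (so predictions there are unreliable): the Rayleigh quotient
    max_w (w' S2 w) / (w' S1 w + lam), a top generalized eigenvalue
    problem."""
    S1 = feats_d1.T @ feats_d1 / len(feats_d1)
    S2 = feats_d2.T @ feats_d2 / len(feats_d2)
    vals, vecs = eigh(S2, S1 + lam * np.eye(S1.shape[0]))
    return vals[-1] > threshold, vecs[:, -1]  # (shift detected?, witness w)
```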
Unsupervised domain adaptive object detection aims to adapt a well-trained
detector from its original source domain with rich labeled data to a new target
domain with unlabeled data. Recently, mainstream approaches perform this task
through adversarial learning, yet still suffer from two limitations. First,
they mainly align marginal distribution by unsupervised cross-domain feature
matching, and ignore each feature's categorical and positional information that
can be exploited for conditional alignment; Second, they treat all classes as
equally important for transferring cross-domain knowledge and ignore that
different classes usually have different transferability. In this paper, we
propose a joint adaptive detection framework (JADF) to address the above
challenges. First, an end-to-end joint adversarial adaptation framework for
object detection is proposed, which aligns both marginal and conditional
distributions between domains without introducing any extra hyperparameter.
Next, to consider the transferability of each object class, a metric for
class-wise transferability assessment is proposed, which is incorporated into
the JADF objective for domain adaptation. Further, an extended study from
unsupervised domain adaptation (UDA) to unsupervised few-shot domain adaptation
(UFDA) is conducted, where only a few unlabeled training images are available
in unlabeled target domain. Extensive experiments validate that JADF is
effective in both the UDA and UFDA settings, achieving significant performance
gains over existing state-of-the-art cross-domain detection methods. | [
"cs.CV"
] |
Intra-camera supervision (ICS) for person re-identification (Re-ID) assumes
that identity labels are independently annotated within each camera view and no
inter-camera identity association is labeled. It is a new setting proposed
recently to reduce the burden of annotation while expecting to maintain desirable
Re-ID performance. However, the lack of inter-camera labels makes the ICS Re-ID
problem much more challenging than the fully supervised counterpart. By
investigating the characteristics of ICS, this paper proposes camera-specific
non-parametric classifiers, together with a hybrid mining quintuplet loss, to
perform intra-camera learning. Then, an inter-camera learning module consisting
of a graph-based ID association step and a Re-ID model updating step is
conducted. Extensive experiments on three large-scale Re-ID datasets show that
our approach outperforms all existing ICS works by a great margin. Our approach
performs even comparably to state-of-the-art fully supervised methods on two of
the datasets. | [
"cs.CV"
] |
Recent progress in image recognition has stimulated the deployment of vision
systems at an unprecedented scale. As a result, visual data are now often
consumed not only by humans but also by machines. Existing image processing
methods only optimize for better human perception, yet the resulting images may
not be accurately recognized by machines. This can be undesirable, e.g., the
images can be improperly handled by search engines or recommendation systems.
In this work, we propose simple approaches to improve machine interpretability
of processed images: optimizing the recognition loss directly on the image
processing network or through an intermediate transforming model.
Interestingly, the processing model's ability to enhance recognition quality
can transfer when evaluated on models of different architectures, recognized
categories, tasks and training datasets. This makes the solutions applicable
even when we do not have the knowledge of future recognition models, e.g., if
we upload processed images to the Internet. We conduct experiments on multiple
image processing tasks, with ImageNet classification and PASCAL VOC detection
as recognition tasks. With our simple methods, substantial accuracy gain can be
achieved with strong transferability and minimal image quality loss. Through a
user study we further show that the accuracy gain can transfer to a black-box,
third-party cloud model. Finally, we try to explain this transferability
phenomenon by demonstrating the similarities of different models' decision
boundaries. Code is available at https://github.com/liuzhuang13/Transferable_RA . | [
"cs.CV",
"cs.LG"
] |
We demonstrate an object tracking method for 3D images with fixed
computational cost and state-of-the-art performance. Previous methods predicted
transformation parameters from convolutional layers. We instead propose an
architecture that does not include either flattening of convolutional features
or fully connected layers, but instead relies on equivariant filters to
preserve transformations between inputs and outputs (e.g. rot./trans. of inputs
rotate/translate outputs). The transformation is then derived in closed form
from the outputs of the filters. This method is useful for applications
requiring low latency, such as real-time tracking. We demonstrate our model on
synthetically augmented adult brain MRI, as well as fetal brain MRI, which is
the intended use-case. | [
"cs.CV",
"cs.LG",
"q-bio.QM"
] |
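A familiar special case of reading a transformation in closed form from equivariant features is phase correlation, which recovers a translation from Fourier-domain correlation rather than from fully connected regression. The NumPy sketch below handles only translation; the paper's filters additionally handle rotation.

```python
import numpy as np


def estimate_translation(vol_a, vol_b):
    """Closed-form translation between two 3D volumes via phase correlation:
    the translation is derived from the peak of the normalized cross-power
    spectrum instead of being regressed by fully connected layers."""
    Fa, Fb = np.fft.fftn(vol_a), np.fft.fftn(vol_b)
    cross = Fa * np.conj(Fb)
    corr = np.fft.ifftn(cross / (np.abs(cross) + 1e-8)).real
    shift = np.unravel_index(np.argmax(corr), corr.shape)
    # map indices above the Nyquist point to negative shifts
    return tuple(s - n if s > n // 2 else s for s, n in zip(shift, corr.shape))


vol = np.random.default_rng(0).normal(size=(32, 32, 32))
moved = np.roll(vol, shift=(3, -5, 2), axis=(0, 1, 2))
print(estimate_translation(moved, vol))  # ~ (3, -5, 2)
```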
Semantic segmentation with deep learning has achieved great progress in
classifying the pixels of an image. However, the local location information is
usually ignored during high-level feature extraction by deep learning, even
though it is important for image semantic segmentation. To avoid this problem,
we propose a graph model initialized by a fully convolutional network (FCN),
named Graph-FCN, for image semantic segmentation. Firstly, the image grid data
is extended to graph structure data by a convolutional network, which transforms
the semantic segmentation problem into a graph node classification problem.
Then we apply a graph convolutional network to solve this graph node
classification problem. To the best of our knowledge, this is the first
application of the graph convolutional network to image semantic segmentation.
Our method achieves competitive performance in mean intersection over union
(mIOU) on the VOC dataset (about a 1.34% improvement) compared to the original
FCN model. | [
"cs.CV"
] |
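The graph node classification step in Graph-FCN uses the standard graph-convolution propagation rule; a self-contained NumPy rendering of one layer (with arbitrary toy inputs) follows:

```python
import numpy as np


def gcn_layer(A, H, W):
    """One graph-convolution step (Kipf & Welling propagation rule):
    H' = ReLU(D^-1/2 (A + I) D^-1/2 H W). In Graph-FCN terms the nodes
    would carry FCN features; this standalone layer just shows the
    propagation arithmetic."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)


# toy: 4 nodes on a path graph, 3-d features, 2 output channels
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
H = np.random.default_rng(0).normal(size=(4, 3))
W = np.random.default_rng(1).normal(size=(3, 2))
print(gcn_layer(A, H, W).shape)  # (4, 2)
```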
Generative Adversarial Networks (GAN) have shown great promise in tasks like
synthetic image generation, image inpainting, style transfer, and anomaly
detection. However, generating discrete data is a challenge. This work presents
an adversarial training based correlated discrete data (CDD) generation model.
It also details an approach for conditional CDD generation. The results of our
approach are presented on two datasets: a job-seeking candidates' skill set
(private dataset) and MNIST (public dataset). From quantitative and qualitative
analysis of these results, we show that our model, which leverages the inherent
correlation in the data, performs better than an existing model that
overlooks this correlation. | [
"cs.LG",
"stat.ML"
] |
Pixel binning is considered one of the most prominent solutions to tackle the
hardware limitation of smartphone cameras. Despite numerous advantages, such an
image sensor has to adopt an artefact-prone non-Bayer colour filter array
(CFA) to enable the binning capability. However, performing essential image
signal processing (ISP) tasks like demosaicing and denoising explicitly with
such CFA patterns makes the reconstruction process notably complicated. In
this paper, we tackle the challenges of joint demosaicing and denoising (JDD)
on such an image sensor by introducing a novel learning-based method. The
proposed method leverages the depth and spatial attention in a deep network.
The proposed network is guided by a multi-term objective function, including
two novel perceptual losses to produce visually plausible images. On top of
that, we stretch the proposed image processing pipeline to comprehensively
reconstruct and enhance the images captured with a smartphone camera, which
uses pixel binning techniques. The experimental results illustrate that the
proposed method can outperform the existing methods by a noticeable margin in
qualitative and quantitative comparisons. Code available:
https://github.com/sharif-apu/BJDD_CVPR21. | [
"cs.CV"
] |
In recent years, the Transformer architecture has proven to be very
successful in sequence processing, but its application to other data
structures, such as graphs, has remained limited due to the difficulty of
properly defining positions. Here, we present the $\textit{Spectral Attention
Network}$ (SAN), which uses a learned positional encoding (LPE) that can take
advantage of the full Laplacian spectrum to learn the position of each node in
a given graph. This LPE is then added to the node features of the graph and
passed to a fully-connected Transformer. By leveraging the full spectrum of the
Laplacian, our model is theoretically powerful in distinguishing graphs, and
can better detect similar sub-structures from their resonance. Further, by
fully connecting the graph, the Transformer does not suffer from
over-squashing, an information bottleneck of most GNNs, and enables better
modeling of physical phenomena such as heat transfer and electric
interaction. When tested empirically on a set of 4 standard datasets, our model
performs on par or better than state-of-the-art GNNs, and outperforms any
attention-based model by a wide margin, becoming the first fully-connected
architecture to perform well on graph benchmarks. | [
"cs.LG"
] |
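As a hedged illustration of the raw spectral ingredient behind SAN's learned positional encoding: the eigenpairs of the normalized graph Laplacian (in the actual model these are fed to a small learned network that produces the LPE; returning only the `k` lowest frequencies here is for brevity):

```python
import numpy as np

def laplacian_eigenpairs(adj, k=8):
    """Eigenvalues/eigenvectors of L = I - D^-1/2 A D^-1/2, ascending order."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(np.where(deg > 0, deg ** -0.5, 0.0))
    lap = np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt
    evals, evecs = np.linalg.eigh(lap)            # symmetric, so eigh applies
    return evals[:k], evecs[:, :k]

# evecs[i] gives a k-dim spectral "position" for node i; SAN learns an encoding
# from such eigenpairs and combines it with the node features before the
# fully-connected Transformer.
```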
Interventional causal models describe several joint distributions over some
variables used to describe a system, one for each intervention setting. They
provide a formal recipe for how to move between the different joint
distributions and make predictions about the variables upon intervening on the
system. Yet, it is difficult to formalise how we may change the underlying
variables used to describe the system, say moving from fine-grained to
coarse-grained variables. Here, we argue that compositionality is a desideratum
for such model transformations and the associated errors: When abstracting a
reference model M iteratively, first obtaining M' and then further simplifying
that to obtain M'', we expect the composite transformation from M to M'' to
exist and its error to be bounded by the errors incurred by each individual
transformation step. Category theory, the study of mathematical objects via
compositional transformations between them, offers a natural language to
develop our framework for model transformations and abstractions. We introduce
a category of finite interventional causal models and, leveraging theory of
enriched categories, prove the desired compositionality properties for our
framework. | [
"stat.ML",
"cs.AI",
"cs.LG",
"cs.LO",
"math.CT"
] |
We study online reinforcement learning for finite-horizon deterministic
control systems with {\it arbitrary} state and action spaces. Suppose that the
transition dynamics and reward function are unknown, but the state and action
spaces are endowed with a metric that characterizes the proximity between
different states and actions. We provide a surprisingly simple upper-confidence
reinforcement learning algorithm that uses a function approximation oracle to
estimate optimistic Q functions from experiences. We show that the regret of
the algorithm after $K$ episodes is $O(HL(KH)^{\frac{d-1}{d}}) $ where $L$ is a
smoothness parameter, and $d$ is the doubling dimension of the state-action
space with respect to the given metric. We also establish a near-matching
regret lower bound. The proposed method can be adapted to work for more
structured transition systems, including the finite-state case and the case
where value functions are linear combinations of features, where the method
also achieves the optimal regret. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
We present Graph-$Q$-SAT, a branching heuristic for a Boolean SAT solver
trained with value-based reinforcement learning (RL) using Graph Neural
Networks for function approximation. Solvers using Graph-$Q$-SAT are complete
SAT solvers that either provide a satisfying assignment or proof of
unsatisfiability, which is required for many SAT applications. The branching
heuristics commonly used in SAT solvers make poor decisions during their
warm-up period, whereas Graph-$Q$-SAT is trained to examine the structure of
the particular problem instance to make better decisions early in the search.
Training Graph-$Q$-SAT is data efficient and does not require elaborate dataset
preparation or feature engineering. We train Graph-$Q$-SAT using RL, interfacing
with the MiniSat solver, and show that Graph-$Q$-SAT can reduce the number of
iterations required to solve SAT problems by 2-3X. Furthermore, it generalizes
to unsatisfiable SAT instances, as well as to problems with 5X more variables
than it was trained on. We show that for larger problems, reductions in the
number of iterations lead to wall clock time reductions, the ultimate goal when
designing heuristics. We also show positive zero-shot transfer behavior when
testing Graph-$Q$-SAT on a task family different from that used for training.
While more work is needed to apply Graph-$Q$-SAT to reduce wall clock time in
modern SAT solving settings, it is a compelling proof-of-concept showing that
RL equipped with Graph Neural Networks can learn a generalizable branching
heuristic for SAT search. | [
"cs.LG",
"cs.AI"
] |
In this paper we address the problem of continuous fine-grained action
segmentation, in which multiple actions are present in an unsegmented video
stream. The challenge for this task lies in the need to represent the
hierarchical nature of the actions and to detect the transitions between
actions, allowing us to localise the actions within the video effectively. We
propose a novel recurrent semi-supervised Generative Adversarial Network (GAN)
model for continuous fine-grained human action segmentation. Temporal context
information is captured via a novel Gated Context Extractor (GCE) module,
composed of gated attention units, that directs the queued context information
through the generator model, for enhanced action segmentation. The GAN is made
to learn features in a semi-supervised manner, enabling the model to perform
action classification jointly with the standard, unsupervised, GAN learning
procedure. We perform extensive evaluations on different architectural variants
to demonstrate the importance of the proposed network architecture, and show
that it is capable of outperforming the current state-of-the-art on three
challenging datasets: 50 Salads, MERL Shopping and Georgia Tech Egocentric
Activities dataset. | [
"cs.CV"
] |
Point cloud registration is the task of estimating the rigid transformation
that aligns a pair of point cloud fragments. We present an efficient and robust
framework for pairwise registration of real-world 3D scans, leveraging Hough
voting in the 6D transformation parameter space. First, deep geometric features
are extracted from a point cloud pair to compute putative correspondences. We
then construct a set of triplets of correspondences to cast votes on the 6D
Hough space, representing the transformation parameters in sparse tensors.
Next, a fully convolutional refinement module is applied to refine the noisy
votes. Finally, we identify the consensus among the correspondences from the
Hough space, which we use to predict our final transformation parameters. Our
method outperforms state-of-the-art methods on 3DMatch and 3DLoMatch benchmarks
while achieving comparable performance on the KITTI odometry dataset. We further
demonstrate the generalizability of our approach by setting a new
state-of-the-art on the ICL-NUIM dataset, where we integrate our module into a
multi-way registration pipeline. | [
"cs.CV"
] |
Reinforcement learning with function approximation can be unstable and even
divergent, especially when combined with off-policy learning and Bellman
updates. In deep reinforcement learning, these issues have been dealt with
empirically by adapting and regularizing the representation, in particular with
auxiliary tasks. This suggests that representation learning may provide a means
to guarantee stability. In this paper, we formally show that there are indeed
nontrivial state representations under which the canonical TD algorithm is
stable, even when learning off-policy. We analyze representation learning
schemes that are based on the transition matrix of a policy, such as
proto-value functions, along three axes: approximation error, stability, and
ease of estimation. In the most general case, we show that a Schur basis
provides convergence guarantees, but is difficult to estimate from samples. For
a fixed reward function, we find that an orthogonal basis of the corresponding
Krylov subspace is an even better choice. We conclude by empirically
demonstrating that these stable representations can be learned using stochastic
gradient descent, opening the door to improved techniques for representation
learning with deep networks. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
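A small numpy sketch of one representation analyzed above: an orthogonal basis of the Krylov subspace span{r, Pr, P^2 r, ...} for a policy's transition matrix P and reward r (names are assumptions, not the paper's code):

```python
import numpy as np

def krylov_basis(P, r, k):
    """Orthonormal (n, k) basis of the order-k Krylov subspace of (P, r)."""
    vecs = [r]
    for _ in range(k - 1):
        vecs.append(P @ vecs[-1])                 # r, Pr, ..., P^{k-1} r
    K = np.stack(vecs, axis=1)
    Q, _ = np.linalg.qr(K)                        # orthogonalize the columns
    return Q                                      # columns serve as state features
```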
Bundle adjustment plays a vital role in feature-based monocular SLAM. In many
modern SLAM pipelines, bundle adjustment is performed to estimate the 6DOF
camera trajectory and 3D map (3D point cloud) from the input feature tracks.
However, two fundamental weaknesses plague SLAM systems based on bundle
adjustment. First, the need to carefully initialise bundle adjustment means
that all variables, in particular the map, must be estimated as accurately as
possible and maintained over time, which makes the overall algorithm
cumbersome. Second, since estimating the 3D structure (which requires
sufficient baseline) is inherent in bundle adjustment, the SLAM algorithm will
encounter difficulties during periods of slow motion or pure rotational motion.
We propose a different SLAM optimisation core: instead of bundle adjustment,
we conduct rotation averaging to incrementally optimise only camera
orientations. Given the orientations, we estimate the camera positions and 3D
points via a quasi-convex formulation that can be solved efficiently and
globally optimally. Our approach not only obviates the need to estimate and
maintain the positions and 3D map at keyframe rate (which enables simpler SLAM
systems), it is also more capable of handling slow motions or pure rotational
motions. | [
"cs.CV",
"I.4"
] |
With the goal of predicting the future rainfall intensity in a local region
over a relatively short period of time, precipitation nowcasting has been a
long-time scientific challenge with great social and economic impact. The radar
echo extrapolation approaches for precipitation nowcasting take radar echo
images as input, aiming to generate future radar echo images by learning from
the historical images. To effectively handle the complex and highly non-stationary
evolution of radar echoes, we propose to decompose the movement into optical
flow field motion and morphologic deformation. Following this idea, we
introduce Flow-Deformation Network (FDNet), a neural network that models flow
and deformation in two parallel cross pathways. The flow encoder captures the
optical flow field motion between consecutive images and the deformation
encoder distinguishes the change of shape from the translational motion of
radar echoes. We evaluate the proposed network architecture on two real-world
radar echo datasets. Our model achieves state-of-the-art prediction results
compared with recent approaches. To the best of our knowledge, this is the
first network architecture with flow and deformation separation to model the
evolution of radar echoes for precipitation nowcasting. We believe that the
general idea of this work could not only inspire much more effective approaches
but also be applied to other similar spatiotemporal prediction tasks. | [
"cs.LG",
"cs.AI"
] |
Sales forecasts are crucial for the E-commerce business. State-of-the-art
techniques typically apply only univariate methods to make predictions for each
series independently. However, due to the short nature of sales time series in
E-commerce, univariate methods do not perform well. In this article, we propose
a global model which outperforms state-of-the-art models on a real dataset. It
is achieved by using Tree Boosting Methods that exploit non-linearity and
cross-series information. We also propose a preprocessing framework to
overcome the inherent difficulties in the E-commerce data. In particular, we
use different schemes to limit the impact of the volatility of the data. | [
"stat.ML",
"cs.LG"
] |
We tackle the panoptic segmentation problem with a conditional random field
(CRF) model. Panoptic segmentation involves assigning a semantic label and an
instance label to each pixel of a given image. At each pixel, the semantic
label and the instance label should be compatible. Furthermore, a good panoptic
segmentation should have a number of other desirable properties such as the
spatial and color consistency of the labeling (similar-looking neighboring
pixels should have the same semantic and instance labels). To tackle
this problem, we propose a CRF model, named Bipartite CRF or BCRF, with two
types of random variables for semantic and instance labels. In this
formulation, various energies are defined within and across the two types of
random variables to encourage a consistent panoptic segmentation. We propose a
mean-field-based efficient inference algorithm for solving the CRF and
empirically show its convergence properties. This algorithm is fully
differentiable, and therefore, BCRF inference can be included as a trainable
module in a deep network. In the experimental evaluation, we quantitatively and
qualitatively show that the BCRF yields superior panoptic segmentation results
in practice. | [
"cs.CV",
"eess.IV"
] |
Paleness or pallor is a manifestation of blood loss or low hemoglobin
concentrations in the human blood that can be caused by pathologies such as
anemia. This work presents the first automated screening system that utilizes
pallor site images, segments, and extracts color and intensity-based features
for multi-class classification of patients with high pallor due to anemia-like
pathologies, normal patients and patients with other abnormalities. This work
analyzes the pallor sites of conjunctiva and tongue for anemia screening
purposes. First, for the eye pallor site images, the sclera and conjunctiva
regions are automatically segmented for regions of interest. Similarly, for the
tongue pallor site images, the inner and outer tongue regions are segmented.
Then, color-plane based feature extraction is performed followed by machine
learning algorithms for feature reduction and image level classification for
anemia. In this work, a suite of classification algorithms performs image-level
classification for normal (class 0), pallor (class 1) and other abnormalities
(class 2). The proposed method achieves 86% accuracy, 85% precision and 67%
recall in eye pallor site images and 98.2% accuracy and precision with 100%
recall in tongue pallor site images for classification of images with pallor.
The proposed pallor screening system can be further fine-tuned to detect the
severity of anemia-like pathologies using controlled set of local images that
can then be used for future benchmarking purposes. | [
"cs.CV"
] |
Short video applications like TikTok and Kwai have been a great hit recently.
In order to meet the increasing demands and take full advantage of visual
information in short videos, objects in each short video need to be located and
analyzed as an upstream task. A question is thus raised -- how to improve the
accuracy and robustness of object detection, tracking, and re-identification
across tons of short videos with hundreds of categories and complicated visual
effects (VFX). To this end, a system composed of a detection module, a tracking
module and a generic object re-identification module, is proposed in this
paper, which captures features of major objects from short videos. In
particular, towards the high efficiency demands in practical short video
application, a Temporal Information Fusion Network (TIFN) is proposed in the
object detection module, which shows comparable accuracy and improved time
efficiency to the state-of-the-art video object detector. Furthermore, in order
to mitigate the fragmented issue of tracklets in short videos, a Cross-Layer
Pointwise Siamese Network (CPSN) is proposed in the tracking module to enhance
the robustness of the appearance model. Moreover, in order to evaluate the
proposed system, two challenge datasets containing real-world short videos are
built for video object trajectory extraction and generic object
re-identification respectively. Overall, extensive experiments for each module
and the whole system demonstrate the effectiveness and efficiency of our
system. | [
"cs.CV"
] |
In this paper, we investigate the cause of the high false positive rate in
Visual Relationship Detection (VRD). We observe that during training, the
relationship proposal distribution is highly imbalanced: most of the negative
relationship proposals are easy to identify, e.g., those arising from inaccurate
object detection, which leads to the under-fitting of low-frequency difficult
proposals. This paper presents Spatially-Aware Balanced negative pRoposal
sAmpling (SABRA), a robust VRD framework that alleviates the influence of false
positives. To effectively optimize the model under imbalanced distribution,
SABRA adopts Balanced Negative Proposal Sampling (BNPS) strategy for mini-batch
sampling. BNPS divides proposals into five well-defined sub-classes and generates
a balanced training distribution according to the inverse frequency. BNPS gives
an easier optimization landscape and significantly reduces the number of false
positives. To further resolve the low-frequency challenging false positive
proposals with high spatial ambiguity, we improve the spatial modeling ability
of SABRA on two aspects: a simple and efficient multi-head heterogeneous graph
attention network (MH-GAT) that models the global spatial interactions of
objects, and a spatial mask decoder that learns the local spatial
configuration. SABRA outperforms SOTA methods by a large margin on two
human-object interaction (HOI) datasets and one general VRD dataset. | [
"cs.CV",
"cs.LG"
] |
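A hedged sketch of the inverse-frequency balancing at the heart of BNPS as summarized above: proposals are pre-assigned to sub-classes, and the mini-batch over-samples rare sub-classes (the sub-class definitions themselves are the paper's; only the sampling arithmetic is shown):

```python
import numpy as np

def balanced_sample(subclass_ids, batch_size, rng=None):
    """subclass_ids: (n,) ints in {0..4} for the five sub-classes."""
    rng = rng or np.random.default_rng()
    classes, counts = np.unique(subclass_ids, return_counts=True)
    inv_freq = 1.0 / counts                       # rare sub-classes weigh more
    probs = inv_freq[np.searchsorted(classes, subclass_ids)]
    probs /= probs.sum()
    return rng.choice(len(subclass_ids), size=batch_size,
                      replace=False, p=probs)     # mini-batch proposal indices
```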
Accurate medical image segmentation is essential for diagnosis, surgical
planning and many other applications. Convolutional Neural Networks (CNNs) have
become the state-of-the-art automatic segmentation methods. However, fully
automatic results may still need to be refined to become accurate and robust
enough for clinical use. We propose a deep learning-based interactive
segmentation method to improve the results obtained by an automatic CNN and to
reduce user interactions during refinement for higher accuracy. We use one CNN
to obtain an initial automatic segmentation, on which user interactions are
added to indicate mis-segmentations. Another CNN takes as input the user
interactions with the initial segmentation and gives a refined result. We
propose to combine user interactions with CNNs through geodesic distance
transforms, and propose a resolution-preserving network that gives a better
dense prediction. In addition, we integrate user interactions as hard
constraints into a back-propagatable Conditional Random Field. We validated the
proposed framework in the context of 2D placenta segmentation from fetal MRI
and 3D brain tumor segmentation from FLAIR images. Experimental results show
our method achieves a large improvement from automatic CNNs, and obtains
comparable and even higher accuracy with fewer user interventions and less time
compared with traditional interactive methods. | [
"cs.CV"
] |
Automatic skin lesion segmentation on dermoscopic images is an essential step
in computer-aided diagnosis of melanoma. However, this task is challenging due
to significant variations of lesion appearances across different patients. This
challenge is further exacerbated when dealing with a large amount of image
data. In this paper, we extended our previous work by developing a deeper
network architecture with smaller kernels to enhance its discriminant capacity.
In addition, we explicitly included color information from multiple color
spaces to facilitate network training and thus to further improve the
segmentation performance. We extensively evaluated our method on the ISBI 2017
skin lesion segmentation challenge. By training with the 2000 challenge
training images, our method achieved an average Jaccard Index (JA) of 0.765 on
the 600 challenge testing images, which ranked first in the
challenge. | [
"cs.CV"
] |
Visual recognition has been dominated by convolutional neural networks (CNNs)
for years. Though recently the prevailing vision transformers (ViTs) have shown
great potential of self-attention based models in ImageNet classification,
their performance is still inferior to that of the latest SOTA CNNs if no extra
data are provided. In this work, we try to close the performance gap and
demonstrate that attention-based models are indeed able to outperform CNNs. We
find a major factor limiting the performance of ViTs for ImageNet
classification is their low efficacy in encoding fine-level features into the
token representations. To resolve this, we introduce a novel outlook attention
and present a simple and general architecture, termed Vision Outlooker (VOLO).
Unlike self-attention that focuses on global dependency modeling at a coarse
level, the outlook attention efficiently encodes finer-level features and
contexts into tokens, which is shown to be critically beneficial to recognition
performance but largely ignored by the self-attention. Experiments show that
our VOLO achieves 87.1% top-1 accuracy on ImageNet-1K classification, making it
the first model exceeding 87% accuracy on this competitive benchmark without
using any extra training data. In addition, the pre-trained VOLO transfers well
to downstream tasks, such as semantic segmentation. We achieve 84.3% mIoU score
on the Cityscapes validation set and 54.3% on the ADE20K validation set. Code
is available at \url{https://github.com/sail-sg/volo}. | [
"cs.CV"
] |
While human observers are able to cope with variations in color and
appearance of histological stains, digital pathology algorithms commonly
require a well-normalized setting to achieve peak performance, especially when
a limited amount of labeled data is available. This work provides a fully
automated, end-to-end learning-based setup for normalizing histological stains,
which considers the texture context of the tissue. We introduce Feature Aware
Normalization, which extends the framework of batch normalization in
combination with gating elements from Long Short-Term Memory units for
normalization among different spatial regions of interest. By incorporating a
pretrained deep neural network as a feature extractor steering a pixelwise
processing pipeline, we achieve excellent normalization results and ensure a
consistent representation of color and texture. The evaluation comprises a
comparison of color histogram deviations, structural similarity and measures
the color volume obtained by the different methods. | [
"cs.CV"
] |
Research on content-based image retrieval (CBIR) has been under development
for decades, and numerous methods have been competing to extract the most
discriminative features for improved representation of the image content.
Recently, deep learning methods have gained attention in computer vision,
including CBIR. In this paper, we present a comparative investigation of
different features, including low-level and high-level features, for CBIR. We
compare the performance of CBIR systems using different deep features with
state-of-the-art low-level features such as SIFT, SURF, HOG, LBP, and LTP,
using different dictionaries and coefficient learning techniques. Furthermore,
we conduct comparisons with a set of primitive and popular features that have
been used in this field, including colour histograms and Gabor features. We
also investigate the discriminative power of deep features using certain
similarity measures under different validation approaches. Furthermore, we
investigate the effects of the dimensionality reduction of deep features on the
performance of CBIR systems using principal component analysis, discrete
wavelet transform, and discrete cosine transform. Unprecedentedly, the
experimental results demonstrate high (95\% and 93\%) mean average precisions
when using the VGG-16 FC7 deep features of Corel-1000 and Coil-20 datasets with
10-D and 20-D K-SVD, respectively. | [
"cs.CV"
] |
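To make one compared pipeline from the study above concrete: a numpy sketch of PCA-based dimensionality reduction of precomputed deep features (e.g., 4096-D VGG-16 FC7 activations) followed by cosine-similarity retrieval; `db_feats` and `query` are assumed already-extracted feature arrays:

```python
import numpy as np

def pca_reduce(feats, dims=10):
    mean = feats.mean(axis=0)
    _, _, vt = np.linalg.svd(feats - mean, full_matrices=False)
    proj = vt[:dims].T                            # top principal directions
    return (feats - mean) @ proj, mean, proj

def retrieve(db_feats, query, dims=10, topk=5):
    db_low, mean, proj = pca_reduce(db_feats, dims)
    q_low = (query - mean) @ proj
    sims = (db_low @ q_low) / (np.linalg.norm(db_low, axis=1)
                               * np.linalg.norm(q_low) + 1e-12)
    return np.argsort(-sims)[:topk]               # indices of best matches
```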
We introduce canonical correlation forests (CCFs), a new decision tree
ensemble method for classification and regression. Individual canonical
correlation trees are binary decision trees with hyperplane splits based on
local canonical correlation coefficients calculated during training. Unlike
axis-aligned alternatives, the decision surfaces of CCFs are not restricted to
the coordinate system of the input features and therefore more naturally
represent data with correlated inputs. CCFs naturally accommodate multiple
outputs, provide a similar computational complexity to random forests, and
inherit their impressive robustness to the choice of input parameters. As part
of the CCF training algorithm, we also introduce projection bootstrapping, a
novel alternative to bagging for oblique decision tree ensembles which
maintains use of the full dataset in selecting split points, often leading to
improvements in predictive accuracy. Our experiments show that, even without
parameter tuning, CCFs out-perform axis-aligned random forests and other
state-of-the-art tree ensemble methods on both classification and regression
problems, delivering both improved predictive accuracy and faster training
times. We further show that they outperform all of the 179 classifiers
considered in a recent extensive survey. | [
"stat.ML",
"cs.LG"
] |
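An illustrative sketch of the core CCF split: at a tree node, project the local data with canonical correlation analysis against one-hot labels and split on the first canonical component (the median threshold is a toy stand-in, not the full training procedure with projection bootstrapping):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_split(X, Y_onehot):
    """Return a hyperplane normal, threshold and left-branch mask."""
    cca = CCA(n_components=1)
    cca.fit(X, Y_onehot)
    direction = cca.x_weights_[:, 0]              # oblique, not axis-aligned
    scores = X @ direction
    threshold = np.median(scores)                 # toy split-point rule
    return direction, threshold, scores <= threshold
```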
Graph vertex embeddings based on random walks have become increasingly
influential in recent years, showing good performance in several tasks as they
efficiently transform a graph into a more computationally digestible format
while preserving relevant information. However, the theoretical properties of
such algorithms, in particular the influence of hyperparameters and of the
graph structure on their convergence behaviour, have so far not been
well-understood. In this work, we provide a theoretical analysis of
random-walk-based embedding techniques. Firstly, we prove that, under some
weak assumptions, vertex embeddings derived from random walks do indeed
converge both in the single limit of the number of random walks $N \to \infty$
and in the double limit of both $N$ and the length of each random walk
$L\to\infty$. Secondly, we derive concentration bounds quantifying the convergence
rate of the corpora for the single and double limits. Thirdly, we use these
results to derive a heuristic for choosing the hyperparameters $N$ and $L$. We
validate and illustrate the practical importance of our findings with a range
of numerical and visual experiments on several graphs drawn from real-world
applications. | [
"stat.ML",
"cs.LG",
"math.PR"
] |
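For reference, a minimal sketch of the corpus whose convergence the abstract above analyzes: N uniform random walks of length L started from every vertex (N and L are exactly the hyperparameters the proposed heuristic chooses):

```python
import random

def random_walk_corpus(adj, N, L, seed=0):
    """adj: dict mapping each vertex to a non-empty list of neighbors."""
    rng = random.Random(seed)
    corpus = []
    for start in adj:
        for _ in range(N):
            walk = [start]
            for _ in range(L - 1):
                walk.append(rng.choice(adj[walk[-1]]))
            corpus.append(walk)
    return corpus  # fed to a skip-gram-style embedding model downstream
```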
Weight quantization for deep ConvNets has shown promising results for
applications such as image classification and semantic segmentation and is
especially important for applications where memory storage is limited. However,
when aiming for quantization without accuracy degradation, different tasks may
end up with different bitwidths. This creates complexity for software and
hardware support and the complexity accumulates when one considers
mixed-precision quantization, in which case each layer's weights use a
different bitwidth. Our key insight is that optimizing for the least bitwidth
subject to no accuracy degradation is not necessarily an optimal strategy. This
is because one cannot decide optimality between two bitwidths if one has a
smaller model size while the other has better accuracy. In this work, we take
the first step to understand if some weight bitwidth is better than others by
aligning all to the same model size using a width-multiplier. Under this
setting, somewhat surprisingly, we show that using a single bitwidth for the
whole network can achieve better accuracy compared to mixed-precision
quantization targeting zero accuracy degradation when both have the same model
size. In particular, our results suggest that when the number of channels
becomes a target hyperparameter, a single weight bitwidth throughout the
network shows superior results for model compression. | [
"cs.LG",
"cs.CV",
"eess.IV"
] |
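A minimal sketch of the single-bitwidth setting studied above: per-tensor symmetric uniform quantization of a weight tensor to b bits (illustrative, not the paper's code):

```python
import numpy as np

def quantize_weights(w, bits):
    """Fake-quantize w to `bits` bits with a symmetric per-tensor scale."""
    qmax = 2 ** (bits - 1) - 1                    # e.g., 127 for 8 bits
    max_abs = np.abs(w).max()
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale                              # dequantized weights

# Under the paper's protocol, a width-multiplier then rescales channel counts
# so that, say, a 4-bit and an 8-bit network occupy the same model size.
```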
A learned generative model often produces biased statistics relative to the
underlying data distribution. A standard technique to correct this bias is
importance sampling, where samples from the model are weighted by the
likelihood ratio under model and true distributions. When the likelihood ratio
is unknown, it can be estimated by training a probabilistic classifier to
distinguish samples from the two distributions. We employ this likelihood-free
importance weighting method to correct for the bias in generative models. We
find that this technique consistently improves standard goodness-of-fit metrics
for evaluating the sample quality of state-of-the-art deep generative models,
suggesting reduced bias. Finally, we demonstrate its utility on representative
applications in a) data augmentation for classification using generative
adversarial networks, and b) model-based policy evaluation using off-policy
data. | [
"stat.ML",
"cs.LG",
"cs.NE"
] |
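A compact sketch of the likelihood-free importance weighting described above: a binary probabilistic classifier c(x) is trained to separate data samples (label 1) from model samples (label 0), and the weight is the ratio c(x)/(1 - c(x)):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(real, generated, queries):
    """Estimate w(x) = p_data(x) / p_model(x) at the query points."""
    X = np.concatenate([real, generated])
    y = np.concatenate([np.ones(len(real)), np.zeros(len(generated))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    c = clf.predict_proba(queries)[:, 1]
    return c / np.clip(1.0 - c, 1e-6, None)       # likelihood-ratio estimate

# Reweighting model samples by these w(x) then debiases Monte Carlo estimates
# of expectations under the data distribution.
```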
Since the target set is free of human-labeled samples, unsupervised person
re-identification (Re-ID) has attracted much attention in recent years by
additionally exploiting the source set. However, due to the differences in
camera styles, illumination and backgrounds, there exists a large gap between
source domain and target domain, introducing a great challenge on cross-domain
matching. To tackle this problem, in this paper we propose a novel method named
Dual-stream Reciprocal Disentanglement Learning (DRDL), which is quite
efficient in learning domain-invariant features. In DRDL, two encoders are
first constructed for id-related and id-unrelated feature extractions, which
are respectively measured by their associated classifiers. Furthermore,
followed by an adversarial learning strategy, both streams reciprocally and
positively affect each other, so that the id-related features and id-unrelated
features are completely disentangled from a given image, allowing the encoder
to be powerful enough to obtain the discriminative but domain-invariant
features. In contrast to existing approaches, our proposed method is free from
image generation, which not only reduces the computational complexity
remarkably, but also removes redundant information from id-related features.
Extensive experiments substantiate the superiority of our proposed method
compared with the state-of-the-arts. The source code has been released in
https://github.com/lhf12278/DRDL. | [
"cs.CV"
] |
The detection of anatomical landmarks is a vital step for medical image
analysis and applications for diagnosis, interpretation and guidance. Manual
annotation of landmarks is a tedious process that requires domain-specific
expertise and introduces inter-observer variability. This paper proposes a new
detection approach for multiple landmarks based on multi-agent reinforcement
learning. Our hypothesis is that the position of all anatomical landmarks is
interdependent and non-random within the human anatomy, thus finding one
landmark can help to deduce the location of others. Using a Deep Q-Network
(DQN) architecture we construct an environment and agent with implicit
inter-communication such that we can accommodate K agents acting and learning
simultaneously, while they attempt to detect K different landmarks. During
training the agents collaborate by sharing their accumulated knowledge for a
collective gain. We compare our approach with state-of-the-art architectures
and achieve significantly better accuracy by reducing the detection error by
50%, while requiring fewer computational resources and time to train compared
to the naive approach of training K agents separately. | [
"cs.CV"
] |
In this paper, the design of an optimal trajectory for an energy-constrained
drone operating in dynamic network environments is studied. In the considered
model, a drone base station (DBS) is dispatched to provide uplink connectivity
to ground users whose demand is dynamic and unpredictable. In this case, the
DBS's trajectory must be adaptively adjusted to satisfy the dynamic user access
requests. To this end, a meta-learning algorithm is proposed in order to adapt
the DBS's trajectory when it encounters novel environments, by tuning a
reinforcement learning (RL) solution. The meta-learning algorithm provides a
solution that adapts the DBS in novel environments quickly based on limited
former experiences. The meta-tuned RL is shown to yield a faster convergence to
the optimal coverage in unseen environments with a considerably low computation
complexity, compared to the baseline policy gradient algorithm. Simulation
results show that the proposed meta-learning solution yields a 25% improvement
in the convergence speed, and about a 10% improvement in the DBS's communication
performance, compared to a baseline policy gradient algorithm. Meanwhile, the
probability that the DBS serves over 50% of user requests increases by about 27%,
compared to the baseline policy gradient algorithm. | [
"cs.LG",
"cs.IT",
"cs.NI",
"math.IT",
"stat.ML"
] |
We present lambda layers -- an alternative framework to self-attention -- for
capturing long-range interactions between an input and structured contextual
information (e.g. a pixel surrounded by other pixels). Lambda layers capture
such interactions by transforming available contexts into linear functions,
termed lambdas, and applying these linear functions to each input separately.
Similar to linear attention, lambda layers bypass expensive attention maps, but
in contrast, they model both content and position-based interactions which
enables their application to large structured inputs such as images. The
resulting neural network architectures, LambdaNetworks, significantly
outperform their convolutional and attentional counterparts on ImageNet
classification, COCO object detection and COCO instance segmentation, while
being more computationally efficient. Additionally, we design LambdaResNets, a
family of hybrid architectures across different scales, that considerably
improves the speed-accuracy tradeoff of image classification models.
LambdaResNets reach excellent accuracies on ImageNet while being 3.2 - 4.4x
faster than the popular EfficientNets on modern machine learning accelerators.
When training with an additional 130M pseudo-labeled images, LambdaResNets
achieve up to a 9.5x speed-up over the corresponding EfficientNet checkpoints. | [
"cs.CV",
"cs.LG"
] |
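A content-only lambda layer in numpy (position lambdas, multiple heads and other details of the full design are omitted): the context is summarized into a single linear function lambda_c that is applied to every query, so no n-by-m attention map is ever formed:

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def content_lambda_layer(inputs, context, Wq, Wk, Wv):
    """inputs: (n, d); context: (m, d); Wq/Wk: (d, k); Wv: (d, v)."""
    Q = inputs @ Wq                               # (n, k) queries
    K = softmax(context @ Wk, axis=0)             # (m, k), normalized over context
    V = context @ Wv                              # (m, v) values
    lam_c = K.T @ V                               # (k, v): the "lambda"
    return Q @ lam_c                              # (n, v) outputs
```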
Reconstruction of geometry based on different input modes, such as images or
point clouds, has been instrumental in the development of computer aided design
and computer graphics. Optimal implementations of these applications have
traditionally involved the use of spline-based representations at their core.
Most such methods attempt to solve optimization problems that minimize an
output-target mismatch. However, these optimization techniques require an
initialization that is close enough, as they are local methods by nature. We
propose a deep learning architecture that adapts to perform spline fitting
tasks accordingly, providing complementary results to the aforementioned
traditional methods. We showcase the performance of our approach, by
reconstructing spline curves and surfaces based on input images or point
clouds. | [
"cs.CV"
] |
We present Face Swapping GAN (FSGAN) for face swapping and reenactment.
Unlike previous work, FSGAN is subject agnostic and can be applied to pairs of
faces without requiring training on those faces. To this end, we describe a
number of technical contributions. We derive a novel recurrent neural network
(RNN)-based approach for face reenactment which adjusts for both pose and
expression variations and can be applied to a single image or a video sequence.
For video sequences, we introduce continuous interpolation of the face views
based on reenactment, Delaunay Triangulation, and barycentric coordinates.
Occluded face regions are handled by a face completion network. Finally, we use
a face blending network for seamless blending of the two faces while preserving
target skin color and lighting conditions. This network uses a novel Poisson
blending loss which combines Poisson optimization with perceptual loss. We
compare our approach to existing state-of-the-art systems and show our results
to be both qualitatively and quantitatively superior. | [
"cs.CV",
"cs.GR",
"cs.LG"
] |
Visual navigation for autonomous agents is a core task in the fields of
computer vision and robotics. Learning-based methods, such as deep
reinforcement learning, have the potential to outperform the classical
solutions developed for this task; however, they come at a significantly
increased computational load. Through this work, we design a novel approach
that focuses on performing better or comparable to the existing learning-based
solutions but under a clear time/computational budget. To this end, we propose
a method to encode vital scene semantics such as traversable paths, unexplored
areas, and observed scene objects -- alongside raw visual streams such as RGB,
depth, and semantic segmentation masks -- into a semantically informed,
top-down egocentric map representation. Further, to enable the effective use of
this information, we introduce a novel 2-D map attention mechanism, based on
the successful multi-layer Transformer networks. We conduct experiments on 3-D
reconstructed indoor PointGoal visual navigation and demonstrate the
effectiveness of our approach. We show that by using our novel attention schema
and auxiliary rewards to better utilize scene semantics, we outperform multiple
baselines trained with only raw inputs or implicit semantic information while
operating with an 80% decrease in the agent's experience. | [
"cs.CV",
"cs.RO"
] |
We propose Differentiable Window, a new neural module and general purpose
component for dynamic window selection. While universally applicable, we
demonstrate a compelling use case of utilizing Differentiable Window to improve
standard attention modules by enabling more focused attentions over the input
regions. We propose two variants of Differentiable Window, and integrate them
within the Transformer architecture in two novel ways. We evaluate our proposed
approach on a myriad of NLP tasks, including machine translation, sentiment
analysis, subject-verb agreement and language modeling. Our experimental
results demonstrate consistent and sizable improvements across all tasks. | [
"cs.LG",
"cs.CL",
"stat.ML"
] |
State-of-the-art deep neural networks (DNNs) have been proven to have
excellent performance on unsupervised domain adaptation (UDA). However, recent
work shows that DNNs perform poorly when being attacked by adversarial samples,
where these attacks are implemented by simply adding small disturbances to the
original images. Although plenty of work has focused on this, as far as we
know, there is no systematic research on the robustness of unsupervised domain
adaptation models. Hence, we discuss the robustness of unsupervised domain
adaptation against adversarial attacks for the first time. We benchmark various
settings of adversarial attack and defense in domain adaptation, and propose a
cross-domain attack method based on pseudo labels. Most importantly, we analyze
the impact of different datasets, models, attack methods and defense methods.
Our work directly demonstrates the limited robustness of unsupervised domain
adaptation models, and we hope it may facilitate the community to pay more
attention to improving the robustness of such models against attacks. | [
"cs.CV"
] |
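A hedged sketch of a pseudo-label-based cross-domain attack in the spirit of the abstract above: unlabeled target-domain images are perturbed with one FGSM step using the model's own predictions as labels (illustrative only; the paper's exact attack may differ):

```python
import torch
import torch.nn.functional as F

def pseudo_label_fgsm(model, images, eps=4 / 255):
    """One-step FGSM on target-domain images without ground-truth labels."""
    images = images.clone().detach().requires_grad_(True)
    logits = model(images)
    pseudo = logits.argmax(dim=1).detach()        # pseudo labels
    F.cross_entropy(logits, pseudo).backward()
    adv = images + eps * images.grad.sign()       # ascend the loss
    return adv.clamp(0, 1).detach()
```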
Unsupervised feature learning has made great strides with contrastive
learning based on instance discrimination and invariant mapping, as benchmarked
on curated class-balanced datasets. However, natural data could be highly
correlated and long-tail distributed. Natural between-instance similarity
conflicts with the presumed instance distinction, causing unstable training and
poor performance.
Our idea is to discover and integrate between-instance similarity into
contrastive learning, not directly by instance grouping, but by cross-level
discrimination (CLD) between instances and local instance groups. While
invariant mapping of each instance is imposed by attraction within its
augmented views, between-instance similarity could emerge from common repulsion
against instance groups.
Our batch-wise and cross-view comparisons also greatly improve the
positive/negative sample ratio of contrastive learning and achieve better
invariant mapping. To effect both grouping and discrimination objectives, we
impose them on features separately derived from a shared representation. In
addition, we propose normalized projection heads and unsupervised
hyper-parameter tuning for the first time.
Our extensive experimentation demonstrates that CLD is a lean and powerful
add-on to existing methods such as NPID, MoCo, InfoMin, and BYOL on highly
correlated, long-tail, or balanced datasets. It not only achieves new
state-of-the-art on self-supervision, semi-supervision, and transfer learning
benchmarks, but also beats MoCo v2 and SimCLR on every reported performance
attained with a much larger compute. CLD effectively brings unsupervised
learning closer to natural data and real-world applications. Our code is
publicly available at: https://github.com/frank-xwang/CLD-UnsupervisedLearning. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
Accurate real-time traffic forecasting is a core technological problem
in the implementation of intelligent transportation systems. However,
it remains challenging considering the complex spatial and temporal
dependencies among traffic flows. In the spatial dimension, due to the
connectivity of the road network, the traffic flows between linked roads are
closely related. In terms of the temporal factor, although there exists a
tendency among adjacent time points in general, the importance of distant past
points is not necessarily smaller than that of recent past points since traffic
flows are also affected by external factors. In this study, an attention
temporal graph convolutional network (A3T-GCN) traffic forecasting method was
proposed to simultaneously capture global temporal dynamics and spatial
correlations. The A3T-GCN model learns the short-time trend in time series by
using the gated recurrent units and learns the spatial dependence based on the
topology of the road network through the graph convolutional network. Moreover,
the attention mechanism was introduced to adjust the importance of different
time points and assemble global temporal information to improve prediction
accuracy. Experimental results on real-world datasets demonstrate the
effectiveness and robustness of the proposed A3T-GCN. The source code can be
visited at https://github.com/lehaifeng/T-GCN/A3T. | [
"cs.LG",
"stat.ML"
] |
Self-supervised learning by predicting transformations has demonstrated
outstanding performances in both unsupervised and (semi-)supervised tasks.
Among the state-of-the-art methods is the AutoEncoding Transformations (AET) by
decoding transformations from the learned representations of original and
transformed images. Both deterministic and probabilistic AETs rely on the
Euclidean distance to measure the deviation of estimated transformations from
their groundtruth counterparts. However, this assumption is questionable as a
group of transformations often reside on a curved manifold rather staying in a
flat Euclidean space. For this reason, we should use the geodesic to
characterize how an image transform along the manifold of a transformation
group, and adopt its length to measure the deviation between transformations.
Particularly, we present to autoencode a Lie group of homography
transformations PG(2) to learn image representations. For this, we make an
estimate of the intractable Riemannian logarithm by projecting PG(2) to a
subgroup of rotation transformations SO(3) that allows the closed-form
expression of geodesic distances. Experiments demonstrate the proposed AETv2
model outperforms the previous version as well as the other state-of-the-art
self-supervised models in multiple tasks. | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
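For the rotation subgroup SO(3) mentioned above, the geodesic deviation that replaces the Euclidean distance has a simple closed form, the angle of the relative rotation:

```python
import numpy as np

def so3_geodesic(R1, R2):
    """Arc length between rotations: arccos((trace(R1^T R2) - 1) / 2)."""
    cos_theta = (np.trace(R1.T @ R2) - 1.0) / 2.0
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Training penalizes this manifold arc length between estimated and
# ground-truth transformations instead of a flat Euclidean residual.
```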
The capability to detect objects is a core part of autonomous driving. Due to
sensor noise and incomplete data, perfectly detecting and localizing every
object is infeasible. Therefore, it is important for a detector to provide the
amount of uncertainty in each prediction. Providing the autonomous system with
reliable uncertainties enables the vehicle to react differently based on the
level of uncertainty. Previous work has estimated the uncertainty in a
detection by predicting a probability distribution over object bounding boxes.
In this work, we propose a method to improve the ability to learn the
probability distribution by considering the potential noise in the ground-truth
labeled data. Our proposed approach improves not only the accuracy of the
learned distribution but also the object detection performance. | [
"cs.CV",
"cs.LG",
"cs.RO"
] |
Robotic systems are ever more capable of automation and fulfilment of complex
tasks, particularly with reliance on recent advances in intelligent systems,
deep learning and artificial intelligence. However, as robots and humans come
closer in their interactions, the matter of interpretability, or explainability
of robot decision-making processes for the human grows in importance. A
successful interaction and collaboration will only take place through mutual
understanding of underlying representations of the environment and the task at
hand. This is currently a challenge in deep learning systems. We present a
hierarchical deep reinforcement learning system, consisting of a low-level
agent handling the large actions/states space of a robotic system efficiently,
by following the directives of a high-level agent which is learning the
high-level dynamics of the environment and task. This high-level agent forms a
representation of the world and task at hand that is interpretable for a human
operator. The method, which we call Dot-to-Dot, is tested on a MuJoCo-based
model of the Fetch Robotics Manipulator, as well as a Shadow Hand, to test its
performance. Results show efficient learning of complex actions/states spaces
by the low-level agent, and an interpretable representation of the task and
decision-making process learned by the high-level agent. | [
"cs.LG",
"cs.AI",
"cs.RO",
"stat.ML"
] |
The ever-growing advances of deep learning in many areas including vision,
recommendation systems, natural language processing, etc., have led to the
adoption of Deep Neural Networks (DNNs) in production systems. The availability
of large datasets and high computational power are the main contributors to
these advances. The datasets are usually crowdsourced and may contain sensitive
information. This poses serious privacy concerns as this data can be misused or
leaked through various vulnerabilities. Even if the cloud provider and the
communication link are trusted, there are still threats of inference attacks
where an attacker could speculate properties of the data used for training, or
find the underlying model architecture and parameters. In this survey, we
review the privacy concerns brought by deep learning, and the mitigating
techniques introduced to tackle these issues. We also show that there is a gap
in the literature regarding test-time inference privacy, and propose possible
future research directions. | [
"cs.LG",
"cs.CR",
"stat.ML"
] |
We present a new meshing algorithm called guided and augmented meshing,
GAMesh, which uses a mesh prior to generate a surface for the output points of
a point network. By projecting the output points onto this prior and
simplifying the resulting mesh, GAMesh ensures a surface with the same topology
as the mesh prior but whose geometric fidelity is controlled by the point
network. This makes GAMesh independent of both the density and distribution of
the output points, a common artifact in traditional surface reconstruction
algorithms. We show that such a separation of geometry from topology can have
several advantages especially in single-view shape prediction, fair evaluation
of point networks and reconstructing surfaces for networks which output sparse
point clouds. We further show that by training point networks with GAMesh, we
can directly optimize the vertex positions to generate adaptive meshes with
arbitrary topologies. | [
"cs.CV",
"cs.CG",
"cs.GR",
"cs.LG"
] |
In this paper we cast neural networks defined on graphs as message-passing
neural networks (MPNNs) in order to study the distinguishing power of different
classes of such models. We are interested in whether certain architectures are
able to tell vertices apart based on the feature labels given as input with the
graph. We consider two variants of MPNNs: anonymous MPNNs whose message
functions depend only on the labels of vertices involved; and degree-aware
MPNNs in which message functions can additionally use information regarding the
degree of vertices. The former class covers a popular formalism for computing
functions on graphs: graph neural networks (GNN). The latter covers the
so-called graph convolutional networks (GCNs), a recently introduced variant of
GNNs by Kipf and Welling. We obtain lower and upper bounds on the
distinguishing power of MPNNs in terms of the distinguishing power of the
Weisfeiler-Lehman (WL) algorithm. Our results imply that (i) the distinguishing
power of GCNs is bounded by the WL algorithm, but that they are one step ahead;
(ii) the WL algorithm cannot be simulated by "plain vanilla" GCNs but the
addition of a trade-off parameter between features of the vertex and those of
its neighbours (as proposed by Kipf and Welling themselves) resolves this
problem. | [
"cs.LG",
"stat.ML"
] |
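For reference, a plain-Python sketch of the 1-dimensional Weisfeiler-Lehman color refinement that the bounds above are stated against: each round, every vertex's color is re-derived from its own color and the multiset of its neighbors' colors:

```python
def wl_refinement(adj, labels, rounds=3):
    """adj: dict vertex -> neighbor list; labels: dict vertex -> initial label
    (labels must be mutually comparable, e.g., all ints or all strings)."""
    colors = dict(labels)
    for _ in range(rounds):
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        colors = {v: palette[sigs[v]] for v in adj}  # compress to fresh colors
    return colors  # vertices are WL-distinguished iff their colors differ
```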
Neural volumetric representations such as Neural Radiance Fields (NeRF) have
emerged as a compelling technique for learning to represent 3D scenes from
images with the goal of rendering photorealistic images of the scene from
unobserved viewpoints. However, NeRF's computational requirements are
prohibitive for real-time applications: rendering views from a trained NeRF
requires querying a multilayer perceptron (MLP) hundreds of times per ray. We
present a method to train a NeRF, then precompute and store (i.e. "bake") it as
a novel representation called a Sparse Neural Radiance Grid (SNeRG) that
enables real-time rendering on commodity hardware. To achieve this, we
introduce 1) a reformulation of NeRF's architecture, and 2) a sparse voxel grid
representation with learned feature vectors. The resulting scene representation
retains NeRF's ability to render fine geometric details and view-dependent
appearance, is compact (averaging less than 90 MB per scene), and can be
rendered in real-time (higher than 30 frames per second on a laptop GPU).
Actual screen captures are shown in our video. | [
"cs.CV",
"cs.GR"
] |
Deep neural networks have been successfully applied to solving the
video-based person re-identification problem with impressive results reported.
The existing networks for person re-id are designed to extract discriminative
features that preserve the identity information. Usually, whole video frames
are fed into the neural networks and all the regions in a frame are equally
treated. This may be a suboptimal choice because many regions, e.g., background
regions in the video, are not related to the person. Furthermore, the person of
interest may be occluded by another person or something else. These unrelated
regions may hinder person re-identification. In this paper, we introduce a
novel gating mechanism to deep neural networks. Our gating mechanism will learn
which regions are helpful for person re-identification and let these regions
pass the gate. The unrelated background regions or occluding regions are
filtered out by the gate. In each frame, the color channels and optical flow
channels provide quite different information. To better leverage such
information, we generate one gate using the color channels and another gate
using the optical flow channels. These two gates are combined to provide a more
reliable gate with a novel fusion method. Experimental results on two major
datasets demonstrate the performance improvements due to the proposed gating
mechanism. | [
"cs.CV"
] |
Pedestrian trajectory prediction in dynamic scenes remains a challenging and
critical problem in numerous applications, such as self-driving cars and
socially aware robots. Challenges concentrate on capturing pedestrians' motion
patterns and social interactions, as well as handling the future uncertainties.
Recent studies focus on modeling pedestrians' motion patterns with recurrent
neural networks, capturing social interactions with pooling-based or
graph-based methods, and handling future uncertainties by using random Gaussian
noise as the latent variable. However, they do not integrate specific obstacle
avoidance experience (OAE) that may improve prediction performance. For
example, pedestrians' future trajectories are always influenced by others in
front. Here we propose GTPPO (Graph-based Trajectory Predictor with Pseudo
Oracle), an encoder-decoder-based method conditioned on pedestrians' future
behaviors. Pedestrians' motion patterns are encoded with a long short-term
memory unit, which introduces the temporal attention to highlight specific time
steps. Their interactions are captured by a graph-based attention mechanism,
which draws OAE into the data-driven learning process of graph attention.
Future uncertainties are handled by generating multi-modal outputs with an
informative latent variable. Such a variable is generated by a novel pseudo
oracle predictor, which minimizes the knowledge gap between historical and
ground-truth trajectories. Finally, the GTPPO is evaluated on ETH, UCY and
Stanford Drone datasets, and the results demonstrate state-of-the-art
performance. Besides, the qualitative evaluations show successful cases of
handling sudden motion changes in the future. Such findings indicate that GTPPO
can peek into the future. | [
"cs.CV"
] |
The aim of the project is to investigate and assess opportunities for
applying reinforcement learning (RL) for power system control. As a proof of
concept (PoC), voltage control of thermostatically controlled loads (TCLs) for
power consumption regulation was developed using Modelica-based pipeline. The
Q-learning RL algorithm has been validated for deterministic and stochastic
initialization of TCLs. The latter modelling is closer to real grid behaviour and makes the control development more challenging, given the stochastic nature of load switching. In addition, the paper shows the influence of Q-learning
parameters, including discretization of state-action space, on the controller
performance. | [
"cs.LG",
"cs.SY",
"eess.SY",
"stat.ML"
] |
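For reference, a minimal tabular Q-learning sketch over a discretized state-action space; the grid sizes and hyperparameters are placeholders, and the Modelica TCL environment is not reproduced here:

```python
import numpy as np

n_states, n_actions = 20, 5          # discretization choice (assumption)
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def act(s):
    """Epsilon-greedy action selection over the discretized action grid."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(Q[s].argmax())

def q_update(s, a, r, s_next):
    """Bellman update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
```

A finer discretization enlarges the table and slows convergence, which is the trade-off the paper's parameter study addresses.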
Reinforcement learning methods for traffic signal control have gained increasing interest recently and achieved better performance than traditional transportation methods. However, reinforcement-learning-based methods usually require large amounts of training data and computational resources, which largely limits their application to real-world traffic signal control. This has drawn attention to meta-learning, which enables data-efficient and fast-adapting training by leveraging knowledge from previous learning experiences. In this paper, we propose a novel value-based
Bayesian meta-reinforcement learning framework BM-DQN to robustly speed up the
learning process in new scenarios by utilizing well-trained prior knowledge
learned from existing scenarios. The framework builds on our proposed fast-adaptation variant of Gradient-EM Bayesian meta-learning and the fast-update advantage of DQN, which together allow fast adaptation to new scenarios with continual learning ability and robustness to uncertainty. Experiments on 2D
navigation and traffic signal control show that our proposed framework adapts
more quickly and robustly in new scenarios than previous methods and, in particular, exhibits much better continual learning ability in heterogeneous scenarios. | [
"cs.LG",
"stat.ML"
] |
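The Bayesian (Gradient-EM) machinery is beyond a short sketch, but the fast-adaptation usage pattern can be illustrated: a meta-learned prior Q-network initializes each new scenario and is refined with a few gradient steps. All names and the plain TD-regression loss below are our simplifications:

```python
import copy
import torch
import torch.nn as nn

def adapt_to_new_scenario(prior_qnet: nn.Module, batches, lr=1e-3, steps=10):
    """Fine-tune a copy of the meta-learned prior on a new scenario's transitions.

    `batches` yields (state, action, td_target) tensors; building the TD
    targets from a replay buffer is omitted for brevity.
    """
    qnet = copy.deepcopy(prior_qnet)   # start from the well-trained prior
    opt = torch.optim.Adam(qnet.parameters(), lr=lr)
    for _ in range(steps):             # fast adaptation: only a few updates
        for s, a, target in batches:
            q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
            loss = nn.functional.mse_loss(q, target)
            opt.zero_grad(); loss.backward(); opt.step()
    return qnet
```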
Collaborative personalization, such as through learned user representations
(embeddings), can improve the prediction accuracy of neural-network-based
models significantly. We propose Federated User Representation Learning (FURL),
a simple, scalable, privacy-preserving and resource-efficient way to utilize
existing neural personalization techniques in the Federated Learning (FL)
setting. FURL divides model parameters into federated and private parameters.
Private parameters, such as private user embeddings, are trained locally, but
unlike federated parameters, they are not transferred to or averaged on the
server. We show theoretically that this parameter split does not affect
training for most model personalization approaches. Storing user embeddings
locally not only preserves user privacy, but also improves memory locality of
personalization compared to on-server training. We evaluate FURL on two
datasets, demonstrating significant improvements in model quality, with 8% and 51% performance increases, while reaching approximately the same level of performance as centralized training, with only 0% and 4% reductions. Furthermore, we show that
user embeddings learned in FL and the centralized setting have a very similar
structure, indicating that FURL can learn collaboratively through the shared
parameters while preserving user privacy. | [
"cs.LG",
"stat.ML"
] |
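A minimal sketch of the federated/private parameter split, assuming private parameters are identified by name (the rule and helper names are ours): only the federated part leaves the device and is averaged on the server.

```python
import torch

def split_params(model, private_key="user_embedding"):
    """Partition a model's state dict into federated and private parts."""
    federated, private = {}, {}
    for name, tensor in model.state_dict().items():
        (private if private_key in name else federated)[name] = tensor
    return federated, private

def fed_avg(client_states, weights):
    """Weighted average of the *federated* parameters only (float tensors assumed)."""
    total = sum(weights)
    return {name: sum(w * st[name] for st, w in zip(client_states, weights)) / total
            for name in client_states[0]}
```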
Reconstruction tasks in computer vision fundamentally aim to recover an underdetermined signal from a set of noisy measurements. Examples include
super-resolution, image denoising, and non-rigid structure from motion, all of
which have seen recent advancements through deep learning. However, earlier
work made extensive use of sparse signal reconstruction frameworks (e.g.,
convolutional sparse coding). While this work was ultimately surpassed by deep
learning, it rested on a much more developed theoretical framework. Recent work
by Papyan et al. provides a bridge between the two approaches by showing how a
convolutional neural network (CNN) can be viewed as an approximate solution to
a convolutional sparse coding (CSC) problem. In this work we argue that for
some types of inverse problems the CNN approximation breaks down, leading to
poor performance. We argue that for these types of problems the CSC approach
should be used instead and validate this argument with empirical evidence.
Specifically we identify JPEG artifact reduction and non-rigid trajectory
reconstruction as challenging inverse problems for CNNs and demonstrate state-of-the-art performance on them using a CSC method. Furthermore, we offer some
practical improvements to this model and its application, and also show how
insights from the CSC model can be used to make CNNs effective in tasks where
their naive application fails. | [
"cs.CV"
] |
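As context for the CSC approach, a minimal ISTA sketch for the convolutional sparse coding objective (step size and padding handled naively; this is not the paper's solver):

```python
import torch
import torch.nn.functional as F

def csc_ista(y, D, lam=0.1, L=10.0, steps=50):
    """ISTA for min_z 0.5 * ||y - conv(z, D)||^2 + lam * ||z||_1.

    y: (B, C, H, W) observation; D: (C, K, k, k) dictionary of K filters
    with odd kernel size k. L is an assumed Lipschitz bound on the data term.
    """
    pad = D.shape[-1] // 2
    z = torch.zeros(y.shape[0], D.shape[1], y.shape[2], y.shape[3],
                    device=y.device)
    for _ in range(steps):
        r = F.conv2d(z, D, padding=pad) - y              # residual D*z - y
        grad = F.conv_transpose2d(r, D, padding=pad)     # adjoint D^T applied to r
        z = z - grad / L                                 # gradient step
        z = torch.sign(z) * torch.clamp(z.abs() - lam / L, min=0.0)  # soft-threshold
    return z
```

One iteration (a convolution followed by a shrinkage nonlinearity) mirrors a CNN layer, which is the correspondence Papyan et al. formalize.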
In recent years, deep learning has made great progress in many fields such as
image recognition, natural language processing, speech recognition and video
super-resolution. In this survey, we comprehensively investigate 33
state-of-the-art video super-resolution (VSR) methods based on deep learning.
It is well known that the leverage of information within video frames is
important for video super-resolution. Thus we propose a taxonomy and classify
the methods into six sub-categories according to the ways of utilizing
inter-frame information. Moreover, the architectures and implementation details
of all the methods are described in detail. Finally, we summarize and compare the performance of representative VSR methods on several benchmark datasets. We
also discuss some challenges, which need to be further addressed by researchers
in the community of VSR. To the best of our knowledge, this work is the first
systematic review of VSR tasks, and it is expected to contribute to the development of research in this area and potentially deepen our understanding of deep-learning-based VSR techniques. | [
"cs.CV",
"eess.IV"
] |
We propose a framework for the completely unsupervised learning of latent
object properties from their interactions: the perception-prediction network
(PPN). Consisting of a perception module that extracts representations of
latent object properties and a prediction module that uses those extracted
properties to simulate system dynamics, the PPN can be trained in an end-to-end
fashion purely from samples of object dynamics. The representations of latent
object properties learned by PPNs not only are sufficient to accurately
simulate the dynamics of systems composed of previously unseen objects, but
also can be translated directly into human-interpretable properties (e.g.,
mass, coefficient of restitution) in an entirely unsupervised manner.
Crucially, PPNs also generalize to novel scenarios: their gradient-based
training can be applied to many dynamical systems and their graph-based
structure functions over systems composed of different numbers of objects. Our
results demonstrate the efficacy of graph-based neural architectures in
object-centric inference and prediction tasks, and our model has the potential
to discover relevant object properties in systems that are not yet well
understood. | [
"cs.LG",
"cs.AI",
"cs.CV",
"stat.ML"
] |
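A much-simplified sketch of the perception-prediction split described above, treating each object independently (the actual PPN operates on a graph over interacting objects); all names and sizes are assumptions:

```python
import torch
import torch.nn as nn

class PPNSketch(nn.Module):
    """perceive: observed rollout -> latent property code; predict: next state."""
    def __init__(self, state_dim=4, prop_dim=2, hid=64):
        super().__init__()
        self.perceive = nn.GRU(state_dim, hid, batch_first=True)
        self.to_props = nn.Linear(hid, prop_dim)   # latent mass, restitution, ...
        self.predict = nn.Sequential(
            nn.Linear(state_dim + prop_dim, hid), nn.ReLU(),
            nn.Linear(hid, state_dim))

    def forward(self, rollout):            # rollout: (B, T, state_dim) per object
        _, h = self.perceive(rollout)      # h: (1, B, hid) summary of the motion
        props = self.to_props(h[-1])       # (B, prop_dim) inferred properties
        last = rollout[:, -1]              # condition prediction on latest state
        next_state = self.predict(torch.cat([last, props], dim=-1))
        return next_state, props           # train end-to-end with MSE to truth
```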
Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
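The autoregressive factorization p(x) = Π_i p(x_i | x_1, ..., x_{i-1}) can be made concrete with a raster-scan sampling loop; the model interface below (per-pixel logits over 256 discrete values) is an assumption, and this naive loop runs one forward pass per sub-pixel:

```python
import torch

@torch.no_grad()
def sample_raster_scan(model, H=32, W=32, C=3, levels=256):
    """Sample an image pixel by pixel along the two spatial dimensions.

    `model` is assumed to map an image (1, C, H, W) to logits of shape
    (1, levels, C, H, W); masking/recurrence inside the model must ensure
    each output depends only on previously generated pixels.
    """
    x = torch.zeros(1, C, H, W)
    for i in range(H):
        for j in range(W):
            for c in range(C):
                logits = model(x)                        # (1, levels, C, H, W)
                probs = torch.softmax(logits[0, :, c, i, j], dim=0)
                v = torch.multinomial(probs, 1).item()   # discrete pixel value
                x[0, c, i, j] = v / (levels - 1)         # rescale to [0, 1]
    return x
```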
Person re-identification (re-ID) concerns the matching of subject images
across different camera views in a multi-camera surveillance system. One of the
major challenges in person re-ID is pose variations across the camera network,
which significantly affects the appearance of a person. Existing development
data lack adequate pose variations to carry out effective training of person
re-ID systems. To solve this issue, in this paper we propose an end-to-end
pose-driven attention-guided generative adversarial network, to generate
multiple poses of a person. We propose to attentively learn and transfer the
subject pose through an attention mechanism. A semantic-consistency loss is
proposed to preserve the semantic information of the person during pose
transfer. To ensure that fine image details remain realistic after pose translation, an appearance discriminator is used, while a pose discriminator ensures that the pose of the transferred images exactly matches the target pose.
We show that by incorporating the proposed approach in a person
re-identification framework, realistic pose-transferred images and
state-of-the-art re-identification results can be achieved. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
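One plausible way the losses above could be combined for the generator is sketched below; the network interfaces, loss weights, and the pooled-feature form of the semantic-consistency term are our assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def generator_loss(G, D_app, D_pose, sem_net, src_img, tgt_pose,
                   w_app=1.0, w_pose=1.0, w_sem=10.0):
    fake = G(src_img, tgt_pose)                    # pose-transferred image
    real_label = torch.ones(fake.size(0), 1)
    loss_app = bce(D_app(fake), real_label)        # fool the appearance critic
    loss_pose = bce(D_pose(fake, tgt_pose), real_label)  # match the target pose
    # Semantic consistency (pooled to sidestep spatial misalignment across poses):
    f_fake = sem_net(fake).mean(dim=(2, 3))
    f_src = sem_net(src_img).mean(dim=(2, 3))
    loss_sem = (f_fake - f_src).abs().mean()
    return w_app * loss_app + w_pose * loss_pose + w_sem * loss_sem
```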