text (string, length 29–3.31k) | label (list of strings, length 1–11)
---|---
Fast methods for convolution and correlation underlie a variety of
applications in computer vision and graphics, including efficient filtering,
analysis, and simulation. However, standard convolution and correlation are
inherently limited to fixed filters: spatial adaptation is impossible without
sacrificing efficient computation. In early work, Freeman and Adelson showed
how steerable filters can address this limitation, providing a way to rotate
the filter as it is passed over the signal. In this work, we provide a
general, representation-theoretic framework that allows spatially varying
linear transformations to be applied to the filter. This framework enables
efficient implementation of extended convolution and correlation for
transformation groups such as rotation (in 2D and 3D) and scale, and provides a
new interpretation for previous methods including steerable filters and the
generalized Hough transform. We present applications to pattern matching, image
feature description, vector field visualization, and adaptive image filtering. | [
"cs.CV",
"cs.GR"
] |
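To make the steerability idea concrete, here is a minimal NumPy/SciPy sketch of
the classic Freeman-Adelson construction the abstract above builds on (the
fixed steerable-filter baseline, not the paper's extended
representation-theoretic framework; names and parameters are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def basis_responses(image, sigma=2.0):
    # Basis filtrations: x- and y-derivative-of-Gaussian responses.
    gx = gaussian_filter(image, sigma, order=(0, 1))  # d/dx (along columns)
    gy = gaussian_filter(image, sigma, order=(1, 0))  # d/dy (along rows)
    return gx, gy

def steer(gx, gy, theta):
    # Steerability: the response at angle theta is an exact linear
    # combination of the two basis responses.
    return np.cos(theta) * gx + np.sin(theta) * gy

img = np.random.rand(64, 64)
gx, gy = basis_responses(img)                     # filter the image once
oriented = [steer(gx, gy, t) for t in np.linspace(0, np.pi, 8)]
```

Because steering is a per-pixel linear combination, theta may vary spatially at
no extra filtering cost, which is the opening the paper's framework generalizes
to other transformation groups.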
We discuss a multiple-play multi-armed bandit (MAB) problem in which several
arms are selected at each round. Recently, Thompson sampling (TS), a
Bayesian-inspired randomized algorithm, has attracted much attention for its
excellent empirical performance, and it has been shown to achieve an optimal
regret bound in the standard single-play MAB problem. In this paper, we propose the
multiple-play Thompson sampling (MP-TS) algorithm, an extension of TS to the
multiple-play MAB problem, and discuss its regret analysis. We prove that MP-TS
for binary rewards has the optimal regret upper bound that matches the regret
lower bound provided by Anantharam et al. (1987). Therefore, MP-TS is the first
computationally efficient algorithm with optimal regret. A set of computer
simulations was also conducted, which compared MP-TS with state-of-the-art
algorithms. We also propose a modification of MP-TS, which is shown to have
better empirical performance. | [
"stat.ML",
"cs.LG"
] |
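For readers unfamiliar with MP-TS, the Beta-Bernoulli version for binary
rewards is compact; below is a hedged sketch (the environment and
hyperparameters are illustrative, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def mp_ts(true_means, n_plays, horizon):
    """Multiple-play Thompson sampling for binary rewards (sketch).

    Each round: draw one posterior sample per arm, play the n_plays arms
    with the largest samples, then update the Beta posteriors.
    """
    k = len(true_means)
    alpha, beta = np.ones(k), np.ones(k)         # Beta(1, 1) priors
    total = 0.0
    for _ in range(horizon):
        samples = rng.beta(alpha, beta)          # one draw per arm
        chosen = np.argsort(samples)[-n_plays:]  # top-L arms by sample
        rewards = rng.binomial(1, true_means[chosen])
        alpha[chosen] += rewards
        beta[chosen] += 1 - rewards
        total += rewards.sum()
    return total

print(mp_ts(np.array([0.9, 0.8, 0.5, 0.3, 0.2]), n_plays=2, horizon=10000))
```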
3D hand pose estimation has received a lot of attention for its wide range of
applications and has made great progress owing to the development of deep
learning. Existing approaches mainly consider different input modalities and
settings, such as monocular RGB, multi-view RGB, depth, or point cloud, to
provide sufficient cues for resolving variations caused by self-occlusion and
viewpoint change. In contrast, this work aims to address the less-explored idea
of using minimal information to estimate 3D hand poses. We present a new
architecture that automatically learns guidance from implicit depth
perception and resolves the ambiguity of hand pose through end-to-end training.
The experimental results show that 3D hand poses can be accurately estimated
from solely {\em hand silhouettes} without using depth maps. Extensive
evaluations on the {\em 2017 Hands In the Million Challenge} (HIM2017)
benchmark dataset further demonstrate that our method achieves comparable or
even better performance than recent depth-based approaches and sets the state
of the art of its kind for estimating 3D hand poses from silhouettes. | [
"cs.CV"
] |
Graph convolutional networks (GCNs) have been very successful in
skeleton-based human action recognition where the sequence of skeletons is
modeled as a graph. However, most GCN-based methods in this area train a
deep feed-forward network with a fixed topology, which leads to high
computational complexity and restricts their application in low-computation
scenarios. In this paper, we propose a method to automatically find a compact
and problem-specific topology for spatio-temporal graph convolutional networks
in a progressive manner. Experimental results on two widely used datasets for
skeleton-based human action recognition indicate that the proposed method has
competitive or even better classification performance compared to the
state-of-the-art methods with much lower computational complexity. | [
"cs.CV",
"cs.LG"
] |
Convolutional neural networks have been applied to a wide variety of computer
vision tasks. Recent advances in semantic segmentation have enabled their
application to medical image segmentation. While most CNNs use two-dimensional
kernels, recent CNN-based publications on medical image segmentation featured
three-dimensional kernels, allowing full access to the three-dimensional
structure of medical images. Though closely related to semantic segmentation,
medical image segmentation includes specific challenges that need to be
addressed, such as the scarcity of labelled data, the high class imbalance
found in the ground truth and the high memory demand of three-dimensional
images. In this work, a CNN-based method with three-dimensional filters is
demonstrated and applied to hand and brain MRI. Two modifications to an
existing CNN architecture are discussed, along with methods for addressing the
aforementioned challenges. While most of the existing literature on medical
image segmentation focuses on soft tissue and the major organs, this work is
validated on data both from the central nervous system as well as the bones of
the hand. | [
"cs.CV"
] |
Reinforcement learning (RL) has recently been introduced to interactive
recommender systems (IRS) because of its nature of learning from dynamic
interactions and planning for long-run performance. Since an IRS typically has
thousands of items to recommend (i.e., thousands of actions), most existing
RL-based methods fail to handle such a large discrete action space and thus
become inefficient. The existing work that tries to deal with
the large discrete action space problem by utilizing the deep deterministic
policy gradient framework suffers from the inconsistency between the continuous
action representation (the output of the actor network) and the real discrete
action. To avoid such inconsistency and achieve high efficiency and
recommendation effectiveness, in this paper, we propose a Tree-structured
Policy Gradient Recommendation (TPGR) framework, where a balanced hierarchical
clustering tree is built over the items and picking an item is formulated as
seeking a path from the root to a certain leaf of the tree. Extensive
experiments on carefully-designed environments based on two real-world datasets
demonstrate that our model provides superior recommendation performance and
significant efficiency improvement over state-of-the-art methods. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
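The tree-structured decomposition is what tames the action space in the
abstract above; here is a minimal sketch of the path decoding, where leaf order
stands in for the paper's balanced hierarchical clustering, so the mapping is
illustrative only:

```python
def tree_depth(num_items, branching):
    # Smallest d with branching ** d >= num_items (integer-safe).
    d, capacity = 0, 1
    while capacity < num_items:
        capacity *= branching
        d += 1
    return d

def item_to_path(item_id, num_items, branching):
    """Map a leaf (item) to its root-to-leaf decision path.

    Choosing among num_items items becomes d sequential decisions with
    only `branching` choices each, so every policy step faces a small
    action space instead of the full item catalogue.
    """
    path = []
    for _ in range(tree_depth(num_items, branching)):
        path.append(item_id % branching)
        item_id //= branching
    return path[::-1]  # decisions ordered from root to leaf

# 1,000,000 items with branching factor 100 -> 3 decisions of 100 choices.
print(item_to_path(item_id=123456, num_items=10**6, branching=100))
```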
The large domain discrepancy between faces captured in the polarimetric (or
conventional) thermal domain and the visible domain makes cross-domain face
recognition quite challenging for both human examiners and computer vision
algorithms. Previous approaches utilize a two-step procedure (visible feature
estimation and visible image reconstruction) to synthesize the visible image
given the corresponding polarimetric thermal image. However, these are regarded
as two disjoint steps and hence may hinder the performance of visible face
reconstruction. We argue that joint optimization would be a better way to
reconstruct more photo-realistic images for both computer vision algorithms and
human examiners. To this end, this paper proposes a Generative
Adversarial Network-based Visible Face Synthesis (GAN-VFS) method to synthesize
more photo-realistic visible face images from their corresponding polarimetric
images. To ensure that the encoded visible-features contain more semantically
meaningful information in reconstructing the visible face image, a guidance
sub-network is incorporated into the training procedure. To achieve
photo-realism while preserving discriminative characteristics in the
reconstructed outputs, an identity loss combined with a perceptual loss is optimized in
the framework. Multiple experiments evaluated on different experimental
protocols demonstrate that the proposed method achieves state-of-the-art
performance. | [
"cs.CV"
] |
We study offline reinforcement learning (RL), which aims to learn an optimal
policy based on a dataset collected a priori. Due to the lack of further
interactions with the environment, offline RL suffers from the insufficient
coverage of the dataset, which eludes most existing theoretical analysis. In
this paper, we propose a pessimistic variant of the value iteration algorithm
(PEVI), which incorporates an uncertainty quantifier as the penalty function.
Such a penalty function simply flips the sign of the bonus function for
promoting exploration in online RL, which makes it easily implementable and
compatible with general function approximators.
Without assuming the sufficient coverage of the dataset, we establish a
data-dependent upper bound on the suboptimality of PEVI for general Markov
decision processes (MDPs). When specialized to linear MDPs, it matches the
information-theoretic lower bound up to multiplicative factors of the dimension
and horizon. In other words, pessimism is not only provably efficient but also
minimax optimal. In particular, given the dataset, the learned policy serves as
the "best effort" among all policies, as no other policies can do better. Our
theoretical analysis identifies the critical role of pessimism in eliminating a
notion of spurious correlation, which emerges from the "irrelevant"
trajectories that are less covered by the dataset and not informative for the
optimal policy. | [
"cs.LG",
"cs.AI",
"math.OC",
"math.ST",
"stat.ML",
"stat.TH"
] |
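The sign-flip described above is easiest to see in a tabular sketch: where
online value iteration would add an exploration bonus, PEVI subtracts it. The
snippet below uses a count-based uncertainty quantifier for illustration (the
paper works with linear MDPs and a feature-based quantifier, so treat this as a
simplified stand-in):

```python
import numpy as np

def pessimistic_value_iteration(P_hat, r_hat, counts, horizon, beta=1.0):
    """Tabular sketch of pessimistic value iteration.

    P_hat:  (S, A, S) transitions estimated from the offline dataset
    r_hat:  (S, A) estimated rewards
    counts: (S, A) dataset visit counts (the uncertainty quantifier)
    """
    S, A = r_hat.shape
    # Online RL would ADD this bonus to promote exploration;
    # pessimism flips its sign and subtracts it instead.
    penalty = beta / np.sqrt(np.maximum(counts, 1))
    V = np.zeros(S)
    policy = np.zeros((horizon, S), dtype=int)
    for h in reversed(range(horizon)):
        Q = r_hat + P_hat @ V - penalty       # pessimistic Q-estimate
        Q = np.clip(Q, 0.0, horizon - h)      # keep values in a valid range
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy, V
```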
Artificial neural networks rely on large numbers of coefficients whose
adjustment demands a great deal of computing power, especially in deep learning
networks. However, there exist coefficient-free, extremely fast
indexing-based technologies that work, for instance, in Google search engines,
in genome sequencing, etc. The paper discusses the use of indexing-based
methods for pattern recognition. It is shown that, for pattern recognition
applications, such indexing methods replace the fully inverted files typically
employed in search engines with inverse patterns. Not only does such inversion
provide automatic feature extraction, a distinguishing mark of deep learning,
but, unlike deep learning, pattern inversion supports almost instantaneous
learning as a consequence of the absence of coefficients. The paper discusses a
pattern inversion formalism that makes use of a novel pattern transform and its
application to unsupervised instant learning. Examples
demonstrate a view-angle independent recognition of three-dimensional objects,
such as cars, against arbitrary background, prediction of remaining useful life
of aircraft engines, and other applications. In conclusion, it is noted that,
in neurophysiology, the function of the neocortical mini-column has been widely
debated since 1957. This paper hypothesizes that, mathematically, the cortical
mini-column can be described as an inverse pattern, which physically serves as
a connection multiplier expanding associations of inputs with relevant pattern
classes. | [
"cs.CV",
"cs.LG",
"cs.NE",
"C.3; I.2; I.5"
] |
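As a toy illustration of the inverted-file idea above (our own minimal
construction, not the paper's formalism), "learning" reduces to indexing which
classes each feature occurred in, and prediction to a lookup-and-vote:

```python
from collections import Counter, defaultdict

class InvertedPatternIndex:
    """Coefficient-free toy classifier built on an inverted index.

    Training is a single indexing pass with no weight adjustment, which is
    why learning is almost instantaneous; prediction looks up each query
    feature and lets the postings vote.
    """
    def __init__(self):
        self.postings = defaultdict(Counter)   # feature -> class counts

    def learn(self, features, label):
        for f in features:
            self.postings[f][label] += 1

    def predict(self, features):
        votes = Counter()
        for f in features:
            votes.update(self.postings.get(f, {}))
        return votes.most_common(1)[0][0] if votes else None

idx = InvertedPatternIndex()
idx.learn({"wheel", "window", "bumper"}, "car")
idx.learn({"wing", "window", "engine"}, "plane")
print(idx.predict({"bumper", "window"}))       # -> 'car'
```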
The act of explaining across two parties is a feedback loop, where one
provides information on what needs to be explained and the other provides an
explanation relevant to this information. We apply a reinforcement learning
framework which emulates this format by providing explanations based on the
explainee's current mental model. We conduct novel online human experiments
where explanations generated by various explanation methods are selected and
presented to participants, using policies which observe participants' mental
models, in order to optimize an interpretability proxy. Our results suggest
that mental model-based policies (anchored in our proposed state
representation) may increase interpretability over multiple sequential
explanations, when compared to a random selection baseline. This work provides
insight into how to select explanations which increase relevant information for
users, and into conducting human-grounded experimentation to understand
interpretability. | [
"cs.LG",
"cs.AI",
"cs.HC",
"stat.ML"
] |
In this work, we present a novel dataset consisting of eye movements and
verbal descriptions recorded synchronously over images. Using this data, we
study the differences in human attention during free-viewing and image
captioning tasks. We look into the relationship between human attention and
language constructs during perception and sentence articulation. We also
analyse attention deployment mechanisms in the top-down soft attention approach
that is argued to mimic human attention in captioning tasks, and investigate
whether visual saliency can help image captioning. Our study reveals that (1)
human attention behaviour differs between free-viewing and image description
tasks, with humans fixating on a greater variety of regions in the latter,
(2) there is a strong relationship between described objects and attended
objects ($97\%$ of the described objects are attended), (3) a
convolutional neural network as feature encoder accounts for human-attended
regions during image captioning to a great extent (around $78\%$), (4)
soft-attention mechanism differs from human attention, both spatially and
temporally, and there is low correlation between caption scores and attention
consistency scores, which indicates a large gap between humans and machines
with regard to top-down attention, and (5) by integrating the soft attention model
with image saliency, we can significantly improve the model's performance on
Flickr30k and MSCOCO benchmarks. The dataset can be found at:
https://github.com/SenHe/Human-Attention-in-Image-Captioning. | [
"cs.CV"
] |
The recent vision transformer (i.e., for image classification) learns
non-local attentive interactions among different patch tokens. However, prior
works fail to learn the cross-scale dependencies of different pixels, the semantic
correspondence of different labels, and the consistency of the feature
representations and semantic embeddings, which are critical for biomedical
segmentation. In this paper, we tackle the above issues by proposing a unified
transformer network, termed Multi-Compound Transformer (MCTrans), which
incorporates rich feature learning and semantic structure mining into a unified
framework. Specifically, MCTrans embeds the multi-scale convolutional features
as a sequence of tokens and performs intra- and inter-scale self-attention,
rather than single-scale attention in previous works. In addition, a learnable
proxy embedding is also introduced to model semantic relationship and feature
enhancement by using self-attention and cross-attention, respectively. MCTrans
can be easily plugged into a UNet-like network and attains a significant
improvement over the state-of-the-art methods in biomedical image segmentation
on six standard benchmarks. For example, MCTrans outperforms UNet by 3.64%,
3.71%, 4.34%, 2.8%, 1.88%, and 1.57% on the Pannuke, CVC-Clinic, CVC-Colon,
Etis, Kvasir, and ISIC2018 datasets, respectively. Code is available at
https://github.com/JiYuanFeng/MCTrans. | [
"cs.CV"
] |
Superpixels serve as a powerful preprocessing tool in many computer vision
tasks. By using a superpixel representation, the number of image primitives can
be reduced by orders of magnitude. The majority of superpixel methods
use handcrafted features, which usually do not translate well into strong
adherence to object boundaries. A few recent superpixel methods have introduced
deep learning into the superpixel segmentation process. However, none of these
methods is able to produce superpixels in near real-time, which is crucial to
the applicability of a superpixel method in practice. In this work, we propose
a two-stage graph-based framework for superpixel segmentation. In the first
stage, we introduce an efficient Deep Affinity Learning (DAL) network that
learns pairwise pixel affinities by aggregating multi-scale information. In the
second stage, we propose a highly efficient superpixel method called
Hierarchical Entropy Rate Segmentation (HERS). Using the learned affinities
from the first stage, HERS builds a hierarchical tree structure that can
produce any number of highly adaptive superpixels instantaneously. We
demonstrate, through visual and numerical experiments, the effectiveness and
efficiency of our method compared to various state-of-the-art superpixel
methods. | [
"cs.CV",
"stat.AP",
"stat.ML",
"I.4; I.5"
] |
Fully convolutional neural networks (FCNs) have been dominating the face
detection task for a few years owing to their inherent capability of
sliding-window search with shared kernels, which eliminates redundant
computation; most recent state-of-the-art methods such as Faster-RCNN, SSD,
YOLO and FPN use an FCN as their backbone. This raises a question: can we find
a universal strategy to further accelerate FCNs with higher accuracy, and
thereby accelerate all recent FCN-based methods? To
analyze this, we decompose the face searching space into two orthogonal
directions, `scale' and `spatial'. Only a few coordinates in the space spanned
by the two base vectors correspond to foreground, so if the FCN could ignore
most of the other points, the search space and the false alarm rate would be
significantly reduced. Based on this philosophy, a novel method named scale estimation
and spatial attention proposal ($S^2AP$) is proposed to pay attention to some
specific scales and valid locations in the image pyramid. Furthermore, we adopt
a masked-convolution operation based on the attention result to accelerate FCN
calculation. Experiments show that the FCN-based RPN can be accelerated by
about $4\times$ with the help of $S^2AP$ and masked-FCN, while at the same time it
can also achieve the state-of-the-art on FDDB, AFW and MALF face detection
benchmarks as well. | [
"cs.CV"
] |
The goal of task transfer in reinforcement learning is to migrate an agent's
action policy from the source task to the target task. Given their
successes on robotic action planning, current methods mostly rely on two
requirements: exactly-relevant expert demonstrations or the explicitly-coded
cost function on target task, both of which, however, are inconvenient to
obtain in practice. In this paper, we relax these two strong conditions by
developing a novel task transfer framework where the expert preference is
applied as guidance. In particular, we alternate between the following two
steps: first, experts apply pre-defined preference rules to select related
expert demonstrations for the target task; second, based on the selection
result, we learn the target cost function and trajectory distribution
simultaneously via enhanced Adversarial MaxEnt IRL and generate more
trajectories by the learned target distribution for the next preference
selection. Theoretical analyses of the distribution learning and the
convergence of the proposed algorithm are provided. Extensive simulations on
several benchmarks have been conducted for further verifying the effectiveness
of the proposed method. | [
"cs.LG",
"cs.RO",
"stat.ML"
] |
Finding an optimal matching in a weighted graph is a standard combinatorial
problem. We consider its semi-bandit version where either a pair or a full
matching is sampled sequentially. We prove that it is possible to leverage a
rank-1 assumption on the adjacency matrix to reduce the sample complexity and
the regret of off-the-shelf algorithms up to reaching a linear dependency in
the number of vertices (up to poly log terms). | [
"stat.ML",
"cs.LG"
] |
Anticipating human motion in crowded scenarios is essential for developing
intelligent transportation systems, social-aware robots and advanced video
surveillance applications. A key aspect of this task is the inherently
multi-modal nature of human paths, which admits multiple socially acceptable
futures when human interactions are involved. To this end, we propose
a generative architecture for multi-future trajectory predictions based on
Conditional Variational Recurrent Neural Networks (C-VRNNs). Conditioning
mainly relies on prior belief maps, representing most likely moving directions
and forcing the model to consider past observed dynamics in generating future
positions. Human interactions are modeled with a graph-based attention
mechanism enabling an online attentive hidden state refinement of the recurrent
estimation. To corroborate our model, we perform extensive experiments on
publicly-available datasets (e.g., ETH/UCY, Stanford Drone Dataset, STATS
SportVU NBA, Intersection Drone Dataset and TrajNet++) and demonstrate its
effectiveness in crowded scenes compared to several state-of-the-art methods. | [
"cs.CV",
"cs.LG"
] |
Synthetic images are one of the most promising solutions to avoid high costs
associated with generating annotated datasets to train supervised convolutional
neural networks (CNN). However, to allow networks to generalize knowledge from
synthetic to real images, domain adaptation methods are necessary. This paper
implements unsupervised domain adaptation (UDA) methods on an anchorless object
detector. Given their good performance, anchorless detectors are increasingly
attracting attention in the field of object detection. While their results are
comparable to the well-established anchor-based methods, anchorless detectors
are considerably faster. In our work, we use CenterNet, one of the most recent
anchorless architectures, for a domain adaptation problem involving synthetic
images. Taking advantage of the architecture of anchorless detectors, we
propose to adjust two UDA methods, viz., entropy minimization and maximum
squares loss, originally developed for segmentation, to object detection. Our
results show that the proposed UDA methods can increase the mAP from 61% to
69% with respect to direct transfer on the considered anchorless detector. The
code is available: https://github.com/scheckmedia/centernet-uda. | [
"cs.CV"
] |
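Both adapted losses have standard forms in the segmentation UDA literature;
here is a hedged PyTorch sketch applied to dense class logits (shapes and usage
are illustrative, not tied to the CenterNet heads used in the paper):

```python
import torch
import torch.nn.functional as F

def entropy_minimization_loss(logits):
    """Shannon entropy of per-location class probabilities; minimizing it
    pushes unlabeled target-domain predictions toward confident outputs."""
    p = F.softmax(logits, dim=1)              # (N, C, H, W)
    log_p = F.log_softmax(logits, dim=1)
    return -(p * log_p).sum(dim=1).mean()

def maximum_squares_loss(logits):
    """Maximum-squares objective: a flatter-gradient alternative that keeps
    already-confident locations from dominating the update."""
    p = F.softmax(logits, dim=1)
    return -(p ** 2).sum(dim=1).mean() / 2

logits = torch.randn(2, 19, 32, 32, requires_grad=True)
entropy_minimization_loss(logits).backward()   # or maximum_squares_loss
```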
A representation is said to be universal if it encodes any element of the
visual world (e.g., objects, scenes) in any configuration (e.g., scale,
context). While pure universal representations cannot be expected, the goal in
the literature is to improve the level of universality of a given
representation. To do so, the state-of-the-art approach consists in learning
CNN-based representations on a diversified training problem (e.g., ImageNet
modified by adding annotated data). While this effectively increases
universality, such an approach still requires a large annotation effort. In
this work, we propose two methods to improve universality while paying special
attention to limiting the need for annotated data. We
also propose a unified framework of the methods based on the diversifying of
the training problem. Finally, to better match Atkinson's cognitive study on
universal human representations, we propose to rely on a transfer-learning
scheme as well as a new metric to evaluate universality. The latter allows us
to demonstrate the benefit of our methods on 10 target problems, covering
the classification task and a variety of visual domains. | [
"cs.CV",
"cs.LG"
] |
As a successful deep model applied in image super-resolution (SR), the
Super-Resolution Convolutional Neural Network (SRCNN) has demonstrated superior
performance to previous hand-crafted models in both speed and restoration
quality. However, the high computational cost still hinders its practical
use in applications that demand real-time performance (24 fps). In this paper, we aim at
accelerating the current SRCNN, and propose a compact hourglass-shape CNN
structure for faster and better SR. We re-design the SRCNN structure mainly in
three aspects. First, we introduce a deconvolution layer at the end of the
network, then the mapping is learned directly from the original low-resolution
image (without interpolation) to the high-resolution one. Second, we
reformulate the mapping layer by shrinking the input feature dimension before
mapping and expanding back afterwards. Third, we adopt smaller filter sizes but
more mapping layers. The proposed model achieves a speed up of more than 40
times with even superior restoration quality. Further, we present the parameter
settings that can achieve real-time performance on a generic CPU while still
maintaining good performance. A corresponding transfer strategy is also
proposed for fast training and testing across different upscaling factors. | [
"cs.CV"
] |
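The three redesigns described above can be sketched compactly; the layer sizes
below are illustrative defaults, not necessarily the paper's exact
configuration:

```python
import torch
import torch.nn as nn

class HourglassSR(nn.Module):
    """Hourglass-shaped SR sketch: operate directly on the low-resolution
    input, shrink the feature dimension before the mapping layers, expand
    back, and upscale only at the end with a deconvolution layer."""
    def __init__(self, scale=3, d=56, s=12, m=4):
        super().__init__()
        self.extract = nn.Sequential(nn.Conv2d(1, d, 5, padding=2), nn.PReLU(d))
        self.shrink = nn.Sequential(nn.Conv2d(d, s, 1), nn.PReLU(s))
        self.map = nn.Sequential(*[layer for _ in range(m)
                                   for layer in (nn.Conv2d(s, s, 3, padding=1),
                                                 nn.PReLU(s))])
        self.expand = nn.Sequential(nn.Conv2d(s, d, 1), nn.PReLU(d))
        # The deconvolution performs the upscaling, so no bicubic
        # pre-interpolation of the input is needed.
        self.deconv = nn.ConvTranspose2d(d, 1, 9, stride=scale, padding=4,
                                         output_padding=scale - 1)

    def forward(self, lr):
        x = self.extract(lr)
        x = self.expand(self.map(self.shrink(x)))
        return self.deconv(x)

out = HourglassSR()(torch.randn(1, 1, 24, 24))  # -> (1, 1, 72, 72)
```

The shrink-map-expand bottleneck keeps the many small mapping layers cheap,
which is where the reported speed-up mainly comes from.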
The susceptibility of deep learning models to adversarial perturbations has
spurred renewed attention to adversarial examples, resulting in a number of
attacks. However, most of these attacks fail to encompass a large spectrum of
adversarial perturbations that are imperceptible to humans. In this paper, we
present localized uncertainty attacks, a novel class of threat models against
deterministic and stochastic classifiers. Under this threat model, we create
adversarial examples by perturbing only regions in the inputs where a
classifier is uncertain. To find such regions, we utilize the predictive
uncertainty of the classifier when the classifier is stochastic or, we learn a
surrogate model to amortize the uncertainty when it is deterministic. Unlike
$\ell_p$ ball or functional attacks which perturb inputs indiscriminately, our
targeted changes can be less perceptible. When considered under our threat
model, these attacks still produce strong adversarial examples; with the
examples retaining a greater degree of similarity with the inputs. | [
"stat.ML",
"cs.CR",
"cs.CV",
"cs.LG"
] |
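One way to make this threat model concrete is to mask a standard gradient-sign
step by an uncertainty map. The sketch below uses the input-gradient magnitude
of the MC-dropout predictive entropy as a simple spatial proxy for uncertain
regions; this proxy is an assumption of the sketch, not the paper's exact
construction:

```python
import torch
import torch.nn.functional as F

def uncertainty_masked_fgsm(model, x, y, eps=0.03, mc_samples=8, top_frac=0.1):
    """Perturb only the input locations flagged as most uncertain."""
    model.train()  # keep dropout active for MC sampling
    x_u = x.clone().requires_grad_(True)
    probs = torch.stack([F.softmax(model(x_u), dim=1)
                         for _ in range(mc_samples)])
    mean_p = probs.mean(dim=0)
    entropy = -(mean_p * mean_p.clamp_min(1e-8).log()).sum(dim=1).mean()
    entropy.backward()
    saliency = x_u.grad.abs().sum(dim=1, keepdim=True)      # (N, 1, H, W)

    # Keep only the top fraction of uncertainty-salient pixels.
    k = max(1, int(top_frac * saliency[0].numel()))
    thresh = saliency.flatten(1).topk(k, dim=1).values[:, -1]
    mask = (saliency >= thresh.view(-1, 1, 1, 1)).float()

    model.eval()
    x_adv = x.clone().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x + eps * x_adv.grad.sign() * mask).clamp(0, 1).detach()
```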
In this paper, we present a conditional generative adversarial network-based
model for real-time underwater image enhancement. To supervise the adversarial
training, we formulate an objective function that evaluates the perceptual
image quality based on its global content, color, local texture, and style
information. We also present EUVP, a large-scale dataset of a paired and
unpaired collection of underwater images (of `poor' and `good' quality) that
are captured using seven different cameras over various visibility conditions
during oceanic explorations and human-robot collaborative experiments. In
addition, we perform several qualitative and quantitative evaluations which
suggest that the proposed model can learn to enhance underwater image quality
from both paired and unpaired training. More importantly, the enhanced images
improve the performance of standard models for underwater object detection,
human pose estimation, and saliency prediction. These results validate that
the proposed model is suitable for real-time preprocessing in the autonomy
pipeline of visually-guided underwater robots. The model and associated
training pipelines are available at https://github.com/xahidbuffon/funie-gan. | [
"cs.CV"
] |
Weakly supervised learning of object detection is an important problem in
image understanding that still does not have a satisfactory solution. In this
paper, we address this problem by exploiting the power of deep convolutional
neural networks pre-trained on large-scale image-level classification tasks. We
propose a weakly supervised deep detection architecture that modifies one such
network to operate at the level of image regions, performing simultaneously
region selection and classification. Trained as an image classifier, the
architecture implicitly learns object detectors that are better than
alternative weakly supervised detection systems on the PASCAL VOC data. The
model, which is a simple and elegant end-to-end architecture, outperforms
standard data augmentation and fine-tuning techniques for the task of
image-level classification as well. | [
"cs.CV"
] |
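The two-stream region scoring described above is compact enough to sketch
directly; region features are assumed to come from an ROI-pooling stage that is
omitted here, and the sizes are illustrative:

```python
import torch
import torch.nn as nn

class WeaklySupervisedDetectionHead(nn.Module):
    """Two-stream head: one stream classifies each region, the other ranks
    regions per class; their elementwise product gives region-class scores,
    and summing over regions yields an image-level prediction trainable
    with image labels only."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.fc_cls = nn.Linear(feat_dim, num_classes)
        self.fc_det = nn.Linear(feat_dim, num_classes)

    def forward(self, region_feats):                     # (R, D) pooled feats
        cls = self.fc_cls(region_feats).softmax(dim=1)   # softmax over classes
        det = self.fc_det(region_feats).softmax(dim=0)   # softmax over regions
        region_scores = cls * det                        # (R, C) detections
        image_scores = region_scores.sum(dim=0)          # (C,) image-level
        return region_scores, image_scores

head = WeaklySupervisedDetectionHead(feat_dim=4096, num_classes=20)
region_scores, image_scores = head(torch.randn(300, 4096))
```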
Restricted Boltzmann machines (RBMs) are a powerful class of generative
models, but their training requires computing a gradient that, unlike
supervised backpropagation on typical loss functions, is notoriously difficult
even to approximate. Here, we show that properly combining standard gradient
updates with an off-gradient direction, constructed from samples of the RBM
ground state (mode), improves their training dramatically over traditional
gradient methods. This approach, which we call mode training, promotes faster
training and stability, in addition to lower converged relative entropy (KL
divergence). Along with the proofs of stability and convergence of this method,
we also demonstrate its efficacy on synthetic datasets where we can compute KL
divergences exactly, as well as on a larger machine learning standard, MNIST.
The mode training we suggest is quite versatile, as it can be applied in
conjunction with any given gradient method, and is easily extended to more
general energy-based neural network structures such as deep, convolutional and
unrestricted Boltzmann machines. | [
"cs.LG",
"stat.ML"
] |
This paper reports on a novel nonparametric rigid point cloud registration
framework that jointly integrates geometric and semantic measurements such as
color or semantic labels into the alignment process and does not require
explicit data association. The point clouds are represented as nonparametric
functions in a reproducing kernel Hilbert space. The alignment problem is
formulated as maximizing the inner product between two functions, essentially a
sum of weighted kernels, each of which exploits the local geometric and
semantic features. As a result of the continuous models, analytical gradients
can be computed, and a local solution can be obtained by optimization over the
rigid body transformation group. Besides, we present a new point cloud
alignment metric that is intrinsic to the proposed framework and takes into
account geometric and semantic information. The evaluations using publicly
available stereo and RGB-D datasets show that the proposed method outperforms
state-of-the-art outdoor and indoor frame-to-frame registration methods. An
open-source GPU implementation is also provided. | [
"cs.CV",
"cs.RO"
] |
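The inner-product objective above can be sketched in a few lines of NumPy, with
a squared-exponential geometric kernel and a label-agreement semantic kernel
standing in for the paper's kernel choices (illustrative only):

```python
import numpy as np

def alignment_score(X, Y, lx, ly, ell=0.5):
    """Inner product <f_X, f_Y> of two point-cloud functions in an RKHS.

    With f_X = sum_i k(x_i, .), the inner product is a double sum of
    geometric kernels gated by semantic similarity, so no explicit
    point-to-point data association is needed.
    """
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    k_geo = np.exp(-d2 / (2 * ell ** 2))                  # squared exponential
    k_sem = (lx[:, None] == ly[None, :]).astype(float)    # label agreement
    return (k_geo * k_sem).sum()

rng = np.random.default_rng(0)
X, lx = rng.normal(size=(100, 3)), rng.integers(0, 4, size=100)
Y = X + 0.01 * rng.normal(size=(100, 3))       # nearly aligned copy
print(alignment_score(X, Y, lx, lx))           # high; drops as Y is displaced
```

A registration method would maximize this score over rigid transforms of Y,
using the analytical gradients the continuous model admits.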
Object tracking is one of the most important problems in computer vision. The
aim of video tracking is to extract the trajectories of a target or object of
interest, i.e. accurately locate a moving target in a video sequence and
discriminate target from non-targets in the feature space of the sequence. So,
feature descriptors can have significant effects on such discrimination. In
this paper, we use the basic idea of many trackers which consists of three main
components of the reference model, i.e., object modeling, object detection and
localization, and model updating. However, there are major improvements in our
system. Our fourth component, occlusion handling, utilizes the r-spatiogram to
detect the best target candidate. While the spatiogram contains moments of the
pixel coordinates, the r-spatiogram computes region-based compactness on the
distribution of the given feature in the image, which captures richer
features to represent the objects. The proposed research develops an efficient
and robust way to keep tracking the object throughout video sequences in the
presence of significant appearance variations and severe occlusions. The
proposed method is evaluated on the Princeton RGBD tracking dataset considering
sequences with different challenges and the obtained results demonstrate the
effectiveness of the proposed method. | [
"cs.CV"
] |
Fine-grained image classification is a challenging task due to the large
intra-class variance and small inter-class variance, aiming at recognizing
hundreds of sub-categories belonging to the same basic-level category. Most
existing fine-grained image classification methods generally learn part
detection models to obtain the semantic parts for better classification
accuracy. Despite achieving promising results, these methods mainly have two
limitations: (1) not all the parts obtained through the part detection
models are beneficial and indispensable for classification, and (2)
fine-grained image classification requires more detailed visual descriptions
which could not be provided by the part locations or attribute annotations. To
address the above two limitations, this paper proposes a two-stream model
combining vision and language (CVL) for learning latent semantic
representations. The vision stream learns deep representations from the
original visual information via deep convolutional neural network. The language
stream utilizes the natural language descriptions which could point out the
discriminative parts or characteristics for each image, and provides a flexible
and compact way of encoding the salient visual aspects for distinguishing
sub-categories. Since the two streams are complementary, combining them
further improves classification accuracy. Compared with 12
state-of-the-art methods on the widely used CUB-200-2011 dataset for
fine-grained image classification, the experimental results demonstrate our CVL
approach achieves the best performance. | [
"cs.CV"
] |
Faster R-CNN is one of the most representative and successful methods for
object detection, and has become increasingly popular in various
object detection applications. In this report, we propose a robust deep face
detection approach based on Faster R-CNN. In our approach, we exploit several
new techniques including new multi-task loss function design, online hard
example mining, and multi-scale training strategy to improve Faster R-CNN in
multiple aspects. The proposed approach is well suited for face detection, so
we call it Face R-CNN. Extensive experiments are conducted on two most popular
and challenging face detection benchmarks, FDDB and WIDER FACE, to demonstrate
the superiority of the proposed approach over state-of-the-art methods. | [
"cs.CV"
] |
The topological structure of skeleton data plays a significant role in human
action recognition. Combining the topological structure with graph
convolutional networks has achieved remarkable performance. In existing
methods, the topological structure of skeleton data is modeled by considering
only the connections between joints and bones, directly using physical
information. However, identifying the key joints, bones, and body parts in
each human action remains an open problem. In this paper, we propose
centrality graph convolutional networks to uncover this overlooked
topological information and make the best use of it to distinguish key
joints, bones, and body parts. The proposed centrality graph convolutional
network first highlights the effects of the key joints and bones, bringing a
definite improvement. Besides, the topological information of
the skeleton sequence is explored and combined to further enhance the
performance in a four-channel framework. Moreover, the graph is adaptively
reconstructed during training, which yields further improvements. Our model is
validated on two large-scale datasets,
NTU-RGB+D and Kinetics, and outperforms the state-of-the-art methods. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Existing facial image super-resolution (SR) methods focus mostly on improving
artificially down-sampled low-resolution (LR) imagery. Such SR models, although
strong at handling artificial LR images, often suffer from significant
performance drop on genuine LR test data. Previous unsupervised domain
adaptation (UDA) methods address this issue by training a model using unpaired
genuine LR and HR data as well as cycle consistency loss formulation. However,
this renders the model overstretched with two tasks: consistifying the visual
characteristics and enhancing the image resolution. Importantly, this makes the
end-to-end model training ineffective due to the difficulty of back-propagating
gradients through two concatenated CNNs. To solve this problem, we formulate a
method that joins the advantages of conventional SR and UDA models.
Specifically, we separate and control the optimisations for characteristics
consistifying and image super-resolving by introducing Characteristic
Regularisation (CR) between them. This task split makes the model training more
effective and computationally tractable. Extensive evaluations demonstrate the
performance superiority of our method over state-of-the-art SR and UDA models
on both genuine and artificial LR facial imagery data. | [
"cs.CV"
] |
Semantic segmentation and semantic edge detection can be seen as two dual
problems with close relationships in computer vision. Despite the fast
evolution of learning-based 3D semantic segmentation methods, little attention
has been drawn to the learning of 3D semantic edge detectors, even less to a
joint learning method for the two tasks. In this paper, we tackle the 3D
semantic edge detection task for the first time and present a new two-stream
fully-convolutional network that jointly performs the two tasks. In particular,
we design a joint refinement module that explicitly wires region information
and edge information to improve the performances of both tasks. Further, we
propose a novel loss function that encourages the network to produce semantic
segmentation results with better boundaries. Extensive evaluations on S3DIS and
ScanNet datasets show that our method achieves on par or better performance
than the state-of-the-art methods for semantic segmentation and outperforms the
baseline methods for semantic edge detection. Code release:
https://github.com/hzykent/JSENet | [
"cs.CV"
] |
Object detection using an oriented bounding box (OBB) can better target
rotated objects by reducing the overlap with background areas. Existing OBB
approaches are mostly built on horizontal bounding box detectors by introducing
an additional angle dimension optimized by a distance loss. However, as the
distance loss only minimizes the angle error of the OBB and only loosely
correlates with the IoU, it is insensitive to objects with high aspect ratios.
Therefore, a novel loss, Pixels-IoU (PIoU) Loss, is formulated to exploit both
the angle and the IoU for accurate OBB regression. The PIoU loss is derived
from the IoU metric in a pixel-wise form, which is simple and suitable for
both horizontal and oriented bounding boxes. To demonstrate its effectiveness, we
evaluate the PIoU loss on both anchor-based and anchor-free frameworks. The
experimental results show that PIoU loss can dramatically improve the
performance of OBB detectors, particularly on objects with high aspect ratios
and complex backgrounds. Besides, previous evaluation datasets did not include
scenarios where the objects have high aspect ratios, hence a new dataset,
Retail50K, is introduced to encourage the community to adapt OBB detectors for
more complex environments. | [
"cs.CV"
] |
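The pixel-wise view of IoU is the core idea above; here is a hard-counting
sketch (the actual PIoU loss replaces the hard membership test with a smooth,
differentiable kernel, which this illustration omits):

```python
import numpy as np

def in_obb(px, py, box):
    """Membership test for an oriented box (cx, cy, w, h, angle)."""
    cx, cy, w, h, a = box
    dx, dy = px - cx, py - cy
    u = dx * np.cos(a) + dy * np.sin(a)    # rotate points into the box frame
    v = -dx * np.sin(a) + dy * np.cos(a)
    return (np.abs(u) <= w / 2) & (np.abs(v) <= h / 2)

def pixel_iou(box1, box2, grid=256):
    """Pixel-counting IoU of two oriented boxes on a unit-square raster."""
    xs, ys = np.meshgrid(np.linspace(0, 1, grid), np.linspace(0, 1, grid))
    m1, m2 = in_obb(xs, ys, box1), in_obb(xs, ys, box2)
    union = np.logical_or(m1, m2).sum()
    return np.logical_and(m1, m2).sum() / max(union, 1)

pred = (0.5, 0.5, 0.6, 0.1, np.deg2rad(30))   # high-aspect-ratio boxes,
gt = (0.5, 0.5, 0.6, 0.1, np.deg2rad(35))     # where angle errors matter most
print(1.0 - pixel_iou(pred, gt))              # an IoU-aware loss value
```

Note how a 5-degree angle error already costs this thin box a large chunk of
IoU, which a pure angle-distance loss would barely register.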
Transformers with remarkable global representation capacities achieve
competitive results for visual tasks, but fail to consider high-level local
pattern information in input images. In this paper, we present a generic
Dual-stream Network (DS-Net) to fully explore the representation capacity of
local and global pattern features for image classification. Our DS-Net can
simultaneously calculate fine-grained and integrated features and efficiently
fuse them. Specifically, we propose an Intra-scale Propagation module to
process two different resolutions in each block and an Inter-Scale Alignment
module to perform information interaction across features at dual scales.
Besides, we also design a Dual-stream FPN (DS-FPN) to further enhance
contextual information for downstream dense predictions. Without bells and
whistles, the proposed DS-Net outperforms DeiT-Small by 2.4% in terms of top-1
accuracy on ImageNet-1k and achieves state-of-the-art performance over other
Vision Transformers and ResNets. For object detection and instance
segmentation, DS-Net-Small outperforms ResNet-50 by 6.4% and 5.5%, respectively,
in terms of mAP on MSCOCO 2017, and surpasses the previous state-of-the-art
scheme, which significantly demonstrates its potential to be a general backbone
in vision tasks. The code will be released soon. | [
"cs.CV"
] |
Causal inference, or counterfactual prediction, is central to decision making
in healthcare, policy and social sciences. To de-bias causal estimators with
high-dimensional data in observational studies, recent advances suggest the
importance of combining machine learning models for both the propensity score
and the outcome function. We propose a novel scalable method to learn
double-robust representations for counterfactual predictions, leading to
consistent causal estimation if the model for either the propensity score or
the outcome, but not necessarily both, is correctly specified. Specifically, we
use the entropy balancing method to learn the weights that minimize the
Jensen-Shannon divergence of the representation between the treated and control
groups, based on which we make robust and efficient counterfactual predictions
for both individual and average treatment effects. We provide theoretical
justifications for the proposed method. The algorithm shows competitive
performance with the state-of-the-art on real world and synthetic data. | [
"stat.ML",
"cs.LG"
] |
Spatial transformer networks (STNs) were designed to enable CNNs to learn
invariance to image transformations. STNs were originally proposed to transform
CNN feature maps as well as input images. This enables the use of more complex
features when predicting transformation parameters. However, since STNs perform
a purely spatial transformation, they do not, in the general case, have the
ability to align the feature maps of a transformed image and its original. We
present a theoretical argument for this and investigate the practical
implications, showing that this inability is coupled with decreased
classification accuracy. We advocate taking advantage of more complex features
in deeper layers by instead sharing parameters between the classification and
the localisation network. | [
"cs.CV"
] |
There are a huge number of features which are said to improve Convolutional
Neural Network (CNN) accuracy. Practical testing of combinations of such
features on large datasets, and theoretical justification of the result, is
required. Some features operate on certain models exclusively and for certain
problems exclusively, or only for small-scale datasets; while some features,
such as batch-normalization and residual-connections, are applicable to the
majority of models, tasks, and datasets. We assume that such universal features
include Weighted-Residual-Connections (WRC), Cross-Stage-Partial-connections
(CSP), Cross mini-Batch Normalization (CmBN), Self-adversarial-training (SAT)
and Mish activation. We use new features: WRC, CSP, CmBN, SAT, Mish activation,
Mosaic data augmentation, DropBlock regularization, and CIoU loss, and
combine some of them to achieve state-of-the-art results: 43.5% AP (65.7% AP50)
for the MS COCO dataset at a realtime speed of ~65 FPS on Tesla V100. Source
code is at https://github.com/AlexeyAB/darknet | [
"cs.CV",
"eess.IV"
] |
We study XAI (explainable AI) for the face recognition task, particularly
face verification. Face verification is a crucial task nowadays, and it has
been deployed in many applications, such as access control,
surveillance, and automatic personal log-on for mobile devices. With the
increasing amount of data, deep convolutional neural networks can achieve very
high accuracy for the face verification task. Beyond exceptional performances,
deep face verification models need more interpretability so that we can trust
the results they generate. In this paper, we propose a novel similarity metric,
called explainable cosine ($xCos$), that comes with a learnable module that can
be plugged into most of the verification models to provide meaningful
explanations. With the help of $xCos$, we can see which parts of the two input
faces are similar, where the model pays its attention to, and how the local
similarities are weighted to form the output $xCos$ score. We demonstrate the
effectiveness of our proposed method on LFW and various competitive benchmarks,
not only providing novel and desirable model interpretability for face
verification but also maintaining accuracy when plugged into existing face
recognition models. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
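The scoring rule described above reduces to a weighted sum of patch-wise
cosines; a hedged sketch follows (feature shapes and the placeholder attention
weights are illustrative):

```python
import torch
import torch.nn.functional as F

def xcos_score(feat1, feat2, attention):
    """Weighted sum of patch-wise cosine similarities.

    feat1, feat2: (C, H, W) spatial identity features of the two faces.
    attention:    (H, W) non-negative weights summing to one.
    Returns the scalar similarity plus the (H, W) local cosine map, which
    is what makes the verification decision inspectable.
    """
    cos_map = F.cosine_similarity(feat1, feat2, dim=0)  # (H, W) patch cosines
    return (attention * cos_map).sum(), cos_map

f1, f2 = torch.randn(256, 7, 7), torch.randn(256, 7, 7)
attn = torch.softmax(torch.randn(7 * 7), dim=0).view(7, 7)  # placeholder
score, cos_map = xcos_score(f1, f2, attn)
```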
Automatic histopathology image segmentation is crucial to disease analysis.
Limited available labeled data hinders the generalizability of trained models
under the fully supervised setting. Semi-supervised learning (SSL) based on
generative methods has been proven to be effective in utilizing diverse image
characteristics. However, it has not been well explored what kinds of generated
images would be more useful for model training and how to use such images. In
this paper, we propose a new data guided generative method for histopathology
image segmentation by leveraging the unlabeled data distributions. First, we
design an image generation module. Image content and style are disentangled and
embedded in a clustering-friendly space to utilize their distributions. New
images are synthesized by sampling and cross-combining contents and styles.
Second, we devise an effective data selection policy for judiciously sampling
the generated images: (1) to make the generated training set better cover the
dataset, the clusters that are underrepresented in the original training set
are covered more; (2) to make the training process more effective, we identify
and oversample the images of "hard cases" in the data for which annotated
training data may be scarce. Our method is evaluated on glands and nuclei
datasets. We show that under both the inductive and transductive settings, our
SSL method consistently boosts the performance of common segmentation models
and attains state-of-the-art results. | [
"cs.CV"
] |
We study instancewise feature importance scoring as a method for model
interpretation. Any such method yields, for each predicted instance, a vector
of importance scores associated with the feature vector. Methods based on the
Shapley score have been proposed as a fair way of computing feature
attributions of this kind, but incur an exponential complexity in the number of
features. This combinatorial explosion arises from the definition of the
Shapley value and prevents these methods from being scalable to large data sets
and complex models. We focus on settings in which the data have a graph
structure, and the contribution of features to the target variable is
well-approximated by a graph-structured factorization. In such settings, we
develop two algorithms with linear complexity for instancewise feature
importance scoring. We establish the relationship of our methods to the Shapley
value and another closely related concept known as the Myerson value from
cooperative game theory. We demonstrate on both language and image data that
our algorithms compare favorably with other methods for model interpretation. | [
"cs.LG",
"stat.ML"
] |
Incremental learning (IL) is an important task aimed at increasing the
capability of a trained model, in terms of the number of classes recognizable
by the model. The key problem in this task is the requirement of storing data
(e.g. images) associated with existing classes, while teaching the classifier
to learn new classes. However, this is impractical as it increases the memory
requirement at every incremental step, which makes it impossible to implement
IL algorithms on edge devices with limited memory. Hence, we propose a novel
approach, called `Learning without Memorizing (LwM)', to preserve the
information about existing (base) classes, without storing any of their data,
while making the classifier progressively learn the new classes. In LwM, we
present an information preserving penalty: Attention Distillation Loss
($L_{AD}$), and demonstrate that penalizing the changes in classifiers'
attention maps helps to retain information of the base classes, as new classes
are added. We show that adding $L_{AD}$ to the distillation loss, an existing
information preserving loss, consistently outperforms the state-of-the-art
performance on the iILSVRC-small and iCIFAR-100 datasets in terms of the
overall accuracy of base and incrementally learned classes. | [
"cs.CV",
"cs.LG"
] |
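An attention-distillation penalty of this kind can be sketched briefly. Here
the attention maps come from simple activation pooling; the paper builds its
maps with a Grad-CAM-style procedure, so this is an illustrative stand-in:

```python
import torch
import torch.nn.functional as F

def attention_map(features):
    """Collapse (N, C, H, W) features to an L2-normalized spatial map by
    channel-wise pooling of squared activations."""
    amap = features.pow(2).mean(dim=1).flatten(1)    # (N, H*W)
    return F.normalize(amap, p=2, dim=1)

def attention_distillation_loss(feats_base, feats_new):
    """Penalize drift of the updated model's attention away from the
    frozen base model's, so base-class information is retained without
    storing any base-class data."""
    a_base = attention_map(feats_base).detach()      # base model is frozen
    a_new = attention_map(feats_new)
    return (a_base - a_new).abs().sum(dim=1).mean()  # L1 over locations

loss = attention_distillation_loss(
    torch.randn(8, 512, 7, 7),
    torch.randn(8, 512, 7, 7, requires_grad=True))
loss.backward()
```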
Competitive diving is a well recognized aquatic sport in which a person dives
from a platform or a springboard into the water. Based on the acrobatics
performed during the dive, diving is classified into a finite set of action
classes which are standardized by FINA. In this work, we propose an attention
guided LSTM-based neural network architecture for the task of diving
classification. The network takes the frames of a diving video as input and
determines its class. We evaluate the performance of the proposed model on a
recently introduced competitive diving dataset, Diving48, which contains over
18000 video clips covering 48 classes of diving. The proposed model
outperforms the classification accuracy of the state-of-the-art models in both
2D and 3D frameworks by 11.54% and 4.24%, respectively. We show that the
network is able to localize the diver in the video frames during the dive
without being trained with such a supervision. | [
"cs.CV"
] |
Here we explore two related but important tasks based on the recently
released REalistic Single Image DEhazing (RESIDE) benchmark dataset: (i) single
image dehazing as a low-level image restoration problem; and (ii) high-level
visual understanding (e.g., object detection) of hazy images. For the first
task, we investigate a variety of loss functions and show that a
perception-driven loss significantly improves dehazing performance. In the
second task, we provide multiple solutions including using advanced modules in
the dehazing-detection cascade and domain-adaptive object detectors. In both
tasks, our proposed solutions significantly improve performance. GitHub
repository URL is: https://github.com/guanlongzhao/dehaze | [
"cs.CV",
"cs.LG"
] |
A high-resolution depth map can be inferred from a low-resolution one with the
guidance of an additional high-resolution texture map of the same scene.
Recently, deep neural networks with large receptive fields are shown to benefit
applications such as image completion. Our insight is that super resolution is
similar to image completion, where only parts of the depth values are precisely
known. In this paper, we present a joint convolutional neural pyramid model
with large receptive fields for joint depth map super-resolution. Our model
consists of three sub-networks, two convolutional neural pyramids concatenated
by a normal convolutional neural network. The convolutional neural pyramids
extract information from large receptive fields of the depth map and guidance
map, while the convolutional neural network effectively transfers useful
structures of the guidance image to the depth image. Experimental results show
that our model outperforms existing state-of-the-art algorithms not only on
data pairs of RGB/depth images, but also on other data pairs like
color/saliency and color-scribbles/colorized images. | [
"cs.CV",
"cs.GR"
] |
Superpixel algorithms are a common pre-processing step for computer vision
algorithms such as segmentation, object tracking and localization. Many
superpixel methods rely only on color features for segmentation, limiting
performance in low-contrast regions and applicability to infrared or medical
images where object boundaries have wide appearance variability. We study the
inclusion of deep image features in the SLIC superpixel algorithm to exploit
higher-level image representations. In addition, we devise a trainable
superpixel algorithm, yielding an intermediate domain-specific image
representation that can be applied to different tasks. A clustering-based
superpixel algorithm is transformed into a pixel-wise classification task and
superpixel training data is derived from semantic segmentation datasets. Our
results demonstrate that this approach is able to improve superpixel quality
consistently. | [
"cs.CV"
] |
Bayesian optimization (BO) is an efficient framework for solving black-box
optimization problems with expensive function evaluations. This paper addresses
the BO problem setting for combinatorial spaces (e.g., sequences and graphs)
that occurs naturally in science and engineering applications. A prototypical
example is molecular optimization guided by expensive experiments. The key
challenge is to balance the complexity of statistical models and tractability
of search to select combinatorial structures for evaluation. In this paper, we
propose an efficient approach referred to as Mercer Features for Combinatorial
Bayesian Optimization (MerCBO). The key idea behind MerCBO is to provide
explicit feature maps for diffusion kernels over discrete objects by exploiting
the structure of their combinatorial graph representation. These Mercer
features, combined with Thompson sampling as the acquisition function, allow us
to employ tractable solvers to find the next structures for evaluation. Experiments
on diverse real-world benchmarks demonstrate that MerCBO performs similarly or
better than prior methods. The source code is available at
https://github.com/aryandeshwal/MerCBO . | [
"cs.LG",
"cs.AI"
] |
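The diffusion-kernel feature map admits a short sketch: with Laplacian
eigenpairs (lambda_i, u_i), the features exp(-beta * lambda_i / 2) * u_i(v)
reproduce the kernel as an inner product, which is what makes Thompson sampling
with a Bayesian linear model tractable (the toy graph below is illustrative,
not a combinatorial structure from the paper):

```python
import numpy as np
from scipy.linalg import expm

def mercer_features(adjacency, beta=1.0):
    """Explicit feature map for the graph diffusion kernel exp(-beta * L).

    With Laplacian eigendecomposition L = U diag(lam) U^T, the rows of
    U * exp(-beta * lam / 2) are features phi(v) satisfying
    K[u, v] = <phi(u), phi(v)>.
    """
    L = np.diag(adjacency.sum(axis=1)) - adjacency
    lam, U = np.linalg.eigh(L)
    return U * np.exp(-beta * lam / 2)

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Phi = mercer_features(A, beta=1.0)
L = np.diag(A.sum(axis=1)) - A
print(np.allclose(Phi @ Phi.T, expm(-L)))   # True: features reproduce K
```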
Depth information is important for autonomous systems to perceive
environments and estimate their own state. Traditional depth estimation
methods, like structure from motion and stereo vision matching, are built on
feature correspondences across multiple viewpoints, and the predicted depth
maps are sparse. Inferring depth information from a single image (monocular
depth estimation) is an ill-posed problem. With the rapid development of deep
neural networks, monocular depth estimation based on deep learning has been
widely studied recently and achieved promising performance in accuracy.
Meanwhile, dense depth maps are estimated from single images by deep neural
networks in an end-to-end manner. In order to improve the accuracy of depth
estimation, different kinds of network frameworks, loss functions and training
strategies are proposed subsequently. Therefore, we survey the current
monocular depth estimation methods based on deep learning in this review.
First, we summarize several widely used datasets and evaluation indicators
in deep learning-based depth estimation. Furthermore, we review some
representative existing methods according to different training manners:
supervised, unsupervised and semi-supervised. Finally, we discuss the
challenges and provide some ideas for future research in monocular depth
estimation. | [
"cs.CV"
] |
Empirically, a multidimensional discriminator (critic) output can be
advantageous, yet a solid explanation for this has been lacking. In this
paper, (i) we rigorously prove that high-dimensional critic output has
an advantage in distinguishing real and fake distributions; (ii) we introduce
a square-root velocity transformation (SRVT) block which further magnifies
this advantage. The proof is based on our proposed maximal p-centrality
discrepancy which is bounded above by p-Wasserstein distance and perfectly fits
the Wasserstein GAN framework with high-dimensional critic output of dimension
n. We also show that when n = 1, the proposed discrepancy is equivalent to the
1-Wasserstein distance. The SRVT block is applied to break the symmetric structure of
high-dimensional critic output and improve the generalization capability of the
discriminator network. In terms of implementation, the proposed framework does
not require additional hyper-parameter tuning, which largely facilitates its
usage. Experiments on image generation tasks show performance improvement on
benchmark datasets. | [
"stat.ML",
"cs.LG"
] |
Decentralized nonconvex optimization has received increasing attention in
recent years in machine learning due to its advantages in system robustness,
data privacy, and implementation simplicity. However, three fundamental
challenges in designing decentralized optimization algorithms are how to reduce
their sample, communication, and memory complexities. In this paper, we propose
a \underline{g}radient-\underline{t}racking-based \underline{sto}chastic
\underline{r}ecursive \underline{m}omentum (GT-STORM) algorithm for efficiently
solving nonconvex optimization problems. We show that to reach an
$\epsilon^2$-stationary solution, the total number of sample evaluations of our
algorithm is $\tilde{O}(m^{1/2}\epsilon^{-3})$ and the number of communication
rounds is $\tilde{O}(m^{-1/2}\epsilon^{-3})$, which improve the
$O(\epsilon^{-4})$ costs of sample evaluations and communications for the
existing decentralized stochastic gradient algorithms. We conduct extensive
experiments with a variety of learning models, including non-convex logistical
regression and convolutional neural networks, to verify our theoretical
findings. Collectively, our results contribute to the state of the art of
theories and algorithms for decentralized network optimization. | [
"cs.LG",
"cs.DC"
] |
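The recursive-momentum idea at the core of GT-STORM can be illustrated on a single node. The sketch below is a minimal STORM-style variance-reduced estimator on a toy least-squares problem; the gradient-tracking and decentralized communication parts of the actual algorithm are omitted.

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 20))
b = rng.normal(size=1000)

def stoch_grad(x, idx):
    # minibatch gradient of f(x) = (1/2m) * ||A x - b||^2
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / len(idx)

x = np.zeros(20)
x_prev, d = None, None
lr, a = 0.05, 0.1
for t in range(200):
    idx = rng.integers(0, len(b), size=32)
    g = stoch_grad(x, idx)
    if d is None:
        d = g                                     # plain stochastic gradient at t = 0
    else:
        # STORM-style recursive momentum: correct the previous estimate with a
        # gradient difference evaluated on the *same* minibatch.
        d = g + (1.0 - a) * (d - stoch_grad(x_prev, idx))
    x_prev = x.copy()
    x = x - lr * d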
We present a new class of stochastic, geometrically-driven optimization
algorithms on the orthogonal group $O(d)$ and naturally reductive homogeneous
manifolds obtained from the action of the rotation group $SO(d)$. We
theoretically and experimentally demonstrate that our methods can be applied in
various fields of machine learning including deep, convolutional and recurrent
neural networks, reinforcement learning, normalizing flows and metric learning.
We show an intriguing connection between efficient stochastic optimization on
the orthogonal group and graph theory (e.g. matching problem, partition
functions over graphs, graph-coloring). We leverage the theory of Lie groups
and provide theoretical results for the designed class of algorithms. We
demonstrate broad applicability of our methods by showing strong performance on
the seemingly unrelated tasks of learning world models to obtain stable
policies for the most difficult $\mathrm{Humanoid}$ agent from
$\mathrm{OpenAI}$ $\mathrm{Gym}$ and improving convolutional neural networks. | [
"cs.LG",
"stat.ML"
] |
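A standard way to take stochastic gradient steps while staying exactly on the orthogonal group is a Cayley-transform retraction. The sketch below shows this generic update; it is an illustration of optimization on O(d), not the paper's specific algorithm.

import numpy as np

def cayley_step(X, G, lr):
    # X: point on O(d); G: Euclidean gradient of the loss at X.
    # A = G X^T - X G^T is skew-symmetric; the Cayley retraction
    # (I + lr/2 A)^{-1} (I - lr/2 A) X keeps X exactly orthogonal.
    d = X.shape[0]
    A = G @ X.T - X @ G.T
    I = np.eye(d)
    return np.linalg.solve(I + 0.5 * lr * A, (I - 0.5 * lr * A) @ X)

# toy usage: find the orthogonal matrix nearest to M (a Procrustes problem)
M = np.random.randn(4, 4)
X = np.eye(4)
for _ in range(200):
    X = cayley_step(X, 2.0 * (X - M), lr=0.1)   # gradient of ||X - M||_F^2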
Training of large scale models on distributed clusters is a critical
component of the machine learning pipeline. However, this training can easily
be made to fail if some workers behave in an adversarial (Byzantine) fashion
whereby they return arbitrary results to the parameter server (PS). A plethora
of existing papers consider a variety of attack models and propose robust
aggregation and/or computational redundancy to alleviate the effects of these
attacks. In this work we consider an omniscient attack model where the
adversary has full knowledge about the gradient computation assignments of the
workers and can choose to attack (up to) any q out of K worker nodes to induce
maximal damage. Our redundancy-based method ByzShield leverages the properties
of bipartite expander graphs for the assignment of tasks to workers; this helps
to effectively mitigate the effect of the Byzantine behavior. Specifically, we
demonstrate an upper bound on the worst case fraction of corrupted gradients
based on the eigenvalues of our constructions which are based on mutually
orthogonal Latin squares and Ramanujan graphs. Our numerical experiments
indicate over a 36% reduction on average in the fraction of corrupted gradients
compared to the state of the art. Likewise, our experiments on training
followed by image classification on the CIFAR-10 dataset show that ByzShield
has on average a 20% advantage in accuracy under the most sophisticated
attacks. ByzShield also tolerates a much larger fraction of adversarial nodes
compared to prior work. | [
"cs.LG",
"cs.CR",
"cs.DC",
"cs.IT",
"math.IT"
] |
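For a prime p, the classic construction L_k(i, j) = (k*i + j) mod p yields p - 1 mutually orthogonal Latin squares (MOLS). The sketch below builds such squares and a redundant task-to-worker assignment in their spirit; the exact ByzShield assignment and its Ramanujan-graph variant are more involved, so treat this as illustrative only.

import numpy as np

p = 5                                   # prime modulus

def latin_square(k, p):
    i, j = np.meshgrid(np.arange(p), np.arange(p), indexing="ij")
    return (k * i + j) % p              # MOLS construction for prime p, k = 1..p-1

squares = [latin_square(k, p) for k in (1, 2, 3)]   # replication factor r = 3

# Illustrative assignment: gradient task (i, j) is replicated on worker
# L_k(i, j) for each square k; orthogonality of the squares spreads the
# replicas so that a few Byzantine workers can corrupt only a bounded
# fraction of the aggregated gradients.
assignment = {(i, j): [int(S[i, j]) for S in squares]
              for i in range(p) for j in range(p)}
print(assignment[(0, 1)])               # the 3 workers holding task (0, 1)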
Pre-training Graph Neural Networks (GNN) via self-supervised contrastive
learning has recently drawn lots of attention. However, most existing works
focus on node-level contrastive learning, which cannot capture global graph
structure. The key challenge to conducting subgraph-level contrastive learning
is to sample informative subgraphs that are semantically meaningful. To solve
it, we propose to learn graph motifs, which are frequently-occurring subgraph
patterns (e.g. functional groups of molecules), for better subgraph sampling.
Our framework MotIf-driven Contrastive leaRning Of Graph representations
(MICRO-Graph) can: 1) use GNNs to extract motifs from large graph datasets; 2)
leverage learned motifs to sample informative subgraphs for contrastive
learning of GNNs. We formulate motif learning as a differentiable clustering
problem, and adopt EM-clustering to group similar and significant subgraphs
into several motifs. Guided by these learned motifs, a sampler is trained to
generate more informative subgraphs, and these subgraphs are used to train GNNs
through graph-to-subgraph contrastive learning. By pre-training on the
ogbg-molhiv dataset with MICRO-Graph, the pre-trained GNN achieves an average
2.04% ROC-AUC improvement on various downstream benchmark
datasets, which is significantly higher than other state-of-the-art
self-supervised learning baselines. | [
"cs.LG"
] |
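The graph-to-subgraph contrastive step can be illustrated with a standard NT-Xent objective over a batch of (graph, sampled-subgraph) embedding pairs, as sketched below; the motif-guided sampler and the EM clustering that produce those subgraphs are omitted.

import torch
import torch.nn.functional as F

def graph_subgraph_nt_xent(g, s, tau=0.2):
    # g: (B, D) whole-graph embeddings; s: (B, D) subgraph embeddings, where
    # s[i] was sampled from the graph behind g[i] (the positive pair); all
    # other pairs in the batch act as negatives.
    g = F.normalize(g, dim=1)
    s = F.normalize(s, dim=1)
    logits = g @ s.t() / tau                       # (B, B) cosine similarities
    targets = torch.arange(g.size(0), device=g.device)
    return F.cross_entropy(logits, targets)

loss = graph_subgraph_nt_xent(torch.randn(8, 64), torch.randn(8, 64))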
AI is undergoing a paradigm shift with the rise of models (e.g., BERT,
DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a
wide range of downstream tasks. We call these models foundation models to
underscore their critically central yet incomplete character. This report
provides a thorough account of the opportunities and risks of foundation
models, ranging from their capabilities (e.g., language, vision, robotics,
reasoning, human interaction) and technical principles (e.g., model
architectures, training procedures, data, systems, security, evaluation,
theory) to their applications (e.g., law, healthcare, education) and societal
impact (e.g., inequity, misuse, economic and environmental impact, legal and
ethical considerations). Though foundation models are based on standard deep
learning and transfer learning, their scale results in new emergent
capabilities, and their effectiveness across so many tasks incentivizes
homogenization. Homogenization provides powerful leverage but demands caution,
as the defects of the foundation model are inherited by all the adapted models
downstream. Despite the impending widespread deployment of foundation models,
we currently lack a clear understanding of how they work, when they fail, and
what they are even capable of due to their emergent properties. To tackle these
questions, we believe much of the critical research on foundation models will
require deep interdisciplinary collaboration commensurate with their
fundamentally sociotechnical nature. | [
"cs.LG",
"cs.AI",
"cs.CY"
] |
Despite the progress within the last decades, weather forecasting is still a
challenging and computationally expensive task. Current satellite-based
approaches to predict thunderstorms are usually based on the analysis of the
observed brightness temperatures in different spectral channels and emit a
warning if a critical threshold is reached. Recent progress in data science
however demonstrates that machine learning can be successfully applied to many
research fields in science, especially in areas dealing with large datasets. We
therefore present a new approach to the problem of predicting thunderstorms
based on machine learning. The core idea of our work is to use the error of
two-dimensional optical flow algorithms applied to images of meteorological
satellites as a feature for machine learning models. We interpret this optical
flow error as an indication of convection potentially leading to thunderstorms
and lightning. To factor in spatial proximity we use various manual convolution
steps. We also consider effects such as the time of day or the geographic
location. We train different tree classifier models as well as a neural network
to predict lightning within the next few hours (called nowcasting in
meteorology) based on these features. In our evaluation section we compare the
predictive power of the different models and the impact of different features
on the classification result. Our results show a high accuracy of 96% for
predictions over the next 15 minutes which slightly decreases with increasing
forecast period but still remains above 83% for forecasts of up to five hours.
The high false positive rate of nearly 6% however needs further investigation
to allow for an operational use of our approach. | [
"cs.LG",
"stat.ML"
] |
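A minimal version of the optical-flow-error feature can be computed with OpenCV: estimate dense flow between consecutive satellite frames, warp one frame onto the other, and take the residual. The file names below are placeholders, and the full pipeline's convolution steps and auxiliary features are not shown.

import cv2
import numpy as np

# two consecutive satellite frames (hypothetical file names)
prev = cv2.imread("sat_t0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("sat_t1.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
h, w = prev.shape
gx, gy = np.meshgrid(np.arange(w), np.arange(h))
map_x = (gx + flow[..., 0]).astype(np.float32)
map_y = (gy + flow[..., 1]).astype(np.float32)

# Warp the later frame back onto the earlier one; where the flow cannot
# explain the change (e.g. rapidly developing convection), the residual is large.
warped = cv2.remap(curr, map_x, map_y, cv2.INTER_LINEAR)
flow_error = np.abs(prev.astype(np.float32) - warped.astype(np.float32))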
Transfer learning is a machine learning paradigm where the knowledge from one
task is utilized to resolve the problem in a related task. On the one hand, it
is conceivable that knowledge from one task could be useful for solving a
related problem. On the other hand, it is also recognized that if not executed
properly, transfer learning algorithms could in fact impair the learning
performance instead of improving it - commonly known as "negative transfer". In
this paper, we study the online transfer learning problems where the source
samples are given in an offline way while the target samples arrive
sequentially. We define the expected regret of the online transfer learning
problem and provide upper bounds on the regret using information-theoretic
quantities. We also obtain exact expressions for the bounds when the sample
size becomes large. Examples show that the derived bounds are accurate even for
small sample sizes. Furthermore, the obtained bounds give valuable insight on
the effect of prior knowledge for transfer learning in our formulation. In
particular, we formally characterize the conditions under which negative
transfer occurs. | [
"cs.LG",
"cs.IT",
"math.IT"
] |
The paper proposes a solution based on Generative Adversarial Network (GAN)
for solving jigsaw puzzles. The problem assumes that an image is cut into equal
square pieces, and asks to recover the image from the piece information.
Conventional jigsaw solvers often determine piece relationships based on the
piece boundaries, ignoring important semantic information. In this
paper, we propose JigsawGAN, a GAN-based self-supervised method for solving
jigsaw puzzles with unpaired images (with no prior knowledge of the initial
images). We design a multi-task pipeline that includes (1) a classification
branch to classify jigsaw permutations, and (2) a GAN branch to recover
features to images with the correct order. The classification branch is
constrained by the pseudo-labels generated according to the shuffled pieces.
The GAN branch concentrates on the image semantic information, among which the
generator produces the natural images to fool the discriminator with
reassembled pieces, while the discriminator distinguishes whether a given image
belongs to the synthesized or the real target manifold. These two branches are
connected by a flow-based warp that rearranges features into the correct
order according to the classification results. The proposed method can solve
jigsaw puzzles more efficiently by utilizing both semantic information and edge
information simultaneously. Qualitative and quantitative comparisons against
several leading prior methods demonstrate the superiority of our method. | [
"cs.CV"
] |
Few-shot classification aims to recognize unseen classes when presented with
only a small number of samples. We consider the problem of multi-domain
few-shot image classification, where unseen classes and examples come from
diverse data sources. This problem has seen growing interest and has inspired
the development of benchmarks such as Meta-Dataset. A key challenge in this
multi-domain setting is to effectively integrate the feature representations
from the diverse set of training domains. Here, we propose a Universal
Representation Transformer (URT) layer that meta-learns to leverage universal
features for few-shot classification by dynamically re-weighting and composing
the most appropriate domain-specific representations. In experiments, we show
that URT sets a new state-of-the-art result on Meta-Dataset. Specifically, it
achieves top-performance on the highest number of data sources compared to
competing methods. We analyze variants of URT and present a visualization of
the attention score heatmaps that sheds light on how the model performs
cross-domain generalization. Our code is available at
https://github.com/liulu112601/URT. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
The lack of interpretability remains a key barrier to the adoption of deep
models in many applications. In this work, we explicitly regularize deep models
so human users might step through the process behind their predictions in
little time. Specifically, we train deep time-series models so their
class-probability predictions have high accuracy while being closely modeled by
decision trees with few nodes. Using intuitive toy examples as well as medical
tasks for treating sepsis and HIV, we demonstrate that this new tree
regularization yields models that are easier for humans to simulate than
simpler L1 or L2 penalties without sacrificing predictive power. | [
"stat.ML",
"cs.LG"
] |
Finding a path free from obstacles that poses minimal risk is critical for
safe navigation. People who are sighted and people who are visually impaired
require navigation safety while walking on a sidewalk. In this research we
developed an assistive sidewalk navigation system by integrating sensory inputs
using reinforcement learning. We trained a Sidewalk Obstacle Avoidance Agent
(SOAA) through reinforcement learning in a simulated robotic environment. A
Sidewalk Obstacle Conversational Agent (SOCA) is built by training a natural
language conversation agent with real conversation data. The SOAA along with
SOCA was integrated in a prototype device called augmented guide (AG).
Empirical analysis showed that this prototype improved the obstacle avoidance
experience by about 5% over a base case of 81.29%. | [
"cs.CV"
] |
Global localization is an important and widely studied problem for many
robotic applications. Place recognition approaches can be exploited to solve
this task, e.g., in the autonomous driving field. While most vision-based
approaches match an image w.r.t. an image database, global visual localization
within LiDAR-maps remains fairly unexplored, even though the path toward high
definition 3D maps, produced mainly from LiDARs, is clear. In this work we
leverage Deep Neural Network (DNN) approaches to create a shared embedding
space between images and LiDAR-maps, allowing for image to 3D-LiDAR place
recognition. We trained a 2D and a 3D DNN that create embeddings, respectively
from images and from point clouds, that are close to each other whenever they
refer to the same place. An extensive experimental evaluation is presented to
assess the effectiveness of the approach w.r.t. different learning paradigms,
network architectures, and loss functions. All the evaluations have been
performed using the Oxford Robotcar Dataset, which encompasses a wide range of
weather and light conditions. | [
"cs.CV",
"cs.LG",
"cs.RO",
"eess.IV"
] |
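One common choice of loss for learning such a shared image/LiDAR embedding space is a cross-modal triplet loss with batch-hard negative mining, sketched below under the assumption that row i of both modalities describes the same place; the paper itself compares several learning paradigms and loss functions.

import torch
import torch.nn.functional as F

def cross_modal_triplet(img_emb, pc_emb, margin=0.5):
    # img_emb: (B, D) image embeddings; pc_emb: (B, D) point-cloud embeddings.
    img = F.normalize(img_emb, dim=1)
    pc = F.normalize(pc_emb, dim=1)
    dists = torch.cdist(img, pc)                       # (B, B) pairwise distances
    pos = dists.diag()                                 # matching image/LiDAR pairs
    masked = dists + 1e6 * torch.eye(dists.size(0), device=dists.device)
    hardest_neg = masked.min(dim=1).values             # batch-hard negative mining
    return F.relu(pos - hardest_neg + margin).mean()

loss = cross_modal_triplet(torch.randn(16, 256), torch.randn(16, 256))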
Conventionally, autoencoders are unsupervised representation learning tools.
In this work, we propose a novel discriminative autoencoder. Use of supervised
discriminative learning ensures that the learned representation is robust to
variations commonly encountered in image datasets. Using the basic
discriminating autoencoder as a unit, we build a stacked architecture aimed at
extracting relevant representation from the training data. The efficiency of
our feature extraction algorithm ensures a high classification accuracy with
even simple classification schemes like KNN (K-nearest neighbor). We
demonstrate the superiority of our model for representation learning by
conducting experiments on standard datasets for character/image recognition and
subsequent comparison with existing supervised deep architectures like class
sparse stacked autoencoder and discriminative deep belief network. | [
"cs.CV",
"cs.LG"
] |
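The basic discriminative autoencoder unit can be pictured as an autoencoder whose bottleneck also feeds a classifier, trained with a joint reconstruction-plus-classification loss; the learned code can then be handed to a simple classifier such as KNN. A minimal PyTorch sketch (layer sizes are arbitrary and the stacking step is omitted):

import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscriminativeAE(nn.Module):
    # An autoencoder whose bottleneck also feeds a classifier, so the code is
    # shaped jointly by reconstruction and supervised discrimination.
    def __init__(self, in_dim=784, hid=128, n_classes=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.dec = nn.Linear(hid, in_dim)
        self.cls = nn.Linear(hid, n_classes)

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), self.cls(z), z

model = DiscriminativeAE()
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
recon, logits, z = model(x)
loss = F.mse_loss(recon, x) + F.cross_entropy(logits, y)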
2D object detection in clean images is a well-studied topic, but its
vulnerability to adversarial attacks remains worrying. Existing work has
improved the robustness of object detectors through adversarial training, but
at the same time the average precision (AP) on clean images drops
significantly. In this paper, we propose that feature alignment of
intermediate layers can
improve clean AP and robustness in object detection. Further, on the basis of
adversarial training, we present two feature alignment modules:
Knowledge-Distilled Feature Alignment (KDFA) module and Self-Supervised Feature
Alignment (SSFA) module, which can guide the network to generate more effective
features. We conduct extensive experiments on PASCAL VOC and MS-COCO datasets
to verify the effectiveness of our proposed approach. The code of our
experiments is available at https://github.com/grispeut/Feature-Alignment.git. | [
"cs.CV"
] |
Bayesian inference is used extensively to infer and to quantify the
uncertainty in a field of interest from a measurement of a related field when
the two are linked by a physical model. Despite its many applications, Bayesian
inference faces challenges when inferring fields that have discrete
representations of large dimension, and/or have prior distributions that are
difficult to represent mathematically. In this manuscript we consider the use
of Generative Adversarial Networks (GANs) in addressing these challenges. A GAN
is a type of deep neural network equipped with the ability to learn the
distribution implied by multiple samples of a given field. Once trained on
these samples, the generator component of a GAN maps the iid components of a
low-dimensional latent vector to an approximation of the distribution of the
field of interest. In this work we demonstrate how this approximate
distribution may be used as a prior in a Bayesian update, and how it addresses
the challenges associated with characterizing complex prior distributions and
the large dimension of the inferred field. We demonstrate the efficacy of this
approach by applying it to the problem of inferring and quantifying uncertainty
in the initial temperature field in a heat conduction problem from a noisy
measurement of the temperature at a later time. | [
"stat.ML",
"cs.LG",
"eess.IV",
"physics.comp-ph"
] |
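A simplified version of using a trained generator as a prior is MAP estimation in the latent space, sketched below; the paper performs a full Bayesian update with uncertainty quantification, which this point estimate does not capture. Here `generator` and `forward_op` are assumed to be differentiable callables supplied by the user.

import torch

def map_estimate(generator, forward_op, y, sigma, latent_dim, steps=500):
    # MAP inference in the GAN's latent space: the prior on the field is the
    # pushforward of a standard normal through the (pre-trained) generator.
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        x = generator(z)                                  # candidate field
        misfit = ((forward_op(x) - y) ** 2).sum() / (2 * sigma ** 2)
        prior = 0.5 * (z ** 2).sum()                      # standard-normal latent prior
        (misfit + prior).backward()
        opt.step()
    return generator(z).detach()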
There have been numerous advances in reinforcement learning, but the
typically unconstrained exploration of the learning process prevents the
adoption of these methods in many safety critical applications. Recent work in
safe reinforcement learning uses idealized models to achieve their guarantees,
but these models do not easily accommodate the stochasticity or
high-dimensionality of real world systems. We investigate how prediction
provides a general and intuitive framework to constrain exploration, and show
how it can be used to safely learn intersection handling behaviors on an
autonomous vehicle. | [
"cs.LG",
"cs.AI",
"cs.RO",
"stat.ML"
] |
Stream learning refers to the ability to acquire and transfer knowledge
across a continuous stream of data without forgetting and without repeated
passes over the data. A common way to avoid catastrophic forgetting is to
intersperse new examples with replays of old examples stored as image pixels or
reproduced by generative models. Here, we considered stream learning in image
classification tasks and proposed a novel hypotheses-driven Augmented Memory
Network, which efficiently consolidates previous knowledge with a limited
number of hypotheses in the augmented memory and replays relevant hypotheses to
avoid catastrophic forgetting. The advantages of hypothesis-driven replay over
image pixel replay and generative replay are two-fold. First, hypothesis-based
knowledge consolidation avoids redundant information in the image pixel space
and makes memory usage more efficient. Second, hypotheses in the augmented
memory can be re-used for learning new tasks, improving generalization and
transfer learning ability. We evaluated our method on three stream learning
object recognition datasets. Our method performs comparably well or better than
SOTA methods, while offering more efficient memory usage. All source code and
data are publicly available at https://github.com/kreimanlab/AugMem. | [
"cs.CV",
"cs.AI"
] |
Computer vision, an important means for robots to perceive the external
environment, has received significant attention in recent years.
Discriminative Correlation Filter (DCF) based trackers have gained popularity
due to their efficiency; however, tracking in low-illumination environments is
a challenging problem, not yet successfully addressed in the literature. In
this work, we tackle these problems by introducing the Low-Illumination
Long-term Correlation Tracker (LLCT). First, fused features including only HOG
and Color Names are employed to boost tracking efficiency. Second, a standard
PCA dimensionality-reduction scheme is used in the translation and scale
estimation phases for acceleration. Third, a long-term correlation filter is
learned to maintain long-term memory. Finally, memory templates are updated at
intervals, and existing and initial templates are re-matched every few frames
to maintain template accuracy. Extensive experiments on the popular Object
Tracking Benchmark OTB-50 dataset demonstrate that the proposed tracker
significantly outperforms state-of-the-art trackers while achieving real-time
performance (33 FPS). In addition, the proposed approach can be easily
integrated into robot systems and runs at a favorable speed. The experimental
results show that the proposed tracker performs better in low-illumination
environments than general trackers. | [
"cs.CV"
] |
Abnormal driving behaviour is one of the leading causes of severe traffic
accidents endangering human life. Therefore, study on driving behaviour
surveillance has become essential to traffic security and public management. In
this paper, we conduct this promising research and employ a two-stream CNN
framework for video-based driving behaviour recognition, in which spatial
stream CNN captures appearance information from still frames, whilst temporal
stream CNN captures motion information with pre-computed optical flow
displacement between a few adjacent video frames. We investigate different
spatial-temporal fusion strategies to combine the intra frame static clues and
inter frame dynamic clues for final behaviour recognition. So as to validate
the effectiveness of the designed spatial-temporal deep learning based model,
we create a simulated driving behaviour dataset, containing 1237 videos with 6
different driving behaviours for recognition. Experimental results show that our
proposed method obtains noticeable performance improvements compared to the
existing methods. | [
"cs.CV",
"cs.LG"
] |
In recent years, developing a speech understanding system that maps a
waveform to structured data, such as intents and slots, without first
transcribing the speech to text has emerged as an interesting research problem.
This work proposes such a system with the additional constraint of designing a
system that has a small enough footprint to run on small micro-controllers and
embedded systems with minimal latency. Given a streaming input speech signal,
the proposed system can process it segment-by-segment without the need to have
the entire stream at the moment of processing. The proposed system is evaluated
on the publicly available Fluent Speech Commands dataset. Experiments show that
the proposed system yields state-of-the-art performance with the advantage of
low latency and a much smaller model when compared to other published works on
the same task. | [
"cs.CV"
] |
Deep Reinforcement Learning (deep RL) has made several breakthroughs in
recent years in applications ranging from complex control tasks in unmanned
vehicles to game playing. Despite their success, deep RL still lacks several
important capacities of human intelligence, such as transfer learning,
abstraction and interpretability. Deep Symbolic Reinforcement Learning (DSRL)
seeks to incorporate such capacities to deep Q-networks (DQN) by learning a
relevant symbolic representation prior to using Q-learning. In this paper, we
propose a novel extension of DSRL, which we call Symbolic Reinforcement
Learning with Common Sense (SRL+CS), offering a better balance between
generalization and specialization, inspired by principles of common sense when
assigning rewards and aggregating Q-values. Experiments reported in this paper
show that SRL+CS learns consistently faster than Q-learning and DSRL, achieving
also a higher accuracy. In the hardest case, where agents were trained in a
deterministic environment and tested in a random environment, SRL+CS achieves
nearly 100% average accuracy compared to DSRL's 70% and DQN's 50% accuracy. To
the best of our knowledge, this is the first case of near perfect zero-shot
transfer learning using Reinforcement Learning. | [
"cs.LG",
"cs.AI",
"stat.ML",
"I.2.6"
] |
Fetal brain magnetic resonance imaging (MRI) offers exquisite images of the
developing brain but is not suitable for anomaly screening. For this purpose,
ultrasound (US) is employed. While expert sonographers are adept at reading US images, MR
images are much easier for non-experts to interpret. Hence in this paper we
seek to produce images with MRI-like appearance directly from clinical US
images. Our own clinical motivation is to seek a way to communicate US findings
to patients or clinical professionals unfamiliar with US, but in medical image
analysis such a capability is potentially useful, for instance, for US-MRI
registration or fusion. Our model is self-supervised and end-to-end trainable.
Specifically, based on an assumption that the US and MRI data share a similar
anatomical latent space, we first utilise an extractor to determine shared
latent features, which are then used for data synthesis. Since paired data was
unavailable for our study (and rare in practice), we propose to enforce the
distributions to be similar instead of employing pixel-wise constraints, by
adversarial learning in both the image domain and latent space. Furthermore, we
propose an adversarial structural constraint to regularise the anatomical
structures between the two modalities during the synthesis. A cross-modal
attention scheme is proposed to leverage non-local spatial correlations. The
feasibility of the approach to produce realistic looking MR images is
demonstrated quantitatively and with a qualitative evaluation compared to real
fetal MR images. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Bayesian inference has great promise for the privacy-preserving analysis of
sensitive data, as posterior sampling automatically preserves differential
privacy, an algorithmic notion of data privacy, under certain conditions
(Dimitrakakis et al., 2014; Wang et al., 2015). While this one posterior sample
(OPS) approach elegantly provides privacy "for free," it is data inefficient in
the sense of asymptotic relative efficiency (ARE). We show that a simple
alternative based on the Laplace mechanism, the workhorse of differential
privacy, is as asymptotically efficient as non-private posterior inference,
under general assumptions. This technique also has practical advantages
including efficient use of the privacy budget for MCMC. We demonstrate the
practicality of our approach on a time-series analysis of sensitive military
records from the Afghanistan and Iraq wars disclosed by the Wikileaks
organization. | [
"cs.LG",
"cs.AI",
"cs.CR",
"stat.ML"
] |
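The Laplace mechanism used as the baseline alternative above is only a few lines of NumPy: add noise with scale sensitivity/epsilon to the released statistic.

import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    # Adding Laplace(sensitivity / epsilon) noise to a query with the given
    # L1 sensitivity yields an epsilon-differentially-private release.
    if rng is None:
        rng = np.random.default_rng()
    return value + rng.laplace(scale=sensitivity / epsilon)

data = np.random.rand(1000)                  # n values in [0, 1]
# The mean of n values in [0, 1] changes by at most 1/n if one record changes.
private_mean = laplace_mechanism(data.mean(), sensitivity=1 / len(data), epsilon=0.5)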
Color transfer is an image editing process that adjusts the colors of a
picture to match a target picture's color theme. A natural color transfer not
only matches the color styles but also prevents after-transfer artifacts due to
image compression, noise, and gradient smoothness change. The recently
discovered color homography theorem proves that colors across a change in
photometric viewing condition are related by a homography. In this paper, we
propose a color-homography-based color transfer decomposition which encodes
color transfer as a combination of chromaticity shift and shading adjustment. A
powerful form of shading adjustment is shown to be a global shading curve by
which the same shading homography can be applied elsewhere. Our experiments
show that the proposed color transfer decomposition provides a very close
approximation to many popular color transfer methods. The advantage of our
approach is that the learned color transfer can be applied to many other images
(e.g. other frames in a video), instead of a frame-to-frame basis. We
demonstrate two applications for color transfer enhancement and video color
grading re-application. This simple model of color transfer is also important
for future color transfer algorithm design. | [
"cs.CV"
] |
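The chromaticity-shift part of the decomposition can be approximated by fitting a 3x3 linear map between corresponding source and target RGB pixels by least squares, as sketched below; the full method also estimates a global shading curve, which this sketch omits. The learned map can then be reused on other images or video frames, as the abstract describes.

import numpy as np

def fit_chromaticity_shift(src_rgb, tgt_rgb):
    # src_rgb, tgt_rgb: (N, 3) corresponding pixels; find M with src @ M ~ tgt.
    M, *_ = np.linalg.lstsq(src_rgb, tgt_rgb, rcond=None)
    return M

def apply_transfer(img, M):
    # img: (H, W, 3) in [0, 1]; reuse the learned M on any other frame.
    out = img.reshape(-1, 3) @ M
    return np.clip(out, 0.0, 1.0).reshape(img.shape)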
Deep models trained in supervised mode have achieved remarkable success on a
variety of tasks. When labeled samples are limited, self-supervised learning
(SSL) is emerging as a new paradigm for making use of large amounts of
unlabeled samples. SSL has achieved promising performance on natural language
and image learning tasks. Recently, there is a trend to extend such success to
graph data using graph neural networks (GNNs). In this survey, we provide a
unified review of different ways of training GNNs using SSL. Specifically, we
categorize SSL methods into contrastive and predictive models. In either
category, we provide a unified framework for methods as well as how these
methods differ in each component under the framework. Our unified treatment of
SSL methods for GNNs sheds light on the similarities and differences of various
methods, setting the stage for developing new methods and algorithms. We also
summarize different SSL settings and the corresponding datasets used in each
setting. To facilitate methodological development and empirical comparison, we
develop a standardized testbed for SSL in GNNs, including implementations of
common baseline methods, datasets, and evaluation metrics. | [
"cs.LG"
] |
Fine-grained visual classification (FGVC) aims to distinguish the sub-classes
of the same category and its essential solution is to mine the subtle and
discriminative regions. Convolutional neural networks (CNNs), which employ the
cross entropy loss (CE-loss) as the loss function, show poor performance since
the model can only learn the most discriminative part and ignore other
meaningful regions. Some existing works try to solve this problem by mining
more discriminative regions with detection techniques or attention
mechanisms. However, most of them encounter the background noise problem when
trying to find more discriminative regions. In this paper, we address it in a
knowledge transfer learning manner. Multiple models are trained one by one, and
all previously trained models are regarded as teacher models to supervise the
training of the current one. Specifically, an orthogonal loss (OR-loss) is
proposed to encourage the network to find diverse and meaningful regions. In
addition, the first model is trained with only CE-Loss. Finally, all models'
outputs with complementary knowledge are combined together for the final
prediction result. We demonstrate the superiority of the proposed method and
obtain state-of-the-art (SOTA) performances on three popular FGVC datasets. | [
"cs.CV"
] |
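One plausible form of such an orthogonal loss (an assumption here, not the paper's exact definition) is to penalize the similarity between the current model's pooled feature and each previously trained teacher's feature, pushing the new model towards different discriminative regions:

import torch
import torch.nn.functional as F

def orthogonal_loss(f_student, teacher_feats):
    # f_student: (B, ...) features of the model being trained;
    # teacher_feats: list of (B, ...) features from earlier models.
    # Penalizing cosine similarity encourages the new model to attend to
    # regions complementary to those of the teachers (illustrative form).
    f = F.normalize(f_student.flatten(1), dim=1)
    loss = 0.0
    for t in teacher_feats:
        t = F.normalize(t.flatten(1), dim=1)
        loss = loss + (f * t).sum(dim=1).abs().mean()
    return loss / max(len(teacher_feats), 1)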
Human motion prediction is a challenging task due to the stochasticity and
aperiodicity of future poses. Recently, graph convolutional network has been
proven to be very effective to learn dynamic relations among pose joints, which
is helpful for pose prediction. On the other hand, one can abstract a human
pose recursively to obtain a set of poses at multiple scales. With the increase
of the abstraction level, the motion of the pose becomes more stable, which
benefits pose prediction too. In this paper, we propose a novel Multi-Scale
Residual Graph Convolution Network (MSR-GCN) for the human pose prediction
task in an end-to-end manner. The GCNs are used to extract features from fine to
coarse scale and then from coarse to fine scale. The extracted features at each
scale are then combined and decoded to obtain the residuals between the input
and target poses. Intermediate supervisions are imposed on all the predicted
poses, which enforces the network to learn more representative features. Our
proposed approach is evaluated on two standard benchmark datasets, i.e., the
Human3.6M dataset and the CMU Mocap dataset. Experimental results demonstrate
that our method outperforms the state-of-the-art approaches. Code and
pre-trained models are available at https://github.com/Droliven/MSRGCN. | [
"cs.CV"
] |
Efficient Reinforcement Learning usually takes advantage of demonstration or
good exploration strategy. By applying posterior sampling in model-free RL
under a Gaussian process (GP) hypothesis, we propose the Gaussian Process
Posterior Sampling Reinforcement Learning (GPPSTD) algorithm for continuous
state spaces, giving theoretical justifications and empirical results. We also
provide theoretical and empirical evidence that varied demonstrations can lower
expected uncertainty and benefit posterior-sampling exploration. In this way,
we combine demonstration and exploration to achieve more efficient
reinforcement learning. | [
"cs.LG",
"stat.ML"
] |
The first mobile camera phone was sold only 20 years ago, when taking
pictures with one's phone was an oddity, and sharing pictures online was
unheard of. Today, the smartphone is more camera than phone. How did this
happen? This transformation was enabled by advances in computational
photography: the science and engineering of making great images from
small-form-factor mobile cameras. Modern algorithmic and computing advances, including
machine learning, have changed the rules of photography, bringing to it new
modes of capture, post-processing, storage, and sharing. In this paper, we give
a brief history of mobile computational photography and describe some of the
key technological components, including burst photography, noise reduction, and
super-resolution. At each step, we may draw naive parallels to the human visual
system. | [
"cs.CV",
"eess.IV"
] |
We propose PiNet, a generalised differentiable attention-based pooling
mechanism for utilising graph convolution operations for graph level
classification. We demonstrate high sample efficiency and superior performance
over other graph neural networks in distinguishing isomorphic graph classes, as
well as competitive results with state of the art methods on standard
chemo-informatics datasets. | [
"cs.LG",
"cs.CV",
"stat.ML",
"I.5.1"
] |
We focus on an important yet challenging problem: using a 2D deep network to
deal with 3D segmentation for medical image analysis. Existing approaches
either applied multi-view planar (2D) networks or directly used volumetric (3D)
networks for this purpose, but both of them are not ideal: 2D networks cannot
capture 3D contexts effectively, and 3D networks are both memory-consuming and
less stable arguably due to the lack of pre-trained models.
In this paper, we bridge the gap between 2D and 3D using a novel approach
named Elastic Boundary Projection (EBP). The key observation is that, although
the object is a 3D volume, what we really need in segmentation is to find its
boundary which is a 2D surface. Therefore, we place a number of pivot points in
the 3D space, and for each pivot, we determine its distance to the object
boundary along a dense set of directions. This creates an elastic shell around
each pivot which is initialized as a perfect sphere. We train a 2D deep network
to determine whether each ending point falls within the object, and gradually
adjust the shell so that it converges to the actual shape of the
boundary and thus achieves the goal of segmentation. EBP allows boundary-based
segmentation without cutting a 3D volume into slices or patches, which stands
out from conventional 2D and 3D approaches. EBP achieves promising accuracy in
abdominal organ segmentation. Our code has been open-sourced at
https://github.com/twni2016/Elastic-Boundary-Projection. | [
"cs.CV"
] |
Regulators have signalled an interest in adopting explainable AI (XAI)
techniques to handle the diverse needs for model governance, operational
servicing, and compliance in the financial services industry. In this short
overview, we review the recent technical literature in XAI and argue that based
on our current understanding of the field, the use of XAI techniques in
practice necessitates a highly contextualized approach considering the specific
needs of stakeholders for particular business applications. | [
"cs.LG",
"cs.AI",
"cs.CY",
"68T27",
"I.2.6; J.4; K.5.2"
] |
Recent state-of-the-art semi-supervised learning (SSL) methods use a
combination of image-based transformations and consistency regularization as
core components. Such methods, however, are limited to simple transformations
such as traditional data augmentation or convex combinations of two images. In
this paper, we propose a novel learned feature-based refinement and
augmentation method that produces a varied set of complex transformations.
Importantly, these transformations also use information from both within-class
and across-class prototypical representations that we extract through
clustering. We use features already computed across iterations by storing them
in a memory bank, obviating the need for significant extra computation. These
transformations, combined with traditional image-based augmentation, are then
used as part of the consistency-based regularization loss. We demonstrate that
our method is comparable to current state of art for smaller datasets (CIFAR-10
and SVHN) while being able to scale up to larger datasets such as CIFAR-100 and
mini-Imagenet where we achieve significant gains over the state of art
(\textit{e.g.,} absolute 17.44\% gain on mini-ImageNet). We further test our
method on DomainNet, demonstrating better robustness to out-of-domain unlabeled
data, and perform rigorous ablations and analysis to validate the method. | [
"cs.CV",
"cs.LG"
] |
Pre-training is a dominant paradigm in computer vision. For example,
supervised ImageNet pre-training is commonly used to initialize the backbones
of object detection and segmentation models. He et al., however, show a
surprising result that ImageNet pre-training has limited impact on COCO object
detection. Here we investigate self-training as another method to utilize
additional data on the same setup and contrast it against ImageNet
pre-training. Our study reveals the generality and flexibility of self-training
with three additional insights: 1) stronger data augmentation and more labeled
data further diminish the value of pre-training, 2) unlike pre-training,
self-training is always helpful when using stronger data augmentation, in both
low-data and high-data regimes, and 3) in the case that pre-training is
helpful, self-training improves upon pre-training. For example, on the COCO
object detection dataset, pre-training helps when we use one fifth of the
labeled data, and hurts accuracy when we use all labeled data. Self-training,
on the other hand, shows positive improvements from +1.3 to +3.4 AP across all
dataset sizes. In other words, self-training works well exactly on the same
setup that pre-training does not work (using ImageNet to help COCO). On the
PASCAL segmentation dataset, which is a much smaller dataset than COCO, though
pre-training does help significantly, self-training improves upon the
pre-trained model. On COCO object detection, we achieve 54.3 AP, an improvement
of +1.5 AP over the strongest SpineNet model. On PASCAL segmentation, we achieve
90.5 mIOU, an improvement of +1.5% mIOU over the previous state-of-the-art
result by DeepLabv3+. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
Solving partially-observable Markov decision processes (POMDPs) is critical
when applying deep reinforcement learning (DRL) to real-world robotics
problems, where agents have an incomplete view of the world. We present graph
convolutional memory (GCM) for solving POMDPs using deep reinforcement
learning. Unlike recurrent neural networks (RNNs) or transformers, GCM embeds
domain-specific priors into the memory recall process via a knowledge graph. By
encapsulating priors in the graph, GCM adapts to specific tasks but remains
applicable to any DRL task. Using graph convolutions, GCM extracts hierarchical
graph features, analogous to image features in a convolutional neural network
(CNN). We show GCM outperforms long short-term memory (LSTM), gated
transformers for reinforcement learning (GTrXL), and differentiable neural
computers (DNCs) on control, long-term non-sequential recall, and 3D navigation
tasks while using significantly fewer parameters. | [
"cs.LG",
"cs.AI",
"cs.RO"
] |
Nowadays, full face synthesis and partial face manipulation by virtue of the
generative adversarial networks (GANs) have raised wide public concerns. In the
multi-media forensics area, detecting and ultimately locating the image forgery
have become imperative. We investigated the architecture of existing GAN-based
face manipulation methods and observed that the imperfection of the upsampling
methods therein could serve as an important asset for GAN-synthesized
fake image detection and forgery localization. Based on this basic
observation, we have proposed a novel approach to obtain high localization
accuracy, at full resolution, on manipulated facial images. To the best of our
knowledge, this is the very first attempt to solve the GAN-based fake
localization problem with a gray-scale fakeness prediction map that preserves
more information of fake regions. To improve the universality of FakeLocator
across multifarious facial attributes, we introduce an attention mechanism to
guide the training of the model. Experimental results on the CelebA and FFHQ
databases with seven different state-of-the-art GAN-based face generation
methods show the effectiveness of our method. Compared with the baseline, our
method performs two times better on various metrics. Moreover, the proposed
method is robust against various real-world facial image degradations such as
JPEG compression, low-resolution, noise, and blur. | [
"cs.CV",
"cs.LG"
] |
We introduce the vine copula autoencoder (VCAE), a flexible generative model
for high-dimensional distributions built in a straightforward three-step
procedure.
First, an autoencoder (AE) compresses the data into a lower dimensional
representation. Second, the multivariate distribution of the encoded data is
estimated with vine copulas. Third, a generative model is obtained by combining
the estimated distribution with the decoder part of the AE. As such, the
proposed approach can transform any already trained AE into a flexible
generative model at a low computational cost. This is an advantage over
existing generative models such as adversarial networks and variational AEs
which can be difficult to train and can impose strong assumptions on the latent
space. Experiments on MNIST, Street View House Numbers and Large-Scale
CelebFaces Attributes datasets show that VCAEs can achieve competitive results
to standard baselines. | [
"stat.ML",
"cs.CV",
"cs.LG"
] |
Despite the explosion of interest in healthcare AI research, the
reproducibility and benchmarking of those research works are often limited due
to the lack of standard benchmark datasets and diverse evaluation metrics. To
address this reproducibility challenge, we develop PyHealth, an open-source
Python toolbox for developing various predictive models on healthcare data.
PyHealth consists of a data preprocessing module, a predictive modeling
module, and an evaluation module. The target users of PyHealth are both computer science
researchers and healthcare data scientists. With PyHealth, they can conduct
complex machine learning pipelines on healthcare datasets with fewer than ten
lines of code. The data preprocessing module enables the transformation of
complex healthcare datasets such as longitudinal electronic health records,
medical images, continuous signals (e.g., electrocardiogram), and clinical
notes into machine learning friendly formats. The predictive modeling module
provides more than 30 machine learning models, including established ensemble
trees and deep neural network-based approaches, via a unified but extendable
API designed for both researchers and practitioners. The evaluation module
provides various evaluation strategies (e.g., cross-validation and
train-validation-test split) and predictive model metrics.
With robustness and scalability in mind, best practices such as unit testing,
continuous integration, code coverage, and interactive examples are introduced
in the library's development. PyHealth can be installed through the Python
Package Index (PyPI) or from https://github.com/yzhao062/PyHealth. | [
"cs.LG",
"cs.AI"
] |
Random fields have remained a topic of great interest over past decades for
the purpose of structured inference, especially for problems such as image
segmentation. The local nodal interactions commonly used in such models often
suffer from the short-boundary bias problem, which is tackled primarily through
the incorporation of long-range nodal interactions. However, computational
tractability becomes a significant issue when incorporating such long-range
nodal interactions, particularly when a large number of them (e.g., in
fully-connected random fields) are modeled.
In this work, we introduce a generalized random field framework based around
the concept of stochastic cliques, which addresses the issue of computational
tractability when using fully-connected random fields by stochastically forming
a sparse representation of the random field. The proposed framework allows for
efficient structured inference using fully-connected random fields without any
restrictions on the potential functions that can be utilized. Several
realizations of the proposed framework using graph cuts are presented and
evaluated, and experimental results demonstrate that the proposed framework can
provide competitive performance for the purpose of image segmentation when
compared to existing fully-connected and principled deep random field
frameworks. | [
"cs.CV"
] |
Decay Replay Mining is a deep learning method that utilizes process model
notations to predict the next event. However, this method does not intertwine
the neural network with the structure of the process model to its full extent.
This paper proposes an approach to further interlock the process model of Decay
Replay Mining with its neural network for next event prediction. The approach
uses a masking layer which is initialized based on the reachability graph of
the process model. Additionally, modifications to the neural network
architecture are proposed to increase the predictive performance. Experimental
results demonstrate the value of the approach and underscore the importance of
discovering precise and generalized process models. | [
"cs.LG"
] |
We present a way to use Topological Data Analysis (TDA) for machine learning
tasks on grayscale images. We apply persistent homology to generate a wide
range of topological features using a point cloud obtained from an image, its
natural grayscale filtration, and different filtrations defined on the
binarized image. We show that this topological machine learning pipeline can be
used as a highly relevant dimensionality reduction by applying it to the MNIST
digits dataset. We conduct a feature selection and study their correlations
while providing an intuitive interpretation of their importance, which is
relevant in both machine learning and TDA. Finally, we show that we can
classify digit images while reducing the size of the feature set by a factor of 5
compared to the grayscale pixel value features and maintain similar accuracy. | [
"cs.LG",
"math.AT",
"stat.ML"
] |
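The point-cloud branch of this pipeline can be reproduced in a few lines with the `ripser` package (pip install ripser): treat foreground pixel coordinates as a point cloud, compute persistence diagrams, and summarize them into features. The lifetime-sum feature below is one simple choice, not the paper's full feature set, and the random array stands in for a binarized MNIST digit.

import numpy as np
from ripser import ripser

img = np.random.rand(28, 28) > 0.7         # stand-in for a binarized MNIST digit
points = np.argwhere(img).astype(float)    # foreground pixels as a 2D point cloud

dgms = ripser(points, maxdim=1)["dgms"]    # persistence diagrams for H0 and H1
feats = []
for d in dgms:
    d = d[np.isfinite(d[:, 1])]            # drop the infinite H0 bar
    feats.append(float(np.sum(d[:, 1] - d[:, 0])))   # total lifetime per dimension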
In this work we present a novel approach for single depth map
super-resolution. Modern consumer depth sensors, especially Time-of-Flight
sensors, produce dense depth measurements, but are affected by noise and have a
low lateral resolution. We propose a method that combines the benefits of
recent advances in machine learning based single image super-resolution, i.e.
deep convolutional networks, with a variational method to recover accurate
high-resolution depth maps. In particular, we integrate a variational method
that models the piecewise affine structures apparent in depth data via an
anisotropic total generalized variation regularization term on top of a deep
network. We call our method ATGV-Net and train it end-to-end by unrolling the
optimization procedure of the variational method. To train deep networks, a
large corpus of training data with accurate ground-truth is required. We
demonstrate that it is feasible to train our method solely on synthetic data
that we generate in large quantities for this task. Our evaluations show that
we achieve state-of-the-art results on three different benchmarks, as well as
on a challenging Time-of-Flight dataset, all without utilizing an additional
intensity image as guidance. | [
"cs.CV"
] |
While the Transformer architecture has become the de-facto standard for
natural language processing tasks, its applications to computer vision remain
limited. In vision, attention is either applied in conjunction with
convolutional networks, or used to replace certain components of convolutional
networks while keeping their overall structure in place. We show that this
reliance on CNNs is not necessary and a pure transformer applied directly to
sequences of image patches can perform very well on image classification tasks.
When pre-trained on large amounts of data and transferred to multiple mid-sized
or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision
Transformer (ViT) attains excellent results compared to state-of-the-art
convolutional networks while requiring substantially fewer computational
resources to train. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
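The patch-sequence interface of a ViT is captured by a patch-embedding layer: a strided convolution that projects non-overlapping patches, plus a class token and learned position embeddings. A minimal PyTorch sketch with the usual ViT-Base sizes (the Transformer encoder itself is omitted):

import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    # Split an image into non-overlapping P x P patches, project each patch
    # linearly, then prepend a class token and add position embeddings --
    # the "sequence of image patches" a ViT feeds to a plain Transformer.
    def __init__(self, img=224, patch=16, dim=768):
        super().__init__()
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, (img // patch) ** 2 + 1, dim))

    def forward(self, x):                                 # x: (B, 3, H, W)
        t = self.proj(x).flatten(2).transpose(1, 2)       # (B, N, dim)
        cls = self.cls.expand(x.size(0), -1, -1)
        return torch.cat([cls, t], dim=1) + self.pos

tokens = PatchEmbed()(torch.randn(2, 3, 224, 224))        # (2, 197, 768)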
This paper evaluates adaptive Q-learning (AQL) and single-partition adaptive
Q-learning (SPAQL), two algorithms for efficient model-free episodic
reinforcement learning (RL), in two classical control problems (Pendulum and
Cartpole). AQL adaptively partitions the state-action space of a Markov
decision process (MDP), while learning the control policy, i.e., the mapping
from states to actions. The main difference between AQL and SPAQL is that the
latter learns time-invariant policies, where the mapping from states to actions
does not depend explicitly on the time step. This paper also proposes the SPAQL
with terminal state (SPAQL-TS), an improved version of SPAQL tailored for the
design of regulators for control problems. The time-invariant policies are
shown to result in a better performance than the time-variant ones in both
problems studied. These algorithms are particularly suited to RL problems where
the action space is finite, as is the case with the Cartpole problem. SPAQL-TS
solves the OpenAI Gym Cartpole problem, while also displaying a higher sample
efficiency than trust region policy optimization (TRPO), a standard RL
algorithm for solving control tasks. Moreover, the policies learned by SPAQL
are interpretable, while TRPO policies are typically encoded as neural
networks, and therefore hard to interpret. Yielding interpretable policies
while being sample-efficient are the major advantages of SPAQL. | [
"cs.LG"
] |
Patch-based attacks introduce a perceptible but localized change to the input
that induces misclassification. A limitation of current patch-based black-box
attacks is that they perform poorly for targeted attacks, and even for the less
challenging non-targeted scenarios, they require a large number of queries. Our
proposed PatchAttack is query efficient and can break models for both targeted
and non-targeted attacks. PatchAttack induces misclassifications by
superimposing small textured patches on the input image. We parametrize the
appearance of these patches by a dictionary of class-specific textures. This
texture dictionary is learned by clustering Gram matrices of feature
activations from a VGG backbone. PatchAttack optimizes the position and texture
parameters of each patch using reinforcement learning. Our experiments show
that PatchAttack achieves > 99% success rate on ImageNet for a wide range of
architectures, while only manipulating 3% of the image for non-targeted attacks
and 10% on average for targeted attacks. Furthermore, we show that PatchAttack
circumvents state-of-the-art adversarial defense methods successfully. | [
"cs.CV"
] |
Blind face inpainting refers to the task of reconstructing visual contents
without explicitly indicating the corrupted regions in a face image.
Inherently, this task faces two challenges: (1) how to detect various mask
patterns of different shapes and contents; (2) how to restore visually
plausible and pleasing contents in the masked regions. In this paper, we
propose a novel two-stage blind face inpainting method named Frequency-guided
Transformer and Top-Down Refinement Network (FT-TDR) to tackle these
challenges. Specifically, we first use a transformer-based network to detect
the corrupted regions to be inpainted as masks by modeling the relation among
different patches. We also exploit the frequency modality as complementary
information for improved detection results and capture the local contextual
incoherence to enhance boundary consistency. Then a top-down refinement network
is proposed to hierarchically restore features at different levels and generate
contents that are semantically consistent with the unmasked face regions.
Extensive experiments demonstrate that our method outperforms current
state-of-the-art blind and non-blind face inpainting methods qualitatively and
quantitatively. | [
"cs.CV"
] |
Text-based image captioning (TextCap) which aims to read and reason images
with texts is crucial for a machine to understand a detailed and complex scene
environment, considering that texts are omnipresent in daily life. This task,
however, is very challenging because an image often contains complex texts and
visual information that is hard to be described comprehensively. Existing
methods attempt to extend the traditional image captioning methods to solve
this task, which focus on describing the overall scene of images by one global
caption. This is infeasible because the complex text and visual information
cannot be described well within one caption. To resolve this difficulty, we
seek to generate multiple captions that accurately describe different parts of
an image in detail. To achieve this purpose, there are three key challenges: 1)
it is hard to decide which parts of the texts of images to copy or paraphrase;
2) it is non-trivial to capture the complex relationship between diverse texts
in an image; 3) how to generate multiple captions with diverse content is still
an open problem. To conquer these, we propose a novel Anchor-Captioner method.
Specifically, we first find the important tokens which are supposed to be paid
more attention to and consider them as anchors. Then, for each chosen anchor,
we group its relevant texts to construct the corresponding anchor-centred graph
(ACG). Last, based on different ACGs, we conduct multi-view caption generation
to improve the content diversity of generated captions. Experimental results
show that our method not only achieves SOTA performance but also generates
diverse captions to describe images. | [
"cs.CV"
] |
Bidirectional mapping-based generalized zero-shot learning (GZSL) methods
rely on the quality of synthesized features to recognize seen and unseen data.
Therefore, learning a joint distribution of seen-unseen domains and preserving
domain distinction is crucial for these methods. However, existing methods only
learn the underlying distribution of seen data, although unseen class semantics
are available in the GZSL problem setting. Most methods neglect retaining
domain distinction and use the learned distribution to recognize seen and
unseen data. Consequently, they do not perform well. In this work, we utilize
the available unseen class semantics alongside seen class semantics and learn
joint distribution through a strong visual-semantic coupling. We propose a
bidirectional mapping coupled generative adversarial network (BMCoGAN) by
extending the coupled generative adversarial network into a dual-domain
learning bidirectional mapping model. We further integrate a Wasserstein
generative adversarial optimization to supervise the joint distribution
learning. We design a loss optimization for retaining domain distinctive
information in the synthesized features and reducing bias towards seen classes,
which pushes synthesized seen features towards real seen features and pulls
synthesized unseen features away from real seen features. We evaluate BMCoGAN
on benchmark datasets and demonstrate its superior performance against
contemporary methods. | [
"cs.CV"
] |
Standard reinforcement learning (RL) aims to find an optimal policy that
identifies the best action for each state. However, in healthcare settings,
many actions may be near-equivalent with respect to the reward (e.g.,
survival). We consider an alternative objective -- learning set-valued policies
to capture near-equivalent actions that lead to similar cumulative rewards. We
propose a model-free algorithm based on temporal difference learning and a
near-greedy heuristic for action selection. We analyze the theoretical
properties of the proposed algorithm, provide optimality guarantees, and
demonstrate our approach on simulated environments and a real clinical task.
Empirically, the proposed algorithm exhibits good convergence properties and
discovers meaningful near-equivalent actions. Our work provides theoretical, as
well as practical, foundations for clinician/human-in-the-loop decision making,
in which humans (e.g., clinicians, patients) can incorporate additional
knowledge (e.g., side effects, patient preference) when selecting among
near-equivalent actions. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
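To make the set-valued idea above concrete, here is a minimal sketch of a near-greedy action-selection rule: instead of returning the single argmax action, it returns every action whose estimated value is within a slack of the best one. The particular relative threshold is an illustrative assumption, not the paper's exact criterion.

```python
import numpy as np

def near_greedy_action_set(q_values, zeta=0.05):
    """Return the set of near-equivalent actions for one state.

    q_values: (num_actions,) array of estimated action values.
    zeta:     slack fraction; actions within zeta of the best value
              (relative to the value range) are kept.
    """
    q_max, q_min = q_values.max(), q_values.min()
    threshold = q_max - zeta * (q_max - q_min)
    return np.flatnonzero(q_values >= threshold)   # indices of kept actions
```

A clinician could then choose among the returned actions using side information (e.g., side effects or patient preference) that the reward does not capture.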
Combining abstract, symbolic reasoning with continuous neural reasoning is a
grand challenge of representation learning. As a step in this direction, we
propose a new architecture, called neural equivalence networks, for the problem
of learning continuous semantic representations of algebraic and logical
expressions. These networks are trained to represent semantic equivalence, even
of expressions that are syntactically very different. The challenge is that
semantic representations must be computed in a syntax-directed manner, because
semantics is compositional, but at the same time, small changes in syntax can
lead to very large changes in semantics, which can be difficult for continuous
neural architectures. We perform an exhaustive evaluation on the task of
checking equivalence over a highly diverse class of symbolic algebraic and
Boolean expression types, showing that our model significantly outperforms
existing architectures. | [
"cs.LG",
"cs.AI"
] |
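The phrase "computed in a syntax-directed manner" above means the representation of an expression is built bottom-up from the representations of its subexpressions. The sketch below illustrates only that recursive structure, with untrained, randomly initialized parameters; the operator set, dimensions, and normalization are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16
# One hypothetical mixing matrix per operator; a real neural
# equivalence network would learn these parameters end-to-end.
PARAMS = {op: rng.standard_normal((DIM, 2 * DIM)) * 0.1 for op in ("+", "*", "&", "|")}
LEAF = {v: rng.standard_normal(DIM) for v in ("a", "b", "c")}

def encode(expr):
    """Syntax-directed encoding: an expression's vector is computed
    bottom-up from its children's vectors."""
    if isinstance(expr, str):                 # leaf variable
        v = LEAF[expr]
    else:
        op, left, right = expr                # internal node
        child = np.concatenate([encode(left), encode(right)])
        v = np.tanh(PARAMS[op] @ child)
    return v / np.linalg.norm(v)              # keep vectors on the unit sphere

# Cosine similarity of two syntactically different expressions:
print(encode(("+", "a", "b")) @ encode(("+", "b", "a")))
```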
Time Series Classification (TSC) has been an important and challenging task
in data mining, especially on multivariate time series and multi-view time
series data sets. Meanwhile, transfer learning has been widely applied in
computer vision and natural language processing applications to improve deep
neural networks' generalization capabilities. However, very few previous works
have applied transfer learning frameworks to time series mining problems.
In particular, the technique of measuring similarities between the source and
target domains based on dynamic representations, such as density estimation
with importance sampling, has never been combined with a transfer learning
framework. In this paper, we first proposed a general adaptive transfer
learning framework for multi-view time series data, which is able to store
inter-view importance values during knowledge transfer. Next, we represented
inter-view importance through time series similarity measurements and
approximated the posterior distribution in latent space for importance sampling
via density estimation techniques. We then computed the matrix norm of the
sampled importance values, which controls the degree of knowledge transfer in
the pre-training process. We further evaluated our work, applied it to
many other time series classification tasks, and observed that our architecture
maintained desirable generalization ability. Finally, we concluded that our
framework could be combined with deep learning techniques to achieve
significant model performance improvements. | [
"cs.LG",
"stat.ML"
] |
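The density-estimation-with-importance-sampling step described above can be sketched as follows: estimate per-view source and target densities, take their ratio at the source samples as importance weights, and summarize the weight matrix with a norm. Using Gaussian KDE, equal sample counts per view, and the Frobenius norm are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def inter_view_importance(source_views, target_views):
    """Estimate per-view importance weights via density ratios.

    source_views, target_views: lists of (n_samples, n_features) arrays,
    one entry per view, assumed already projected to a latent space and
    assumed to share the same sample count across views.
    """
    weights = []
    for src, tgt in zip(source_views, target_views):
        p_src = gaussian_kde(src.T)   # density estimate of the source view
        p_tgt = gaussian_kde(tgt.T)   # density estimate of the target view
        # Importance-sampling ratio evaluated at the source samples.
        ratio = p_tgt(src.T) / np.maximum(p_src(src.T), 1e-12)
        weights.append(ratio)
    W = np.stack(weights)             # (n_views, n_samples) importance matrix
    return W, np.linalg.norm(W)       # the norm controls transfer strength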
Self-supervised learning, which learns by constructing artificial labels
given only the input signals, has recently gained considerable attention for
learning representations with unlabeled datasets, i.e., learning without any
human-annotated supervision. In this paper, we show that such a technique can
be used to significantly improve the model accuracy even under fully-labeled
datasets. Our scheme trains the model to learn both original and
self-supervised tasks, but is different from conventional multi-task learning
frameworks that optimize the summation of their corresponding losses. Our main
idea is to learn a single unified task with respect to the joint distribution
of the original and self-supervised labels, i.e., we augment original labels
via self-supervision of input transformation. This simple yet effective
approach makes models easier to train by relaxing a certain invariance
constraint when learning the original and self-supervised tasks
simultaneously. It also enables aggregated inference, which combines the
predictions from different augmentations to improve the prediction accuracy.
Furthermore, we propose a novel knowledge transfer technique, which we refer to
as self-distillation, that has the effect of the aggregated inference in a
single (faster) inference. We demonstrate the large accuracy improvement and
wide applicability of our framework on various fully-supervised settings, e.g.,
the few-shot and imbalanced classification scenarios. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
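The joint-label construction above is easy to make concrete with rotation as the self-supervised transformation: each image is rotated four ways and the joint label indexes the (class, rotation) pair, giving NUM_CLASSES x NUM_ROTATIONS classes. Below is a minimal sketch; the exact head and loss wiring of the paper's framework is not shown, and the helper name is hypothetical.

```python
import torch

NUM_CLASSES = 10     # original labels
NUM_ROTATIONS = 4    # self-supervised transformations: 0/90/180/270 degrees

def augment_with_joint_labels(images, labels):
    """Create the joint label space of original x self-supervised labels.

    images: (B, C, H, W) tensor; labels: (B,) tensor of class indices.
    Returns four rotated copies of the batch with joint class indices in
    [0, NUM_CLASSES * NUM_ROTATIONS).
    """
    rotated, joint = [], []
    for r in range(NUM_ROTATIONS):
        rotated.append(torch.rot90(images, k=r, dims=(2, 3)))  # rotate batch
        joint.append(labels * NUM_ROTATIONS + r)               # joint class index
    return torch.cat(rotated), torch.cat(joint)
```

Aggregated inference then averages the class scores recovered from all four rotated views of a test image.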
Noisy 3D point clouds arise in many applications. They may be due to errors
when constructing a 3D model from images or simply to imprecise depth sensors.
Point clouds can be given geometrical structure using graphs created from the
similarity information between points. This paper introduces a technique that
uses this graph structure and convex optimization methods to denoise 3D point
clouds. A short discussion presents how those methods naturally generalize to
time-varying inputs such as 3D point cloud time series. | [
"cs.CV",
"cs.GR",
"I.5.4"
] |
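One simple instance of the graph-based convex formulation described above is graph Tikhonov smoothing: minimize ||X - Y||_F^2 + lam * tr(X^T L X), where Y holds the noisy points and L is the Laplacian of a similarity graph, whose minimizer solves (I + lam * L) X = Y. The sketch below uses this instance; the kNN graph construction and unit edge weights are illustrative assumptions, not the paper's exact method.

```python
import numpy as np
from scipy.spatial import cKDTree

def denoise_point_cloud(points, k=8, lam=0.5):
    """Denoise an (n, 3) point cloud by graph Tikhonov smoothing.

    Builds a k-nearest-neighbour graph over the points, forms its
    combinatorial Laplacian L, and solves (I + lam * L) X = Y.
    """
    n = len(points)
    _, idx = cKDTree(points).query(points, k=k + 1)  # self + k neighbours
    W = np.zeros((n, n))
    for i, nbrs in enumerate(idx):
        W[i, nbrs[1:]] = 1.0                         # skip the self-match
    W = np.maximum(W, W.T)                           # symmetrize the graph
    L = np.diag(W.sum(axis=1)) - W                   # combinatorial Laplacian
    return np.linalg.solve(np.eye(n) + lam * L, points)
```

For a time series of point clouds, the same objective extends naturally by adding a temporal coupling term between consecutive frames.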