text | label
---|---
Virtually all state-of-the-art methods for training supervised machine
learning models are variants of SGD enhanced with a number of additional
tricks, such as minibatching, momentum, and adaptive stepsizes. One of the
tricks that works so well in practice that it is used as default in virtually
all widely used machine learning software is {\em random reshuffling (RR)}.
However, the practical benefits of RR have until very recently eluded
satisfactory theoretical explanation. Motivated by recent developments due to
Mishchenko, Khaled and Richt\'{a}rik (2020), in this work we
provide the first analysis of SVRG under Random Reshuffling (RR-SVRG) for
general finite-sum problems. First, we show that RR-SVRG converges linearly
with the rate $\mathcal{O}(\kappa^{3/2})$ in the strongly-convex case, and can
be improved further to $\mathcal{O}(\kappa)$ in the big data regime (when $n >
\mathcal{O}(\kappa)$), where $\kappa$ is the condition number. This improves
upon the previous best rate $\mathcal{O}(\kappa^2)$ known for a variance
reduced RR method in the strongly-convex case due to Ying, Yuan and Sayed
(2020). Second, we obtain the first sublinear rate for general convex problems.
Third, we establish similar fast rates for Cyclic-SVRG and Shuffle-Once-SVRG.
Finally, we develop and analyze a more general variance reduction scheme for
RR, which allows for less frequent updates of the control variate. We
corroborate our theoretical results with suitably chosen experiments on
synthetic and real datasets. | [
"cs.LG",
"cs.AI",
"math.OC"
] |
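The RR-SVRG scheme described above admits a compact implementation. Below is a minimal NumPy sketch; the function interface, the snapshot frequency (one refresh per epoch), and the constant stepsize are illustrative assumptions, not the authors' exact specification.

```python
import numpy as np

def rr_svrg(grad_i, x0, n, lr, epochs, rng=None):
    """Minimal sketch of SVRG under Random Reshuffling (RR-SVRG).

    grad_i(x, i) returns the gradient of the i-th component f_i at x.
    A fresh random permutation is drawn every epoch, and the control
    variate (full gradient at the snapshot point) is refreshed once per
    epoch. Hyperparameters here are illustrative assumptions.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    x = x0.copy()
    for _ in range(epochs):
        y = x.copy()                                   # snapshot point
        full_grad = np.mean([grad_i(y, i) for i in range(n)], axis=0)
        for i in rng.permutation(n):                   # random reshuffling
            # variance-reduced stochastic gradient
            g = grad_i(x, i) - grad_i(y, i) + full_grad
            x -= lr * g
    return x
```

For Shuffle-Once-SVRG one would draw the permutation a single time before the outer loop, and for Cyclic-SVRG one would iterate over `range(n)` in fixed order.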
Reconfiguration demand is increasing due to frequent requirement changes for
manufacturing systems. Recent approaches investigate feasible configuration
alternatives and select the optimal one among them. This relies on processes
whose behavior does not depend on, e.g., the production sequence. However,
when machine learning is used, a component's behavior depends on the specifics
of the process, requiring additional concepts to successfully conduct
reconfiguration management. Therefore, we propose to enhance comprehensive
reconfiguration management with transfer learning. This provides
the ability to assess the machine learning dependent behavior of the different
CPPS configurations with reduced effort and further assists the recommissioning
of the chosen one. A real cyber-physical production system from the discrete
manufacturing domain is utilized to demonstrate the aforementioned proposal. | [
"cs.LG",
"cs.AI",
"cs.SE",
"cs.SY",
"eess.SY"
] |
The majority of existing color naming methods focuses on the eleven basic
color terms of the English language. However, in many applications, different
sets of color names are used for the accurate description of objects. Labeling
data to learn these domain-specific color names is an expensive and laborious
task. Therefore, in this article we aim to learn color names from weakly
labeled data. For this purpose, we add an attention branch to the color naming
network. The attention branch is used to modulate the pixel-wise color naming
predictions of the network. In experiments, we illustrate that the attention
branch correctly identifies the relevant regions. Furthermore, we show that our
method obtains state-of-the-art results for pixel-wise and image-wise
classification on the EBAY dataset and is able to learn color names for various
domains. | [
"cs.CV"
] |
Generation of stroke-based non-photorealistic imagery is an important
problem in the computer vision community. As an endeavor in this direction,
substantial recent research efforts have been focused on teaching machines "how
to paint", in a manner similar to a human painter. However, the applicability
of previous methods has been limited to datasets with little variation in
position, scale and saliency of the foreground object. As a consequence, we
find that these methods struggle to cover the granularity and diversity
possessed by real-world images. To this end, we propose a Semantic Guidance
pipeline that 1) uses a bi-level painting procedure for learning the
distinction between foreground and background brush strokes at training time;
2) introduces invariance to the position and scale of the foreground object
through a neural alignment model, which combines object localization and
spatial transformer networks in an end-to-end manner to zoom into a particular
semantic instance; and 3) amplifies the distinguishing features of the
in-focus object by maximizing a novel guided-backpropagation-based focus reward.
The proposed agent does not require any supervision on human stroke-data and
successfully handles variations in foreground object attributes, thus,
producing much higher quality canvases for the CUB-200 Birds and Stanford
Cars-196 datasets. Finally, we demonstrate the further efficacy of our method
on complex datasets with multiple foreground object instances by evaluating an
extension of our method on the challenging Virtual-KITTI dataset. Source code
and models are available at https://github.com/1jsingh/semantic-guidance. | [
"cs.CV",
"cs.CG",
"cs.LG"
] |
Training neural networks with binary weights and activations is a challenging
problem due to the lack of gradients and difficulty of optimization over
discrete weights. Many successful experimental results have been achieved with
empirical straight-through (ST) approaches, proposing a variety of ad-hoc rules
for propagating gradients through non-differentiable activations and updating
discrete weights. At the same time, ST methods can be rigorously derived as
estimators in the stochastic binary network (SBN) model with Bernoulli weights.
We advance these derivations to a more complete and systematic study. We
analyze properties and estimation accuracy, obtain different forms of correct
ST estimators for activations and weights, explain existing empirical
approaches and their shortcomings, and show how latent weights arise from the
mirror descent method when optimizing over probabilities. This allows us to
reintroduce the once purely empirical ST methods as sound approximations, to
apply them with clarity, and to develop further improvements. | [
"stat.ML",
"cs.CV",
"cs.LG",
"cs.NE"
] |
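As a concrete point of reference for the ST estimators discussed above, the following PyTorch sketch shows the canonical straight-through identity for a $\pm 1$ sign activation; it illustrates the generic empirical ST rule, not the corrected SBN estimators derived in the paper.

```python
import torch

class BinarySignST(torch.autograd.Function):
    """Straight-through (ST) estimator for a +/-1 sign activation.

    Forward: hard binarization. Backward: the non-differentiable sign is
    replaced by the identity, clipped to |x| <= 1 (the common 'hard tanh'
    surrogate). This is the generic empirical ST rule.
    """

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).to(grad_out.dtype)

x = torch.randn(4, requires_grad=True)
y = BinarySignST.apply(x).sum()
y.backward()   # x.grad is 1 where |x| <= 1, else 0
```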
Recent research advances in Computer Vision and Natural Language Processing
have introduced novel tasks that are paving the way for solving AI-complete
problems. One of those tasks is called Visual Question Answering (VQA). A VQA
system must take an image and a free-form, open-ended natural language question
about the image, and produce a natural language answer as the output. Such a
task has drawn great attention from the scientific community, which generated a
plethora of approaches that aim to improve the VQA predictive accuracy. Most of
them comprise three major components: (i) independent representation learning
of images and questions; (ii) feature fusion so the model can use information
from both sources to answer visual questions; and (iii) the generation of the
correct answer in natural language. With so many approaches introduced
recently, the real contribution of each component to the model's ultimate
performance has become unclear. The main goal of this paper is to provide a
comprehensive analysis regarding the impact of each component in VQA models.
Our extensive set of experiments covers both visual and textual elements, as
well as the combination of these representations in the form of fusion and
attention mechanisms. Our major contribution is to identify core components for
training VQA models so as to maximize their predictive performance. | [
"cs.CV",
"cs.CL",
"cs.LG"
] |
Synthetic data generation to improve classification performance (data
augmentation) is a well-studied problem. Recently, generative adversarial
networks (GAN) have shown superior image data augmentation performance, but
their suitability in gesture synthesis has received inadequate attention.
Further, GANs require prohibitively expensive simultaneous training of
generator and discriminator networks. We tackle both issues in this work. We
first discuss a novel,
device-agnostic GAN model for gesture synthesis called DeepGAN. Thereafter, we
formulate DeepNAG by introducing a new differentiable loss function based on
dynamic time warping and the average Hausdorff distance, which allows us to
train DeepGAN's generator without requiring a discriminator. Through
evaluations, we compare the utility of DeepGAN and DeepNAG against two
alternative techniques for training five recognizers using data augmentation
over six datasets. We further investigate the perceived quality of synthesized
samples via an Amazon Mechanical Turk user study based on the HYPE benchmark.
We find that DeepNAG outperforms DeepGAN in accuracy, training time (up to 17x
faster), and realism, thereby opening the door to a new line of research in
generator network design and training for gesture synthesis. Our source code is
available at https://www.deepnag.com. | [
"cs.CV"
] |
Current state-of-the-art methods for image segmentation form a dense image
representation where the color, shape and texture information are all processed
together inside a deep CNN. This, however, may not be ideal, as these cues
carry very different types of information relevant for recognition. Here, we
propose a new
two-stream CNN architecture for semantic segmentation that explicitly wires
shape information as a separate processing branch, i.e. shape stream, that
processes information in parallel to the classical stream. Key to this
architecture is a new type of gate that connects the intermediate layers of the
two streams. Specifically, we use the higher-level activations in the classical
stream to gate the lower-level activations in the shape stream, effectively
removing noise and helping the shape stream to only focus on processing the
relevant boundary-related information. This enables us to use a very shallow
architecture for the shape stream that operates on the image-level resolution.
Our experiments show that this leads to a highly effective architecture that
produces sharper predictions around object boundaries and significantly boosts
performance on thinner and smaller objects. Our method achieves
state-of-the-art performance on the Cityscapes benchmark, in terms of both mask
(mIoU) and boundary (F-score) quality, improving by 2% and 4% over strong
baselines. | [
"cs.CV",
"cs.LG"
] |
Advances in visual navigation methods have led to intelligent embodied
navigation agents capable of learning meaningful representations from raw RGB
images and performing a wide variety of tasks involving structural and semantic
reasoning. However, most learning-based navigation policies are trained and
tested in simulation environments. In order for these policies to be
practically useful, they need to be transferred to the real-world. In this
paper, we propose an unsupervised domain adaptation method for visual
navigation. Our method translates the images in the target domain to the source
domain such that the translation is consistent with the representations learned
by the navigation policy. The proposed method outperforms several baselines
across two different navigation tasks in simulation. We further show that our
method can be used to transfer the navigation policies learned in simulation to
the real world. | [
"cs.LG",
"cs.CV",
"cs.RO"
] |
Policy evaluation in reinforcement learning is often conducted using
two-timescale stochastic approximation, which results in various gradient
temporal difference methods such as GTD(0), GTD2, and TDC. Here, we provide
convergence rate bounds for this suite of algorithms. Algorithms such as these
have two iterates, $\theta_n$ and $w_n,$ which are updated using two distinct
stepsize sequences, $\alpha_n$ and $\beta_n,$ respectively. Assuming $\alpha_n
= n^{-\alpha}$ and $\beta_n = n^{-\beta}$ with $1 > \alpha > \beta > 0,$ we
show that, with high probability, the two iterates converge to their respective
solutions $\theta^*$ and $w^*$ at rates given by $\|\theta_n - \theta^*\| =
\tilde{O}( n^{-\alpha/2})$ and $\|w_n - w^*\| = \tilde{O}(n^{-\beta/2});$ here,
$\tilde{O}$ hides logarithmic terms. Via comparable lower bounds, we show that
these bounds are, in fact, tight. To the best of our knowledge, ours is the
first finite-time analysis which achieves these rates. While it was known that
the two timescale components decouple asymptotically, our results depict this
phenomenon more explicitly by showing that it in fact happens from some finite
time onwards. Lastly, compared to existing works, our result applies to a
broader family of stepsizes, including non-square summable ones. | [
"cs.LG",
"math.PR"
] |
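To ground the notation, here is a minimal sketch of one update of GTD2 (one member of the suite analyzed above) with linear function approximation and the two-timescale stepsizes $\alpha_n = n^{-\alpha}$, $\beta_n = n^{-\beta}$; the update equations follow the standard GTD2 form, while the interface and constants are illustrative assumptions.

```python
import numpy as np

def gtd2_step(theta, w, phi, phi_next, reward, gamma, n, alpha_exp, beta_exp):
    """One GTD2 update (n >= 1) with stepsizes a_n = n^{-alpha},
    b_n = n^{-beta}, where 1 > alpha > beta > 0 as in the analyzed regime.
    phi, phi_next are feature vectors of the current and next state."""
    a_n, b_n = n ** (-alpha_exp), n ** (-beta_exp)
    delta = reward + gamma * phi_next @ theta - phi @ theta   # TD error
    # fast iterate w_n (larger stepsize) tracks a linear system for fixed theta
    w = w + b_n * (delta - phi @ w) * phi
    # slow iterate theta_n uses the correction term built from w
    theta = theta + a_n * (phi - gamma * phi_next) * (phi @ w)
    return theta, w
```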
Learning continuously throughout a model's lifetime is fundamental to
deploying machine learning solutions that are robust to drifts in the data
distribution. Advances in Continual Learning (CL) with recurrent neural
networks could pave the way to a large number of applications where incoming
data is non-stationary, like
natural language processing and robotics. However, the existing body of work on
the topic is still fragmented, with approaches which are application-specific
and whose assessment is based on heterogeneous learning protocols and datasets.
In this paper, we organize the literature on CL for sequential data processing
by providing a categorization of the contributions and a review of the
benchmarks. We propose two new benchmarks for CL with sequential data based on
existing datasets, whose characteristics resemble real-world applications. We
also provide a broad empirical evaluation of CL and Recurrent Neural Networks
in the class-incremental scenario, by testing their ability to mitigate forgetting
with a number of different strategies which are not specific to sequential data
processing. Our results highlight the key role played by the sequence length
and the importance of a clear specification of the CL scenario. | [
"cs.LG",
"cs.AI"
] |
Time series classification problems exist in many fields and have been
explored for a couple of decades. However, they still remain challenging, and
their solutions need to be further improved for real-world applications in
terms of both accuracy and efficiency. In this paper, we propose a hybrid
neural architecture, called Self-Attentive Recurrent Convolutional Networks
(SARCoN), to learn multi-faceted representations for univariate time series.
SARCoN is the synthesis of long short-term memory networks with self-attentive
mechanisms and Fully Convolutional Networks, which work in parallel to learn
the representations of univariate time series from different perspectives. The
component modules of the proposed architecture are trained jointly in an
end-to-end manner and they classify the input time series in a cooperative way.
Due to its domain-agnostic nature, SARCoN is able to generalize across a
diversity of domain tasks. Our experimental results show that, compared to the
state-of-the-art approaches for time series classification, the proposed
architecture can achieve remarkable improvements for a set of univariate time
series benchmarks from the UCR repository. Moreover, the self-attention and the
global average pooling in the proposed architecture enable visible
interpretability by facilitating the identification of the contribution regions
of the original time series. An overall analysis confirms that multi-faceted
representations of time series aid in capturing deep temporal correlations
within complex time series, which is essential for the improvement of time
series classification performance. Our work provides a novel angle that deepens
the understanding of time series classification, qualifying our proposed model
as an ideal choice for real-world applications. | [
"cs.LG"
] |
Reliable deployment of machine learning models such as neural networks
continues to be challenging due to several limitations. Some of the main
shortcomings are the lack of interpretability and the lack of robustness
against adversarial examples or out-of-distribution inputs. In this paper, we
explore the possibilities and limits of adversarial attacks for explainable
machine learning models. First, we extend the notion of adversarial examples to
fit in explainable machine learning scenarios, in which the inputs, the output
classifications and the explanations of the model's decisions are assessed by
humans. Next, we propose a comprehensive framework to study whether (and how)
adversarial examples can be generated for explainable models under human
assessment, introducing novel attack paradigms. In particular, our framework
considers a wide range of relevant (yet often ignored) factors, such as the
type of problem, the user's expertise, or the objective of the explanations,
in order to
identify the attack strategies that should be adopted in each scenario to
successfully deceive the model (and the human). These contributions intend to
serve as a basis for a more rigorous and realistic study of adversarial
examples in the field of explainable machine learning. | [
"cs.LG",
"cs.CR"
] |
Generating accurate and reliable sales forecasts is crucial in the E-commerce
business. The current state-of-the-art techniques are typically univariate
methods, which produce forecasts considering only the historical sales data of
a single product. However, in a situation where large quantities of related
time series are available, conditioning the forecast of an individual time
series on past behaviour of similar, related time series can be beneficial.
Since the product assortment hierarchy in an E-commerce platform contains
large numbers of related products whose sales demand patterns can be
correlated, we attempt to incorporate this cross-series information in a
unified model. We achieve this by globally training a Long Short-Term Memory
network (LSTM) that exploits the non-linear demand relationships available in
an E-commerce product assortment hierarchy. Aside from the forecasting
framework, we also propose a systematic pre-processing framework to overcome
the challenges in the E-commerce business. We also introduce several product
grouping strategies to supplement the LSTM learning schemes, in situations
where sales patterns in a product portfolio are disparate. We empirically
evaluate the proposed forecasting framework on a real-world online marketplace
dataset from Walmart.com. Our method achieves competitive results on category
level and super-departmental level datasets, outperforming state-of-the-art
techniques. | [
"cs.LG",
"stat.ML"
] |
Visual question answering by using information from multiple modalities has
attracted more and more attention in recent years. However, it is a very
challenging task, as the visual content and natural language have quite
different statistical properties. In this work, we present a method called
Adversarial Multimodal Network (AMN) to better understand video stories for
question answering. In AMN, as inspired by generative adversarial networks, we
propose to learn multimodal feature representations by finding a more coherent
subspace for video clips and the corresponding texts (e.g., subtitles and
questions). Moreover, we introduce a self-attention mechanism to enforce the
so-called consistency constraints in order to preserve the self-correlation of
visual cues of the original video clips in the learned multimodal
representations. Extensive experiments on the MovieQA dataset show the
effectiveness of our proposed AMN over other published state-of-the-art
methods. | [
"cs.CV"
] |
Multi-task learning is a very challenging problem in reinforcement learning.
While training multiple tasks jointly allows the policies to share parameters
across different tasks, the optimization problem becomes non-trivial: it
remains unclear what parameters in the network should be reused across tasks,
and how the gradients from different tasks may interfere with each other. Thus,
instead of naively sharing parameters across tasks, we introduce an explicit
modularization technique on policy representation to alleviate this
optimization issue. Given a base policy network, we design a routing network
which estimates different routing strategies to reconfigure the base network
for each task. Instead of directly selecting routes for each task, our
task-specific policy uses a method called soft modularization to softly combine
all the possible routes, which makes it suitable for sequential tasks. We
experiment with various robotics manipulation tasks in simulation and show our
method improves both sample efficiency and performance over strong baselines by
a large margin. | [
"cs.LG",
"cs.AI",
"cs.RO",
"stat.ML"
] |
Normal maps are an important and efficient way to represent complex 3D models.
A designer may benefit from the auto-generation of high-quality and accurate
normal maps from freehand sketches in 3D content creation. This paper proposes
a deep generative model for generating normal maps from a user's sketch with
geometric sampling. Our generative model is based on a Conditional Generative
Adversarial Network with curvature-sensitive point sampling of conditional
masks. This sampling process helps eliminate the ambiguity of the network
input and thus of the generation results. In addition, we adopt a
U-Net-structured discriminator to help the generator train better. It is
verified that the
proposed framework can generate more accurate normal maps. | [
"cs.CV",
"cs.GR"
] |
We introduce a new architecture called a conditional invertible neural
network (cINN), and use it to address the task of diverse image-to-image
translation for natural images. This is not easily possible with existing INN
models due to some fundamental limitations. The cINN combines the purely
generative INN model with an unconstrained feed-forward network, which
efficiently preprocesses the conditioning image into maximally informative
features. All parameters of a cINN are jointly optimized with a stable, maximum
likelihood-based training procedure. Even though INN-based models have received
far less attention in the literature than GANs, they have been shown to have
some remarkable properties absent in GANs, e.g. apparent immunity to mode
collapse. We find that our cINNs leverage these properties for image-to-image
translation, demonstrated on day to night translation and image colorization.
Furthermore, we take advantage of our bidirectional cINN architecture to
explore and manipulate emergent properties of the latent space, such as
changing the image style in an intuitive way. | [
"cs.CV",
"cs.AI",
"68T01"
] |
The individual brain can be viewed as a highly-complex multigraph (i.e. a set
of graphs also called connectomes), where each graph represents a unique
connectional view of pairwise brain region (node) relationships such as
function or morphology. Due to its multifold complexity, understanding how
brain disorders alter not only a single view of the brain graph, but its
multigraph representation at the individual and population scales, remains one
of the most challenging obstacles to profiling brain connectivity for
ultimately disentangling a wide spectrum of brain states (e.g., healthy vs.
disordered). In this work, cross-pollinating the fields of spectral graph
theory and diffusion models, we propose the first eigen-based cross-diffusion
strategy for multigraph brain integration, comparison, and
profiling. Specifically, we first devise a brain multigraph fusion model guided
by eigenvector centrality to rely on most central nodes in the cross-diffusion
process. Next, since the graph spectrum encodes its shape (or geometry) as if
one can hear the shape of the graph, for the first time, we profile the fused
multigraphs at several diffusion timescales by extracting the compact
heat-trace signatures of their corresponding Laplacian matrices. Here, we
reveal for the first time autistic and healthy profiles of morphological brain
multigraphs, derived from T1-w magnetic resonance imaging (MRI), and
demonstrate their discriminability in boosting the classification of unseen
samples in comparison with state-of-the-art methods. This study presents the
first step towards hearing the shape of the brain multigraph that can be
leveraged for profiling and disentangling comorbid neurological disorders,
thereby advancing precision medicine. | [
"cs.CV"
] |
Object detection for robot guidance is a crucial mission for autonomous robots
and has attracted extensive attention from researchers. However, the changing
viewpoint during robot movement and the limited available data hinder research
in this area. To address these matters, we propose a new vision system for
robots: the model adaptation object detection system. Instead of using a
single network to solve all problems, we make use of different object
detection neural networks to guide the robot in accordance with various
situations, with the help of a meta neural network that allocates the object
detection neural networks.
Furthermore, taking advantage of transfer learning technology and depthwise
separable convolutions, our model is easy to train and can address small
dataset problems. | [
"cs.CV",
"cs.RO"
] |
Recently, transformer-empowered AutoRegressive (AR) models for whole-image
generation have achieved performance comparable or even superior to Generative
Adversarial Networks (GANs). Unfortunately, directly applying such AR models
to edit/change local image regions may suffer from the problems of missing
global information, slow inference speed, and information leakage of local
guidance. To address these limitations, we propose a novel model -- image
Local Autoregressive Transformer (iLAT), to better facilitate the locally
guided image synthesis. Our iLAT learns novel local discrete representations
via the newly proposed local autoregressive (LA) transformer with its
attention mask and convolution mechanism. Thus, iLAT can efficiently
synthesize local image regions guided by key information. Our iLAT is
evaluated on various locally guided image syntheses, such as pose-guided person
image synthesis and face editing. Both the quantitative and qualitative results
show the efficacy of our model. | [
"cs.CV",
"eess.IV"
] |
This paper focuses on pose registration of different object instances from
the same category. This is required in online object mapping because object
instances detected at test time usually differ from the training instances. Our
approach transforms instances of the same category to a normalized canonical
coordinate frame and uses metric learning to train fully convolutional
geometric features. The resulting model is able to generate pairs of matching
points between the instances, allowing category-level registration. Evaluation
on both synthetic and real-world data shows that our method provides robust
features, leading to accurate alignment of instances with different shapes. | [
"cs.CV",
"cs.RO"
] |
Standard frame-based cameras that sample light intensity frames are heavily
impacted by motion blur under high-speed motion and fail to perceive the scene
accurately when the dynamic range is high. Event-based cameras, on the other
hand, overcome these limitations by asynchronously detecting the variation in
individual pixel intensities. However, event cameras only provide information
about pixels in motion, leading to sparse data. Hence, estimating the overall
dense behavior of pixels is difficult. To address such issues associated with
the sensors, we present Fusion-FlowNet, a sensor fusion framework for
energy-efficient optical flow estimation using both frame- and event-based
sensors, leveraging their complementary characteristics. Our proposed network
architecture is also a fusion of Spiking Neural Networks (SNNs) and Analog
Neural Networks (ANNs), designed to process asynchronous event streams and
regular frame-based images in parallel, respectively. Our
network is end-to-end trained using unsupervised learning to avoid expensive
video annotations. The method generalizes well across distinct environments
(rapid motion and challenging lighting conditions) and demonstrates
state-of-the-art optical flow prediction on the Multi-Vehicle Stereo Event
Camera (MVSEC) dataset. Furthermore, our network offers substantial savings in
terms of the number of network parameters and computational energy cost. | [
"cs.CV",
"cs.NE"
] |
Despite significant progress in monocular depth estimation in the wild,
recent state-of-the-art methods cannot be used to recover accurate 3D scene
shape due to an unknown depth shift induced by shift-invariant reconstruction
losses used in mixed-data depth prediction training, and possible unknown
camera focal length. We investigate this problem in detail, and propose a
two-stage framework that first predicts depth up to an unknown scale and shift
from a single monocular image, and then uses 3D point cloud encoders to predict
the missing depth shift and focal length that allow us to recover a realistic
3D scene shape. In addition, we propose an image-level normalized regression
loss and a normal-based geometry loss to enhance depth prediction models
trained on mixed datasets. We test our depth model on nine unseen datasets and
achieve state-of-the-art performance on zero-shot dataset generalization. Code
is available at: https://git.io/Depth | [
"cs.CV"
] |
Astounding results from Transformer models on natural language tasks have
intrigued the vision community to study their application to computer vision
problems. Among their salient benefits, Transformers enable modeling long
dependencies between input sequence elements and support parallel processing
of sequences, as compared to recurrent networks, e.g., Long Short-Term Memory
(LSTM).
Different from convolutional networks, Transformers require minimal inductive
biases for their design and are naturally suited as set-functions. Furthermore,
the straightforward design of Transformers allows processing multiple
modalities (e.g., images, videos, text and speech) using similar processing
blocks and demonstrates excellent scalability to very large capacity networks
and huge datasets. These strengths have led to exciting progress on a number of
vision tasks using Transformer networks. This survey aims to provide a
comprehensive overview of the Transformer models in the computer vision
discipline. We start with an introduction to fundamental concepts behind the
success of Transformers i.e., self-attention, large-scale pre-training, and
bidirectional encoding. We then cover extensive applications of transformers in
vision including popular recognition tasks (e.g., image classification, object
detection, action recognition, and segmentation), generative modeling,
multi-modal tasks (e.g., visual-question answering, visual reasoning, and
visual grounding), video processing (e.g., activity recognition, video
forecasting), low-level vision (e.g., image super-resolution, image
enhancement, and colorization) and 3D analysis (e.g., point cloud
classification and segmentation). We compare the respective advantages and
limitations of popular techniques both in terms of architectural design and
their experimental value. Finally, we provide an analysis on open research
directions and possible future works. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Model-free deep reinforcement learning (RL) algorithms have been demonstrated
on a range of challenging decision making and control tasks. However, these
methods typically suffer from two major challenges: very high sample complexity
and brittle convergence properties, which necessitate meticulous hyperparameter
tuning. Both of these challenges severely limit the applicability of such
methods to complex, real-world domains. In this paper, we propose soft
actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum
entropy reinforcement learning framework. In this framework, the actor aims to
maximize expected reward while also maximizing entropy. That is, to succeed at
the task while acting as randomly as possible. Prior deep RL methods based on
this framework have been formulated as Q-learning methods. By combining
off-policy updates with a stable stochastic actor-critic formulation, our
method achieves state-of-the-art performance on a range of continuous control
benchmark tasks, outperforming prior on-policy and off-policy methods.
Furthermore, we demonstrate that, in contrast to other off-policy algorithms,
our approach is very stable, achieving very similar performance across
different random seeds. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
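For reference, the maximum entropy objective underlying soft actor-critic, as it appears in the maximum entropy RL literature, can be written as

$$J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\big[\, r(s_t, a_t) + \alpha \, \mathcal{H}(\pi(\cdot \mid s_t)) \,\big],$$

where $\mathcal{H}(\pi(\cdot \mid s_t)) = -\mathbb{E}_{a \sim \pi(\cdot \mid s_t)}[\log \pi(a \mid s_t)]$ is the policy entropy and the temperature $\alpha$ trades off reward against randomness in the policy ($\alpha$ is the conventional symbol; the abstract itself does not fix the notation).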
In this work, we aim to realize a method for embedding human knowledge into
deep neural networks. While the conventional method to embed human knowledge
has been applied for non-deep machine learning, it is challenging to apply it
for deep learning models due to the enormous number of model parameters. To
tackle this problem, we focus on the attention mechanism of an attention branch
network (ABN). In this paper, we propose a fine-tuning method that utilizes a
single-channel attention map which is manually edited by a human expert. Our
fine-tuning method can train a network so that the output attention map
corresponds to the edited ones. As a result, the fine-tuned network can output
an attention map that takes into account human knowledge. Experimental results
with ImageNet, CUB-200-2010, and IDRiD demonstrate that it is possible to
obtain a clear attention map for a visual explanation and improve the
classification performance. Our findings offer a novel framework for
optimizing networks through intuitive human editing via a visual interface and
suggest new possibilities for human-machine cooperation in addition to the
improvement of visual explanations. | [
"cs.CV"
] |
Time series anomaly detection has been a perennially important topic in data
science, with papers dating back to the 1950s. However, in recent years there
has been an explosion of interest in this topic, much of it driven by the
success of deep learning in other domains and for other time series tasks. Most
of these papers test on one or more of a handful of popular benchmark datasets,
created by Yahoo, Numenta, NASA, etc. In this work we make a surprising claim.
The majority of the individual exemplars in these datasets suffer from one or
more of four flaws. Because of these four flaws, we believe that many published
comparisons of anomaly detection algorithms may be unreliable, and more
importantly, much of the apparent progress in recent years may be illusory.
In addition to demonstrating these claims, with this paper we introduce the UCR
Time Series Anomaly Archive. We believe that this resource will perform a
similar role as the UCR Time Series Classification Archive, by providing the
community with a benchmark that allows meaningful comparisons between
approaches and a meaningful gauge of overall progress. | [
"cs.LG",
"stat.ML"
] |
We study the effect of impairment on stochastic multi-armed bandits and
develop new ways to mitigate it. The impairment effect is the phenomenon where
an agent only accrues reward for an action if it has played that action at
least a few times in the recent past. It is practically motivated by
repetition and recency
effects in domains such as advertising (here consumer behavior may require
repeat actions by advertisers) and vocational training (here actions are
complex skills that can only be mastered with repetition to get a payoff).
Impairment can be naturally modelled as a temporal constraint on the strategy
space, and we provide two novel algorithms that achieve sublinear regret, each
working with different assumptions on the impairment effect. We introduce a new
notion called bucketing in our algorithm design, and show how it can
effectively address impairment as well as a broader class of temporal
constraints. Our regret bounds explicitly capture the cost of impairment and
show that it scales (sub-)linearly with the degree of impairment. Our work
complements recent work on modeling delays and corruptions, and we provide
experimental evidence supporting our claims. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Although having achieved great success in medical image segmentation, deep
learning-based approaches usually require large amounts of well-annotated data,
which can be extremely expensive in the field of medical image analysis.
Unlabeled data, on the other hand, is much easier to acquire. Semi-supervised
learning and unsupervised domain adaptation both take advantage of unlabeled
data, and they are closely related to each other. In this paper, we
propose uncertainty-aware multi-view co-training (UMCT), a unified framework
that addresses these two tasks for volumetric medical image segmentation. Our
framework is capable of efficiently utilizing unlabeled data for better
performance. We first rotate and permute the 3D volumes into multiple views
and train a 3D deep network on each view. We then apply co-training by
enforcing multi-view consistency on unlabeled data, where an uncertainty
estimation of each view is utilized to achieve accurate labeling. Experiments
on the NIH pancreas segmentation dataset and a multi-organ segmentation dataset
show state-of-the-art performance of the proposed framework on semi-supervised
medical image segmentation. Under unsupervised domain adaptation settings, we
validate the effectiveness of this work by adapting our multi-organ
segmentation model to two pathological organs from the Medical Segmentation
Decathlon Datasets. Additionally, we show that our UMCT-DA model can even
effectively handle the challenging situation where labeled source data is
inaccessible, demonstrating strong potentials for real-world applications. | [
"cs.CV"
] |
We consider how image super resolution (SR) can contribute to an object
detection task in low-resolution images. Intuitively, SR gives a positive
impact on the object detection task. While several previous works demonstrated
that this intuition is correct, the SR model and the detector are optimized
independently in these works. This paper proposes a novel framework to train a
deep neural
network where the SR sub-network explicitly incorporates a detection loss in
its training objective, via a tradeoff with a traditional detection loss. This
end-to-end training procedure allows us to train SR preprocessing for any
differentiable detector. We demonstrate that our task-driven SR consistently
and significantly improves accuracy of an object detector on low-resolution
images for a variety of conditions and scaling factors. | [
"cs.CV"
] |
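The end-to-end objective described above, a tradeoff between a traditional reconstruction loss and a detection loss propagated through the SR sub-network, can be sketched as follows; the loss choices, the `compute_loss` detector interface, and the weighting parameter are illustrative assumptions rather than the paper's exact formulation.

```python
import torch

def task_driven_sr_loss(sr_net, detector, lr_img, hr_img, targets, tradeoff=0.1):
    """Joint objective: a traditional SR loss plus a detection loss on the
    super-resolved image. Because `detector` is differentiable, the detection
    gradients flow back into `sr_net`. Names and weights are illustrative."""
    sr_img = sr_net(lr_img)                            # SR preprocessing
    loss_sr = torch.nn.functional.l1_loss(sr_img, hr_img)
    loss_det = detector.compute_loss(sr_img, targets)  # assumed detector API
    return loss_sr + tradeoff * loss_det
```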
Graph Neural Networks (GNNs) are widely used for analyzing graph-structured
data. Most GNN methods are highly sensitive to the quality of graph structures
and usually require a perfect graph structure for learning informative
embeddings. However, the pervasiveness of noise in graphs necessitates learning
robust representations for real-world problems. To improve the robustness of
GNN models, many studies have emerged around the central concept of Graph
Structure Learning (GSL), which aims to jointly learn an optimized graph
structure and corresponding representations. Towards this end, in this survey,
we broadly review recent progress on GSL methods for learning robust
representations. Specifically, we first formulate a general paradigm of GSL,
and then review state-of-the-art methods classified by how they model graph
structures, followed by applications that incorporate the idea of GSL in other
graph tasks. Finally, we point out some issues in current studies and discuss
future directions. | [
"cs.LG",
"cs.SI"
] |
Q-learning (QL), a common reinforcement learning algorithm, suffers from
over-estimation bias due to the maximization term in the optimal Bellman
operator. This bias may lead to sub-optimal behavior. Double-Q-learning tackles
this issue by utilizing two estimators, yet results in an under-estimation
bias. Similar to over-estimation in Q-learning, in certain scenarios, the
under-estimation bias may degrade performance. In this work, we introduce a new
bias-reduced algorithm called Ensemble Bootstrapped Q-Learning (EBQL), a
natural extension of Double-Q-learning to ensembles. We analyze our method both
theoretically and empirically. Theoretically, we prove that EBQL-like updates
yield lower MSE when estimating the maximal mean of a set of independent random
variables. Empirically, we show that there exist domains where both over and
under-estimation result in sub-optimal performance. Finally, we demonstrate the
superior performance of a deep RL variant of EBQL over other deep QL algorithms
for a suite of ATARI games. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
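As a concrete sketch of the idea, the following tabular update follows the spirit of Ensemble Bootstrapped Q-Learning: the updated ensemble member picks the greedy next action, while that action's value is estimated by the mean of the remaining members (the natural ensemble extension of Double-Q-learning). The exact member-scheduling and evaluation scheme of EBQL may differ; this is our simplified reading.

```python
import numpy as np

def ebql_update(Qs, k, s, a, r, s_next, gamma=0.99, lr=0.1):
    """One tabular update in the spirit of Ensemble Bootstrapped Q-Learning.

    Qs: list of K >= 2 Q-tables, each of shape [num_states, num_actions].
    Member k selects the greedy next action; its value is estimated by the
    mean of the *other* ensemble members, reducing estimation bias. Member
    scheduling is left to the caller (a simplifying assumption).
    """
    a_star = np.argmax(Qs[k][s_next])
    others = [Qs[j][s_next, a_star] for j in range(len(Qs)) if j != k]
    target = r + gamma * np.mean(others)
    Qs[k][s, a] += lr * (target - Qs[k][s, a])
```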
Smartphone applications designed to track human motion in combination with
wearable sensors, e.g., during physical exercise, have attracted huge
attention recently. Commonly, they provide quantitative services, such as
personalized
training instructions or the counting of distances. But qualitative monitoring
and assessment is still missing, e.g., to detect malpositions, to prevent
injuries, or to optimize training success. We address this issue by presenting
a concept for qualitative as well as generic assessment of recurrent human
motion by processing multi-dimensional, continuous time series tracked with
motion sensors. To this end, our segmentation procedure extracts individual
events of specific length, and we propose expressive features to accomplish a
qualitative motion assessment by supervised classification. We verified our
approach within a comprehensive study encompassing 27 athletes undertaking
different body weight exercises. We are able to recognize six different
exercise types with a success rate of 100% and to assess them qualitatively
with an average success rate of 99.3%. | [
"cs.LG",
"cs.CV"
] |
Typical convolutional networks are trained and run on RGB images.
However, images are often compressed for memory savings and efficient
transmission in real-world applications. In this paper, we explore methods for
performing semantic segmentation on the discrete cosine transform (DCT)
representation defined by the JPEG standard. We first rearrange the DCT
coefficients to form a preferred input type, then we tailor an existing network
to the DCT inputs. The proposed method has an accuracy close to the RGB model
at about the same network complexity. Moreover, we investigate the impact of
selecting different DCT components on segmentation performance. With a proper
selection, one can achieve the same level of accuracy using only 36% of the
DCT coefficients. We further show the robustness of our method under
quantization errors. To our knowledge, this paper is the first to explore
semantic segmentation on the DCT representation. | [
"cs.CV"
] |
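The rearrangement step mentioned above, collecting the coefficients of the 8x8 JPEG DCT blocks so that each frequency becomes its own channel, can be sketched as follows; this is one plausible "preferred input type", and the exact layout used in the paper may differ.

```python
import numpy as np

def dct_blocks_to_channels(dct, block=8):
    """Rearrange JPEG-style block DCT coefficients into channels.

    dct: (H, W) array of 8x8 block DCT coefficients (H, W divisible by 8).
    Returns (H // 8, W // 8, 64): one channel per DCT frequency, so a CNN
    can treat each frequency as a feature map. One plausible layout; the
    paper's exact arrangement may differ.
    """
    H, W = dct.shape
    x = dct.reshape(H // block, block, W // block, block)
    return x.transpose(0, 2, 1, 3).reshape(H // block, W // block, block * block)
```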
With the rapid progress of China's urbanization, research on the automatic
detection of land-use patterns in Chinese cities is of substantial importance.
Deep learning is an effective method to extract image features. To take
advantage of the deep-learning method in detecting urban land-use patterns, we
applied a transfer-learning-based remote-sensing image approach to extract and
classify features. Using the Google TensorFlow framework, a powerful
convolutional neural network (CNN) library was created. First, the transferred
model was previously trained on ImageNet, one of the largest object-image data
sets, to fully develop the model's ability to generate feature vectors of
standard remote-sensing land-cover data sets (UC Merced and WHU-SIRI). Then, a
random-forest-based classifier was constructed and trained on these generated
vectors to classify the actual urban land-use pattern on the scale of traffic
analysis zones (TAZs). To avoid the multi-scale effect of remote-sensing
imagery, a large random patch (LRP) method was used. The proposed method could
efficiently obtain acceptable accuracy (OA = 0.794, Kappa = 0.737) for the
study area. In addition, the results show that the proposed method can
effectively overcome the multi-scale effect that occurs in urban land-use
classification at the irregular land-parcel level. The proposed method can help
planners monitor dynamic urban land use and evaluate the impact of
urban-planning schemes. | [
"cs.CV"
] |
At present, spoofing attacks, through which a biometric system can be fooled
by a fake biometric characteristic, pose a great challenge to recognition
performance. Despite the availability of a broad range
of presentation attack detection (PAD) or liveness detection algorithms,
fingerprint sensors are vulnerable to spoofing via fake fingers. In such
situations, finger dorsal images can be thought of as an alternative which can
be captured without much user cooperation and are more appropriate for outdoor
security applications. In this paper, we present a first feasibility study of
spoofing attack scenarios on finger dorsal authentication system, which include
four types of presentation attacks such as printed paper, wrapped printed
paper, scan, and mobile. This study also presents a CNN-based spoofing attack
detection method which employs state-of-the-art deep learning techniques along
with a transfer learning mechanism. We have collected 196 real finger dorsal
images from 33 subjects, captured with a Lytro camera and also created a set of
784 finger dorsal spoofing images. Extensive experiments have been performed
that demonstrate the superiority of the proposed approach for
various spoofing attacks. | [
"cs.CV"
] |
The reconstruction of phase spaces is an essential step to analyze time
series according to Dynamical System concepts. A regression performed on such
spaces unveils the relationships among system states from which we can derive
their generating rules, that is, the most probable set of functions responsible
for generating observations along time. In this sense, most approaches rely on
Takens' embedding theorem to unfold the phase space, which requires the
embedding dimension and the time delay. Moreover, although several methods have
been proposed to empirically estimate those parameters, they still face
limitations due to their lack of consistency and robustness, which has
motivated this paper. As an alternative, we here propose an artificial neural
network with a forgetting mechanism to implicitly learn the phase-space
properties, whatever they are. Such a network trains on forecasting errors
and, after converging, its architecture is used to estimate the embedding
parameters. Experimental results confirm that our approach is either as
competitive as or better than most state-of-the-art strategies while revealing
the temporal relationship among time-series observations. | [
"cs.LG",
"stat.ML",
"37Nxx",
"I.2.1"
] |
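For reference, the classical Takens delay embedding that the proposed network seeks to bypass takes only a few lines; `m` is the embedding dimension and `tau` the time delay, the two parameters the network estimates implicitly.

```python
import numpy as np

def delay_embedding(x, m, tau):
    """Takens' delay embedding of a univariate series x.

    Returns an array of shape (N - (m - 1) * tau, m) whose rows are the
    reconstructed phase-space states [x_t, x_{t+tau}, ..., x_{t+(m-1)tau}].
    """
    n = len(x) - (m - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(m)], axis=1)
```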
The pressure of ever-increasing patient demand and budget restrictions make
hospital bed management a daily challenge for clinical staff. Most critical is
the efficient allocation of resource-heavy Intensive Care Unit (ICU) beds to
the patients who need life support. Central to solving this problem is knowing
for how long the current set of ICU patients are likely to stay in the unit. In
this work, we propose a new deep learning model based on the combination of
temporal convolution and pointwise (1x1) convolution, to solve the length of
stay prediction task on the eICU and MIMIC-IV critical care datasets. The model
- which we refer to as Temporal Pointwise Convolution (TPC) - is specifically
designed to mitigate common challenges with Electronic Health Records, such as
skewness, irregular sampling and missing data. In doing so, we have achieved
significant performance benefits of 18-68% (metric and dataset dependent) over
the commonly used Long Short-Term Memory (LSTM) network, and the multi-head
self-attention network known as the Transformer. By adding mortality prediction
as a side-task, we can improve performance further still, resulting in a mean
absolute deviation of 1.55 days (eICU) and 2.28 days (MIMIC-IV) on predicting
remaining length of stay. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Supervised learning of time series data has been extensively studied for the
case of a categorical target variable. In some application domains, e.g.,
energy, environment, and health monitoring, the target variable is numerical,
and the problem is known as time series extrinsic regression (TSER). In the
literature, some well-known time series classifiers have been
extended for TSER problems. As first benchmarking studies have focused on
predictive performance, very little attention has been given to
interpretability. To fill this gap, in this paper, we suggest an extension of a
Bayesian method for robust and interpretable feature construction and selection
in the context of TSER. Our approach exploits a relational way to tackle TSER:
(i) we build various simple representations of the time series, which are
stored in a relational data scheme; then (ii) a propositionalisation technique
(based on classical aggregation/selection functions from the relational data
field) is applied to build interpretable features from secondary tables to
"flatten" the data; and (iii) the constructed features are filtered through a
Bayesian Maximum A Posteriori approach. The resulting
transformed data can be processed with various existing regressors.
Experimental validation on various benchmark data sets demonstrates the
benefits of the suggested approach. | [
"cs.LG"
] |
We present a graph-convolution-reinforced transformer, named Mesh Graphormer,
for 3D human pose and mesh reconstruction from a single image. Recently both
transformers and graph convolutional neural networks (GCNNs) have shown
promising progress in human mesh reconstruction. Transformer-based approaches
are effective in modeling non-local interactions among 3D mesh vertices and
body joints, whereas GCNNs are good at exploiting neighborhood vertex
interactions based on a pre-specified mesh topology. In this paper, we study
how to combine graph convolutions and self-attentions in a transformer to model
both local and global interactions. Experimental results show that our proposed
method, Mesh Graphormer, significantly outperforms the previous
state-of-the-art methods on multiple benchmarks, including Human3.6M, 3DPW, and
FreiHAND datasets. Code and pre-trained models are available at
https://github.com/microsoft/MeshGraphormer | [
"cs.CV"
] |
Domain adaptation aims to bridge the domain shifts between the source and
target domains. These shifts may span different dimensions such as fog,
rainfall, etc. However, recent methods typically do not consider explicit prior
knowledge on a specific dimension, thus leading to less desired adaptation
performance. In this paper, we study a practical setting called Specific Domain
Adaptation (SDA) that aligns the source and target domains in a
demanded-specific dimension. Within this setting, we observe the intra-domain
gap induced by different domainness (i.e., numerical magnitudes of this
dimension) is crucial when adapting to a specific domain. To address the
problem, we propose a novel Self-Adversarial Disentangling (SAD) framework. In
particular, given a specific dimension, we first enrich the source domain by
introducing a domainness creator that provides additional supervisory signals.
Guided by the created domainness, we design a self-adversarial regularizer and
two loss functions to jointly disentangle the latent representations into
domainness-specific and domainness-invariant features, thus mitigating the
intra-domain gap. Our method can easily be used as a plug-and-play framework
and does not introduce any extra cost at inference time. We achieve
consistent improvements over state-of-the-art methods in both object detection
and semantic segmentation tasks. | [
"cs.CV"
] |
We introduce a new spectral method for image segmentation that incorporates
long range relationships for global appearance modeling. The approach combines
two different graphs, one is a sparse graph that captures spatial relationships
between nearby pixels and another is a dense graph that captures pairwise
similarity between all pairs of pixels. We extend the spectral method for
Normalized Cuts to this setting by combining the transition matrices of Markov
chains associated with each graph. We also derive an efficient method that uses
importance sampling for sparsifying the dense graph of appearance
relationships. This leads to a practical algorithm for segmenting
high-resolution images. The resulting method can segment challenging images
without any filtering or pre-processing. | [
"cs.CV",
"cs.LG",
"eess.IV",
"I.4; I.5"
] |
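To make the combination step concrete, here is a minimal NumPy sketch in the spirit of the method: form row-stochastic Markov transition matrices for the sparse spatial graph and the dense appearance graph, mix them, and embed pixels using the leading eigenvectors, as in Normalized Cuts. The mixing weight and the omission of importance-sampling-based sparsification are our simplifications.

```python
import numpy as np

def combined_spectral_embedding(W_sparse, W_dense, mix=0.5, k=4):
    """Combine two pixel graphs by mixing their Markov transition matrices
    and embed pixels via the leading eigenvectors (Normalized-Cuts style).

    W_sparse: affinities between nearby pixels; W_dense: pairwise appearance
    affinities (in practice sparsified by importance sampling; small dense
    arrays suffice for this sketch). Both are (N, N) with positive rows.
    """
    def transition(W):
        return W / W.sum(axis=1, keepdims=True)   # row-stochastic

    P = mix * transition(W_sparse) + (1 - mix) * transition(W_dense)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)                # leading eigenvectors first
    return vecs[:, order[1 : k + 1]].real         # skip the trivial constant one
```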
Tracking-by-detection methods have demonstrated competitive performance in
recent years. In these approaches, the tracking model heavily relies on the
quality of the training set. Due to the limited amount of labeled training
data, additional samples need to be extracted and labeled by the tracker
itself. This often leads to the inclusion of corrupted training samples, due to
occlusions, misalignments and other perturbations. Existing
tracking-by-detection methods either ignore this problem, or employ a separate
component for managing the training set.
We propose a novel generic approach for alleviating the problem of corrupted
training samples in tracking-by-detection frameworks. Our approach dynamically
manages the training set by estimating the quality of the samples. Contrary to
existing approaches, we propose a unified formulation by minimizing a single
loss over both the target appearance model and the sample quality weights. The
joint formulation enables corrupted samples to be down-weighted while
increasing the impact of correct ones. Experiments are performed on three
benchmarks: OTB-2015 with 100 videos, VOT-2015 with 60 videos, and Temple-Color
with 128 videos. On the OTB-2015, our unified formulation significantly
improves the baseline, with a gain of 3.8% in mean overlap precision. Finally,
our method achieves state-of-the-art results on all three datasets. Code and
supplementary material are available at
http://www.cvl.isy.liu.se/research/objrec/visualtracking/decontrack/index.html . | [
"cs.CV"
] |
This work focuses on the estimation of multiple change-points in a
time-varying Ising model that evolves piece-wise constantly. The aim is to
identify both the moments at which significant changes occur in the Ising
model, as well as the underlying graph structures. For this purpose, we propose
to estimate the neighborhood of each node by maximizing a penalized version of
its conditional log-likelihood. The objective of the penalization is twofold:
it imposes sparsity in the learned graphs and, thanks to a fused-type penalty,
it also enforces them to evolve piece-wise constantly. Under mild assumptions,
we provide two change-point consistency theorems. These are the first such
results in the context of detecting an unknown number of change-points in a
time-varying Ising model. Finally, experimental results on several synthetic
datasets and a
real-world dataset demonstrate the performance of our method. | [
"stat.ML",
"cond-mat.stat-mech",
"cs.LG"
] |
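One plausible form of the penalized node-wise objective described above (the abstract does not give the exact display, so the symbols and normalizations here are our assumptions): for node $u$ with time-block neighborhood weights $\theta_u^{(1)}, \dots, \theta_u^{(T)}$,

$$\hat{\theta}_u = \arg\max_{\theta_u} \; \sum_{t=1}^{T} \ell_t\big(\theta_u^{(t)}\big) \;-\; \lambda_1 \sum_{t=1}^{T} \big\|\theta_u^{(t)}\big\|_1 \;-\; \lambda_2 \sum_{t=2}^{T} \big\|\theta_u^{(t)} - \theta_u^{(t-1)}\big\|_1,$$

where $\ell_t$ is the conditional log-likelihood of node $u$ on block $t$, the $\ell_1$ term imposes sparsity in the learned graphs, and the fused term enforces piece-wise constant evolution.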
Classification of time series is a topical issue in machine learning. While
accuracy is the most important evaluation criterion, some applications require
decisions to be made as early as possible. Optimization should then
target a compromise between earliness, i.e., a capacity of providing a decision
early in the sequence, and accuracy. In this work, we propose a generic,
end-to-end trainable framework for early classification of time series. This
framework embeds a learnable decision mechanism that can be plugged into a wide
range of already existing models. We present results obtained with deep neural
networks on a diverse set of time series classification problems. Our approach
compares well to state-of-the-art competitors while being easily adaptable to
any existing neural network topology that evaluates a hidden state at each
time step. | [
"cs.LG",
"stat.ML"
] |
Convolutional Neural Networks (CNNs) are known to be brittle under various
image transformations, including rotations, scalings, and changes of lighting
conditions. We observe that the features of a transformed image are drastically
different from the ones of the original image. To make CNNs more invariant to
transformations, we propose "Feature Lenses", a set of ad-hoc modules that can
be easily plugged into a trained model (referred to as the "host model"). Each
individual lens reconstructs the original features given the features of a
transformed image under a particular transformation. These lenses jointly
counteract feature distortions caused by various transformations, thus making
the host model more robust without retraining. By only updating lenses, the
host model is freed from iterative updating when facing new transformations
absent in the training data; as feature semantics are preserved, downstream
applications, such as classifiers and detectors, automatically gain robustness
without retraining. Lenses are trained in a self-supervised fashion with no
annotations, by minimizing a novel "Top-K Activation Contrast Loss" between
lens-transformed features and original features. Evaluated on ImageNet,
MNIST-rot, and CIFAR-10, Feature Lenses show clear advantages over baseline
methods. | [
"cs.CV",
"eess.IV"
] |
We propose an image super-resolution (ISR) method using generative adversarial
networks (GANs) that takes a low-resolution input fundus image and generates a
high-resolution super-resolved (SR) image up to a scaling factor of $16$. This
facilitates more accurate automated image analysis, especially for small or
blurred landmarks and pathologies. Local saliency maps, which define each
pixel's importance, are used to define a novel saliency loss in the GAN cost
function. Experimental results show the resulting SR images have perceptual
quality very close to the original images and perform better than competing
methods that do not weigh pixels according to their importance. When used for
retinal vasculature segmentation, our SR images result in accuracy levels close
to those obtained when using the original images. | [
"cs.CV"
] |
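A saliency-weighted pixel loss of the kind described above can be sketched in a few lines. The weighting scheme here (a per-image normalized saliency map scaling an L1 term that would be added to the usual GAN generator loss) is an assumption for illustration, not the paper's exact cost function.

```python
import torch

def saliency_weighted_l1(sr, hr, saliency):
    """L1 reconstruction loss where each pixel is weighted by its saliency.

    sr, hr:   (B, C, H, W) super-resolved and ground-truth images
    saliency: (B, 1, H, W) local saliency map (importance of each pixel)
    """
    w = saliency / (saliency.sum(dim=(2, 3), keepdim=True) + 1e-8)  # normalize per image
    return (w * (sr - hr).abs()).sum(dim=(1, 2, 3)).mean()

sr = torch.rand(2, 3, 64, 64, requires_grad=True)
hr = torch.rand(2, 3, 64, 64)
sal = torch.rand(2, 1, 64, 64)
loss = saliency_weighted_l1(sr, hr, sal)  # would be added to the GAN cost function
loss.backward()
```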
Collecting training data from the physical world is usually time-consuming
and even dangerous for fragile robots, and thus, recent advances in robot
learning advocate the use of simulators as the training platform.
Unfortunately, the reality gap between synthetic and real visual data prohibits
direct migration of the models trained in virtual worlds to the real world.
This paper proposes a modular architecture for tackling the virtual-to-real
problem. The proposed architecture separates the learning model into a
perception module and a control policy module, and uses semantic image
segmentation as the meta representation for relating these two modules. The
perception module translates the perceived RGB image to semantic image
segmentation. The control policy module is implemented as a deep reinforcement
learning agent, which performs actions based on the translated image
segmentation. Our architecture is evaluated in an obstacle avoidance task and a
target following task. Experimental results show that our architecture
significantly outperforms all of the baseline methods in both virtual and real
environments, and demonstrates a faster learning curve than them. We also
present a detailed analysis for a variety of variant configurations, and
validate the transferability of our modular architecture. | [
"cs.CV",
"cs.RO",
"cs.SY"
] |
Solving Partially Observable Markov Decision Processes (POMDPs) is hard.
Learning optimal controllers for POMDPs when the model is unknown is harder.
Online learning of optimal controllers for unknown POMDPs, which requires
efficient learning using regret-minimizing algorithms that effectively tradeoff
exploration and exploitation, is even harder, and no solution exists currently.
In this paper, we consider infinite-horizon average-cost POMDPs with an unknown
transition model but a known observation model. We propose a natural
posterior sampling-based reinforcement learning algorithm (PSRL-POMDP) and show
that it achieves a regret bound of $O(\log T)$, where $T$ is the time horizon,
when the parameter set is finite. In the general case (continuous parameter
set), we show that the algorithm achieves $O (T^{2/3})$ regret under two
technical assumptions. To the best of our knowledge, this is the first online
RL algorithm for POMDPs with sub-linear regret. | [
"cs.LG"
] |
We present LADDER, the first deep reinforcement learning agent that can
successfully learn control policies for large-scale real-world problems
directly from raw inputs composed of high-level semantic information. The agent
is based on an asynchronous stochastic variant of DQN (Deep Q Network) named
DASQN. The inputs of the agent are plain-text descriptions of states of a game
of incomplete information, i.e., real-time large-scale online auctions, and the
rewards are auction profits at very large scale. We apply the agent to an
essential portion of JD's online RTB (real-time bidding) advertising business
and find that it easily beats the former state-of-the-art bidding policy that
had been carefully engineered and calibrated by human experts: during JD.com's
June 18th anniversary sale, the agent increased the company's ads revenue from
the portion by more than 50%, while the advertisers' ROI (return on investment)
also improved significantly. | [
"cs.LG",
"cs.AI",
"cs.CL",
"cs.GT"
] |
Data poisoning is an attack on machine learning models wherein the attacker
adds examples to the training set to manipulate the behavior of the model at
test time. This paper explores poisoning attacks on neural nets. The proposed
attacks use "clean-labels"; they don't require the attacker to have any control
over the labeling of training data. They are also targeted; they control the
behavior of the classifier on a $\textit{specific}$ test instance without
degrading overall classifier performance. For example, an attacker could add a
seemingly innocuous image (that is properly labeled) to a training set for a
face recognition engine, and control the identity of a chosen person at test
time. Because the attacker does not need to control the labeling function,
poisons could be entered into the training set simply by leaving them on the
web and waiting for them to be scraped by a data collection bot.
We present an optimization-based method for crafting poisons, and show that
just one single poison image can control classifier behavior when transfer
learning is used. For full end-to-end training, we present a "watermarking"
strategy that makes poisoning reliable using multiple ($\approx$50) poisoned
training instances. We demonstrate our method by generating poisoned frog
images from the CIFAR dataset and using them to manipulate image classifiers. | [
"cs.LG",
"cs.CR",
"cs.CV",
"stat.ML"
] |
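The optimization-based poison crafting the abstract describes can be sketched as a feature-collision objective: perturb a base image so its deep features approach the target's while the image stays visually close to the base. The feature extractor, step sizes, and trade-off value below are placeholders for illustration, not the paper's exact setup.

```python
import torch
import torch.nn as nn

# Stand-in feature extractor; in practice this is the victim's penultimate layer.
feat = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(4), nn.Flatten())

base = torch.rand(1, 3, 32, 32)     # properly-labeled base image (e.g., a frog)
target = torch.rand(1, 3, 32, 32)   # the specific test instance to control
poison = base.clone().requires_grad_(True)

opt = torch.optim.Adam([poison], lr=0.01)
beta = 0.25                         # similarity-to-base trade-off (assumed value)

for _ in range(200):
    opt.zero_grad()
    # Collide with the target in feature space, stay close to the base in pixel space.
    loss = ((feat(poison) - feat(target)) ** 2).sum() + beta * ((poison - base) ** 2).sum()
    loss.backward()
    opt.step()
    poison.data.clamp_(0, 1)        # keep a valid image
```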
Path planning methods for unmanned aerial vehicles (UAVs) in goods delivery
have drawn great attention from industry and academia because of their
flexibility, which suits many situations in the "Last Kilometer" between
customer and delivery nodes. However, such complicated situations are still a
problem for traditional combinatorial optimization methods. Based on
state-of-the-art Reinforcement Learning (RL), this paper proposes an
improved method to achieve path planning for UAVs in complex surroundings:
multiple no-fly zones. The improved approach leverages the attention mechanism
and includes the embedding mechanism as the encoder and three different widths
of beam search (i.e.,~1, 5, and 10) as the decoders. Policy gradients are
utilized to train the RL model for obtaining the optimal strategies during
inference. The results show the feasibility and efficiency of the model applied
in this kind of complicated situation. Compared with the results obtained by
the optimization solver OR-Tools, the model improves the reliability of the
distribution system and offers guidance for the broad application of UAVs. | [
"cs.LG",
"eess.SP",
"math.OC"
] |
Object detection and semantic segmentation are two of the most widely adopted
deep learning algorithms in agricultural applications. One of the major sources
of variability in image quality acquired outdoors for such tasks is changing
lighting conditions, which can alter the appearance of the objects or the
contents of the entire image. While transfer learning and data augmentation to
some extent reduce the need for large amounts of data to train deep neural
networks, the large variety of cultivars and the lack of shared datasets in
agriculture make wide-scale field deployments difficult. In this paper, we
present a high throughput robust active lighting-based camera system that
generates consistent images in all lighting conditions. We detail experiments
showing that this consistency in image quality leads to relatively fewer images
being needed to train deep neural networks for the task of object detection. We
further present results from a field experiment under extreme lighting
conditions, where images without active lighting fail to provide consistent
results. The experimental results show that on average, deep nets for object
detection trained on consistent data required nearly four times less data to
achieve similar level of accuracy. This proposed work could potentially provide
pragmatic solutions to computer vision needs in agriculture. | [
"cs.CV"
] |
Detecting novel objects without class information is not trivial, as it is
difficult to generalize from a small training set. This is an interesting
problem for underwater robotics, as modeling marine objects is inherently more
difficult in sonar images, and training data might not be available a priori.
Detection proposal algorithms can be used for this purpose but usually
require a large number of output bounding boxes. In this paper we propose the
use of a fully convolutional neural network that regresses an objectness value
directly from a Forward-Looking sonar image. By ranking objectness, we can
produce high recall (96 %) with only 100 proposals per image. In comparison,
EdgeBoxes requires 5000 proposals to achieve a slightly better recall of 97 %,
while Selective Search requires 2000 proposals to achieve 95 % recall. We also
show that our method outperforms a template matching baseline by a considerable
margin, and is able to generalize to completely new objects. We expect that
this kind of technique can be used in the field to find lost objects under the
sea. | [
"cs.CV",
"cs.LG",
"cs.RO",
"eess.IV"
] |
Unsupervised structure learning in high-dimensional time series data has
attracted a lot of research interest. For example, segmenting and labelling
high-dimensional time series can be helpful in behavior understanding and
medical diagnosis. Recent advances in generative sequential modeling have
suggested combining recurrent neural networks with state space models (e.g.,
Hidden Markov Models). This combination can model not only the long term
dependency in sequential data, but also the uncertainty included in the hidden
states. Inheriting these advantages of stochastic neural sequential models, we
propose a structured and stochastic sequential neural network, which models
both the long-term dependencies via recurrent neural networks and the
uncertainty in the segmentation and labels via discrete random variables. For
accurate and efficient inference, we present a bi-directional inference network
by reparameterizing the categorical segmentation and labels with the recently
proposed Gumbel-Softmax approximation and resorting to Stochastic Gradient
Variational Bayes. We evaluate the proposed model in a number of tasks,
including speech modeling, automatic segmentation and labeling in behavior
understanding, and sequential multi-objects recognition. Experimental results
have demonstrated that our proposed model can achieve significant improvement
over the state-of-the-art methods. | [
"cs.LG",
"cs.CV"
] |
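The Gumbel-Softmax reparameterization mentioned above is what lets gradients flow through the discrete segmentation and label variables. A minimal sketch using PyTorch's built-in implementation:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(8, 20, 5, requires_grad=True)  # (batch, time, K segment labels)

# Soft, differentiable samples from the categorical distribution over labels.
soft = F.gumbel_softmax(logits, tau=0.5, hard=False)

# Straight-through variant: one-hot in the forward pass, soft gradients backward.
hard = F.gumbel_softmax(logits, tau=0.5, hard=True)

# Such relaxed samples can be plugged into an SGVB objective, allowing an
# inference network over discrete segmentations to be trained by backprop.
hard.sum().backward()
print(logits.grad.shape)  # torch.Size([8, 20, 5])
```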
In this paper, we study the problem of Novel Class Discovery (NCD). NCD aims
at inferring novel object categories in an unlabeled set by leveraging from
prior knowledge of a labeled set containing different, but related classes.
Existing approaches tackle this problem by considering multiple objective
functions, usually involving specialized loss terms for the labeled and the
unlabeled samples respectively, and often requiring auxiliary regularization
terms. In this paper, we depart from this traditional scheme and introduce a
UNified Objective function (UNO) for discovering novel classes, with the
explicit purpose of favoring synergy between supervised and unsupervised
learning. Using a multi-view self-labeling strategy, we generate pseudo-labels
that can be treated homogeneously with ground truth labels. This leads to a
single classification objective operating on both known and unknown classes.
Despite its simplicity, UNO outperforms the state of the art by a significant
margin on several benchmarks (~+10% on CIFAR-100 and +8% on ImageNet). The
project page is available at: https://ncd-uno.github.io. | [
"cs.CV",
"cs.LG"
] |
This letter presents a novel framework termed DistSTN for the task of
synthetic aperture radar (SAR) automatic target recognition (ATR). In contrast
to the conventional SAR ATR algorithms, DistSTN considers a more challenging
practical scenario for non-cooperative targets whose aspect angles for training
are incomplete and limited in a partial range while those of testing samples
are unlimited. To address this issue, instead of learning pose-invariant
features, DistSTN introduces an elaborate feature disentangling model to
separate the learned pose factors of a SAR target from the identity ones so
that they can independently control the representation process of the target
image. To disentangle the explainable pose factors, we develop a pose
discrepancy spatial transformer module in DistSTN to characterize the intrinsic
transformation between the factors of two different targets with an explicit
geometric model. Furthermore, DistSTN develops an amortized inference scheme
that enables efficient feature extraction and recognition using an
encoder-decoder mechanism. Experimental results with the moving and stationary
target acquisition and recognition (MSTAR) benchmark demonstrate the
effectiveness of our proposed approach. Compared with the other ATR algorithms,
DistSTN can achieve higher recognition accuracy. | [
"cs.CV"
] |
We propose a new approach to value function approximation which combines
linear temporal difference reinforcement learning with subspace identification.
In practical applications, reinforcement learning (RL) is complicated by the
fact that state is either high-dimensional or partially observable. Therefore,
RL methods are designed to work with features of state rather than state
itself, and the success or failure of learning is often determined by the
suitability of the selected features. By comparison, subspace identification
(SSID) methods are designed to select a feature set which preserves as much
information as possible about state. In this paper we connect the two
approaches, looking at the problem of reinforcement learning with a large set
of features, each of which may only be marginally useful for value function
approximation. We introduce a new algorithm for this situation, called
Predictive State Temporal Difference (PSTD) learning. As in SSID for predictive
state representations, PSTD finds a linear compression operator that projects a
large set of features down to a small set that preserves the maximum amount of
predictive information. As in RL, PSTD then uses a Bellman recursion to
estimate a value function. We discuss the connection between PSTD and prior
approaches in RL and SSID. We prove that PSTD is statistically consistent,
perform several experiments that illustrate its properties, and demonstrate its
potential on a difficult optimal stopping problem. | [
"cs.LG",
"cs.AI"
] |
Efficient point cloud compression is fundamental to enable the deployment of
virtual and mixed reality applications, since the number of points to code can
range in the order of millions. In this paper, we present a novel data-driven
geometry compression method for static point clouds based on learned
convolutional transforms and uniform quantization. We perform joint
optimization of both rate and distortion using a trade-off parameter. In
addition, we cast the decoding process as a binary classification of the point
cloud occupancy map. Our method outperforms the MPEG reference solution in
terms of rate-distortion on the Microsoft Voxelized Upper Bodies dataset with
51.5% BDBR savings on average. Moreover, while octree-based methods face
exponential diminution of the number of points at low bitrates, our method
still produces high resolution outputs even at low bitrates. Code and
supplementary material are available at
https://github.com/mauriceqch/pcc_geo_cnn . | [
"cs.CV",
"cs.LG",
"eess.IV",
"stat.ML"
] |
Physics-informed neural networks (NN) are an emerging technique to improve
spatial resolution and enforce physical consistency of data from physics models
or satellite observations. A super-resolution (SR) technique is explored to
reconstruct high-resolution images ($4\times$) from lower resolution images in
an advection-diffusion model of atmospheric pollution plumes. SR performance is
generally increased when the advection-diffusion equation constrains the NN in
addition to conventional pixel-based constraints. The ability of SR techniques
to also reconstruct missing data is investigated by randomly removing image
pixels from the simulations and allowing the system to learn the content of
missing data. Improvements in S/N of $11\%$ are demonstrated when physics
equations are included in SR with $40\%$ pixel loss. Physics-informed NNs
accurately reconstruct corrupted images and generate better results compared to
the standard SR approaches. | [
"cs.CV",
"physics.geo-ph"
] |
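The physics constraint described above can be implemented as a finite-difference residual of the advection-diffusion equation, added to the usual pixel loss. The sketch below uses a steady-state form of the equation with assumed constant velocity, diffusivity, grid spacing, and loss weighting; it is an illustration, not the paper's exact formulation.

```python
import torch

def advection_diffusion_residual(c, u=1.0, v=0.5, D=0.1, h=1.0):
    """Steady-state residual u*dc/dx + v*dc/dy - D*(laplacian c) on a (B,1,H,W) field.

    Central finite differences on the interior; u, v, D, h are assumed constants.
    """
    dcdx = (c[:, :, 1:-1, 2:] - c[:, :, 1:-1, :-2]) / (2 * h)
    dcdy = (c[:, :, 2:, 1:-1] - c[:, :, :-2, 1:-1]) / (2 * h)
    lap = (c[:, :, 1:-1, 2:] + c[:, :, 1:-1, :-2] +
           c[:, :, 2:, 1:-1] + c[:, :, :-2, 1:-1] -
           4 * c[:, :, 1:-1, 1:-1]) / h**2
    return u * dcdx + v * dcdy - D * lap

sr = torch.rand(2, 1, 64, 64, requires_grad=True)  # super-resolved concentration field
hr = torch.rand(2, 1, 64, 64)

pixel_loss = ((sr - hr) ** 2).mean()               # conventional pixel-based constraint
physics_loss = (advection_diffusion_residual(sr) ** 2).mean()
loss = pixel_loss + 0.1 * physics_loss             # weighting factor is an assumption
loss.backward()
```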
The successful application of deep learning to many visual recognition tasks
relies heavily on the availability of a large amount of labeled data which is
usually expensive to obtain. The few-shot learning problem has attracted
increasing attention from researchers for building a robust model upon only a
few labeled samples. Most existing works tackle this problem under the
meta-learning framework by mimicking the few-shot learning task with an
episodic training strategy. In this paper, we propose a new transfer-learning
framework for semi-supervised few-shot learning to fully utilize the auxiliary
information from labeled base-class data and unlabeled novel-class data. The
framework consists of three components: 1) pre-training a feature extractor on
base-class data; 2) using the feature extractor to initialize the classifier
weights for the novel classes; and 3) further updating the model with a
semi-supervised learning method. Under the proposed framework, we develop a
novel method for semi-supervised few-shot learning called TransMatch by
instantiating the three components with Imprinting and MixMatch. Extensive
experiments on two popular benchmark datasets for few-shot learning,
CUB-200-2011 and miniImageNet, demonstrate that our proposed method can
effectively utilize the auxiliary information from labeled base-class data and
unlabeled novel-class data to significantly improve the accuracy of few-shot
learning task. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
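Step 2 of the framework above, imprinting classifier weights for the novel classes, amounts to setting each novel class weight vector to the normalized mean feature of its few labeled examples. A minimal sketch (function and variable names are illustrative):

```python
import torch
import torch.nn.functional as F

def imprint_weights(features, labels, n_classes):
    """Initialize a cosine classifier from few-shot features.

    features: (N, D) embeddings from the pretrained extractor
    labels:   (N,) novel-class labels in [0, n_classes)
    returns:  (n_classes, D) imprinted weight matrix
    """
    feats = F.normalize(features, dim=1)
    w = torch.zeros(n_classes, feats.size(1))
    for c in range(n_classes):
        w[c] = feats[labels == c].mean(dim=0)
    return F.normalize(w, dim=1)  # each class weight is a normalized prototype

feats = torch.randn(25, 64)            # 5 classes x 5 shots of extracted features
labels = torch.arange(5).repeat_interleave(5)
W = imprint_weights(feats, labels, 5)
logits = F.normalize(torch.randn(4, 64), dim=1) @ W.t()  # cosine-similarity scores
# The imprinted classifier is then refined with a semi-supervised method
# (e.g., MixMatch in the TransMatch instantiation).
```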
We study data poisoning attacks in the online setting where training items
arrive sequentially, and the attacker may perturb the current item to
manipulate online learning. Importantly, the attacker has no knowledge of
future training items nor the data generating distribution. We formulate online
data poisoning attack as a stochastic optimal control problem, and solve it
with model predictive control and deep reinforcement learning. We also upper
bound the suboptimality suffered by the attacker for not knowing the data
generating distribution. Experiments validate our control approach in
generating near-optimal attacks on both supervised and unsupervised learning
tasks. | [
"cs.LG",
"cs.CR",
"stat.ML"
] |
Node injection attack on Graph Neural Networks (GNNs) is an emerging and
practical attack scenario that the attacker injects malicious nodes rather than
modifying original nodes or edges to affect the performance of GNNs. However,
existing node injection attacks ignore extremely limited scenarios: the number
of injected nodes may be so large that the attack becomes perceptible to the
target GNN. In this paper, we focus on an extremely limited scenario of single
node injection evasion attack, i.e., the attacker is only allowed to inject one
single node during the test phase to hurt GNN's performance. The discreteness
of network structure and the coupling effect between network structure and node
features bring great challenges to this extremely limited scenario. We first
propose an optimization-based method to explore the performance upper bound of
single node injection evasion attack. Experimental results show that 100%,
98.60%, and 94.98% nodes on three public datasets are successfully attacked
even when only injecting one node with one edge, confirming the feasibility of
single node injection evasion attack. However, such an optimization-based
method needs to be re-optimized for each attack, which is computationally
unbearable. To solve the dilemma, we further propose a Generalizable Node
Injection Attack model, namely G-NIA, to improve the attack efficiency while
ensuring the attack performance. Experiments are conducted across three
well-known GNNs. Our proposed G-NIA significantly outperforms state-of-the-art
baselines and is 500 times faster than the optimization-based method when
inferring. | [
"cs.LG",
"cs.CR"
] |
Object tracking is the cornerstone of many visual analytics systems. While
considerable progress has been made in this area in recent years, robust,
efficient, and accurate tracking in real-world video remains a challenge. In
this paper, we present a hybrid tracker that leverages motion information from
the compressed video stream and a general-purpose semantic object detector
acting on decoded frames to construct a fast and efficient tracking engine. The
proposed approach is compared with several well-known recent trackers on the
OTB tracking dataset. The results indicate advantages of the proposed method in
terms of speed and/or accuracy. Other desirable features of the proposed method
are its simplicity and deployment efficiency, which stem from the fact that it
reuses the resources and information that may already exist in the system for
other reasons. | [
"cs.CV"
] |
Generative adversarial networks (GANs) are a class of generative models with
two antagonistic neural networks: a generator and a discriminator. These two
neural networks compete against each other through an adversarial process that
can be modeled as a stochastic Nash equilibrium problem. Since the associated
training process is challenging, it is fundamental to design reliable
algorithms to compute an equilibrium. In this paper, we propose a stochastic
relaxed forward-backward (SRFB) algorithm for GANs and show convergence to an
exact solution as the amount of available data increases. We also show
convergence of an averaged variant of the SRFB algorithm to a neighborhood of
the solution when only a few samples are available. In both cases, convergence is
guaranteed when the pseudogradient mapping of the game is monotone. This
assumption is among the weakest known in the literature. Moreover, we apply our
algorithm to the image generation problem. | [
"cs.LG",
"cs.GT",
"math.OC"
] |
Many compelling video processing effects can be achieved if per-pixel depth
information and 3D camera calibrations are known. However, the success of such
methods is highly dependent on the accuracy of this "scene-space" information.
We present a novel, sampling-based framework for processing video that enables
high-quality scene-space video effects in the presence of inevitable errors in
depth and camera pose estimation. Instead of trying to improve the explicit 3D
scene representation, the key idea of our method is to exploit the high
redundancy of approximate scene information that arises due to most scene
points being visible multiple times across many frames of video. Based on this
observation, we propose a novel pixel gathering and filtering approach. The
gathering step is general and collects pixel samples in scene-space, while the
filtering step is application-specific and computes a desired output video from
the gathered sample sets. Our approach is easily parallelizable and has been
implemented on GPU, allowing us to take full advantage of large volumes of
video data and facilitating practical runtimes on HD video using a standard
desktop computer. Our generic scene-space formulation is able to
comprehensively describe a multitude of video processing applications such as
denoising, deblurring, super resolution, object removal, computational shutter
functions, and other scene-space camera effects. We present results for various
casually captured, hand-held, moving, compressed, monocular videos depicting
challenging scenes recorded in uncontrolled environments. | [
"cs.CV",
"cs.GR"
] |
Dense stereo matching with deep neural networks is of great interest to the
research community. Existing stereo matching networks typically use slow and
computationally expensive 3D convolutions to improve the performance, which is
not friendly to real-world applications such as autonomous driving. In this
paper, we propose the Efficient Stereo Network (ESNet), which achieves high
performance and efficient inference at the same time. ESNet relies only on 2D
convolution and computes multi-scale cost volume efficiently using a
warping-based method to improve the performance in regions with fine-details.
In addition, we address the matching ambiguity issue in the occluded region by
proposing ESNet-M, a variant of ESNet that additionally estimates an occlusion
mask without supervision. We further improve the network performance by
proposing a new training scheme that includes dataset scheduling and
unsupervised pre-training. Compared with other low-cost dense stereo depth
estimation methods, our proposed approach achieves state-of-the-art performance
on the Scene Flow [1], DrivingStereo [2], and KITTI-2015 dataset [3]. Our code
will be made available. | [
"cs.CV"
] |
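The warping-based multi-scale cost volume mentioned above can be sketched with 2D operations only: shift the right-image features by each candidate disparity and correlate them with the left features. Below is a simplified single-scale version; the dot-product correlation and loop structure are assumptions for illustration, not ESNet's exact construction.

```python
import torch

def cost_volume(left, right, max_disp):
    """Correlation cost volume from 2D feature maps, built by horizontal shifting.

    left, right: (B, C, H, W) feature maps; returns (B, max_disp, H, W).
    """
    B, C, H, W = left.shape
    vol = left.new_zeros(B, max_disp, H, W)
    for d in range(max_disp):
        if d == 0:
            vol[:, d] = (left * right).mean(dim=1)
        else:
            # Shift right features by disparity d and correlate on the overlap.
            vol[:, d, :, d:] = (left[:, :, :, d:] * right[:, :, :, :-d]).mean(dim=1)
    return vol

left = torch.randn(1, 32, 64, 128)
right = torch.randn(1, 32, 64, 128)
vol = cost_volume(left, right, max_disp=24)  # consumed by 2D convolutions, not 3D ones
```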
Histological subtype of papillary (p) renal cell carcinoma (RCC), type 1 vs.
type 2, is an essential prognostic factor. The two subtypes of pRCC have a
similar pattern, i.e., the papillary architecture, yet some subtle differences,
including cellular and cell-layer level patterns. However, these patterns can
hardly be captured by existing CNN-based models in large-size histopathological
images, which hinders the direct application of such models to this
fine-grained classification task. This
paper proposes a novel instance-based Vision Transformer (i-ViT) to learn
robust representations of histopathological images for the pRCC subtyping task
by extracting finer features from instance patches (by cropping around
segmented nuclei and assigning predicted grades). The proposed i-ViT takes
top-K instances as input and aggregates them for capturing both the cellular
and cell-layer level patterns by a position-embedding layer, a grade-embedding
layer, and a multi-head multi-layer self-attention module. To evaluate the
performance of the proposed framework, experienced pathologists were invited to
select 1162 regions of interest from 171 whole slide images of type 1 and
type 2 pRCC. Experimental results show that the proposed method achieves better
performance than existing CNN-based models with a significant margin. | [
"cs.CV"
] |
This work proposes a novel method to robustly and accurately model time
series with heavy-tailed noise in non-stationary scenarios. In many practical
applications, time series have heavy-tailed noise that significantly impacts
the performance of classical forecasting models; in particular, accurately modeling
a distribution over extreme events is crucial to performing accurate time
series anomaly detection. We propose a Spliced Binned-Pareto distribution which
is both robust to extreme observations and allows accurate modeling of the full
distribution. Our method allows the capture of time dependencies in the higher
order moments of the distribution such as the tail heaviness. We compare the
robustness and the accuracy of the tail estimation of our method to other state
of the art methods on Twitter mentions count time series. | [
"stat.ML",
"cs.LG",
"stat.CO"
] |
Neural networks used for multi-interaction trajectory reconstruction lack the
ability to estimate the uncertainty in their outputs, which would be useful to
better analyse and understand the systems they model. In this paper we extend
the Factorised Neural Relational Inference model to output both a mean and a
standard deviation for each component of the phase space vector, which together
with an appropriate loss function, can account for uncertainty. A variety of
loss functions are investigated including ideas from convexification and a
Bayesian treatment of the problem. We show that the physical meaning of the
variables is important when considering the uncertainty and demonstrate the
existence of pathological local minima that are difficult to avoid during
training. | [
"cs.LG",
"stat.ML"
] |
Despite substantial progress in applying neural networks (NN) to a wide
variety of areas, they still largely suffer from a lack of transparency and
interpretability. While recent developments in explainable artificial
intelligence attempt to bridge this gap (e.g., by visualizing the correlation
between input pixels and final outputs), these approaches are limited to
explaining low-level relationships, and crucially, do not provide insights on
error correction. In this work, we propose a framework (VRX) to interpret
classification NNs with intuitive structural visual concepts. Given a trained
classification model, the proposed VRX extracts relevant class-specific visual
concepts and organizes them using structural concept graphs (SCG) based on
pairwise concept relationships. By means of knowledge distillation, we show VRX
can take a step towards mimicking the reasoning process of NNs and provide
logical, concept-level explanations for final model decisions. With extensive
experiments, we empirically show VRX can meaningfully answer "why" and "why
not" questions about the prediction, providing easy-to-understand insights
about the reasoning process. We also show that these insights can potentially
provide guidance on improving NN's performance. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Machine learning is increasingly used to inform decision-making in sensitive
situations where decisions have consequential effects on individuals' lives. In
these settings, in addition to requiring models to be accurate and robust,
socially relevant values such as fairness, privacy, accountability, and
explainability play an important role for the adoption and impact of said
technologies. In this work, we focus on algorithmic recourse, which is
concerned with providing explanations and recommendations to individuals who
are unfavourably treated by automated decision-making systems. We first perform
an extensive literature review, and align the efforts of many authors by
presenting unified definitions, formulations, and solutions to recourse. Then,
we provide an overview of the prospective research directions towards which the
community may engage, challenging existing assumptions and making explicit
connections to other ethical challenges such as security, privacy, and
fairness. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Graph neural networks (GNNs) have been widely used to learn vector
representation of graph-structured data and achieved better task performance
than conventional methods. The foundation of GNNs is the message passing
procedure, which propagates the information in a node to its neighbors. Since
this procedure proceeds one step per layer, the range of the information
propagation among nodes is small in the lower layers, and it expands toward the
higher layers. Therefore, a GNN model has to be deep enough to capture global
structural information in a graph. On the other hand, it is known that deep GNN
models suffer from performance degradation because they lose nodes' local
information, which would be essential for good model performance, through many
message passing steps. In this study, we propose a multi-level attention
pooling (MLAP) for graph-level classification tasks, which can adapt to both
local and global structural information in a graph. It has an attention pooling
layer for each message passing step and computes the final graph representation
by unifying the layer-wise graph representations. The MLAP architecture allows
models to utilize the structural information of graphs with multiple levels of
localities because it preserves layer-wise information before it is lost due
to oversmoothing. Results of our experiments show that the MLAP architecture
improves deeper models' performance in graph classification tasks compared to
the baseline architectures. In addition, analyses on the layer-wise graph
representations suggest that aggregating information from multiple levels of
localities indeed has the potential to improve the discriminability of learned
graph representations. | [
"cs.LG"
] |
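The MLAP idea, an attention pooling layer after each message-passing step with the layer-wise graph representations unified at the end, can be sketched as follows. The linear message-passing layer and the unification by summation are simplifying assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Pools node features of one graph into a vector with learned attention."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, h):                      # h: (n_nodes, dim)
        a = torch.softmax(self.score(h), dim=0)
        return (a * h).sum(dim=0)              # (dim,) graph representation

class MLAPSketch(nn.Module):
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_layers)])
        self.pools = nn.ModuleList([AttentionPool(dim) for _ in range(n_layers)])

    def forward(self, h, adj):                 # adj: (n, n) normalized adjacency
        reps = []
        for layer, pool in zip(self.layers, self.pools):
            h = torch.relu(layer(adj @ h))     # one message-passing step
            reps.append(pool(h))               # layer-wise graph representation
        return torch.stack(reps).sum(dim=0)    # unify across levels of locality

n, dim = 10, 16
model = MLAPSketch(dim)
out = model(torch.randn(n, dim), torch.eye(n))
```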
The rapid spread of COVID-19 cases in recent months has strained hospital
resources, making rapid and accurate triage of patients presenting to emergency
departments a necessity. Machine learning techniques using clinical data such
as chest X-rays have been used to predict which patients are most at risk of
deterioration. We consider the task of predicting two types of patient
deterioration based on chest X-rays: adverse event deterioration (i.e.,
transfer to the intensive care unit, intubation, or mortality) and increased
oxygen requirements beyond 6 L per day. Due to the relative scarcity of
COVID-19 patient data, existing solutions leverage supervised pretraining on
related non-COVID images, but this is limited by the differences between the
pretraining data and the target COVID-19 patient data. In this paper, we use
self-supervised learning based on the momentum contrast (MoCo) method in the
pretraining phase to learn more general image representations to use for
downstream tasks. We present three results. The first is deterioration
prediction from a single image, where our model achieves an area under receiver
operating characteristic curve (AUC) of 0.742 for predicting an adverse event
within 96 hours (compared to 0.703 with supervised pretraining) and an AUC of
0.765 for predicting oxygen requirements greater than 6 L a day at 24 hours
(compared to 0.749 with supervised pretraining). We then propose a new
transformer-based architecture that can process sequences of multiple images
for prediction and show that this model can achieve an improved AUC of 0.786
for predicting an adverse event at 96 hours and an AUC of 0.848 for predicting
mortalities at 96 hours. A small pilot clinical study suggested that the
prediction accuracy of our model is comparable to that of experienced
radiologists analyzing the same information. | [
"cs.CV",
"cs.LG"
] |
The data scarcity problem in Electroencephalography (EEG) based affective
computing results in difficulty building an effective model with high
accuracy and stability using machine learning algorithms, especially deep
learning models. Data augmentation has recently achieved considerable
performance improvement for deep learning models: increased accuracy,
stability, and reduced over-fitting. In this paper, we propose a novel data
augmentation framework, namely Generative Adversarial Network-based
Self-supervised Data Augmentation (GANSER). As the first to combine adversarial
training with self-supervised learning for EEG-based emotion recognition, the
proposed framework can generate high-quality and high-diversity simulated EEG
samples. In particular, we utilize adversarial training to learn an EEG
generator and force the generated EEG signals to approximate the distribution
of real samples, ensuring the quality of augmented samples. A transformation
function is employed to mask parts of EEG signals and force the generator to
synthesize potential EEG signals based on the remaining parts, to produce a
wide variety of samples. The masking probability during transformation is
introduced as prior knowledge to guide the extraction of distinguishable
features from simulated EEG signals and generalize the classifier to the augmented sample
space. Finally, extensive experiments demonstrate our proposed method can help
emotion recognition for performance gain and achieve state-of-the-art results. | [
"cs.LG",
"cs.AI",
"cs.HC"
] |
Efficient methods to evaluate new algorithms are critical for improving
interactive bandit and reinforcement learning systems such as recommendation
systems. A/B tests are reliable, but are time- and money-consuming, and entail
a risk of failure. In this paper, we develop an alternative method, which
predicts the performance of algorithms given historical data that may have been
generated by a different algorithm. Our estimator has the property that its
prediction converges in probability to the true performance of a counterfactual
algorithm at a rate of $\sqrt{N}$, as the sample size $N$ increases. We also
show a correct way to estimate the variance of our prediction, thus allowing
the analyst to quantify the uncertainty in the prediction. These properties
hold even when the analyst does not know which among a large number of
potentially important state variables are actually important. We validate our
method by a simulation experiment about reinforcement learning. We finally
apply it to improve advertisement design by a major advertisement company. We
find that our method produces smaller mean squared errors than state-of-the-art
methods. | [
"cs.LG",
"cs.AI",
"econ.EM",
"stat.ME",
"stat.ML"
] |
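The basic building block of this kind of counterfactual performance prediction is inverse-propensity scoring: reweight logged rewards by the ratio of the evaluated policy's action probabilities to the logging policy's. Below is a standard bandit-style IPS sketch for illustration; it is not the paper's full method, which additionally handles variance estimation and unknown important state variables.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 10000, 4                       # logged rounds, number of actions

# Logging policy mu and a counterfactual policy pi (both known here).
mu = np.full(K, 1.0 / K)
pi = np.array([0.1, 0.2, 0.3, 0.4])

actions = rng.choice(K, size=N, p=mu)             # historical data from mu
true_means = np.array([0.2, 0.4, 0.5, 0.8])
rewards = rng.binomial(1, true_means[actions])

# IPS estimate of pi's value from data generated by mu.
weights = pi[actions] / mu[actions]
v_hat = np.mean(weights * rewards)
se = np.std(weights * rewards, ddof=1) / np.sqrt(N)  # plug-in uncertainty

print(f"estimate {v_hat:.3f} +/- {1.96*se:.3f}, truth {pi @ true_means:.3f}")
```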
Generative Adversarial Networks (GANs) are an elegant mechanism for data
generation. However, a key challenge when using GANs is how to best measure
their ability to generate realistic data. In this paper, we demonstrate that an
intrinsic dimensional characterization of the data space learned by a GAN model
leads to an effective evaluation metric for GAN quality. In particular, we
propose a new evaluation measure, CrossLID, that assesses the local intrinsic
dimensionality (LID) of real-world data with respect to neighborhoods found in
GAN-generated samples. Intuitively, CrossLID measures the degree to which
manifolds of two data distributions coincide with each other. In experiments on
4 benchmark image datasets, we compare our proposed measure to several
state-of-the-art evaluation metrics. Our experiments show that CrossLID is
strongly correlated with the progress of GAN training, is sensitive to mode
collapse, is robust to small-scale noise and image transformations, and robust
to sample size. Furthermore, we show how CrossLID can be used within the GAN
training process to improve generation quality. | [
"cs.LG",
"stat.ML"
] |
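CrossLID builds on the maximum-likelihood LID estimator, which uses the distances from a point to its k nearest neighbors. Here is a sketch of the cross-set idea described above: estimating the LID of real points with respect to neighborhoods found in generated samples. The choice of k and the brute-force neighbor search are assumptions for illustration.

```python
import numpy as np

def cross_lid(real, generated, k=20):
    """MLE local intrinsic dimensionality of real points w.r.t. generated samples.

    real:      (n, d) real data points
    generated: (m, d) generated samples; neighborhoods are searched in this set
    """
    lids = []
    for x in real:
        r = np.sort(np.linalg.norm(generated - x, axis=1))[:k]  # k nearest distances
        lids.append(-1.0 / np.mean(np.log(r[:-1] / r[-1])))     # MLE of LID
    return float(np.mean(lids))

rng = np.random.default_rng(0)
real = rng.normal(size=(200, 10))
good = rng.normal(size=(1000, 10))       # matches the real distribution
bad = rng.normal(size=(1000, 10)) + 3.0  # shifted / mismatched samples
print(cross_lid(real, good), cross_lid(real, bad))
# Deviation from the value obtained with well-matched samples signals
# distribution mismatch; the paper also uses this signal to detect mode collapse.
```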
Neural architecture search has attracted wide attention in both academia and
industry. To accelerate it, researchers proposed weight-sharing methods which
first train a super-network to reuse computation among different operators,
from which exponentially many sub-networks can be sampled and efficiently
evaluated. These methods enjoy great advantages in terms of computational
costs, but the sampled sub-networks are not guaranteed to be estimated
precisely unless an individual training process is used. This paper attributes
such inaccuracy to the inevitable mismatch between assembled network layers, so that
there is a random error term added to each estimation. We alleviate this issue
by training a graph convolutional network to fit the performance of sampled
sub-networks so that the impact of random errors becomes minimal. With this
strategy, we achieve a higher rank correlation coefficient in the selected set
of candidates, which consequently leads to better performance of the final
architecture. In addition, our approach also enjoys the flexibility of being
used under different hardware constraints, since the graph convolutional
network has provided an efficient lookup table of the performance of
architectures in the entire search space. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
Several applications, such as autonomous driving, augmented reality, and
virtual reality, require a precise prediction of the 3D human pose. Recently, a
new problem was introduced in the field: predicting 3D human poses from
observed 2D poses. We propose Skeleton-Graph, a deep spatio-temporal graph CNN
model that predicts the future 3D skeleton poses in a single pass from the 2D
ones. Unlike prior works, Skeleton-Graph focuses on modeling the interaction
between the skeleton joints by exploiting their spatial configuration. This is
achieved by formulating the problem as a graph structure while learning a
suitable graph adjacency kernel. By design, Skeleton-Graph predicts the future
3D poses without long-term divergence, unlike prior works. We also introduce a
new metric that measures the long-term divergence of predictions. Our results
show an FDE improvement of at least 27% and an ADE improvement of 4% on the
GTA-IM and PROX datasets, respectively, in comparison with prior works. We also
achieve 88% and 93% less divergence in long-term motion prediction than prior
works on the GTA-IM and PROX datasets, respectively.
https://github.com/abduallahmohamed/Skeleton-Graph.git | [
"cs.CV",
"cs.RO"
] |
In this work, we study the transfer learning problem under high-dimensional
generalized linear models (GLMs), which aim to improve the fit on target data
by borrowing information from useful source data. Given which sources to
transfer, we propose an oracle algorithm and derive its $\ell_2$-estimation
error bounds. The theoretical analysis shows that under certain conditions,
when the target and source are sufficiently close to each other, the estimation
error bound could be improved over that of the classical penalized estimator
using only target data. When we don't know which sources to transfer, an
algorithm-free transferable source detection approach is introduced to detect
informative sources. The detection consistency is proved under the
high-dimensional GLM transfer learning setting. Extensive simulations and a
real-data experiment verify the effectiveness of our algorithms. | [
"stat.ML",
"cs.LG",
"stat.ME"
] |
Transformers are widely used in natural language processing due to their
ability to model longer-term dependencies in text. Although these models
achieve state-of-the-art performance for many language related tasks, their
applicability outside of the natural language processing field has been
minimal. In this work, we propose the use of transformer models for the
prediction of dynamical systems representative of physical phenomena. The use
of Koopman based embeddings provide a unique and powerful method for projecting
any dynamical system into a vector representation which can then be predicted
by a transformer model. The proposed model is able to accurately predict
various dynamical systems and outperform classical methods that are commonly
used in the scientific machine learning literature. | [
"cs.LG",
"physics.comp-ph"
] |
The emergence of COVID-19 has necessitated many efforts by the scientific
community for its proper management. An urgent clinical reaction is required in
the face of the unending devastation being caused by the pandemic. These
efforts include technological innovations for improvement in screening,
treatment, vaccine development, contact tracing and, survival prediction. The
use of Deep Learning (DL) and Artificial Intelligence (AI) can be sought in all
of the above-mentioned spheres. This paper aims to review the role of Deep
Learning and Artificial intelligence in various aspects of the overall COVID-19
management and particularly for COVID-19 detection and classification. The DL
models are developed to analyze clinical modalities like CT scans and X-Ray
images of patients and predict their pathological condition. A DL model aims to
detect the COVID-19 pneumonia, classify and distinguish between COVID-19,
Community-Acquired Pneumonia (CAP), Viral and Bacterial pneumonia, and normal
conditions. Furthermore, sophisticated models can be built to segment the
affected area in the lungs and quantify the infection volume for a better
understanding of the extent of damage. Many models have been developed either
independently or with the help of pre-trained models like VGG19, ResNet50, and
AlexNet leveraging the concept of transfer learning. Apart from model
development, data preprocessing and augmentation are also performed to cope
with the challenge of insufficient data samples often encountered in medical
applications. Overall, DL and AI can be effectively implemented to withstand
the challenges posed by this global emergency. | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
Head pose estimation and face alignment constitute a backbone preprocessing
for many applications relying on face analysis. While both are closely related
tasks, they are generally addressed separately, e.g. by deducing the head pose
from the landmark locations. In this paper, we propose to entwine face
alignment and head pose tasks inside an attentional cascade. This cascade uses
a geometry transfer network for integrating heterogeneous annotations to
enhance landmark localization accuracy. Furthermore, we propose a
doubly-conditional fusion scheme to select relevant feature maps, and regions
thereof, based on a current head pose and landmark localization estimate. We
empirically show the benefit of entwining head pose and landmark localization
objectives inside our architecture, and that the proposed AC-DC model enhances
the state-of-the-art accuracy on multiple databases for both face alignment and
head pose estimation tasks. | [
"cs.CV",
"cs.LG"
] |
Computational modeling of human multimodal language is an emerging research
area in natural language processing spanning the language, visual and acoustic
modalities. Comprehending multimodal language requires modeling not only the
interactions within each modality (intra-modal interactions) but more
importantly the interactions between modalities (cross-modal interactions). In
this paper, we propose the Recurrent Multistage Fusion Network (RMFN) which
decomposes the fusion problem into multiple stages, each of them focused on a
subset of multimodal signals for specialized, effective fusion. Cross-modal
interactions are modeled using this multistage fusion approach which builds
upon intermediate representations of previous stages. Temporal and intra-modal
interactions are modeled by integrating our proposed fusion approach with a
system of recurrent neural networks. The RMFN displays state-of-the-art
performance in modeling human multimodal language across three public datasets
relating to multimodal sentiment analysis, emotion recognition, and speaker
traits recognition. We provide visualizations to show that each stage of fusion
focuses on a different subset of multimodal signals, learning increasingly
discriminative multimodal representations. | [
"cs.LG",
"cs.AI",
"cs.CL",
"cs.NE",
"stat.ML"
] |
Semi-supervised learning, which emerged at the beginning of this century, is a
type of learning method situated between traditional supervised learning and
unsupervised learning. The main idea of semi-supervised learning
is to introduce unlabeled samples into the model training process to avoid
performance (or model) degeneration due to insufficiency of labeled samples.
Semi-supervised learning has been applied successfully in many fields. This
paper reviews the development process and main theories of semi-supervised
learning, as well as its recent advances and importance in solving real-world
problems demonstrated by typical application examples. | [
"cs.LG"
] |
Recent automotive vision work has focused almost exclusively on processing
forward-facing cameras. However, future autonomous vehicles will not be viable
without a more comprehensive surround sensing, akin to a human driver, as can
be provided by 360{\deg} panoramic cameras. We present an approach to adapt
contemporary deep network architectures developed on conventional rectilinear
imagery to work on equirectangular 360{\deg} panoramic imagery. To address the
lack of annotated panoramic automotive datasets, we adapt a
contemporary automotive dataset, via style and projection transformations, to
facilitate the cross-domain retraining of contemporary algorithms for panoramic
imagery. Following this approach we retrain and adapt existing architectures to
recover scene depth and 3D pose of vehicles from monocular panoramic imagery
without any panoramic training labels or calibration parameters. Our approach
is evaluated qualitatively on crowd-sourced panoramic images and quantitatively
using an automotive environment simulator to provide the first benchmark for
such techniques within panoramic imagery. | [
"cs.CV"
] |
DETR has been recently proposed to eliminate the need for many hand-designed
components in object detection while demonstrating good performance. However,
it suffers from slow convergence and limited feature spatial resolution, due to
the limitation of Transformer attention modules in processing image feature
maps. To mitigate these issues, we proposed Deformable DETR, whose attention
modules only attend to a small set of key sampling points around a reference.
Deformable DETR can achieve better performance than DETR (especially on small
objects) with 10 times fewer training epochs. Extensive experiments on the COCO
benchmark demonstrate the effectiveness of our approach. Code is released at
https://github.com/fundamentalvision/Deformable-DETR. | [
"cs.CV"
] |
We propose a framework for general probabilistic multi-step time series
regression. Specifically, we exploit the expressiveness and temporal nature of
Sequence-to-Sequence Neural Networks (e.g. recurrent and convolutional
structures), the nonparametric nature of Quantile Regression and the efficiency
of Direct Multi-Horizon Forecasting. A new training scheme,
*forking-sequences*, is designed for sequential nets to boost stability and
performance. We show that the approach accommodates both temporal and static
covariates, learning across multiple related series, shifting seasonality,
future planned event spikes and cold-starts in real life large-scale
forecasting. The performance of the framework is demonstrated in an application
to predict the future demand of items sold on Amazon.com, and in a public
probabilistic forecasting competition to predict electricity price and load. | [
"stat.ML"
] |
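The quantile-regression component above relies on the pinball (quantile) loss. A minimal multi-horizon, multi-quantile sketch:

```python
import numpy as np

def pinball_loss(y_true, y_pred, quantiles):
    """Average quantile (pinball) loss.

    y_true: (B, H) actuals over H horizons
    y_pred: (B, H, Q) predicted quantiles
    quantiles: length-Q sequence, e.g. [0.1, 0.5, 0.9]
    """
    q = np.asarray(quantiles)[None, None, :]
    diff = y_true[..., None] - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

y = np.random.default_rng(0).normal(size=(32, 24))   # 24-step-ahead targets
pred = np.stack([y - 1.0, y, y + 1.0], axis=-1)      # toy P10/P50/P90 forecasts
print(pinball_loss(y, pred, [0.1, 0.5, 0.9]))
# Minimizing this loss makes each output head a nonparametric estimate of the
# corresponding conditional quantile, with no distributional assumption.
```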
Global pooling, such as max- or sum-pooling, is one of the key ingredients in
deep neural networks used for processing images, texts, graphs and other types
of structured data. Based on the recent DeepSets architecture proposed by
Zaheer et al. (NIPS 2017), we introduce a Set Aggregation Network (SAN) as an
alternative global pooling layer. In contrast to typical pooling operators, SAN
can embed a given set of features into a vector representation of arbitrary
size. We show that by adjusting the size of the embedding, SAN is capable of
preserving the whole information from the input. In experiments, we demonstrate
that replacing global pooling layer by SAN leads to the improvement of
classification accuracy. Moreover, it is less prone to overfitting and can be
used as a regularizer. | [
"cs.LG",
"cs.AI",
"cs.CV",
"stat.ML"
] |
Estimation of differential geometric quantities in discrete 3D data
representations is one of the crucial steps in the geometry processing
pipeline. Specifically, estimating normals and sharp feature lines from raw
point cloud helps improve meshing quality and allows us to use more precise
surface reconstruction techniques. When designing a learnable approach to such
problems, the main difficulty is selecting neighborhoods in a point cloud and
incorporating geometric relations between the points. In this study, we present
a geometric attention mechanism that can provide such properties in a learnable
fashion. We establish the usefulness of the proposed technique with several
experiments on the prediction of normal vectors and the extraction of feature
lines. | [
"cs.CV"
] |
The Fr\'echet Inception Distance (FID) has been used to evaluate hundreds of
generative models. We introduce FastFID, which can efficiently train generative
models with FID as a loss function. Using FID as an additional loss for
Generative Adversarial Networks improves their FID. | [
"cs.LG",
"stat.ML"
] |
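For reference, the FID optimized here is the Fréchet distance between Gaussians fitted to Inception activations of real and generated samples. Below is a standard NumPy implementation of the formula, with random features standing in for Inception activations; FastFID's contribution is making this quantity cheap enough to backpropagate through as a loss, while the formula itself is unchanged.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(act1, act2):
    """Frechet Inception Distance between two sets of (n, d) activations."""
    mu1, mu2 = act1.mean(axis=0), act2.mean(axis=0)
    s1 = np.cov(act1, rowvar=False)
    s2 = np.cov(act2, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):   # small imaginary parts can appear numerically
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(size=(2048, 64))           # stand-ins for Inception features
fake = rng.normal(loc=0.3, size=(2048, 64))
print(fid(real, fake))
```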
The conventional methods for estimating camera poses and scene structures
from severely blurry or low resolution images often result in failure. The
off-the-shelf deblurring or super-resolution methods may show visually pleasing
results. However, applying each technique independently before matching is
generally unprofitable because this naive series of procedures ignores the
consistency between images. In this paper, we propose a pioneering unified
framework that solves four problems simultaneously, namely, dense depth
reconstruction, camera pose estimation, super-resolution, and deblurring. By
reflecting a physical imaging process, we formulate a cost minimization problem
and solve it using an alternating optimization technique. The experimental
results on both synthetic and real videos show high-quality depth maps derived
from severely degraded images that contrast the failures of naive multi-view
stereo methods. Our proposed method also produces outstanding deblurred and
super-resolved images unlike the independent application or combination of
conventional video deblurring, super-resolution methods. | [
"cs.CV"
] |
Self-supervised learning has been widely used to obtain transferrable
representations from unlabeled images. Especially, recent contrastive learning
methods have shown impressive performances on downstream image classification
tasks. While these contrastive methods mainly focus on generating invariant
global representations at the image-level under semantic-preserving
transformations, they are prone to overlook spatial consistency of local
representations and therefore have a limitation in pretraining for localization
tasks such as object detection and instance segmentation. Moreover,
aggressively cropped views used in existing contrastive methods can minimize
representation distances between the semantically different regions of a single
image.
In this paper, we propose a spatially consistent representation learning
algorithm (SCRL) for multi-object and location-specific tasks. In particular,
we devise a novel self-supervised objective that tries to produce coherent
spatial representations of a randomly cropped local region according to
geometric translations and zooming operations. On various downstream
localization tasks with benchmark datasets, the proposed SCRL shows significant
performance improvements over the image-level supervised pretraining as well as
the state-of-the-art self-supervised learning methods.
Code is available at https://github.com/kakaobrain/scrl | [
"cs.CV",
"cs.LG"
] |
In many real-world multi-agent cooperative tasks, due to high cost and risk,
agents cannot interact with the environment and collect experiences during
learning, but have to learn from offline datasets. However, the transition
probabilities calculated from the dataset can be much different from the
transition probabilities induced by the learned policies of other agents,
creating large errors in value estimates. Moreover, the experience
distributions of agents' datasets may vary wildly due to diverse behavior
policies, causing large difference in value estimates between agents.
Consequently, agents will learn uncoordinated suboptimal policies. In this
paper, we propose MABCQ, which exploits value deviation and transition
normalization to modify the transition probabilities. Value deviation
optimistically increases the transition probabilities of high-value next
states, and transition normalization normalizes the biased transition
probabilities of next states. They together encourage agents to discover
potential optimal and coordinated policies. Mathematically, we prove the
convergence of Q-learning under the non-stationary transition probabilities
after modification. Empirically, we show that MABCQ greatly outperforms
baselines and reduces the difference in value estimates between agents. | [
"cs.LG",
"cs.MA"
] |
Particle-based optimization algorithms have recently been developed as
sampling methods that iteratively update a set of particles to approximate a
target distribution. In particular, Stein variational gradient descent has
gained attention in the approximate inference literature for its flexibility
and accuracy. We empirically explore the ability of this method to sample from
multi-modal distributions and focus on two important issues: (i) the inability
of the particles to escape from local modes and (ii) the inefficacy in
reproducing the density of the different regions. We propose an annealing
schedule to solve these issues and show, through various experiments, how this
simple solution leads to significant improvements in mode coverage, without
invalidating any theoretical properties of the original algorithm. | [
"cs.LG"
] |
We propose a new model based on deconvolutional networks and SAX
discretization to learn representations for multivariate time series.
Deconvolutional networks fully exploit the powerful expressiveness of deep
neural networks in an unsupervised manner. We design a network structure
specifically to capture the cross-channel correlation with deconvolution,
forcing the pooling operation to perform dimension reduction along each
position within the individual channels. Discretization based on Symbolic
Aggregate Approximation is applied to the feature vectors to further extract a
bag of features. We show how this representation and bag of features help with
classification. A full comparison with the sequence-distance-based approach is
provided to demonstrate the effectiveness of our approach on standard datasets.
We further build a Markov matrix from the discretized representation produced
by the deconvolution to visualize the time series as complex networks, which
exhibit more class-specific statistical properties and clearer structures
across labels. | [
"cs.LG",
"cs.NE"
] |
The representation used for Facial Expression Recognition (FER) usually
contains expression information along with other variations such as identity
and illumination. In this paper, we propose a novel Disentangled Expression
learning-Generative Adversarial Network (DE-GAN) to explicitly disentangle the
facial expression representation from identity information. In this
learning-by-reconstruction method, the facial expression representation is
learned by reconstructing an expression image using an encoder-decoder based
generator. This expression representation is disentangled from the identity
component by explicitly providing the identity code to the decoder part of
DE-GAN. The process of expression image reconstruction and disentangled
expression representation learning is improved by performing expression and
identity classification in the discriminator of DE-GAN. The disentangled facial
expression representation is then used for facial expression recognition
employing simple classifiers such as an SVM or an MLP. The experiments are
performed on publicly available and widely used facial expression databases
(CK+, MMI, Oulu-CASIA). The experimental results show that the proposed
technique produces results comparable to state-of-the-art methods. | [
"cs.CV"
] |
Transfer learning across different reinforcement learning (RL) tasks is
becoming an increasingly valuable area of research. We consider a goal-based
multi-task RL framework and mechanisms by which previously solved tasks can
reduce sample complexity and regret when the agent is faced with a new task.
Specifically, we introduce two metrics on the state space that encode notions
of traversability for an agent. Using these metrics, a
topological covering is constructed by way of a set of landmark states in a
fully self-supervised manner. We show that these landmark coverings confer
theoretical advantages for transfer learning within the goal-based multi-task
RL setting. Specifically, we demonstrate three mechanisms by which landmark
coverings can be used for successful transfer learning. First, we extend the
Landmark Options Via Reflection (LOVR) framework to this new topological
covering; second, we use the landmark-centric value functions themselves as
features and define a greedy zombie policy that achieves near oracle
performance on a sequence of zero-shot transfer tasks; finally, motivated by
the second transfer mechanism, we introduce a learned reward function that
provides a denser reward signal for goal-based RL. Our novel topological
landmark covering confers beneficial theoretical guarantees, bounding the
Q-values at each state-action pair. In doing so, we introduce a mechanism that
prunes infeasible actions, which cannot possibly be part of an optimal policy
for the current goal. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
We propose split-brain autoencoders, a straightforward modification of the
traditional autoencoder architecture, for unsupervised representation learning.
The method adds a split to the network, resulting in two disjoint sub-networks.
Each sub-network is trained to perform a difficult task -- predicting one
subset of the data channels from another. Together, the sub-networks extract
features from the entire input signal. By forcing the network to solve
cross-channel prediction tasks, we induce a representation within the network
which transfers well to other, unseen tasks. This method achieves
state-of-the-art performance on several large-scale transfer learning
benchmarks. | [
"cs.CV"
] |