text (stringlengths: 29 to 3.31k) | label (sequencelengths: 1 to 11)
---|---|
In the past decade, model-free reinforcement learning (RL) has provided
solutions to challenging domains such as robotics. Model-based RL shows the
prospect of being more sample-efficient than model-free methods in terms of
agent-environment interactions, because the model enables extrapolation to
unseen situations. In the more recent past, model-based methods have shown
superior results compared to model-free methods in some challenging domains
with non-linear state transitions. At the same time, it has become apparent
that RL is not market-ready yet and that many real-world applications are going
to require model-based approaches, because model-free methods are too
sample-inefficient and show poor performance in early stages of training. The
latter is particularly important in industry, e.g. in production systems that
directly impact a company's revenue. This demonstrates the necessity for a
toolbox to push the boundaries of model-based RL. While there is a plethora of
toolboxes for model-free RL, model-based RL has received little attention in
terms of toolbox development. Bellman aims to fill this gap and introduces the
first thoroughly designed and tested model-based RL toolbox using
state-of-the-art software engineering practices. Our modular approach makes it possible
to combine a wide range of environment models with generic model-based agent
classes that recover state-of-the-art algorithms. We also provide an experiment
harness to compare both model-free and model-based agents in a systematic
fashion w.r.t. user-defined evaluation metrics (e.g. cumulative reward). This
paves the way for new research directions, e.g. investigating uncertainty-aware
environment models that are not necessarily neural-network-based, or developing
algorithms to solve industrially-motivated benchmarks that share
characteristics with real-world problems. | [
"cs.LG"
] |
With the development of neural networks, more and more deep neural networks are
being adopted for various tasks, such as image classification. However, due to
their huge computational overhead, these networks cannot be deployed on mobile
devices or in other low-latency scenarios. To address this dilemma,
multi-classifier convolutional networks have been proposed to allow faster
inference via early exits at intermediate classifiers. These networks use
sophisticated designs to increase early-classifier accuracy. However, naively
training a multi-classifier network can hurt the performance (accuracy) of the
deep neural network, as the early classifiers interfere with the feature
generation process.
In this paper, we propose a general training framework named
multi-self-distillation learning (MSD), which mines the knowledge of different
classifiers within the same network and increases the accuracy of every
classifier. Our approach can be applied not only to multi-classifier networks,
but also to modern CNNs (e.g., the ResNet series) augmented with additional
side-branch classifiers. We use a sampling-based branch augmentation technique
to transform a single-classifier network into a multi-classifier network. This
reduces the capacity gap between different classifiers and improves the
effectiveness of applying MSD. Our experiments show that MSD improves the
accuracy of various networks: it significantly enhances the accuracy of every
classifier in an existing multi-classifier network (MSDNet), and it equips
vanilla single-classifier networks with accurate internal classifiers while also improving their final
accuracy. | [
"cs.CV"
] |
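To make the training objective concrete, here is a minimal PyTorch sketch of a multi-self-distillation-style loss: every branch classifier gets a supervised cross-entropy term, and each shallower branch is additionally distilled toward the deepest classifier through a temperature-softened KL term. This is an assumed form for illustration (the function name `msd_loss` and the hyper-parameters `T` and `alpha` are ours), not the authors' exact objective.

```python
import torch.nn.functional as F

def msd_loss(branch_logits, labels, T=3.0, alpha=0.5):
    """Sketch of a multi-self-distillation objective (assumed form).

    branch_logits: list of logits, ordered shallowest to deepest classifier.
    """
    teacher = branch_logits[-1]                       # deepest classifier
    loss = sum(F.cross_entropy(z, labels) for z in branch_logits)
    for z in branch_logits[:-1]:                      # distill shallow branches
        loss += alpha * (T * T) * F.kl_div(
            F.log_softmax(z / T, dim=1),
            F.softmax(teacher.detach() / T, dim=1),   # teacher is not updated here
            reduction="batchmean")
    return loss
```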
Nowadays, deep learning can be employed in a wide range of fields, including
medicine and engineering. Within deep learning, the Convolutional Neural Network
(CNN) is extensively used in pattern and sequence recognition, video
analysis, natural language processing, spam detection, topic categorization,
regression analysis, speech recognition, image classification, object
detection, segmentation, face recognition, robotics, and control. Its
near-human-level accuracy in large-scale applications has led to
the growing acceptance of CNNs in recent years. The primary contribution of this
paper is to analyze the impact of the pattern of the hidden layers of a CNN
over the overall performance of the network. To demonstrate this influence, we
applied neural networks with different numbers of layers to the Modified National Institute
of Standards and Technology (MNIST) dataset. A further aim is to observe the
variation in the accuracy of the network for various numbers of hidden layers
and epochs, and to compare and contrast the results. The system is trained
using stochastic gradient descent with the backpropagation algorithm and tested
with a feedforward pass. | [
"cs.CV",
"cs.LG",
"cs.NE",
"stat.ML"
] |
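As a rough illustration of this kind of depth study (the paper does not give its exact architecture here, so `make_cnn` and its layer widths are our assumptions), the following PyTorch sketch builds an MNIST classifier with a configurable number of hidden convolutional layers, which can then be trained at several depths and epoch counts to compare accuracies.

```python
import torch.nn as nn

def make_cnn(n_conv_layers: int, n_classes: int = 10) -> nn.Sequential:
    """Simple MNIST CNN with a configurable number of hidden conv layers."""
    layers, in_ch = [], 1                 # MNIST images have one channel
    for i in range(n_conv_layers):
        out_ch = 32 * (i + 1)
        layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU()]
        in_ch = out_ch
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(in_ch, n_classes)]
    return nn.Sequential(*layers)

models = {depth: make_cnn(depth) for depth in (1, 2, 3, 4)}  # train and compare
```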
Feature pyramids and iterative refinement have recently led to great progress
in optical flow estimation. However, downsampling in feature pyramids can cause
blending of foreground objects with the background, which will mislead
subsequent decisions in the iterative processing. The result is missing
details, especially in the flow of thin and small structures. We propose a
novel Residual Feature Pyramid Module (RFPM) which retains important details in
the feature map without changing the overall iterative refinement design of the
optical flow estimation. RFPM incorporates a residual structure between
multiple feature pyramids into a downsampling module that corrects the blending
of objects across boundaries. We demonstrate how to integrate our module with
two state-of-the-art iterative refinement architectures. Results show that our
RFPM visibly reduces flow errors and improves state-of-the-art performance in the
clean pass of Sintel, and is one of the top-performing methods in KITTI.
Owing to the particular modular structure of RFPM, we introduce a special
transfer learning approach that can dramatically decrease the training time
compared to a typical full optical flow training schedule on multiple datasets. | [
"cs.CV"
] |
Generative adversarial networks have been very successful in generative
modeling, however they remain relatively challenging to train compared to
standard deep neural networks. In this paper, we propose new visualization
techniques for the optimization landscapes of GANs that enable us to study the
game vector field resulting from the concatenation of the gradients of both
players. Using these visualization techniques we try to bridge the gap between
theory and practice by showing empirically that the training of GANs exhibits
significant rotations around Local Stable Stationary Points (LSSP), similar to
those predicted by theory on toy examples. Moreover, we provide empirical
evidence that GAN training converges to a stable stationary point that is a
saddle point for the generator loss, not a minimum, while still achieving
excellent performance. | [
"cs.LG",
"stat.ML"
] |
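The game vector field studied above is simply the concatenation of each player's gradient of its own loss. A minimal PyTorch sketch on a toy bilinear game (a standard illustrative example, not the paper's GAN setting) shows the rotational behavior directly:

```python
import torch

# Toy bilinear game min_x max_y x*y: simultaneous gradient play rotates
# around the equilibrium (0, 0), mirroring the rotations observed for GANs.
x = torch.tensor(1.0, requires_grad=True)   # "generator" parameter
y = torch.tensor(1.0, requires_grad=True)   # "discriminator" parameter

loss_g = x * y                               # generator minimizes x*y
gx, = torch.autograd.grad(loss_g, x)
loss_d = -(x * y)                            # discriminator minimizes -x*y
gy, = torch.autograd.grad(loss_d, y)

v = torch.stack([gx, gy])                    # game vector field at (1, 1)
print(v)                                     # tensor([ 1., -1.]): the rotation field (y, -x)
```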
While image captioning by machines requires structured learning and a basis
for interpretation, improving it requires understanding and processing multiple
contexts in a meaningful way. This research provides a novel concept for
context combination and will impact many applications that treat visual
features as equivalent to descriptions of objects, activities, and events.
There are three components in our architecture: Feature Distribution
Composition (FDC) Layer Attention, Multiple Role Representation Crossover
(MRRC) Attention Layer, and the Language Decoder. FDC Layer Attention helps in
generating weighted attention from RCNN features, the MRRC Attention Layer acts
as intermediate representation processing and helps in generating the attention
for the next word, while the Language Decoder estimates the likelihood of the
next probable word in the sentence. We demonstrate the effectiveness of FDC,
MRRC, regional object feature attention, and reinforcement learning for
learning to generate better captions from images. Our model improved on
previous performance by 35.3\% and established a new standard and theory for
representation generation based on logic, better interpretability, and
contexts. | [
"cs.LG",
"stat.ML"
] |
This paper proposes a novel pretext task to address the self-supervised video
representation learning problem. Specifically, given an unlabeled video clip,
we compute a series of spatio-temporal statistical summaries, such as the
spatial location and dominant direction of the largest motion, the spatial
location and dominant color of the largest color diversity along the temporal
axis, etc. Then a neural network is built and trained to yield the statistical
summaries given the video frames as inputs. In order to alleviate the learning
difficulty, we employ several spatial partitioning patterns to encode rough
spatial locations instead of exact spatial Cartesian coordinates. Our approach
is inspired by the observation that the human visual system is sensitive to
rapidly changing content in the visual field and needs only impressions of rough
spatial locations to understand the visual contents. To validate the
effectiveness of the proposed approach, we conduct extensive experiments with
four 3D backbone networks, i.e., C3D, 3D-ResNet, R(2+1)D and S3D-G. The results
show that our approach outperforms the existing approaches across these
backbone networks on four downstream video analysis tasks including action
recognition, video retrieval, dynamic scene recognition, and action similarity
labeling. The source code is publicly available at:
https://github.com/laura-wang/video_repres_sts. | [
"cs.CV"
] |
The Vector AutoRegressive (VAR) model is fundamental to the study of
multivariate time series. Although VAR models are intensively investigated by
many researchers, practitioners often show more interest in analyzing VARX
models that incorporate the impact of unmodeled exogenous variables (X) into
the VAR. However, since the parameter space grows quadratically with the number
of time series, estimation quickly becomes challenging. While several proposals
have been made to sparsely estimate large VAR models, the estimation of large
VARX models is under-explored. Moreover, typically these sparse proposals
involve a lasso-type penalty and do not incorporate lag selection into the
estimation procedure. As a consequence, the resulting models may be difficult
to interpret. In this paper, we propose a lag-based hierarchically sparse
estimator, called "HVARX", for large VARX models. We illustrate the usefulness
of HVARX on a cross-category management marketing application. Our results show
how it provides a highly interpretable model, and improves out-of-sample
forecast accuracy compared to a lasso-type approach. | [
"stat.ML",
"stat.AP"
] |
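For orientation, the kind of lasso-type VARX estimator that HVARX improves upon (a plain l1 penalty with fixed lag orders, not HVARX's hierarchical lag-based penalty) can be sketched in a few lines; `fit_sparse_varx`, the lag orders `p` and `s`, and `alpha` are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def fit_sparse_varx(Y, X, p=2, s=1, alpha=0.1):
    """Sparse VARX(p, s) via an l1-penalized regression (lasso-type baseline).

    Y: (T, k) endogenous series; X: (T, m) exogenous series.
    Returns a (k, k*p + m*s) coefficient matrix.
    """
    T = len(Y)
    start = max(p, s)
    # Stack own lags and exogenous lags into one design matrix per time step.
    Z = np.array([np.concatenate([Y[t - i] for i in range(1, p + 1)] +
                                 [X[t - j] for j in range(1, s + 1)])
                  for t in range(start, T)])
    model = Lasso(alpha=alpha, fit_intercept=False).fit(Z, Y[start:])
    return model.coef_
```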
Inverse reinforcement learning (IRL) aims to estimate the reward function of
optimizing agents by observing their response (estimates or actions). This
paper considers IRL when noisy estimates of the gradient of a reward function
generated by multiple stochastic gradient agents are observed. We present a
generalized Langevin dynamics algorithm to estimate the reward function
$R(\theta)$; specifically, the resulting Langevin algorithm asymptotically
generates samples from the distribution proportional to $\exp(R(\theta))$. The
proposed IRL algorithms use kernel-based passive learning schemes. We also
construct multi-kernel passive Langevin algorithms for IRL which are suitable
for high-dimensional data. The performance of the proposed IRL algorithms is
illustrated on examples in adaptive Bayesian learning, logistic regression
(high dimensional problem) and constrained Markov decision processes. We prove
weak convergence of the proposed IRL algorithms using martingale averaging
methods. We also analyze the tracking performance of the IRL algorithms in
non-stationary environments where the utility function $R(\theta)$ jump-changes
over time according to a slow Markov chain. | [
"cs.LG",
"cs.SY",
"eess.SY",
"stat.ML"
] |
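For intuition, the Langevin mechanism referred to above can be sketched in a few lines: with a small step size, the unadjusted Langevin iterates are approximately samples from the density proportional to $\exp(R(\theta))$. This toy assumes direct access to a gradient oracle `grad_R`, whereas in the paper's passive setting the learner only observes noisy gradient evaluations from the optimizing agents.

```python
import numpy as np

def langevin_samples(grad_R, theta0, eps=1e-3, n_steps=10000):
    """Unadjusted Langevin dynamics targeting p(theta) ~ exp(R(theta))."""
    theta = np.array(theta0, dtype=float)
    samples = []
    for _ in range(n_steps):
        theta = (theta + 0.5 * eps * grad_R(theta)
                 + np.sqrt(eps) * np.random.randn(*theta.shape))
        samples.append(theta.copy())
    return np.asarray(samples)

# Sanity check: R(theta) = -0.5*||theta||^2 has gradient -theta, so the
# iterates should be approximately N(0, I) samples.
draws = langevin_samples(lambda th: -th, theta0=np.zeros(2))
```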
We propose a new method for fusing a LIDAR point cloud and camera-captured
images in the deep convolutional neural network (CNN). The proposed method
constructs a new layer called non-homogeneous pooling layer to transform
features between bird view map and front view map. The sparse LIDAR point cloud
is used to construct the mapping between the two maps. The pooling layer allows
efficient fusion of the bird view and front view features at any stage of the
network. This is favorable for 3D object detection using camera-LIDAR
fusion in autonomous driving scenarios. A corresponding deep CNN is designed
and tested on the KITTI bird view object detection dataset, which produces 3D
bounding boxes from the bird view map. The fusion method shows particular
benefit for detection of pedestrians in the bird view compared to other
fusion-based object detection networks. | [
"cs.CV",
"cs.LG"
] |
Deep neural networks (DNNs) are vulnerable to adversarial examples where
inputs with imperceptible perturbations mislead DNNs to incorrect results.
Despite the potential risk they bring, adversarial examples are also valuable
for providing insights into the weakness and blind-spots of DNNs. Thus, the
interpretability of a DNN in the adversarial setting aims to explain the
rationale behind its decision-making process and to provide a deeper
understanding, which results in better practical applications. To address this issue, we try
to explain adversarial robustness for deep models from a new perspective of
neuron sensitivity which is measured by neuron behavior variation intensity
against benign and adversarial examples. In this paper, we first draw the close
connection between adversarial robustness and neuron sensitivities, as
sensitive neurons make the most non-trivial contributions to model predictions
in the adversarial setting. Based on that, we further propose to improve
adversarial robustness by constraining the similarities of sensitive neurons
between benign and adversarial examples which stabilizes the behaviors of
sensitive neurons towards adversarial noises. Moreover, we demonstrate that
state-of-the-art adversarial training methods improve model robustness by
reducing neuron sensitivities which in turn confirms the strong connections
between adversarial robustness and neuron sensitivity as well as the
effectiveness of using sensitive neurons to build robust models. Extensive
experiments on various datasets demonstrate the effectiveness of our
algorithm. | [
"cs.CV"
] |
Recent advances in protecting node privacy on graph data and in attacking graph
neural networks (GNNs) have gained much attention. However, these two
essential tasks have not yet been brought together. Imagine an adversary who can utilize the powerful
GNNs to infer users' private labels in a social network. How can we
adversarially defend against such privacy attacks while maintaining the utility
of perturbed graphs? In this work, we propose a novel research task,
adversarial defenses against GNN-based privacy attacks, and present a graph
perturbation-based approach, NetFense, to achieve the goal. NetFense can
simultaneously keep graph data unnoticeability (i.e., having limited changes on
the graph structure), maintain the prediction confidence of targeted label
classification (i.e., preserving data utility), and reduce the prediction
confidence of private label classification (i.e., protecting the privacy of
nodes). Experiments conducted on single- and multiple-target perturbations
using three real graph datasets show that the graphs perturbed by NetFense can
effectively maintain data utility (i.e., model unnoticeability) on targeted
label classification and significantly decrease the prediction confidence of
private label classification (i.e., privacy protection). Extensive studies also
bring several insights, such as the flexibility of NetFense, preserving local
neighborhoods in data unnoticeability, and better privacy protection for
high-degree nodes. | [
"cs.LG",
"cs.CR",
"cs.SI"
] |
We extend the framework of Boltzmann machines to a network of complex-valued
neurons with variable amplitudes, referred to as Complex Amplitude-Phase
Boltzmann machine (CAP-BM). The model is capable of performing unsupervised
learning on the amplitude and relative phase distribution in complex data. The
sampling rule of the Gibbs distribution and the learning rules of the model are
presented. Learning in a Complex Amplitude-Phase restricted Boltzmann machine
(CAP-RBM) is demonstrated on synthetic complex-valued images, and handwritten
MNIST digits transformed by a complex wavelet transform. Specifically, we show
the necessity of a new amplitude-amplitude coupling term in our model. The
proposed model is potentially valuable for machine learning tasks involving
complex-valued data with amplitude variation, and for developing algorithms for
novel computation hardware, such as coupled oscillators and neuromorphic
hardware, on which Boltzmann sampling can be executed in the complex domain. | [
"stat.ML",
"cs.LG",
"cs.NE"
] |
We propose an algorithm for tabular episodic reinforcement learning with
constraints. We provide a modular analysis with strong theoretical guarantees
for settings with concave rewards and convex constraints, and for settings with
hard constraints (knapsacks). Most of the previous work in constrained
reinforcement learning is limited to linear constraints, and the remaining work
focuses on either the feasibility question or settings with a single episode.
Our experiments demonstrate that the proposed algorithm significantly
outperforms these approaches in existing constrained episodic environments. | [
"cs.LG",
"cs.AI",
"cs.DS",
"stat.ML"
] |
Recommender systems play a fundamental role in web applications in filtering
massive information and matching user interests. While many efforts have been
devoted to developing more effective models in various scenarios, the
exploration of the explainability of recommender systems lags behind.
Explanations could help improve user experience and discover system defects. In
this paper, after formally introducing the elements that are related to model
explainability, we propose a novel explainable recommendation model through
improving the transparency of the representation learning process.
Specifically, to overcome the representation entangling problem in traditional
models, we revise traditional graph convolution to discriminate information
from different layers. Also, each representation vector is factorized into
several segments, where each segment relates to one semantic aspect in data.
Different from previous work, in our model, factor discovery and representation
learning are simultaneously conducted, and we are able to handle extra
attribute information and knowledge. In this way, the proposed model can learn
interpretable and meaningful representations for users and items. Unlike
traditional methods that need to make a trade-off between explainability and
effectiveness, the performance of our proposed explainable model is not
negatively affected after considering explainability. Finally, comprehensive
experiments are conducted to validate the performance of our model as well as
explanation faithfulness. | [
"cs.LG",
"stat.ML"
] |
We propose a model-free reinforcement learning algorithm inspired by the
popular randomized least squares value iteration (RLSVI) algorithm as well as
the optimism principle. Unlike existing upper-confidence-bound (UCB) based
approaches, which are often computationally intractable, our algorithm drives
exploration by simply perturbing the training data with judiciously chosen
i.i.d. scalar noises. To attain optimistic value function estimation without
resorting to a UCB-style bonus, we introduce an optimistic reward sampling
procedure. When the value functions can be represented by a function class
$\mathcal{F}$, our algorithm achieves a worst-case regret bound of
$\widetilde{O}(\mathrm{poly}(d_EH)\sqrt{T})$ where $T$ is the time elapsed, $H$
is the planning horizon and $d_E$ is the $\textit{eluder dimension}$ of
$\mathcal{F}$. In the linear setting, our algorithm reduces to LSVI-PHE, a
variant of RLSVI, that enjoys an $\widetilde{\mathcal{O}}(\sqrt{d^3H^3T})$
regret. We complement the theory with an empirical evaluation across known
difficult exploration tasks. | [
"cs.LG",
"stat.ML"
] |
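To illustrate the perturbation idea in the linear setting, here is a hedged NumPy sketch: fit an ensemble of regularized least-squares value estimates on targets perturbed with i.i.d. Gaussian noise, then take the elementwise maximum for optimism. This is our schematic reading of the reward-sampling flavor, not the authors' exact LSVI-PHE procedure; `optimistic_q_estimate` and its parameters are assumptions.

```python
import numpy as np

def optimistic_q_estimate(Phi, targets, n_perturb=10, sigma=0.1, lam=1.0):
    """Ensemble of ridge fits on noise-perturbed targets; max gives optimism.

    Phi: (n, d) feature matrix; targets: (n,) regression targets.
    """
    d = Phi.shape[1]
    A = Phi.T @ Phi + lam * np.eye(d)
    q_values = []
    for _ in range(n_perturb):
        y = targets + sigma * np.random.randn(len(targets))  # i.i.d. scalar noise
        theta = np.linalg.solve(A, Phi.T @ y)
        q_values.append(Phi @ theta)
    return np.max(q_values, axis=0)   # optimistic value estimate per state-action
```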
Compared to image inpainting, image outpainting has received less attention due
to two challenges. The first challenge is how to keep the spatial and
content consistency between generated images and original input. The second
challenge is how to maintain high quality in generated results, especially for
multi-step generations in which generated regions are spatially far away from
the initial input. To solve the two problems, we devise some innovative
modules, named Skip Horizontal Connection and Recurrent Content Transfer, and
integrate them into our designed encoder-decoder structure. With this design,
our network can generate highly realistic outpainting predictions effectively
and efficiently. Moreover, our method can generate new images of very large
extent while keeping the same style and semantic content as the given input. To
test the effectiveness of the proposed architecture, we collect a new scenery
dataset with diverse, complicated natural scenes. The experimental results on
this dataset have demonstrated the efficacy of our proposed network. The code
and dataset are available from https://github.com/z-x-yang/NS-Outpainting. | [
"cs.CV"
] |
Glioma constitutes 80% of malignant primary brain tumors and is usually
classified as HGG and LGG. The LGG tumors are less aggressive, with slower
growth rate as compared to HGG, and are responsive to therapy. Tumor biopsy
being challenging for brain tumor patients, noninvasive imaging techniques like
Magnetic Resonance Imaging (MRI) have been extensively employed in diagnosing
brain tumors. Therefore automated systems for the detection and prediction of
the grade of tumors based on MRI data become necessary for assisting doctors
in the framework of augmented intelligence. In this paper, we thoroughly
investigate the power of Deep ConvNets for classification of brain tumors using
multi-sequence MR images. We propose novel ConvNet models, which are trained
from scratch, on MRI patches, slices, and multi-planar volumetric slices. The
suitability of transfer learning for the task is next studied by applying two
existing ConvNets models (VGGNet and ResNet) trained on ImageNet dataset,
through fine-tuning of the last few layers. LOPO testing, and testing on the
holdout dataset are used to evaluate the performance of the ConvNets. Results
demonstrate that the proposed ConvNets achieve better accuracy in all cases
where the model is trained on the multi-planar volumetric dataset. Unlike
conventional models, ours obtains a testing accuracy of 95% for the low/high
grade glioma classification problem. A score of 97% is generated for
classification of LGG with/without 1p/19q codeletion, without any additional
effort towards extraction and selection of features. We study the properties of
self-learned kernels/filters in different layers, through visualization of the
intermediate layer outputs. We also compare the results with that of
state-of-the-art methods, demonstrating a maximum improvement of 7% on the
grading performance of ConvNets and 9% on the prediction of 1p/19q codeletion
status. | [
"cs.CV",
"eess.IV"
] |
In this paper, we present a new deep learning architecture for addressing the
problem of supervised learning with sparse and irregularly sampled multivariate
time series. The architecture is based on the use of a semi-parametric
interpolation network followed by the application of a prediction network. The
interpolation network allows for information to be shared across multiple
dimensions of a multivariate time series during the interpolation stage, while
any standard deep learning model can be used for the prediction network. This
work is motivated by the analysis of physiological time series data in
electronic health records, which are sparse, irregularly sampled, and
multivariate. We investigate the performance of this architecture on both
classification and regression tasks, showing that our approach outperforms a
range of baseline and recently proposed models. | [
"cs.LG",
"stat.ML"
] |
It has been shown that for automated PAP-smear image classification, nucleus
features can be very informative. Therefore, the primary step for automated
screening can be cell-nuclei detection followed by segmentation of nuclei in
the resulting single cell PAP-smear images. We propose a patch based approach
using CNN for segmentation of nuclei in single cell images. We then pose the
question of the necessity of segmentation for classification using representation
learning with CNN, and whether low-level CNN features may be useful for
classification. We suggest a CNN-based feature level analysis and a transfer
learning-based approach for classification using both segmented as well as full
single cell images. We also propose a decision-tree based approach for
classification. Experimental results demonstrate the effectiveness of the
proposed algorithms individually (with low-level CNN features), and
simultaneously proving the sufficiency of cell-nuclei detection (rather than
accurate segmentation) for classification. Thus, we propose a system for
analysis of multi-cell PAP-smear images consisting of a simple nuclei detection
algorithm followed by classification using transfer learning. | [
"cs.CV"
] |
Inferring objects and their relationships from an image in the form of a
scene graph is useful in many applications at the intersection of vision and
language. In this work, we consider a challenging problem of compositional
generalization that emerges in this task due to a long tail data distribution.
Current scene graph generation models are trained on a tiny fraction of the
distribution corresponding to the most frequent compositions, e.g. <cup, on,
table>. However, test images might contain zero- and few-shot compositions of
objects and relationships, e.g. <cup, on, surfboard>. Despite each of the
object categories and the predicate (e.g. 'on') being frequent in the training
data, the models often fail to properly understand such unseen or rare
compositions. To improve generalization, it is natural to attempt increasing
the diversity of the training distribution. However, in the graph domain this
is non-trivial. To that end, we propose a method to synthesize rare yet
plausible scene graphs by perturbing real ones. We then propose and empirically
study a model based on conditional generative adversarial networks (GANs) that
allows us to generate visual features of perturbed scene graphs and learn from
them in a joint fashion. When evaluated on the Visual Genome dataset, our
approach yields marginal, but consistent improvements in zero- and few-shot
metrics. We analyze the limitations of our approach indicating promising
directions for future research. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
Introducing semantically meaningful objects to visual Simultaneous
Localization And Mapping (SLAM) has the potential to improve both the accuracy
and reliability of pose estimates, especially in challenging scenarios with
significant view-point and appearance changes. However, how semantic objects
should be represented for an efficient inclusion in optimization-based SLAM
frameworks is still an open question. Superquadrics (SQs) are an efficient and
compact object representation, able to represent most common object types to a
high degree, and typically retrieved from 3D point-cloud data. However,
accurate 3D point-cloud data might not be available in all applications. Recent
advancements in machine learning enabled robust object recognition and semantic
mask measurements from camera images under many different appearance
conditions. We propose a pipeline to leverage such semantic mask measurements
to fit SQ parameters to multi-view camera observations using a multi-stage
initialization and optimization procedure. We demonstrate the system's ability
to retrieve randomly generated SQ parameters from multi-view mask observations
in preliminary simulation experiments and evaluate different initialization
stages and cost functions. | [
"cs.CV"
] |
Model quantization can reduce the model size and computational latency, and it
has therefore become an essential technique for the deployment of deep neural
networks on resource-constrained hardware (e.g., mobile phones and embedded devices). The
existing quantization methods mainly consider the numerical elements of the
weights and activation values, ignoring the relationship between elements. The
resulting decline in representation ability and loss of information usually
lead to performance degradation. Inspired by the characteristics of images in the
frequency domain, we propose a novel multiscale wavelet quantization (MWQ)
method. This method decomposes original data into multiscale frequency
components by wavelet transform, and then quantizes the components of different
scales, respectively. It exploits the multiscale frequency and spatial
information to alleviate the information loss caused by quantization in the
spatial domain. Because of the flexibility of MWQ, we demonstrate three
applications (i.e., model compression, quantized network optimization, and
information enhancement) on the ImageNet and COCO datasets. Experimental
results show that our method has stronger representation ability and can play
an effective role in quantized neural networks. | [
"cs.CV",
"cs.AR"
] |
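To illustrate the core decompose-quantize-reconstruct idea, here is a minimal NumPy/PyWavelets sketch for one decomposition level of a 2D array, with a per-band uniform quantizer. The function `wavelet_quantize`, the bit width, and the per-band step-size rule are illustrative assumptions, not the paper's exact MWQ procedure.

```python
import numpy as np
import pywt

def wavelet_quantize(x, bits=4, wavelet="haar"):
    """Decompose x into sub-bands, uniformly quantize each band, reconstruct."""
    cA, (cH, cV, cD) = pywt.dwt2(x, wavelet)   # one level of the 2D DWT

    def quantize(band):
        step = np.abs(band).max() / (2 ** (bits - 1) - 1) + 1e-12
        return np.round(band / step) * step    # uniform quantization per band

    bands = (quantize(cA), tuple(quantize(b) for b in (cH, cV, cD)))
    return pywt.idwt2(bands, wavelet)

x = np.random.randn(32, 32)
x_q = wavelet_quantize(x)                      # frequency-aware quantized copy
```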
Financial time series forecasting is, without a doubt, the top choice of
computational intelligence for finance researchers from both academia and
financial industry due to its broad implementation areas and substantial
impact. Machine Learning (ML) researchers came up with various models and a
vast number of studies have been published accordingly. As such, a significant
number of surveys exists covering ML studies of financial time series
forecasting. Lately, Deep Learning (DL) models have started appearing within the field,
with results that significantly outperform traditional ML counterparts. Even
though there is a growing interest in developing models for financial time
series forecasting research, there is a lack of review papers that are solely
focused on DL for finance. Hence, our motivation in this paper is to provide a
comprehensive literature review on DL studies for financial time series
forecasting implementations. We not only categorized the studies according to
their intended forecasting implementation areas, such as index, forex,
commodity forecasting, but also grouped them based on their DL model choices,
such as Convolutional Neural Networks (CNNs), Deep Belief Networks (DBNs),
Long Short-Term Memory (LSTM) networks. We also tried to envision the future of
the field by highlighting possible setbacks and opportunities, so that
interested researchers can benefit. | [
"cs.LG",
"q-fin.CP",
"stat.ML",
"I.1.2"
] |
The ability to efficiently and accurately detect objects plays a very crucial
role for many computer vision tasks. Recently, offline object detectors have
shown tremendous success. However, one major drawback of offline techniques
is that a complete set of training data has to be collected beforehand. In
addition, once learned, an offline detector cannot make use of newly arriving
data. To alleviate these drawbacks, online learning has been adopted with the
following objectives: (1) the technique should be computationally and storage
efficient; (2) the updated classifier must maintain its high classification
accuracy. In this paper, we propose an effective and efficient framework for
learning an adaptive online greedy sparse linear discriminant analysis (GSLDA)
model. Unlike many existing online boosting detectors, which usually apply
exponential or logistic loss, our online algorithm makes use of LDA's learning
criterion that not only aims to maximize the class-separation criterion but
also incorporates the asymmetrical property of training data distributions. We
provide a better alternative for online boosting algorithms in the context of
training a visual object detector. We demonstrate the robustness and efficiency
of our methods on handwritten digit and face datasets. Our results confirm
that object detection tasks benefit significantly when trained in an online
manner. | [
"cs.CV"
] |
We present a novel and practical deep fully convolutional neural network
architecture for semantic pixel-wise segmentation termed SegNet. This core
trainable segmentation engine consists of an encoder network, a corresponding
decoder network followed by a pixel-wise classification layer. The architecture
of the encoder network is topologically identical to the 13 convolutional
layers in the VGG16 network. The role of the decoder network is to map the low
resolution encoder feature maps to full input resolution feature maps for
pixel-wise classification. The novelty of SegNet lies in the manner in which
the decoder upsamples its lower resolution input feature map(s). Specifically,
the decoder uses pooling indices computed in the max-pooling step of the
corresponding encoder to perform non-linear upsampling. This eliminates the
need for learning to upsample. The upsampled maps are sparse and are then
convolved with trainable filters to produce dense feature maps. We compare our
proposed architecture with the widely adopted FCN and also with the well known
DeepLab-LargeFOV, DeconvNet architectures. This comparison reveals the memory
versus accuracy trade-off involved in achieving good segmentation performance.
SegNet was primarily motivated by scene understanding applications. Hence, it
is designed to be efficient both in terms of memory and computational time
during inference. It is also significantly smaller in the number of trainable
parameters than other competing architectures. We also performed a controlled
benchmark of SegNet and other architectures on both road scenes and SUN RGB-D
indoor scene segmentation tasks. We show that SegNet provides good performance
with competitive inference time and more efficient inference memory-wise as
compared to other architectures. We also provide a Caffe implementation of
SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/. | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
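The index-reusing upsampling at the heart of SegNet maps directly onto standard PyTorch operations; the following is a minimal sketch with toy shapes, not the full 13-layer encoder-decoder:

```python
import torch
import torch.nn as nn

# SegNet-style pooling/unpooling pair: the decoder reuses the encoder's
# max-pooling indices for non-linear upsampling, so no upsampling weights
# need to be learned.
pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)
densify = nn.Conv2d(64, 64, kernel_size=3, padding=1)  # trainable decoder filters

x = torch.randn(1, 64, 32, 32)        # an encoder feature map
pooled, indices = pool(x)             # downsample, remembering argmax locations
up = unpool(pooled, indices)          # sparse upsampled map (zeros elsewhere)
dense = densify(up)                   # convolution produces dense feature maps
```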
Neural networks have long been used for solving complex problems in the image
domain, yet designing them requires manual expertise. Furthermore, techniques
for automatically generating a suitable deep learning architecture for a given
dataset have frequently made use of reinforcement learning and evolutionary
methods which take extensive computational resources and time. We propose a new
framework for neural architecture search based on a hill-climbing procedure
using morphism operators that makes use of a novel gradient update scheme. The
update is based on the aging of neural network layers and results in the
reduction in the overall training time. This technique can search in a broader
search space which subsequently yields competitive results. We achieve a 4.96%
error rate on the CIFAR-10 dataset in 19.4 hours of single-GPU training. | [
"cs.LG"
] |
Recent years have witnessed an unprecedented rise of time series data from
almost all kinds of academic and industrial fields. Various types of deep
neural network models have been introduced to time series analysis, but the
important frequency information still lacks effective modeling. In light of
this, in this paper we propose a wavelet-based neural network structure called
multilevel Wavelet Decomposition Network (mWDN) for building frequency-aware
deep learning models for time series analysis. mWDN preserves the advantage of
multilevel discrete wavelet decomposition in frequency learning while enabling
the fine-tuning of all parameters under a deep neural network framework. Based
on mWDN, we further propose two deep learning models called Residual
Classification Flow (RCF) and multi-frequency Long Short-Term Memory (mLSTM) for
time series classification and forecasting, respectively. The two models take
all or partial mWDN decomposed sub-series in different frequencies as input,
and resort to the back propagation algorithm to learn all the parameters
globally, which enables seamless embedding of wavelet-based frequency analysis
into deep learning frameworks. Extensive experiments on 40 UCR datasets and a
real-world user volume dataset demonstrate the excellent performance of our
time series models based on mWDN. In particular, we propose an importance
analysis method for mWDN-based models, which successfully identifies those
time-series elements and mWDN layers that are crucially important to time
series analysis. This indicates the interpretability advantage of mWDN, and
can be viewed as an in-depth exploration of interpretable deep learning. | [
"cs.LG",
"eess.SP",
"stat.ML"
] |
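The trainable wavelet decomposition at the core of mWDN can be sketched as strided 1D convolutions whose filters are initialized with discrete wavelet coefficients and then fine-tuned by backpropagation. The `WaveletLayer` below is an illustrative assumption (db2 filters taken from PyWavelets), not the authors' implementation:

```python
import pywt
import torch
import torch.nn as nn

class WaveletLayer(nn.Module):
    """One level of a trainable discrete wavelet decomposition (mWDN-style)."""
    def __init__(self, wavelet: str = "db2"):
        super().__init__()
        w = pywt.Wavelet(wavelet)
        k = len(w.dec_lo)
        self.lo = nn.Conv1d(1, 1, k, stride=2, padding=k // 2, bias=False)
        self.hi = nn.Conv1d(1, 1, k, stride=2, padding=k // 2, bias=False)
        # conv1d cross-correlates, so reverse the filters for true convolution;
        # the weights stay trainable and can be fine-tuned end to end.
        self.lo.weight.data = torch.tensor(w.dec_lo[::-1]).view(1, 1, k).float()
        self.hi.weight.data = torch.tensor(w.dec_hi[::-1]).view(1, 1, k).float()

    def forward(self, x):                  # x: (batch, 1, length)
        return self.lo(x), self.hi(x)      # approximation and detail sub-series

# Deeper levels reapply the layer to the approximation output:
# a1, d1 = layer(x); a2, d2 = layer(a1); ...
```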
Reinforcement learning (RL) algorithms based on high-dimensional function
approximation have achieved tremendous empirical success in large-scale
problems with an enormous number of states. However, most analysis of such
algorithms gives rise to error bounds that involve either the number of states
or the number of features. This paper considers the situation where the
function approximation is made either using the kernel method or the two-layer
neural network model, in the context of a fitted Q-iteration algorithm with
explicit regularization. We establish an
$\tilde{O}(H^3|\mathcal{A}|^{\frac{1}{4}}n^{-\frac{1}{4}})$ bound for the optimal policy with $Hn$ samples,
where $H$ is the length of each episode and $|\mathcal {A}|$ is the size of
action space. Our analysis hinges on analyzing the $L^2$ error of the
approximated Q-function using $n$ data points. Even though this result still
requires a finite-sized action space, the error bound is independent of the
dimensionality of the state space. | [
"cs.LG"
] |
Rectified linear units, or ReLUs, have become the preferred activation
function for artificial neural networks. In this paper we consider two basic
learning problems assuming that the underlying data follow a generative model
based on a ReLU-network -- a neural network with ReLU activations. As a
primarily theoretical study, we limit ourselves to a single-layer network. The
first problem we study corresponds to dictionary-learning in the presence of
nonlinearity (modeled by the ReLU functions). Given a set of observation
vectors $\mathbf{y}^i \in \mathbb{R}^d, i =1, 2, \dots , n$, we aim to recover
$d\times k$ matrix $A$ and the latent vectors $\{\mathbf{c}^i\} \subset
\mathbb{R}^k$ under the model $\mathbf{y}^i = \mathrm{ReLU}(A\mathbf{c}^i
+\mathbf{b})$, where $\mathbf{b}\in \mathbb{R}^d$ is a random bias. We show
that it is possible to recover the column space of $A$ within an error of
$O(d)$ (in Frobenius norm) under certain conditions on the probability
distribution of $\mathbf{b}$.
The second problem we consider is that of robust recovery of the signal in
the presence of outliers, i.e., large but sparse noise. In this setting we are
interested in recovering the latent vector $\mathbf{c}$ from its noisy
nonlinear sketches of the form $\mathbf{v} = \mathrm{ReLU}(A\mathbf{c}) +
\mathbf{e}+\mathbf{w}$, where $\mathbf{e} \in \mathbb{R}^d$ denotes the
outliers with sparsity $s$ and $\mathbf{w} \in \mathbb{R}^d$ denote the dense
but small noise. This line of work has recently been studied (Soltanolkotabi,
2017) without the presence of outliers. For this problem, we show that a
generalized LASSO algorithm is able to recover the signal $\mathbf{c} \in
\mathbb{R}^k$ within an $\ell_2$ error of $O(\sqrt{\frac{(k+s)\log d}{d}})$
when $A$ is a random Gaussian matrix. | [
"stat.ML",
"cs.IT",
"cs.LG",
"math.IT"
] |
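As a concrete, deliberately simplified illustration of the second problem, the NumPy sketch below solves $\min_{c,e} \|v - Ac - e\|_2^2 + \lambda\|e\|_1$ by alternating least squares with soft-thresholding. It drops the ReLU for readability, so it is a stand-in for, not a reproduction of, the generalized LASSO analyzed in the paper; `robust_recover` and `lam` are assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def robust_recover(v, A, lam=0.1, n_iter=100):
    """Recover c from v = A c + e + w with sparse outliers e (ReLU omitted)."""
    e = np.zeros_like(v)
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        c, *_ = np.linalg.lstsq(A, v - e, rcond=None)   # c-step: least squares
        e = soft_threshold(v - A @ c, lam)              # e-step: prune outliers
    return c, e
```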
We introduce a novel method for 3D object detection and pose estimation from
color images only. We first use segmentation to detect the objects of interest
in 2D even in presence of partial occlusions and cluttered background. By
contrast with recent patch-based methods, we rely on a "holistic" approach: We
apply to the detected objects a Convolutional Neural Network (CNN) trained to
predict their 3D poses in the form of 2D projections of the corners of their 3D
bounding boxes. This, however, is not sufficient for handling objects from the
recent T-LESS dataset: These objects exhibit an axis of rotational symmetry,
and the similarity of two images of such an object under two different poses
makes training the CNN challenging. We solve this problem by restricting the
range of poses used for training, and by introducing a classifier to identify
the range of a pose at run-time before estimating it. We also use an optional
additional step that refines the predicted poses. We improve the
state-of-the-art on the LINEMOD dataset from 73.7% to 89.3% of correctly
registered RGB frames. We are also the first to report results on the Occlusion
dataset using color images only. We obtain 54% of frames passing the Pose 6D
criterion on average on several sequences of the T-LESS dataset, compared to
the 67% of the state-of-the-art on the same sequences which uses both color and
depth. The full approach is also scalable, as a single network can be trained
for multiple objects simultaneously. | [
"cs.CV"
] |
Existing work in counterfactual Learning to Rank (LTR) has focused on
optimizing feature-based models that predict the optimal ranking based on
document features. LTR methods based on bandit algorithms often optimize
tabular models that memorize the optimal ranking per query. These types of
models have their own advantages and disadvantages. Feature-based models provide
very robust performance across many queries, including those previously unseen,
however, the available features often limit the rankings the model can predict.
In contrast, tabular models can converge on any possible ranking through
memorization. However, memorization is extremely prone to noise, which makes
tabular models reliable only when large numbers of user interactions are
available. Can we develop a robust counterfactual LTR method that pursues
memorization-based optimization whenever it is safe to do so? We introduce the
Generalization and Specialization (GENSPEC) algorithm, a robust feature-based
counterfactual LTR method that pursues per-query memorization when it is safe
to do so. GENSPEC optimizes a single feature-based model for generalization:
robust performance across all queries, and many tabular models for
specialization: each optimized for high performance on a single query. GENSPEC
uses novel relative high-confidence bounds to choose which model to deploy per
query. By doing so, GENSPEC enjoys the high performance of successfully
specialized tabular models with the robustness of a generalized feature-based
model. Our results show that GENSPEC leads to optimal performance on queries
with sufficient click data, while having robust behavior on queries with little
or noisy data. | [
"cs.LG",
"cs.IR"
] |
The resale price assessment of secondhand jewelry items relies heavily on the
individual knowledge and skill of domain experts. In this paper, we propose a
methodology for reconstructing an AI system that autonomously assesses the
resale prices of secondhand jewelry items without the need for professional
knowledge. As shown in recent studies on fashion items, multimodal approaches
combining specifications and visual information of items have succeeded in
obtaining fine-grained representations of fashion items, although they
generally apply simple vector operations in their multimodal fusion. We
similarly build a multimodal model using images and attributes of the product
and further employ state-of-the-art multimodal deep neural networks applied in
computer vision to achieve a practical performance level. In addition, we model
the pricing procedure of an expert using iterative co-attention networks in
which the appearance and attributes of the product are carefully and
iteratively observed. Herein, we demonstrate the effectiveness of our model
using a large dataset of secondhand no-brand jewelry items received from a
collaborating fashion retailer, and show that the iterative co-attention
process operates effectively in the context of resale price prediction. Our
model architecture is widely applicable to other fashion items where appearance
and specifications are important aspects. | [
"cs.CV"
] |
Thanks to the rapid growth in wearable technologies, monitoring complex human
context becomes feasible, paving the way to develop human-in-the-loop IoT
systems that naturally evolve to adapt to the human and environment state
autonomously. Nevertheless, a central challenge in designing such personalized
IoT applications arises from human variability. Such variability stems from the
fact that different humans exhibit different behaviors when interacting with
IoT applications (inter-human variability), the same human may change their
behavior over time when interacting with the same IoT application (intra-human
variability), and human behavior may be affected by the behaviors of other
people in the same environment (multi-human variability). To that end, we
propose FaiR-IoT, a general reinforcement learning-based framework for adaptive
and fairness-aware human-in-the-loop IoT applications. In FaiR-IoT, three
levels of reinforcement learning agents interact to continuously learn human
preferences and maximize the system's performance and fairness while taking
into account the intra-, inter-, and multi-human variability. We validate the
proposed framework on two applications, namely (i) Human-in-the-Loop Automotive
Advanced Driver Assistance Systems and (ii) Human-in-the-Loop Smart House.
Results obtained on these two applications validate the generality of FaiR-IoT
and its ability to provide a personalized experience while enhancing the
system's performance by 40%-60% compared to non-personalized systems and
enhancing the fairness of the multi-human systems by 1.5 orders of magnitude. | [
"cs.LG",
"cs.MA"
] |
Contextual information is vital in visual understanding problems, such as
semantic segmentation and object detection. We propose a Criss-Cross Network
(CCNet) for obtaining full-image contextual information in a very effective and
efficient way. Concretely, for each pixel, a novel criss-cross attention module
harvests the contextual information of all the pixels on its criss-cross path.
By taking a further recurrent operation, each pixel can finally capture the
full-image dependencies. Besides, a category consistent loss is proposed to
enforce the criss-cross attention module to produce more discriminative
features. Overall, CCNet has the following merits: 1) GPU memory friendliness.
Compared with the non-local block, the proposed recurrent criss-cross attention
module requires 11x less GPU memory usage. 2) High computational efficiency.
The recurrent criss-cross attention significantly reduces the FLOPs of the
non-local block by about 85%. 3) State-of-the-art performance. We conduct extensive
experiments on semantic segmentation benchmarks including Cityscapes, ADE20K,
the human parsing benchmark LIP, the instance segmentation benchmark COCO, and
the video segmentation benchmark CamVid. In particular, our CCNet achieves mIoU
scores of 81.9%, 45.76% and 55.47% on the Cityscapes test set, the ADE20K
validation set and the LIP validation set respectively, which are the new
state-of-the-art results. The source codes are available at
\url{https://github.com/speedinghzl/CCNet}. | [
"cs.CV"
] |
Crowd counting is to estimate the number of objects (e.g., people or
vehicles) in an image of unconstrained congested scenes. Designing a general
crowd counting algorithm applicable to a wide range of crowd images is
challenging, mainly due to the possibly large variation in object scales and
the presence of many isolated small clusters. Previous approaches based on
convolution operations with multi-branch architecture are effective for only
some narrow bands of scales and have not captured the long-range contextual
relationship due to isolated clustering. To address that, we propose SACANet, a
novel scale-adaptive long-range context-aware network for crowd counting.
SACANet consists of three major modules: the pyramid contextual module which
extracts long-range contextual information and enlarges the receptive field, a
scale-adaptive self-attention multi-branch module to attain high scale
sensitivity and detection accuracy of isolated clusters, and a hierarchical
fusion module to fuse multi-level self-attention features. With group
normalization, SACANet attains better optima in the training process. We
have conducted extensive experiments using the VisDrone2019 People dataset, the
VisDrone2019 Vehicle dataset, and some other challenging benchmarks. As
compared with the state-of-the-art methods, SACANet is shown to be effective,
especially for extremely crowded conditions with diverse scales and scattered
clusters, and achieves much lower MAE as compared with baselines. | [
"cs.CV"
] |
Social interaction is an important topic in human trajectory prediction to
generate plausible paths. In this paper, we present a novel insight of
group-based social interaction model to explore relationships among
pedestrians. We recursively extract social representations supervised by
group-based annotations and formulate them into a social behavior graph, called
Recursive Social Behavior Graph. Our recursive mechanism greatly expands the
representation power. A Graph Convolutional Neural Network is then used
to propagate social interaction information in such a graph. With the guidance
of Recursive Social Behavior Graph, we surpass the state-of-the-art methods on
the ETH and UCY datasets by 11.1% in ADE and 10.8% in FDE on average, and successfully
predict complex social behaviors. | [
"cs.CV"
] |
Deep representation learning is a crucial procedure in multimedia analysis
and attracts increasing attention. Most of the popular techniques rely on
convolutional neural network and require a large amount of labeled data in the
training procedure. However, it is time consuming or even impossible to obtain
the label information in some tasks due to cost limitation. Thus, it is
necessary to develop unsupervised deep representation learning techniques. This
paper proposes a new network structure for unsupervised deep representation
learning based on spectral analysis, which is a popular technique with solid
theory foundations. Compared with the existing spectral analysis methods, the
proposed network structure has at least three advantages. Firstly, it can
identify local similarities among images at the patch level and is thus more
robust to occlusion. Secondly, through multiple consecutive spectral
analysis procedures, the proposed network can learn more clustering-friendly
representations and is capable of revealing the deep correlations among data
samples. Thirdly, it can elegantly integrate different spectral analysis
procedures, so that each spectral analysis procedure can have its individual
strengths in dealing with different data sample distributions. Extensive
experimental results show the effectiveness of the proposed methods on various
image clustering tasks. | [
"cs.CV"
] |
Numerous control and learning problems face the situation where sequences of
high-dimensional highly dependent data are available but no or little feedback
is provided to the learner, which makes any inference rather challenging. To
address this challenge, we formulate the following problem. Given a series of
observations $X_0,\dots,X_n$ coming from a large (high-dimensional) space
$\mathcal X$, find a representation function $f$ mapping $\mathcal X$ to a
finite space $\mathcal Y$ such that the series $f(X_0),\dots,f(X_n)$ preserves
as much information as possible about the original time-series dependence in
$X_0,\dots,X_n$. We show that, for stationary time series, the function $f$ can
be selected as the one maximizing a certain information criterion that we call
time-series information. Some properties of this function are investigated,
including its uniqueness and consistency of its empirical estimates.
Implications for the problem of optimal control are presented. | [
"cs.LG",
"q-bio.QM",
"stat.ML"
] |
AI heralds a step-change in the performance and capability of wireless
networks and other critical infrastructures. However, it may also cause
irreversible environmental damage due to its high energy consumption. Here,
we address this challenge in the context of 5G and beyond, where there is a
complexity explosion in radio resource management (RRM). On the one hand, deep
reinforcement learning (DRL) provides a powerful tool for scalable optimization
for high dimensional RRM problems in a dynamic environment. On the other hand,
DRL algorithms consume a high amount of energy over time and risk compromising
progress made in green radio research. This paper reviews and analyzes how to
achieve green DRL for RRM via both architecture and algorithm innovations.
Architecturally, a cloud based training and distributed decision-making DRL
scheme is proposed, where RRM entities can make lightweight deep local
decisions whilst assisted by on-cloud training and updating. On the algorithm
level, compression approaches are introduced for both deep neural networks and
the underlying Markov Decision Processes, enabling accurate low-dimensional
representations of challenges. To scale learning across geographic areas, a
spatial transfer learning scheme is proposed to further promote the learning
efficiency of distributed DRL entities by exploiting the traffic demand
correlations. Together, our proposed architecture and algorithms provide a
vision for green and on-demand DRL capability. | [
"cs.LG",
"cs.AI",
"cs.NI",
"eess.SP"
] |
We focus on the problem of teaching a robot to solve tasks presented
sequentially, i.e., in a continual learning scenario. The robot should be able
to solve all tasks it has encountered, without forgetting past tasks. We
provide preliminary work on applying Reinforcement Learning to such a setting,
on 2D navigation tasks for a three-wheel omni-directional robot. Our approach takes
advantage of state representation learning and policy distillation. Policies
are trained using learned features as input, rather than raw observations,
allowing better sample efficiency. Policy distillation is used to combine
multiple policies into a single one that solves all encountered tasks. | [
"cs.LG",
"cs.RO",
"stat.ML"
] |
Despite recent progress in Graph Neural Networks (GNNs), explaining
predictions made by GNNs remains a challenging open problem. The leading method
independently addresses the local explanations (i.e., important subgraph
structure and node features) to interpret why a GNN model makes the prediction
for a single instance, e.g. a node or a graph. As a result, the explanation
generated is painstakingly customized for each instance. The unique explanation
interpreting each instance independently is not sufficient to provide a global
understanding of the learned GNN model, leading to a lack of generalizability
and hindering it from being used in the inductive setting. Besides, as it is
designed for explaining a single instance, it is challenging to explain a set
of instances naturally (e.g., graphs of a given class). In this study, we
address these key challenges and propose PGExplainer, a parameterized explainer
for GNNs. PGExplainer adopts a deep neural network to parameterize the
generation process of explanations, which makes PGExplainer a natural
approach to explaining multiple instances collectively. Compared to the
existing work, PGExplainer has better generalization ability and can be
utilized in an inductive setting easily. Experiments on both synthetic and
real-life datasets show highly competitive performance with up to 24.7\%
relative improvement in AUC on explaining graph classification over the leading
baseline. | [
"cs.LG",
"cs.AI"
] |
Underexposure regions are vital to construct a complete perception of the
surroundings for safe autonomous driving. The availability of thermal cameras
has provided an essential alternate to explore regions where other optical
sensors lack in capturing interpretable signals. A thermal camera captures an
image using the heat difference emitted by objects in the infrared spectrum,
and object detection in thermal images becomes effective for autonomous driving
in challenging conditions. Although object detection in visible spectrum
imaging has matured, thermal object detection lacks effectiveness. A
significant challenge is the scarcity of labeled data for the thermal domain,
which is a desideratum for SOTA artificial intelligence techniques. This work proposes a
domain adaptation framework which employs a style transfer technique for
transfer learning from visible spectrum images to thermal images. The framework
uses a generative adversarial network (GAN) to transfer the low-level features
from the visible spectrum domain to the thermal domain through style
consistency. The efficacy of the proposed method for object detection in
thermal images is evident from the improved results obtained when using styled
images from publicly available thermal image datasets (FLIR ADAS and KAIST
Multi-Spectral). | [
"cs.CV",
"cs.LG"
] |
Despite the great achievements of the modern deep neural networks (DNNs), the
vulnerability/robustness of state-of-the-art DNNs raises security concerns in
many application domains requiring high reliability. Various adversarial
attacks are proposed to sabotage the learning performance of DNN models. Among
those, black-box adversarial attack methods have received special attention
owing to their practicality and simplicity. Black-box attacks usually prefer
fewer queries in order to remain stealthy and keep costs low.
However, most of the current black-box attack methods adopt the first-order
gradient descent method, which may come with certain deficiencies such as
relatively slow convergence and high sensitivity to hyper-parameter settings.
In this paper, we propose a zeroth-order natural gradient descent (ZO-NGD)
method to design the adversarial attacks, which incorporates the zeroth-order
gradient estimation technique catering to the black-box attack scenario and the
second-order natural gradient descent to achieve higher query efficiency. The
empirical evaluations on image classification datasets demonstrate that ZO-NGD
can obtain significantly lower model query complexities compared with
state-of-the-art attack methods. | [
"cs.LG",
"cs.CR",
"cs.CV",
"stat.ML"
] |
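The zeroth-order gradient estimation mentioned above can be sketched in a few lines: query the loss at randomly perturbed points and average directional finite differences. This is a standard two-point estimator shown only as a sketch; the Fisher-information preconditioning that makes ZO-NGD a natural-gradient method is omitted.

import numpy as np

def zo_gradient(f, x, num_samples=20, mu=0.01, rng=None):
    # Estimate grad f(x) from loss queries only:
    # g ~= mean over u of  d * (f(x + mu*u) - f(x)) / mu * u,
    # with u drawn uniformly from the unit sphere in R^d.
    rng = rng or np.random.default_rng(0)
    d = x.size
    fx = f(x)
    g = np.zeros_like(x)
    for _ in range(num_samples):
        u = rng.normal(size=x.shape)
        u /= np.linalg.norm(u)
        g += d * (f(x + mu * u) - fx) / mu * u
    return g / num_samples

# Sanity check on a quadratic, whose true gradient at x is x itself.
f = lambda v: 0.5 * np.sum(v ** 2)
x = np.array([1.0, -2.0, 3.0])
print(zo_gradient(f, x, num_samples=500), "vs", x)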
Neural networks are susceptible to small transformations including 2D
rotations and shifts, image crops, and even changes in object colors. This is
often attributed to biases in the training dataset, and the lack of 2D
shift-invariance due to not respecting the sampling theorem. In this paper, we
challenge this hypothesis by training and testing on unbiased datasets, and
showing that networks are brittle to both small 3D perspective changes and
lighting variations which cannot be explained by dataset bias or lack of
shift-invariance. To find these in-distribution errors, we introduce an
evolution strategies (ES) based approach, which we call CMA-Search. Despite
training with a large-scale (0.5 million images), unbiased dataset of camera
and light variations, in over 71% of cases CMA-Search can find camera parameters
in the vicinity of a correctly classified image which lead to in-distribution
misclassifications with < 3.6% change in parameters. With lighting changes,
CMA-Search finds misclassifications in 33% of cases with < 11.6% change in
parameters. Finally, we extend this method to find misclassifications in the
vicinity of ImageNet images for both ResNet and OpenAI's CLIP model. | [
"cs.CV",
"cs.LG"
] |
Graph Convolutional Network (GCN) has experienced great success in graph
analysis tasks. It works by smoothing the node features across the graph. The
current GCN models overwhelmingly assume that the node feature information is
complete. However, real-world graph data are often incomplete and contain
missing features. Traditionally, people have to estimate and fill in the
unknown features with imputation techniques and then apply GCN. However,
the processes of feature imputation and graph learning are separated, resulting in
degraded and unstable performance. This problem becomes more serious when a
large number of features are missing. We propose an approach that adapts GCN to
graphs containing missing features. In contrast to traditional strategy, our
approach integrates the processing of missing features and graph learning
within the same neural network architecture. Our idea is to represent the
missing data by Gaussian Mixture Model (GMM) and calculate the expected
activation of neurons in the first hidden layer of GCN, while keeping the other
layers of the network unchanged. This enables us to learn the GMM parameters
and network weight parameters in an end-to-end manner. Notably, our approach
does not increase the computational complexity of GCN and it is consistent with
GCN when the features are complete. We demonstrate through extensive
experiments that our approach significantly outperforms the imputation-based
methods in node classification and link prediction tasks. We show that the
performance of our approach for the case with a low level of missing features
is even superior to GCN for the case with complete features. | [
"cs.LG",
"cs.CY",
"stat.ML"
] |
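The expected-activation trick above has a convenient closed form for ReLU units: if X ~ N(mu, sigma^2), then E[ReLU(X)] = mu*Phi(mu/sigma) + sigma*phi(mu/sigma). A sketch of the single-Gaussian case; the paper's GMM version averages this expression over mixture components with the mixture weights.

import numpy as np
from scipy.stats import norm

def expected_relu(mu, sigma):
    # Closed-form E[max(X, 0)] for X ~ N(mu, sigma^2).
    sigma = np.maximum(sigma, 1e-12)
    z = mu / sigma
    return mu * norm.cdf(z) + sigma * norm.pdf(z)

# Sanity check against Monte Carlo sampling.
rng = np.random.default_rng(0)
samples = rng.normal(1.0, 2.0, size=1_000_000)
print(expected_relu(1.0, 2.0), np.maximum(samples, 0).mean())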
Explaining the decisions of models is becoming pervasive in the image
processing domain, whether it is by using post-hoc methods or by creating
inherently interpretable models. While the widespread use of surrogate
explainers is a welcome addition to inspect and understand black-box models,
assessing the robustness and reliability of the explanations is key for their
success. Additionally, whilst existing work in the explainability field
proposes various strategies to address this problem, the challenges of working
with data in the wild are often overlooked. For instance, in image
classification, distortions to images can not only affect the predictions
assigned by the model, but also the explanation. Given a clean and a distorted
version of an image, even if the prediction probabilities are similar, the
explanation may still be different. In this paper we propose a methodology to
evaluate the effect of distortions in explanations by embedding perceptual
distances that tailor the neighbourhoods used to train surrogate explainers.
We also show that by operating in this way, we can make the explanations more
robust to distortions. We generate explanations for images in the ImageNet-C
dataset and demonstrate how using perceptual distances in the surrogate
explainer creates more coherent explanations for the distorted and reference
images. | [
"cs.CV",
"stat.ML"
] |
The computer vision community is witnessing an unprecedented rate of new
tasks being proposed and addressed, thanks to the deep convolutional networks'
capability to find complex mappings from X to Y. The advent of each task is
often accompanied by the release of a large-scale annotated dataset for
supervised training of deep networks. However, it is expensive and
time-consuming to manually label a sufficient amount of training data.
Therefore, it is important to develop algorithms that can leverage
off-the-shelf labeled datasets to learn
useful knowledge for the target task. While previous works mostly focus on
transfer learning from a single source, we study multi-source transfer across
domains and tasks (MS-DTT), in a semi-supervised setting. We propose GradMix, a
model-agnostic method applicable to any model trained with gradient-based
learning rule, to transfer knowledge via gradient descent by weighting and
mixing the gradients from all sources during training. GradMix follows a
meta-learning objective, which assigns layer-wise weights to the source
gradients, such that the combined gradient follows the direction that minimizes
the loss for a small set of samples from the target dataset. In addition, we
propose to adaptively adjust the learning rate for each mini-batch based on its
importance to the target task, and a pseudo-labeling method to leverage the
unlabeled samples in the target domain. We conduct MS-DTT experiments on two
tasks: digit recognition and action recognition, and demonstrate the
advantageous performance of the proposed method against multiple baselines. | [
"cs.CV"
] |
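The core gradient-mixing step of GradMix can be sketched compactly: compute per-source gradients and combine them with weights before a descent step. A PyTorch sketch with fixed scalar weights for illustration; in the paper the weights are layer-wise and learned with a meta-objective on target samples, which is omitted here.

import torch

def gradmix_step(model, source_losses, weights, lr=1e-2):
    # Mix per-source gradients with the given weights, then take one step.
    params = [p for p in model.parameters() if p.requires_grad]
    mixed = [torch.zeros_like(p) for p in params]
    for w, loss in zip(weights, source_losses):
        grads = torch.autograd.grad(loss, params, retain_graph=True)
        for m, g in zip(mixed, grads):
            m += w * g
    with torch.no_grad():
        for p, g in zip(params, mixed):
            p -= lr * g

# Toy usage with two 'source' losses on the same model.
net = torch.nn.Linear(3, 1)
x = torch.randn(8, 3)
losses = [net(x).pow(2).mean(), (net(x) - 1).pow(2).mean()]
gradmix_step(net, losses, weights=[0.7, 0.3])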
Multi-epoch, small-batch, Stochastic Gradient Descent (SGD) has been the
method of choice for learning with large over-parameterized models. A popular
theory for explaining why SGD works well in practice is that the algorithm has
an implicit regularization that biases its output towards a good solution.
Perhaps the theoretically most well understood learning setting for SGD is that
of Stochastic Convex Optimization (SCO), where it is well known that SGD learns
at a rate of $O(1/\sqrt{n})$, where $n$ is the number of samples. In this
paper, we consider the problem of SCO and explore the role of implicit
regularization, batch size and multiple epochs for SGD. Our main contributions
are threefold:
(a) We show that for any regularizer, there is an SCO problem for which
Regularized Empirical Risk Minimization fails to learn. This automatically rules
out any implicit regularization based explanation for the success of SGD.
(b) We provide a separation between SGD and learning via Gradient Descent on
empirical loss (GD) in terms of sample complexity. We show that there is an SCO
problem such that GD with any step size and number of iterations can only learn
at a suboptimal rate: at least $\widetilde{\Omega}(1/n^{5/12})$.
(c) We present a multi-epoch variant of SGD commonly used in practice. We
prove that this algorithm is at least as good as single pass SGD in the worst
case. However, for certain SCO problems, taking multiple passes over the
dataset can significantly outperform single pass SGD.
We extend our results to the general learning setting by showing a problem
which is learnable for any data distribution, and for this problem, SGD is
strictly better than RERM for any regularization function. We conclude by
discussing the implications of our results for deep learning, and show a
separation between SGD and ERM for two layer diagonal neural networks. | [
"cs.LG",
"cs.AI"
] |
In this work, we propose a modeling technique for jointly training image and
video generation models by simultaneously learning to map latent variables with
a fixed prior onto real images and interpolate over images to generate videos.
The proposed approach models the variations in representations using residual
vectors encoding the change at each time step over a summary vector for the
entire video. We utilize the technique to jointly train an image generation
model with a fixed prior along with a video generation model lacking
constraints such as disentanglement. The joint training enables the image
generator to exploit temporal information while the video generation model
learns to flexibly share information across frames. Moreover, experimental
results verify our approach's compatibility with pre-training on videos or
images and training on datasets containing a mixture of both. A comprehensive
set of quantitative and qualitative evaluations reveal the improvements in
sample quality and diversity over both video generation and image generation
baselines. We further demonstrate the technique's capabilities of exploiting
similarity in features across frames by applying it to a model based on
decomposing the video into motion and content. The proposed model allows minor
variations in content across frames while maintaining the temporal dependence
through latent vectors encoding the pose or motion features. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
Actor-critic style two-time-scale algorithms are very popular in
reinforcement learning, and have seen great empirical success. However, their
performance is not completely understood theoretically. In this paper, we
characterize the global convergence of an online natural actor-critic algorithm
in the tabular setting using a single trajectory. Our analysis applies to very
general settings, as we only assume that the underlying Markov chain is ergodic
under all policies (the so-called Recurrence assumption). We employ
$\epsilon$-greedy sampling in order to ensure enough exploration.
For a fixed exploration parameter $\epsilon$, we show that the natural
actor-critic algorithm is $\mathcal{O}(\frac{1}{\epsilon T^{1/4}}+\epsilon)$ close to
the global optimum after $T$ iterations of the algorithm.
By carefully diminishing the exploration parameter $\epsilon$ as the
iterations proceed, we also show convergence to the global optimum at a rate of
$\mathcal{O}(1/T^{1/6})$. | [
"cs.LG"
] |
Visual dialogue is a challenging task that needs to extract implicit
information from both visual (image) and textual (dialogue history) contexts.
Classical approaches pay more attention to the integration of the current
question, vision knowledge and text knowledge, while overlooking the
heterogeneous semantic gaps between the cross-modal information. In the
meantime, the concatenation operation has become the de facto standard for
cross-modal information fusion, despite its limited ability in information
retrieval. In
this paper, we propose a novel Knowledge-Bridge Graph Network (KBGN) model by
using graph to bridge the cross-modal semantic relations between vision and
text knowledge in fine granularity, as well as retrieving required knowledge
via an adaptive information selection mode. Moreover, the reasoning clues for
visual dialogue can be clearly drawn from intra-modal entities and inter-modal
bridges. Experimental results on VisDial v1.0 and VisDial-Q datasets
demonstrate that our model outperforms existing models with state-of-the-art
results. | [
"cs.CV"
] |
From CNNs to attention mechanisms, encoding inductive biases into neural
networks has been a fruitful source of improvement in machine learning. Adding
auxiliary losses to the main objective function is a general way of encoding
biases that can help networks learn better representations. However, since
auxiliary losses are minimized only on training data, they suffer from the same
generalization gap as regular task losses. Moreover, by adding a term to the
loss function, the model optimizes a different objective than the one we care
about. In this work we address both problems: first, we take inspiration from
\textit{transductive learning} and note that after receiving an input but
before making a prediction, we can fine-tune our networks on any unsupervised
loss. We call this process {\em tailoring}, because we customize the model to
each input to ensure our prediction satisfies the inductive bias. Second, we
formulate {\em meta-tailoring}, a nested optimization similar to that in
meta-learning, and train our models to perform well on the task objective after
adapting them using an unsupervised loss. The advantages of tailoring and
meta-tailoring are discussed theoretically and demonstrated empirically on a
diverse set of examples. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
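Tailoring amounts to a small inner optimization per input: adapt a copy of the model on an unsupervised loss evaluated at that input, then predict. A minimal PyTorch sketch, assuming a generic model and a user-supplied unsupervised loss; the paper's specific losses and the meta-tailoring outer loop are not shown.

import copy
import torch

def tailor_predict(model, x, unsup_loss_fn, steps=3, lr=1e-2):
    # Customize a throwaway copy of the model to this one input.
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        unsup_loss_fn(local, x).backward()
        opt.step()
    with torch.no_grad():
        return local(x)

# Toy usage with an illustrative unsupervised loss (small output norm).
net = torch.nn.Linear(4, 2)
x = torch.randn(1, 4)
print(tailor_predict(net, x, lambda m, inp: m(inp).pow(2).sum()))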
Vision transformers (ViTs) have recently received explosive popularity, but
their enormous model sizes and training costs remain daunting. Conventional
post-training pruning often incurs higher training budgets. In contrast, this
paper aims to trim down both the training memory overhead and the inference
complexity, without sacrificing the achievable accuracy. We launch and report
the first-of-its-kind comprehensive exploration, on taking a unified approach
of integrating sparsity in ViTs "from end to end". Specifically, instead of
training full ViTs, we dynamically extract and train sparse subnetworks, while
sticking to a fixed small parameter budget. Our approach jointly optimizes
model parameters and explores connectivity throughout training, ending up with
one sparse network as the final output. The approach is seamlessly extended
from unstructured to structured sparsity, the latter by considering to guide
the prune-and-grow of self-attention heads inside ViTs. For additional
efficiency gains, we further co-explore data and architecture sparsity, by
plugging in a novel learnable token selector to adaptively determine the
currently most vital patches. Extensive results on ImageNet with diverse ViT
backbones validate the effectiveness of our proposals which obtain
significantly reduced computational cost and almost unimpaired generalization.
Perhaps most surprisingly, we find that the proposed sparse (co-)training can
even improve the ViT accuracy rather than compromising it, making sparsity a
tantalizing "free lunch". For example, our sparsified DeiT-Small at (5%, 50%)
sparsity for (data, architecture), improves top-1 accuracy by 0.28% while
enjoying 49.32% FLOPs savings and 4.40% running-time savings. Our code is available at
https://github.com/VITA-Group/SViTE. | [
"cs.CV",
"cs.AI"
] |
Deep generative models for graphs have shown great promise in the area of
drug design, but have so far found little application beyond generating
graph-structured molecules. In this work, we demonstrate a proof of concept for
the challenging task of road network extraction from image data. This task can
be framed as image-conditioned graph generation, for which we develop the
Generative Graph Transformer (GGT), a deep autoregressive model that makes use
of attention mechanisms for image conditioning and the recurrent generation of
graphs. We benchmark GGT on the application of road network extraction from
semantic segmentation data. For this, we introduce the Toulouse Road Network
dataset, based on real-world publicly-available data. We further propose the
StreetMover distance: a metric based on the Sinkhorn distance for effectively
evaluating the quality of road network generation. The code and dataset are
publicly available. | [
"cs.LG",
"stat.ML"
] |
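The StreetMover metric above builds on the Sinkhorn distance, computable with a few fixed-point iterations. A sketch of the plain (non-stabilized) Sinkhorn scheme on a toy point-cloud cost matrix; the exact cost construction and weighting used for road networks are assumptions here.

import numpy as np

def sinkhorn_cost(a, b, C, eps=0.05, iters=200):
    # Entropy-regularized optimal transport between histograms a and b
    # under ground cost C; returns <P, C> for the resulting plan P.
    C = C / C.max()                      # scale costs for numerical stability
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]      # transport plan
    return (P * C).sum()

# Toy usage: compare two small 2D point clouds with uniform weights.
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(8, 2)), rng.normal(size=(8, 2))
C = np.linalg.norm(X[:, None] - Y[None, :], axis=-1)
w = np.full(8, 1 / 8)
print(sinkhorn_cost(w, w, C))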
As one of the important functions of the intelligent transportation system
(ITS), supply-demand prediction for autonomous vehicles provides a decision
basis for its control. In this paper, we present two prediction models (i.e.
ARLP model and Advanced ARLP model) for two settings: one in which only the
current day's historical data are available, and one in which several days'
historical data are available. These two models jointly consider the spatial,
temporal, and
semantic relations. Spatial dependency is captured with residual network and
dimension reduction. Short term temporal dependency is captured with LSTM. Long
term temporal dependency and temporal shifting are captured with LSTM and
attention mechanism. Semantic dependency is captured with multi-attention
mechanism and autocorrelation coefficient method. Extensive experiments show
that our frameworks provide more accurate and stable prediction results than
the existing methods. | [
"cs.LG",
"stat.ML"
] |
In this paper, we address unsupervised pose-guided person image generation,
which is known challenging due to non-rigid deformation. Unlike previous
methods learning a rock-hard direct mapping between human bodies, we propose a
new pathway to decompose the hard mapping into two more accessible subtasks,
namely, semantic parsing transformation and appearance generation. Firstly, a
semantic generative network is proposed to transform between semantic parsing
maps, in order to simplify the non-rigid deformation learning. Secondly, an
appearance generative network learns to synthesize semantic-aware textures.
Thirdly, we demonstrate that training our framework in an end-to-end manner
further refines the semantic maps and final results accordingly. Our method is
generalizable to other semantic-aware person image generation tasks, e.g.,
clothing texture transfer and controlled image manipulation. Experimental
results demonstrate the superiority of our method on DeepFashion and
Market-1501 datasets, especially in keeping the clothing attributes and better
body shapes. | [
"cs.CV"
] |
We introduce the "inverse bandit" problem of estimating the rewards of a
multi-armed bandit instance from observing the learning process of a low-regret
demonstrator. Existing approaches to the related problem of inverse
reinforcement learning assume the execution of an optimal policy, and thereby
suffer from an identifiability issue. In contrast, our paradigm leverages the
demonstrator's behavior en route to optimality, and in particular, the
exploration phase, to obtain consistent reward estimates. We develop simple and
efficient reward estimation procedures for demonstrations within a class of
upper-confidence-based algorithms, showing that reward estimation gets
progressively easier as the regret of the algorithm increases. We match these
upper bounds with information-theoretic lower bounds that apply to any
demonstrator algorithm, thereby characterizing the optimal tradeoff between
exploration and reward estimation. Extensive empirical evaluations on both
synthetic data and simulated experimental design data from the natural sciences
corroborate our theoretical results. | [
"stat.ML",
"cs.AI",
"cs.IT",
"cs.LG",
"cs.RO",
"math.IT"
] |
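To make the inverse-bandit setting concrete, the sketch below simulates a UCB-style demonstrator and shows what the observer actually sees: the arm-pull sequence, not the rewards. The paper's reward-estimation procedure is not reproduced; this only illustrates why exploration leaks reward information.

import numpy as np

def ucb1_demonstration(true_means, horizon, rng=None):
    # Run UCB1 and record the sequence of pulled arms.
    rng = rng or np.random.default_rng(0)
    k = len(true_means)
    counts, sums, pulls = np.zeros(k), np.zeros(k), []
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                                  # pull each arm once
        else:
            ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(ucb))
        reward = rng.normal(true_means[arm], 1.0)        # hidden from the observer
        counts[arm] += 1; sums[arm] += reward; pulls.append(arm)
    return pulls

# Pull frequencies alone leak reward gaps: under UCB1 a suboptimal arm
# is pulled roughly log(T)/gap^2 times.
print(np.bincount(ucb1_demonstration([0.9, 0.5, 0.2], 2000)))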
Since AlphaGo and AlphaGo Zero have achieved groundbreaking successes in the
game of Go, the programs have been generalized to solve other tasks.
Subsequently, AlphaZero was developed to play Go, Chess and Shogi. In the
literature, the algorithms are explained well. However, AlphaZero contains many
parameters, and for neither AlphaGo, AlphaGo Zero nor AlphaZero, there is
sufficient discussion about how to set parameter values in these algorithms.
Therefore, in this paper, we choose 12 parameters in AlphaZero and evaluate how
these parameters contribute to training. We focus on three objectives~(training
loss, time cost and playing strength). For each parameter, we train 3 models
using 3 different values~(minimum value, default value, maximum value). We use
the game of 6$\times$6 Othello, on the AlphaZeroGeneral open-source
re-implementation of AlphaZero. Overall, experimental results show that
different values can lead to different training results, proving the importance
of such a parameter sweep. We categorize these 12 parameters into
time-sensitive parameters and time-friendly parameters. Moreover, through
multi-objective analysis, this paper provides an insightful basis for further
hyper-parameter optimization. | [
"cs.LG",
"cs.AI"
] |
Disentanglement learning is crucial for obtaining disentangled
representations and controllable generation. Current disentanglement methods
face several inherent limitations: difficulty with high-resolution images,
primarily focusing on learning disentangled representations, and
non-identifiability due to the unsupervised setting. To alleviate these
limitations, we design new architectures and loss functions based on StyleGAN
(Karras et al., 2019), for semi-supervised high-resolution disentanglement
learning. We create two complex high-resolution synthetic datasets for
systematic testing. We investigate the impact of limited supervision and find
that using only 0.25%-2.5% of labeled data is sufficient for good
disentanglement on both synthetic and real datasets. We propose new metrics to
quantify generator controllability, and observe there may exist a crucial
trade-off between disentangled representation learning and controllable
generation. We also consider semantic fine-grained image editing to achieve
better generalization to unseen images. | [
"cs.CV",
"cs.LG"
] |
In multi-task reinforcement learning there are two main challenges: at
training time, the ability to learn different policies with a single model; at
test time, inferring which of those policies to apply without an external
signal. In the case of continual reinforcement learning a third challenge
arises: learning tasks sequentially without forgetting the previous ones. In
this paper, we tackle these challenges by proposing DisCoRL, an approach
combining state representation learning and policy distillation. We experiment
on a sequence of three simulated 2D navigation tasks with a three-wheel
omni-directional robot. Moreover, we tested our approach's robustness by
transferring the final policy into a real-life setting. The policy can solve
all tasks and automatically infer which one to run. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Forecasting influenza-like illness (ILI) is of prime importance to
epidemiologists and health-care providers. Early prediction of epidemic
outbreaks plays a pivotal role in disease intervention and control. Most
existing work has either limited long-term prediction performance or lacks a
comprehensive ability to capture spatio-temporal dependencies in data. Accurate
and early disease forecasting models would markedly improve both epidemic
prevention and managing the onset of an epidemic. In this paper, we design a
cross-location attention based graph neural network (Cola-GNN) for learning
time series embeddings and location aware attentions. We propose a graph
message passing framework to combine learned feature embeddings and an
attention matrix to model disease propagation over time. We compare the
proposed method with state-of-the-art statistical approaches and deep learning
models on real-world epidemic-related datasets from the United States and Japan.
The proposed method shows strong predictive performance and leads to
interpretable results for long-term epidemic predictions. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
In this paper we propose a fusion approach to continuous emotion recognition
that combines visual and auditory modalities in their representation spaces to
predict the arousal and valence levels. The proposed approach employs a
pre-trained convolutional neural network and transfer learning to extract
features from video frames that capture the emotional content. For the auditory
content, a minimalistic set of parameters such as prosodic, excitation, vocal
tract, and spectral descriptors are used as features. The fusion of these two
modalities is carried out at a feature level, before training a single support
vector regressor (SVR) or at a prediction level, after training one SVR for
each modality. The proposed approach also includes preprocessing and
post-processing techniques which contribute favorably to improving the
concordance correlation coefficient (CCC). Experimental results for predicting
spontaneous and natural emotions on the RECOLA dataset have shown that the
proposed approach takes advantage of the complementary information of visual
and auditory modalities and provides CCCs of 0.749 and 0.565 for arousal and
valence, respectively. | [
"cs.LG",
"cs.SD",
"eess.AS",
"stat.ML"
] |
Modern methods for counting people in crowded scenes rely on deep networks to
estimate people densities in individual images. As such, only very few take
advantage of temporal consistency in video sequences, and those that do only
impose weak smoothness constraints across consecutive frames. In this paper, we
advocate estimating people flows across image locations between consecutive
images and inferring the people densities from these flows instead of directly
regressing them. This enables us to impose much stronger constraints encoding
the conservation of the number of people. As a result, it significantly boosts
performance without requiring a more complex architecture. Furthermore, it
allows us to exploit the correlation between people flow and optical flow to
further improve the results. We also show that leveraging people conservation
constraints in both a spatial and temporal manner makes it possible to train a
deep crowd counting model in an active learning setting with much fewer
annotations. This significantly reduces the annotation cost while still leading
to similar performance to the full supervision case. | [
"cs.CV",
"cs.LG"
] |
We propose UOLO, a novel framework for the simultaneous detection and
segmentation of structures of interest in medical images. UOLO consists of an
object segmentation module whose intermediate abstract representations are
processed and used as input for object detection. The resulting system is
optimized simultaneously for detecting a class of objects and segmenting an
optionally different class of structures. UOLO is trained on a set of bounding
boxes enclosing the objects to detect, as well as pixel-wise segmentation
information, when available. A new loss function is devised, taking into
account whether a reference segmentation is accessible for each training image,
in order to suitably backpropagate the error. We validate UOLO on the task of
simultaneous optic disc (OD) detection, fovea detection, and OD segmentation
from retinal images, achieving state-of-the-art performance on public datasets. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
Single-task learning in artificial neural networks can learn a given model
very well, so the benefits brought by transferring knowledge become
limited. In this regard, when the number of tasks increases (e.g., semantic
segmentation, panoptic segmentation, monocular depth estimation, and 3D point
cloud), duplicate information may exist across tasks, and the improvement
becomes less significant. Multi-task learning has emerged as a solution to
knowledge-transfer issues and is an approach to scene understanding which
involves multiple related tasks each with potentially limited training data.
Multi-task learning improves generalization by leveraging the domain-specific
information contained in the training data of related tasks. In urban
management applications such as infrastructure development, traffic monitoring,
smart 3D cities, and change detection, automated multi-task data analysis for
scene understanding based on the semantic, instance, and panoptic annotation,
as well as monocular depth estimation, is required to generate precise urban
models. In this study, a common framework for the performance assessment of
multi-task learning methods from fixed-wing UAV images for 2D/3D city modeling
is presented. | [
"cs.CV",
"cs.AI"
] |
In this paper, we propose a natural and robust physical adversarial example
attack method targeting object detectors under real-world conditions. The
generated adversarial examples are robust to various physical constraints and
visually look similar to the original images, thus these adversarial examples
are natural to humans and will not arouse any suspicion. First, to ensure the
robustness of the adversarial examples in real-world conditions, the proposed
method exploits different image transformation functions, to simulate various
physical changes during the iterative optimization of the adversarial examples
generation. Second, to construct natural adversarial examples, the proposed
method uses an adaptive mask to constrain the area and intensities of the added
perturbations, and utilizes the real-world perturbation score (RPS) to make the
perturbations similar to real noise in the physical world. Compared with
existing studies, our generated adversarial examples can achieve a high success
rate with less conspicuous perturbations. Experimental results demonstrate
that, the generated adversarial examples are robust under various indoor and
outdoor physical conditions, including different distances, angles,
illuminations, and photographing. Specifically, the attack success rate of
generated adversarial examples indoors and outdoors reaches up to 73.33% and
82.22%, respectively. Meanwhile, the proposed method ensures the naturalness of
the generated adversarial example, and the size of added perturbations is much
smaller than the perturbations in the existing works. Further, the proposed
physical adversarial attack method can be transferred from the white-box models
to other object detection models. | [
"cs.CV"
] |
Deploying machine learning systems in the real world requires both high
accuracy on clean data and robustness to naturally occurring corruptions. While
architectural advances have led to improved accuracy, building robust models
remains challenging. Prior work has argued that there is an inherent trade-off
between robustness and accuracy, which is exemplified by standard data augmentation
techniques such as Cutout, which improves clean accuracy but not robustness,
and additive Gaussian noise, which improves robustness but hurts accuracy. To
overcome this trade-off, we introduce Patch Gaussian, a simple augmentation
scheme that adds noise to randomly selected patches in an input image. Models
trained with Patch Gaussian achieve state of the art on the CIFAR-10 and
ImageNet Common Corruptions benchmarks while also improving accuracy on clean
data. We find that this augmentation leads to reduced sensitivity to high
frequency noise (similar to Gaussian) while retaining the ability to take
advantage of relevant high frequency information in the image (similar to
Cutout). Finally, we show that Patch Gaussian can be used in conjunction with
other regularization methods and data augmentation policies such as
AutoAugment, and improves performance on the COCO object detection benchmark. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
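Patch Gaussian is simple enough to state directly in code: Gaussian noise is added only inside a randomly centered square patch and the result is clipped, interpolating between full-image Gaussian noise (a patch covering the whole image) and Cutout-like occlusion (a large sigma). A minimal sketch; the parameter names and the [0, 1] clipping convention are assumptions.

import numpy as np

def patch_gaussian(img, patch_size, sigma, rng=None):
    # Add N(0, sigma^2) noise inside one randomly placed square patch.
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    cy, cx = rng.integers(h), rng.integers(w)   # patch center, may overlap edges
    half = patch_size // 2
    y0, y1 = max(cy - half, 0), min(cy + half + 1, h)
    x0, x1 = max(cx - half, 0), min(cx + half + 1, w)
    out = img.astype(np.float32).copy()
    out[y0:y1, x0:x1] += rng.normal(0.0, sigma, size=out[y0:y1, x0:x1].shape)
    return np.clip(out, 0.0, 1.0)

# Usage on a random [0, 1] RGB image.
img = np.random.default_rng(0).random((32, 32, 3))
aug = patch_gaussian(img, patch_size=16, sigma=0.3)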
Machine Learning models become increasingly proficient in complex tasks.
However, even for experts in the field, it can be difficult to understand what
the model learned. This hampers trust and acceptance, and it obstructs the
possibility to correct the model. There is therefore a need for transparency of
machine learning models. The development of transparent classification models
has received much attention, but there are few developments for achieving
transparent Reinforcement Learning (RL) models. In this study we propose a
method that enables an RL agent to explain its behavior in terms of the expected
consequences of state transitions and outcomes. First, we define a translation
of states and actions to a description that is easier to understand for human
users. Second, we developed a procedure that enables the agent to obtain the
consequences of a single action, as well as its entire policy. The method
calculates contrasts between the consequences of a policy derived from a user
query, and of the learned policy of the agent. Third, a format for generating
explanations was constructed. A pilot survey study was conducted to explore
preferences of users for different explanation properties. Results indicate
that human users tend to favor explanations about policy rather than about
single actions. | [
"cs.LG",
"stat.ML"
] |
The virtual try-on task has drawn considerable attention in the field of
computer vision. However, presenting the
three-dimensional (3D) physical characteristic (e.g., pleat and shadow) based
on a 2D image is very challenging. Although there have been several previous
studies on 2D-based virtual try-on work, most 1) required user-specified target
poses that are not user-friendly and may not be the best for the target
clothing, and 2) failed to address some problematic cases, including facial
details, clothing wrinkles and body occlusions. To address these two
challenges, in this paper, we propose an innovative template-free try-on image
synthesis (TF-TIS) network. The TF-TIS first synthesizes the target pose
according to the user-specified in-shop clothing. Afterward, given an in-shop
clothing image, a user image, and a synthesized pose, we propose a novel model
for synthesizing a human try-on image with the target clothing in the best
fitting pose. The qualitative and quantitative experiments both indicate that
the proposed TF-TIS outperforms the state-of-the-art methods, especially for
difficult cases. | [
"cs.CV"
] |
Snake robots, comprised of sequentially connected joint actuators, have
recently gained increasing attention in the industrial field, for tasks such
as life detection in narrow spaces. Such robots can navigate through complex
environments via the cooperation of multiple motors located on the backbone.
However, controlling the robots in an unknown environment is challenging, and
conventional control strategies can be energy inefficient or even fail to
navigate to the destination. In this work, a snake locomotion gait policy is
developed via deep reinforcement learning (DRL) for energy-efficient control.
We apply proximal policy optimization (PPO) to each joint motor parameterized
by angular velocity and the DRL agent learns the standard serpenoid curve at
each timestep. The robot simulator and task environment are built upon
PyBullet. Compared to conventional control strategies, the snake robots
controlled by the trained PPO agent can achieve faster movement and more
energy-efficient locomotion gait. This work demonstrates that DRL provides an
energy-efficient solution for robot control. | [
"cs.LG",
"cs.RO"
] |
Incorporating various modes of information into the machine learning
procedure is becoming a new trend. Data from various sources, whether
heterogeneous or homogeneous, can provide more information than a single
source. Existing deep-learning-based algorithms usually directly concatenate
features from each domain to represent the input data. Few of them take the
quality of the data into consideration, which is a key issue in
related multimodal problems. In this paper, we propose an efficient
quality-aware deep neural network to model the weight of data from each domain
using deep reinforcement learning (DRL). Specifically, we take the weighting of
each domain as a decision-making problem and train an agent to interact
with the environment. The agent can tune the weight of each domain through
discrete action selection and obtain a positive reward if the saliency results
are improved. The target of the agent is to achieve maximum rewards after
finishing its sequential action selection. We validate the proposed algorithms
on multimodal saliency detection in a coarse-to-fine way. The coarse saliency
maps are generated from an encoder-decoder framework which is trained with
content loss and adversarial loss. The final results can be obtained via
adaptive weighting of maps from each domain. Experiments conducted on two kinds
of salient object detection benchmarks validated the effectiveness of our
proposed quality-aware deep neural network. | [
"cs.CV"
] |
We propose an efficient algorithm for the generalized sparse coding (SC)
inference problem. The proposed framework applies to both the single dictionary
setting, where each data point is represented as a sparse combination of the
columns of one dictionary matrix, as well as the multiple dictionary setting as
given in morphological component analysis (MCA), where the goal is to separate
a signal into additive parts such that each part has distinct sparse
representation within a corresponding dictionary. Both the SC task and its
generalization via MCA have been cast as $\ell_1$-regularized least-squares
optimization problems. To accelerate traditional acquisition of sparse codes,
we propose a deep learning architecture that constitutes a trainable
time-unfolded version of the Split Augmented Lagrangian Shrinkage Algorithm
(SALSA), a special case of the Alternating Direction Method of Multipliers
(ADMM). We empirically validate both variants of the algorithm, that we refer
to as LSALSA (learned-SALSA), on image vision tasks and demonstrate that at
inference our networks achieve vast improvements in terms of the running time,
the quality of estimated sparse codes, and visual clarity on both classic SC
and MCA problems. Finally, we present a theoretical framework for analyzing
LSALSA network: we show that the proposed approach exactly implements a
truncated ADMM applied to a new, learned cost function with curvature modified
by one of the learned parameterized matrices. We extend a very recent
Stochastic Alternating Optimization analysis framework to show that a gradient
descent step along this learned loss landscape is equivalent to a modified
gradient descent step along the original loss landscape. In this framework, the
acceleration achieved by LSALSA could potentially be explained by the network's
ability to learn a correction to the gradient direction of steeper descent. | [
"cs.LG",
"stat.ML"
] |
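To illustrate the time-unfolding idea without reproducing SALSA itself, the sketch below unrolls the simpler ISTA iteration for min_z 0.5||y - Dz||^2 + lam*||z||_1. The matrices marked as fixed here are exactly the kind of quantities that LSALSA (and LISTA before it) turns into trainable per-layer parameters.

import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_ista(D, y, lam, n_layers=50):
    # Each 'layer' is one ISTA step; an LSALSA-style network would
    # instead learn W and S (per layer) from data.
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    W = D.T / L                              # fixed here, learned in LSALSA
    S = np.eye(D.shape[1]) - D.T @ D / L     # fixed here, learned in LSALSA
    z = soft_threshold(W @ y, lam / L)
    for _ in range(n_layers - 1):
        z = soft_threshold(S @ z + W @ y, lam / L)
    return z

# Toy usage: approximately recover a 2-sparse code from noiseless data.
rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50))
z_true = np.zeros(50); z_true[[3, 17]] = 1.0
y = D @ z_true
z_hat = unrolled_ista(D, y, lam=0.1)
print(np.argsort(np.abs(z_hat))[-2:])   # indices of the two largest entries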
Training with only a small number of labeled samples can save considerable
manpower and material resources, especially when the amount of high spatial
resolution remote sensing images (HSR-RSIs) increases considerably. However,
many deep models face the problem of overfitting when using a small number of
labeled samples. This might degrade HSR-RSI retrieval accuracy. Aiming at
obtaining more accurate HSR-RSI retrieval performance with small training
samples, we develop a deep metric learning approach with generative adversarial
network regularization (DML-GANR) for HSR-RSI retrieval. The DML-GANR starts
from a high-level feature extraction (HFE) to extract high-level features,
which includes convolutional layers and fully connected (FC) layers. Each of
the FC layers is constructed by deep metric learning (DML) to maximize the
interclass variations and minimize the intraclass variations. The generative
adversarial network (GAN) is adopted to mitigate the overfitting problem and
validate the qualities of extracted high-level features. DML-GANR is optimized
through a customized approach, and the optimal parameters are obtained. The
experimental results on the three data sets demonstrate the superior
performance of DML-GANR over state-of-the-art techniques in HSR-RSI retrieval. | [
"cs.CV",
"math.OC"
] |
Machine learning has recently been widely adopted to address the managerial
decision making problems, in which the decision maker needs to be able to
interpret the contributions of individual attributes in an explicit form.
However, there is a trade-off between performance and interpretability. Full
complexity models are untraceable black boxes, whereas classic interpretable
models are usually simplified with lower accuracy. This trade-off limits the
application of state-of-the-art machine learning models in management problems,
which requires high prediction performance, as well as the understanding of
individual attributes' contributions to the model outcome. Multiple criteria
decision aiding (MCDA) is a family of analytic approaches to depicting the
rationale of human decisions. However, it is also limited by strong assumptions. To meet
the decision maker's demand for more interpretable machine learning models, we
propose a novel hybrid method, namely Neural Network-based Multiple Criteria
Decision Aiding, which combines an additive value model and a fully-connected
multilayer perceptron (MLP) to achieve good performance while capturing the
explicit relationships between individual attributes and the prediction.
NN-MCDA has a linear component to characterize such relationships through
providing explicit marginal value functions, and a nonlinear component to
capture the implicit high-order interactions between attributes and their
complex nonlinear transformations. We demonstrate the effectiveness of NN-MCDA
with extensive simulation studies and three real-world datasets. To the best of
our knowledge, this research is the first to enhance the interpretability of
machine learning models with MCDA techniques. The proposed framework also sheds
light on how to use machine learning techniques to free MCDA from strong
assumptions. | [
"cs.LG",
"stat.ML"
] |
Due to the advantages of real-time detection and improved performance,
single-shot detectors have gained great attention recently. To solve the
complex scale variations, single-shot detectors make scale-aware predictions
based on multiple pyramid layers. However, the features in the pyramid are not
scale-aware enough, which limits the detection performance. Two common problems
in single-shot detectors caused by object scale variations can be observed: (1)
small objects are easily missed; (2) the salient part of a large object is
sometimes detected as an object. With this observation, we propose a new
Neighbor Erasing and Transferring (NET) mechanism to reconfigure the pyramid
features and explore scale-aware features. In NET, a Neighbor Erasing Module
(NEM) is designed to erase the salient features of large objects and emphasize
the features of small objects in shallow layers. A Neighbor Transferring Module
(NTM) is introduced to transfer the erased features and highlight large objects
in deep layers. With this mechanism, a single-shot network called NETNet is
constructed for scale-aware object detection. In addition, we propose to
aggregate nearest neighboring pyramid features to enhance our NET. NETNet
achieves 38.5% AP at a speed of 27 FPS and 32.0% AP at a speed of 55 FPS on MS
COCO dataset. As a result, NETNet achieves a better trade-off for real-time and
accurate object detection. | [
"cs.CV"
] |
Arbitrary-oriented object detection has been a building block for rotation
sensitive tasks. We first show that the problem of discontinuous boundaries
suffered in existing dominant regression-based rotation detectors, is caused by
angular periodicity or corner ordering, according to the parameterization
protocol. We also show that the root cause is that the ideal predictions can be
out of the defined range. Accordingly, we transform the angular prediction task
from a regression problem to a classification one. For the resulting circularly
distributed angle classification problem, we first devise a Circular Smooth
Label (CSL) technique to handle the periodicity of angle and increase the error
tolerance to adjacent angles. To reduce the excessive model parameters by CSL,
we further design a Gray Coded Label (GCL), which greatly reduces the length of
the encoding. Finally, we further develop an object heading detection module,
which can be useful when the exact heading orientation information is needed
e.g. for ship and plane heading detection. We release our OHD-SJTU dataset and
OHDet detector for heading detection. Results on three large-scale public
datasets for aerial images, i.e. DOTA, HRSC2016, and OHD-SJTU, as well as the
scene text datasets ICDAR2015 and MLT, show the effectiveness of our approach. | [
"cs.CV",
"cs.AI"
] |
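The Circular Smooth Label construction can be written compactly: a window function centered on the ground-truth angle bin, with distances measured circularly so that bins on either side of the 179/0 boundary stay close. A sketch using a Gaussian window; the window choice and radius default are assumptions, not the paper's exact settings.

import numpy as np

def circular_smooth_label(angle_idx, num_bins=180, radius=6):
    # Gaussian window around the ground-truth bin, circular distance.
    bins = np.arange(num_bins)
    d = np.minimum(np.abs(bins - angle_idx),
                   num_bins - np.abs(bins - angle_idx))
    label = np.exp(-(d ** 2) / (2 * radius ** 2))
    label[d > radius] = 0.0               # truncate outside the window
    return label

lbl = circular_smooth_label(178)
print(lbl[176:180], lbl[:2])   # neighbours across the boundary stay smooth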
Person re-identification (re-ID) has gained more and more attention due to
its widespread applications in intelligent video surveillance. Unfortunately,
the mainstream deep learning methods still need a large quantity of labeled
data to train models, and annotating data is expensive work in real-world
scenarios. In addition, due to domain gaps between different datasets, the
performance is dramatically decreased when re-ID models pre-trained on
label-rich datasets (source domain) are directly applied to other unlabeled
datasets (target domain). In this paper, we attempt to remedy these problems
from two aspects, namely data and methodology. Firstly, we develop a data
collector to automatically generate synthetic re-ID samples in a computer game,
and construct a data labeler to simultaneously annotate them, which frees
humans from heavy data collection and annotation. Based on them, we build two
synthetic person re-ID datasets with different scales, "GSPR" and "mini-GSPR"
datasets. Secondly, we propose a synthesis-based multi-domain collaborative
refinement (SMCR) network, which contains a synthetic pretraining module and
two collaborative-refinement modules to implement sufficient learning for the
valuable knowledge from multiple domains. Extensive experiments show that our
proposed framework obtains significant performance improvements over the
state-of-the-art methods on multiple unsupervised domain adaptation tasks of
person re-ID. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Considering the success of generative adversarial networks (GANs) for
image-to-image translation, researchers have attempted to translate remote
sensing images (RSIs) to maps (rs2map) through GAN for cartography. However,
these studies involved limited scales, which hinders multi-scale map creation.
By extending their method, multi-scale RSIs can be trivially translated to
multi-scale maps (multi-scale rs2map translation) through scale-wise rs2map
models trained for certain scales (parallel strategy). However, this strategy
has two theoretical limitations. First, inconsistency between various spatial
resolutions of multi-scale RSIs and object generalization on multi-scale maps
(RS-m inconsistency) increasingly complicate the extraction of geographical
information from RSIs for rs2map models with decreasing scale. Second, as
rs2map translation is cross-domain, generators incur high computation costs to
transform the RSI pixel distribution to that on maps. Thus, we designed a
series strategy of generators for multi-scale rs2map translation to address
these limitations. In this strategy, high-resolution RSIs are inputted to an
rs2map model to output large-scale maps, which are translated to multi-scale
maps through series multi-scale map translation models. The series strategy
avoids RS-m inconsistency as inputs are high-resolution large-scale RSIs, and
reduces the distribution gap in multi-scale map generation through similar
pixel distributions among multi-scale maps. Our experimental results showed
better quality multi-scale map generation with the series strategy, as shown by
average increases of 11.69%, 53.78%, 55.42%, and 72.34% in the structural
similarity index, edge structural similarity index, intersection over union
(road), and intersection over union (water) for data from Mexico City and Tokyo
at zoom levels 17-13. | [
"cs.CV",
"eess.IV"
] |
Knowledge representation is a long-standing and important topic in AI. A
variety of models have been proposed for knowledge graph
embedding, which projects symbolic entities and relations into continuous
vector space. However, most related methods merely focus on the data-fitting of
the knowledge graph and ignore interpretable semantic expression. Thus,
traditional embedding methods are not friendly for applications that require
semantic analysis, such as question answering and entity retrieval. To this
end, this paper proposes a semantic representation method for knowledge graph
\textbf{(KSR)}, which imposes a two-level hierarchical generative process that
globally extracts many aspects and then locally assigns a specific category in
each aspect for every triple. Since both aspects and categories are
semantics-relevant, the collection of categories in each aspect is treated as
the semantic representation of this triple. Extensive experiments show that our
model outperforms other state-of-the-art baselines substantially. | [
"cs.LG",
"cs.AI"
] |
In this paper, we propose a novel self-supervised representation learning
method, Self-EMD, for object detection. Our method is trained directly on
unlabeled non-iconic image datasets such as COCO, instead of the commonly used
iconic-object image datasets such as ImageNet. We keep the convolutional feature
maps as the image embedding to preserve spatial structures and adopt Earth
Mover's Distance (EMD) to compute the similarity between two embeddings. Our
Faster R-CNN (ResNet50-FPN) baseline achieves 39.8% mAP on COCO, which is on
par with the state of the art self-supervised methods pre-trained on ImageNet.
More importantly, it can be further improved to 40.4% mAP with more unlabeled
images, showing its great potential for leveraging more easily obtained
unlabeled data. Code will be made available. | [
"cs.CV"
] |
The manifold Helmholtzian (1-Laplacian) operator $\Delta_1$ elegantly
generalizes the Laplace-Beltrami operator to vector fields on a manifold
$\mathcal M$. In this work, we propose the estimation of the manifold
Helmholtzian from point cloud data by a weighted 1-Laplacian $\mathbf{\mathcal
L}_1$. While higher order Laplacians have been introduced and studied, this work
is the first to present a graph Helmholtzian constructed from a simplicial
complex as an estimator for the continuous operator in a non-parametric
setting. Equipped with the geometric and topological information about
$\mathcal M$, the Helmholtzian is a useful tool for the analysis of flows and
vector fields on $\mathcal M$ via the Helmholtz-Hodge theorem. In addition, the
$\mathbf{\mathcal L}_1$ allows the smoothing, prediction, and feature
extraction of the flows. We demonstrate these possibilities on substantial sets
of synthetic and real point cloud datasets with non-trivial topological
structures; and provide theoretical results on the limit of $\mathbf{\mathcal
L}_1$ to $\Delta_1$. | [
"stat.ML",
"cs.LG"
] |
Recently, Fully Convolutional Network (FCN) seems to be the go-to
architecture for image segmentation, including semantic scene parsing. However,
it is difficult for a generic FCN to discriminate pixels around the object
boundaries, and thus FCN-based methods may output parsing results with inaccurate
boundaries. Meanwhile, level set based active contours are superior to the
boundary estimation due to the sub-pixel accuracy that they achieve. However,
they are quite sensitive to initial settings. To address these limitations, in
this paper we propose a novel Deep Multiphase Level Set (DMLS) method for
semantic scene parsing, which efficiently incorporates multiphase level sets
into deep neural networks. The proposed method consists of three modules, i.e.,
recurrent FCNs, adaptive multiphase level set, and deeply supervised learning.
More specifically, recurrent FCNs learn multi-level representations of input
images with different contexts. Adaptive multiphase level set drives the
discriminative contour for each semantic class, which makes use of the
advantages of both global and local information. In each time-step of the
recurrent FCNs, deeply supervised learning is incorporated for model training.
Extensive experiments on three public benchmarks have shown that our proposed
method achieves new state-of-the-art performances. | [
"cs.CV"
] |
Accurate polyp segmentation is of great importance for colorectal cancer
diagnosis. However, even with a powerful deep neural network, there still
exist three big challenges that impede the development of polyp segmentation.
(i) Samples collected under different conditions show inconsistent colors,
causing the feature distribution gap and overfitting issue; (ii) Due to
repeated feature downsampling, small polyps are easily degraded; (iii)
Foreground and background pixels are imbalanced, leading to a biased training.
To address the above issues, we propose the Shallow Attention Network (SANet)
for polyp segmentation. Specifically, to eliminate the effects of color, we
design the color exchange operation to decouple the image contents and colors,
and force the model to focus more on the target shape and structure.
Furthermore, to enhance the segmentation quality of small polyps, we propose
the shallow attention module to filter out the background noise of shallow
features. Thanks to the high resolution of shallow features, small polyps can
be preserved correctly. In addition, to ease the severe pixel imbalance for
small polyps, we propose a probability correction strategy (PCS) during the
inference phase. Note that even though PCS is not involved in the training
phase, it can still work well on a biased model and consistently improve the
segmentation performance. Quantitative and qualitative experimental results on
five challenging benchmarks confirm that our proposed SANet outperforms
previous state-of-the-art methods by a large margin and achieves a speed of
about 72 FPS. | [
"cs.CV"
] |
\emph{Over-fitting} and \emph{over-smoothing} are two main obstacles of
developing deep Graph Convolutional Networks (GCNs) for node classification. In
particular, over-fitting weakens the generalization ability on small dataset,
while over-smoothing impedes model training by isolating output representations
from the input features with the increase in network depth. This paper proposes
DropEdge, a novel and flexible technique to alleviate both issues. At its core,
DropEdge randomly removes a certain number of edges from the input graph at
each training epoch, acting like a data augmenter and also a message passing
reducer. Furthermore, we theoretically demonstrate that DropEdge either reduces
the convergence speed of over-smoothing or relieves the information loss caused
by it. More importantly, our DropEdge is a general skill that can be equipped
with many other backbone models (e.g. GCN, ResGCN, GraphSAGE, and JKNet) for
enhanced performance. Extensive experiments on several benchmarks verify that
DropEdge consistently improves the performance on a variety of both shallow and
deep GCNs. The effect of DropEdge on preventing over-smoothing is empirically
visualized and validated as well. Codes are released
on~\url{https://github.com/DropEdge/DropEdge}. | [
"cs.LG",
"cs.NI",
"stat.ML"
] |
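DropEdge itself is only a few lines over an edge list, applied independently at each training epoch. A minimal sketch assuming the common (2, E) edge-index representation; re-normalizing the adjacency after dropping, as the paper does, is left out.

import numpy as np

def drop_edge(edge_index, p, rng=None):
    # Keep each edge independently with probability 1 - p.
    rng = rng or np.random.default_rng()
    keep = rng.random(edge_index.shape[1]) >= p
    return edge_index[:, keep]

# Toy usage on a 5-node ring graph; resample every epoch.
edges = np.array([[0, 1, 2, 3, 4],
                  [1, 2, 3, 4, 0]])
print(drop_edge(edges, p=0.4, rng=np.random.default_rng(0)))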
We propose the first approach to automatically and jointly synthesize both
the synchronous 3D conversational body and hand gestures, as well as 3D face
and head animations, of a virtual character from speech input. Our algorithm
uses a CNN architecture that leverages the inherent correlation between facial
expression and hand gestures. Synthesis of conversational body gestures is a
multi-modal problem since many similar gestures can plausibly accompany the
same input speech. To synthesize plausible body gestures in this setting, we
train a Generative Adversarial Network (GAN) based model that measures the
plausibility of the generated sequences of 3D body motion when paired with the
input audio features. We also contribute a new way to create a large corpus of
more than 33 hours of annotated body, hand, and face data from in-the-wild
videos of talking people. To this end, we apply state-of-the-art monocular
approaches for 3D body and hand pose estimation as well as dense 3D face
performance capture to the video corpus. In this way, we can train on orders of
magnitude more data than previous algorithms that resort to complex in-studio
motion capture solutions, and thereby train more expressive synthesis
algorithms. Our experiments and user study show the state-of-the-art quality of
our speech-synthesized full 3D character animations. | [
"cs.CV"
] |
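A hedged sketch of the kind of conditional discriminator described above: it scores a 3D motion sequence paired with audio features using temporal convolutions. All dimensions and the class name are hypothetical, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MotionAudioDiscriminator(nn.Module):
    """Scores the plausibility of a body-motion sequence given input audio.

    motion: (B, T, J) pose parameters per frame; audio: (B, T, F) audio
    features. Both streams are fused per frame and processed temporally.
    """

    def __init__(self, pose_dim: int = 63, audio_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(pose_dim + audio_dim, 128, kernel_size=5, padding=2),
            nn.LeakyReLU(0.2),
            nn.Conv1d(128, 128, kernel_size=5, padding=2),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(128, 1),  # one plausibility score per sequence
        )

    def forward(self, motion: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        x = torch.cat([motion, audio], dim=-1).transpose(1, 2)  # (B, C, T)
        return self.net(x)
```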
This paper describes the approach proposed by the D2KLab team for the 2020
RecSys Challenge on the task of predicting user engagement with tweets. This
approach relies on two distinct stages. First, relevant features are learned
from the challenge dataset. These features are heterogeneous and are the
results of different learning modules such as handcrafted features, knowledge
graph embeddings, sentiment analysis features and BERT word embeddings. Second,
these features are provided in input to an ensemble system based on XGBoost.
This approach, trained on only a subset of the entire challenge dataset, ranked
22nd on the final leaderboard. | [
"cs.LG",
"cs.IR",
"stat.ML"
] |
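The second stage of such a pipeline is straightforward to sketch: concatenate the heterogeneous feature blocks and feed them to XGBoost. Shapes, hyperparameters, and the engagement target below are illustrative, not the team's actual configuration.

```python
import numpy as np
import xgboost as xgb

n = 10_000  # illustrative number of (tweet, user) pairs
handcrafted = np.random.rand(n, 20)    # e.g. counts, recency features
kg_emb = np.random.rand(n, 64)         # knowledge-graph embeddings
sentiment = np.random.rand(n, 4)       # sentiment-analysis features
bert_emb = np.random.rand(n, 768)      # BERT word/sentence embeddings
y = np.random.randint(0, 2, size=n)    # did the user engage?

X = np.hstack([handcrafted, kg_emb, sentiment, bert_emb])
model = xgb.XGBClassifier(n_estimators=300, max_depth=8, learning_rate=0.1)
model.fit(X, y)
engagement_prob = model.predict_proba(X)[:, 1]
```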
In recent years, machine learning (ML) has moved from an academic endeavor to
a pervasive technology adopted in almost every aspect of computing. ML-powered
products are now embedded in our digital lives: from recommendations of what to
watch, to divining our search intent, to powering virtual assistants in
consumer and enterprise settings. Recent successes in applying ML in natural
sciences revealed that ML can be used to tackle some of the hardest real-world
problems humanity faces today. For these reasons, ML has become central to the
strategy of tech companies and has gathered even more attention from academia
than ever before. Despite these successes, what we have witnessed so far is
just the beginning. Right now the people training and using ML models are
expert developers working within large organizations, but we believe the next
wave of ML systems will allow a larger number of people, potentially without
coding skills, to perform the same tasks. These new ML systems will not require
users to fully understand all the details of how models are trained and
utilized for obtaining predictions. Declarative interfaces are well suited for
this goal, by hiding complexity and favouring separation of interests, and can
lead to increased productivity. We worked on such abstract interfaces by
developing two declarative ML systems, Overton and Ludwig, that require users
to declare only their data schema (names and types of inputs) and tasks rather
than writing low-level ML code. In this article we describe how ML systems are
currently structured, highlight important factors for their success and
adoption, discuss the issues current ML systems are facing, and explain how
the systems we developed address them. Finally, we share lessons learned from
developing ML systems over the years and describe what we believe the next
generation of ML systems will look like. | [
"cs.LG",
"cs.AI",
"cs.SE"
] |
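Ludwig is open source, so the declarative style described above can be illustrated concretely. The column names and file paths below are hypothetical, the configuration is reduced to its essentials, and the return values are simplified; see the project documentation for the full schema.

```python
from ludwig.api import LudwigModel

# Only the data schema and task are declared; model internals,
# preprocessing, and the training loop are left to the system.
config = {
    "input_features": [
        {"name": "review_text", "type": "text"},          # hypothetical column
        {"name": "product_category", "type": "category"},  # hypothetical column
    ],
    "output_features": [
        {"name": "rating", "type": "category"},
    ],
}

model = LudwigModel(config)
train_results = model.train(dataset="reviews_train.csv")   # hypothetical file
pred_results = model.predict(dataset="reviews_test.csv")   # predictions + metadata
```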
The use of machine learning is rapidly increasing in high-risk scenarios where
decisions are required, for example in healthcare or industrial monitoring
equipment. In such crucial situations, a model that can offer meaningful
explanations of its decision-making is essential. In industrial facilities,
well-timed maintenance of equipment is vital to ensure continuous operation and
prevent financial loss. Using machine learning, predictive and prescriptive
maintenance attempt to anticipate and prevent eventual system failures. This
paper introduces a visualisation tool incorporating interpretations to display
information derived from predictive maintenance models, trained on time-series
data. | [
"cs.LG",
"I.2.0; I.2.6; H.5.2"
] |
Prior work in multi-task learning has mainly focused on predictions on a
single image. In this work, we present a new approach for multi-task learning
from videos via efficient inter-frame local attention (MILA). Our approach
contains a novel inter-frame attention module which allows learning of
task-specific attention across frames. We embed the attention module in a
``slow-fast'' architecture, where the slower network runs on sparsely sampled
keyframes and the light-weight shallow network runs on non-keyframes at a high
frame rate. We also propose an effective adversarial learning strategy to
encourage the slow and fast network to learn similar features. Our approach
ensures low-latency multi-task learning while maintaining high quality
predictions. Experiments show competitive accuracy compared to state-of-the-art
on two multi-task learning benchmarks while reducing the number of floating
point operations (FLOPs) by up to 70\%. In addition, our attention based
feature propagation method (ILA) outperforms prior work in terms of task
accuracy while also reducing up to 90\% of FLOPs. | [
"cs.CV"
] |
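A sketch of the inter-frame attention idea: features of the current non-keyframe query features cached from the last keyframe. For clarity this version attends globally, whereas the paper restricts attention to local windows, which is where much of the FLOP saving comes from; the projection layers are assumed to be 1x1 convolutions.

```python
import torch

def inter_frame_attention(curr_feat, key_feat, proj_q, proj_k, proj_v):
    """Cross-frame attention sketch.

    curr_feat, key_feat: (B, C, H, W) feature maps from the current frame
    and the last keyframe; proj_*: 1x1 conv layers producing C channels.
    """
    B, C, H, W = curr_feat.shape
    q = proj_q(curr_feat).flatten(2).transpose(1, 2)  # (B, HW, C)
    k = proj_k(key_feat).flatten(2)                   # (B, C, HW)
    v = proj_v(key_feat).flatten(2).transpose(1, 2)   # (B, HW, C)
    attn = torch.softmax(q @ k / C ** 0.5, dim=-1)    # (B, HW, HW)
    out = (attn @ v).transpose(1, 2).reshape(B, C, H, W)
    return out + curr_feat                            # residual connection
```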
Radiological imaging offers effective measurement of anatomy, which is useful
in disease diagnosis and assessment. Previous studies have shown that left
atrial wall remodeling can provide information to predict treatment outcomes in
atrial fibrillation. Nevertheless, the segmentation of the left atrial
structures from medical images is still very time-consuming. Current advances
in neural network may help creating automatic segmentation models that reduce
the workload for clinicians. In this preliminary study, we propose an
automated, two-stage approach based on three-dimensional convolutional U-Nets
for the challenging task of left atrial segmentation. Unlike previous
two-dimensional image segmentation methods, we use 3D U-Nets to obtain the
heart cavity directly in 3D. The dual 3D U-Net structure consists of a first
U-Net to coarsely segment and locate the left atrium, and a second U-Net to
accurately segment the left atrium at a higher resolution. In addition, we
introduce a
Contour loss based on additional distance information to adjust the final
segmentation. We randomly split the data into training datasets (80 subjects)
and validation datasets (20 subjects) to train multiple models with different
augmentation settings. Experiments show that the average Dice coefficients for
the validation datasets are around 0.91-0.92, the sensitivity around
0.90-0.94, and the specificity around 0.99. Compared with the traditional Dice
loss, models trained
with Contour loss in general offer smaller Hausdorff distance with similar Dice
coefficient, and have less connected components in predictions. Finally, we
integrate several trained models in an ensemble prediction to segment testing
datasets. | [
"cs.CV"
] |
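One way to realize a distance-based Contour loss is to weight a pixel-wise loss by proximity to the ground-truth boundary; the sketch below uses a distance transform for this. It is an interpretation under stated assumptions, not necessarily the paper's exact formulation, and the weight map can be combined with a Dice term.

```python
import numpy as np
import torch
from scipy.ndimage import distance_transform_edt

def contour_weight_map(mask: np.ndarray, sigma: float = 5.0) -> np.ndarray:
    """Weight map from a binary ground-truth mask: ~1 near the boundary,
    decaying with distance from it (works for 2D slices or 3D volumes)."""
    dist = distance_transform_edt(mask) + distance_transform_edt(1 - mask)
    return np.exp(-(dist ** 2) / (2 * sigma ** 2))

def contour_loss(logits: torch.Tensor, target: torch.Tensor,
                 weights: torch.Tensor) -> torch.Tensor:
    # Per-pixel BCE, emphasized near the segmentation contour.
    bce = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, target, reduction="none")
    return (weights * bce).mean()
```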
Synthesizing high-quality person images in arbitrary poses is challenging. In
this paper, we propose a novel Multi-scale Conditional Generative Adversarial
Network (MsCGAN), aiming to convert an input conditional person image to a
synthetic image of any given target pose, whose appearance and texture are
consistent with the input image. MsCGAN is a multi-scale
adversarial network consisting of two generators and two discriminators. One
generator transforms the conditional person image into a coarse image of the
target pose globally, and the other is to enhance the detailed quality of the
synthetic person image through a local reinforcement network. The outputs of
the two generators are then merged into a synthetic, discriminant and
high-resolution image. On the other hand, the synthetic image is downsampled to
multiple resolutions as the input to multi-scale discriminator networks. The
proposed multi-scale generators and discriminators, handling different levels
of visual features, benefit the synthesis of high-resolution person images with
realistic appearance and texture. Experiments are conducted on the Market-1501
and DeepFashion datasets to evaluate the proposed model, and both qualitative
and quantitative results demonstrate the superior performance of the proposed
MsCGAN. | [
"cs.CV"
] |
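The multi-scale discriminator input described above is simple to sketch: the synthesized image is repeatedly downsampled, and each resolution feeds its own discriminator. The number of scales below is illustrative.

```python
import torch.nn.functional as F

def multiscale_inputs(image, num_scales: int = 3):
    """Return [full-res, half-res, quarter-res, ...] views of an image,
    one per discriminator in a multi-scale adversarial setup."""
    inputs = [image]
    for _ in range(num_scales - 1):
        image = F.avg_pool2d(image, kernel_size=3, stride=2, padding=1)
        inputs.append(image)
    return inputs
```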
We consider model-free reinforcement learning (RL) in non-stationary Markov
decision processes. Both the reward functions and the state transition
functions are allowed to vary arbitrarily over time as long as their cumulative
variations do not exceed certain variation budgets. We propose Restarted
Q-Learning with Upper Confidence Bounds (RestartQ-UCB), the first model-free
algorithm for non-stationary RL, and show that it outperforms existing
solutions in terms of dynamic regret. Specifically, RestartQ-UCB with
Freedman-type bonus terms achieves a dynamic regret bound of
$\widetilde{O}(S^{\frac{1}{3}} A^{\frac{1}{3}} \Delta^{\frac{1}{3}} H
T^{\frac{2}{3}})$, where $S$ and $A$ are the numbers of states and actions,
respectively, $\Delta>0$ is the variation budget, $H$ is the number of time
steps per episode, and $T$ is the total number of time steps. We further
present a parameter-free algorithm named Double-Restart Q-UCB that does not
require prior knowledge of the variation budget. We show that our algorithms
are \emph{nearly optimal} by establishing an information-theoretical lower
bound of $\Omega(S^{\frac{1}{3}} A^{\frac{1}{3}} \Delta^{\frac{1}{3}}
H^{\frac{2}{3}} T^{\frac{2}{3}})$, the first lower bound in non-stationary RL.
Numerical experiments validate the advantages of RestartQ-UCB in terms of both
cumulative rewards and computational efficiency. We demonstrate the power of
our results in examples of multi-agent RL and inventory control across related
products. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
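A heavily simplified sketch of the restart mechanism around optimistic tabular Q-learning. The step size and Hoeffding-type bonus follow standard optimistic Q-learning; the paper's sharper bound uses Freedman-type bonuses, and the restart schedule is derived from the variation budget $\Delta$, which is treated as given here. The `env` interface is an assumption.

```python
import numpy as np

def restart_q_ucb(env, S, A, H, T, num_restarts, c=1.0):
    """env.reset() -> state; env.step(action) -> (next_state, reward)."""
    episodes_per_phase = max(1, (T // num_restarts) // H)
    for _ in range(num_restarts):
        Q = np.full((H, S, A), float(H))     # optimistic initialization
        N = np.zeros((H, S, A), dtype=int)   # visit counts, also reset
        for _ in range(episodes_per_phase):
            s = env.reset()
            for h in range(H):
                a = int(np.argmax(Q[h, s]))
                s_next, r = env.step(a)
                N[h, s, a] += 1
                n = N[h, s, a]
                lr = (H + 1) / (H + n)           # standard optimistic step size
                bonus = c * np.sqrt(H ** 3 / n)  # Hoeffding-type bonus
                v_next = Q[h + 1, s_next].max() if h + 1 < H else 0.0
                Q[h, s, a] = (1 - lr) * Q[h, s, a] \
                    + lr * min(r + v_next + bonus, H)
                s = s_next
    return Q
```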
The first step toward Seed Phenotyping, i.e. the comprehensive assessment of
complex seed traits such as growth, development, tolerance, resistance,
ecology, yield, and the measurement of parameters that form more complex
traits, is the identification of seed type. Generally, a plant researcher
inspects the visual attributes of a seed such as size, shape, area, color and
texture to identify the seed type, a process that is tedious and
labor-intensive. Advances in the areas of computer vision and deep learning
have led to the development of convolutional neural networks (CNN) that aid in
classification using images. While they classify efficiently, a key bottleneck
is the need for an extensive amount of labelled data to train the CNN before it
can be put to the task of classification. This work leverages the concepts of
Contrastive Learning and Domain Randomization to alleviate this need for
labelled data. Briefly, domain randomization is the technique of applying
models trained on images containing simulated objects to real-world objects.
The use of synthetic images generated from a representative sample of
real-world images alleviates the need for a large volume of test subjects. As
part of the work,
synthetic image datasets of five different types of seed images namely, canola,
rough rice, sorghum, soy and wheat are applied to three different
self-supervised learning frameworks, namely SimCLR, Momentum Contrast (MoCo),
and Bootstrap Your Own Latent (BYOL), where ResNet-50 is used as the backbone in
each of the networks. When the self-supervised models are fine-tuned with only
5% of the labels from the synthetic dataset, results show that MoCo, the model
that yields the best performance among the self-supervised learning frameworks
in question, achieves an accuracy of 77% on the test dataset, which is only ~13%
less than the accuracy of 90% achieved by ResNet-50 trained on 100% of the
labels. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
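The fine-tuning step described above (5% of labels on a self-supervised backbone) follows a standard recipe; a sketch, with a hypothetical checkpoint path:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

NUM_SEED_TYPES = 5  # canola, rough rice, sorghum, soy, wheat

backbone = resnet50()
# Load self-supervised weights (e.g. from MoCo pretraining on the
# synthetic seed images); the checkpoint path is hypothetical.
backbone.load_state_dict(torch.load("moco_seeds.pt"), strict=False)
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_SEED_TYPES)

# Train only the new classification head on the small labelled subset;
# unfreezing the whole network is the obvious alternative.
for name, param in backbone.named_parameters():
    param.requires_grad = name.startswith("fc.")

optimizer = torch.optim.SGD(
    [p for p in backbone.parameters() if p.requires_grad], lr=0.01)
```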
With the advantage of low storage cost and high efficiency, hashing learning
has received much attention in the domain of Big Data. In this paper, we
propose a novel unsupervised hashing method that directly preserves the
manifold structure of the data in the learned hash codes. To this end, both
the semantic correlation in the textual space and the locally geometric
structure in the visual space are explored simultaneously in our framework.
Besides, an $\ell_{2,1}$-norm constraint is imposed on the projection
matrices to learn the discriminative hash function for each modality. Extensive
experiments are performed to evaluate the proposed method on three publicly
available datasets and the experimental results show that our method can
achieve superior performance over the state-of-the-art methods. | [
"cs.CV",
"cs.MM"
] |
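For reference, the $\ell_{2,1}$-norm of a projection matrix is the sum of the L2 norms of its rows; minimizing it drives whole rows to zero, which is what makes the learned hash functions feature-selective. A tiny sketch (the matrix shape is illustrative):

```python
import numpy as np

def l21_norm(W: np.ndarray) -> float:
    """||W||_{2,1} = sum of the L2 norms of the rows of W."""
    return float(np.linalg.norm(W, axis=1).sum())

W = np.random.randn(128, 64)   # e.g. a projection matrix for one modality
reg = l21_norm(W)              # added to the objective as a penalty term
```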
We present a method to populate an unknown environment with models of
previously seen objects, placed in a Euclidean reference frame that is inferred
causally and on-line using monocular video along with inertial sensors. The
system we implement returns a sparse point cloud for the regions of the scene
that are visible but not recognized as a previously seen object, and a detailed
object model and its pose in the Euclidean frame otherwise. The system includes
bottom-up and top-down components, whereby deep networks trained for detection
provide likelihood scores for object hypotheses provided by a nonlinear filter,
whose state serves as memory. Additional networks provide likelihood scores for
edges, which complements detection networks trained to be invariant to small
deformations. We test our algorithm on existing datasets, and also introduce
the VISMA dataset, which provides ground truth pose, point-cloud map, and object
models, along with time-stamped inertial measurements. | [
"cs.CV",
"cs.RO"
] |
State-of-the-art object detectors usually learn multi-scale representations
to get better results by employing feature pyramids. However, the current
designs for feature pyramids are still inefficient at integrating semantic
information across different scales. In this paper, we begin by investigating
current feature pyramid solutions, and then reformulate the feature pyramid
construction as the feature reconfiguration process. Finally, we propose a
novel reconfiguration architecture to combine low-level representations with
high-level semantic features in a highly-nonlinear yet efficient way. In
particular, our architecture which consists of global attention and local
reconfigurations, is able to gather task-oriented features across different
spatial locations and scales, globally and locally. Both the global attention
and local reconfiguration are lightweight, in-place, and end-to-end trainable.
Using this method in the basic SSD system, our models achieve consistent and
significant boosts compared with the original model and its other variations,
without losing real-time processing speed. | [
"cs.CV"
] |
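The "global attention" half of such a design can be sketched as channel gating driven by globally pooled context (squeeze-and-excitation style). The paper's exact module may differ; this conveys why it is lightweight and in-place.

```python
import torch
import torch.nn as nn

class GlobalAttention(nn.Module):
    """Channel-wise gating from global context: pool spatially, squeeze
    channels, expand back, and reweight the input feature map in place."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)  # same spatial size, reweighted channels
```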
With the advances of data-driven machine learning research, a wide variety of
prediction problems have been tackled. It has become critical to explore how
machine learning and specifically deep learning methods can be exploited to
analyse healthcare data. A major limitation of existing methods has been the
focus on grid-like data; however, the structure of physiological recordings is
often irregular and unordered, which makes it difficult to conceptualise them as
a matrix. As such, graph neural networks have attracted significant attention
by exploiting implicit information that resides in a biological system, with
interactive nodes connected by edges whose weights can be either temporal
associations or anatomical junctions. In this survey, we thoroughly review the
different types of graph architectures and their applications in healthcare. We
provide an overview of these methods in a systematic manner, organized by their
domain of application including functional connectivity, anatomical structure
and electrical-based analysis. We also outline the limitations of existing
techniques and discuss potential directions for future research. | [
"cs.LG",
"cs.CV",
"q-bio.QM"
] |
We provide a new model for texture synthesis based on a multiscale,
multilayer feature extractor. Within the model, textures are represented by a
set of statistics computed from ReLU wavelet coefficients at different layers,
scales and orientations. A new image is synthesized by matching the target
statistics via an iterative projection algorithm. We explain the necessity of
the different types of pre-defined wavelet filters used in our model and the
advantages of multilayer structures for image synthesis. We demonstrate the
power of our model by generating samples of high quality textures and providing
insights into deep representations for texture images. | [
"cs.CV"
] |
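The synthesis-by-statistic-matching loop described above has a simple skeleton. The sketch substitutes plain gradient descent for the paper's iterative projection algorithm and assumes an `extract_stats` callable standing in for the ReLU wavelet-coefficient statistics:

```python
import torch

def synthesize(target_stats, extract_stats, steps=500, size=(1, 3, 256, 256)):
    """Optimize a random image until its model statistics match the
    statistics extracted from the target texture."""
    img = torch.rand(size, requires_grad=True)
    opt = torch.optim.Adam([img], lr=0.01)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((extract_stats(img) - target_stats) ** 2).sum()
        loss.backward()
        opt.step()
    return img.detach().clamp(0, 1)
```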
Batch normalization (BN) has been very effective for deep learning and is
widely used. However, when training with small minibatches, models using BN
exhibit a significant degradation in performance. In this paper we study this
peculiar behavior of BN to gain a better understanding of the problem, and
identify a cause. We propose 'EvalNorm' to address the issue by estimating
corrected normalization statistics to use for BN during evaluation. EvalNorm
supports online estimation of the corrected statistics while the model is being
trained, and does not affect the training scheme of the model. As a result,
EvalNorm can also be used with existing pre-trained models allowing them to
benefit from our method. EvalNorm yields large gains for models trained with
smaller batches. Our experiments show that EvalNorm performs 6.18% (absolute)
better than vanilla BN for a batch size of 2 on the ImageNet validation set and from
1.5 to 7.0 points (absolute) gain on the COCO object detection benchmark across
a variety of setups. | [
"cs.CV",
"cs.LG"
] |
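The gist of corrected evaluation statistics can be sketched as blending example-level moments with the accumulated running moments, instead of using the running moments alone. EvalNorm estimates the mixing online during training; the fixed `alpha` below is purely illustrative, not the paper's estimator.

```python
import torch

def corrected_bn_stats(batch_mean, batch_var, run_mean, run_var, alpha=0.5):
    """Blend minibatch moments with running moments for evaluation-time
    normalization (illustrative fixed mixing weight)."""
    mean = alpha * batch_mean + (1 - alpha) * run_mean
    var = alpha * batch_var + (1 - alpha) * run_var
    return mean, var

def batch_norm_eval(x, mean, var, gamma, beta, eps=1e-5):
    # Standard affine normalization with the corrected statistics.
    return gamma * (x - mean) / torch.sqrt(var + eps) + beta
```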